I’ve been working on a new programming language, called Raven. It’s still experimental, and not ready for serious (or even silly) users. But curious language nerds such as yourself may find it interesting. If you want to follow along then sponsor me, and I’ll give you early access to the repo along with regular behind-the-scenes updates.
Raven is small but smart. It should feel tight and intuitive, lightweight enough for interactive notebooks and scripting, yet capable and adaptable for big projects with hard constraints. It’s difficult to summarise an entire language, so here’s a small snippet to warm up with:
fn fib(n) { fib(n-1) + fib(n-2) }
fn fib(1) { 1 }
fn fib(0) { 0 }
fn fibSequence(n) {
    xs = []
    for i in range(1, n) {
        append(&xs, fib(i))
    }
    return xs
}
show fibSequence(10)
(For more samples try my brainfuck interpreter, the current malloc or complex numbers.)
If you save this in a file like test.rv, then compile and run it, you’ll get the output:
fibSequence(10) = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
Despite the familiar C-ish look, Raven is primarily a functional language, and a Lisp. It comes with fast multi-dispatch, as in the overloaded fib function. Basic types are defined as Haskell-y ADTs which you can pattern match on, as in:
bundle Optional[T] { Some(x: T), Nil() }
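Pattern matching should mesh with the dispatch mechanism used by the fib overloads above, roughly along the lines of the sketch below – getOr is just an illustrative helper of my own, and the exact syntax may still shift as the design settles:

fn getOr(Some(x), default) { x }
fn getOr(Nil(), default) { default }

show getOr(Some(42), 0)
show getOr(Nil(), 0)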
But the standard library will also provide Python-like dictionaries and lists, which most users will stick to. Raven is geared towards seamless metaprogramming and DSLs, and in fact has no built-in keywords: fn, for, show and co are technically macros, and could have been user-defined.
Raven is inspired by Clojure’s ideas about state and change, and borrows its convenient persistent data structures (without modifying theirs, of course). We’re not zealous about functional purity, but you have to go out of your way to get shared mutable state. Unlike Clojure we embrace control flow: what looks like mutation is syntax sugar, and append(&xs, x) is equivalent to xs = append(xs, x).1 This shortcut helps programmers, making it easier to express certain algorithms, but also aids the compiler, enabling Swift-style "copy on write" optimisations. I think we can embrace values without unduly harming performance.
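To make the value semantics concrete, here’s a small sketch (my own example, not library code). Because append(&xs, 1) only rebinds the local xs, the caller’s binding stays untouched:

fn addOne(xs) {
    append(&xs, 1)
    return xs
}

ys = [0]
zs = addOne(ys)
show ys
show zs

ys should still come out as [0] while zs is [0, 1]; whether the compiler copies or updates in place behind the scenes is exactly the kind of decision copy-on-write leaves to the optimiser.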
And performance is a priority, despite Raven’s high-level feel. Immutability and pattern matching mix wonderfully with Julia-like dataflow type inference, allowing Raven to generate skinny WebAssembly code without needing type annotations.2 The abstract interpreter is free to specialise on values and unroll loops, for powers similar to Zig’s comptime or JAX’s partial evaluation. The compiler is incremental and demand-driven, enabling live coding and interactive use (browser demo for sponsors). Meanwhile batch mode produces minimal .wasm binaries that run anywhere. The plan is to support separate compilation, too, so that (for example) a compiled plotting library can be streamed to a browser-based notebook on the fly, instantly.
There’s a programming aphorism that a language must be at least ten times better than its predecessors in order to take off. I think Raven’s combination of features makes that goal feasible – support for web browsers alone cuts most of the friction usually involved in getting to "hello, world". I see the future of programming in terms of apps like Notion or Figma: rather than traditional source repos and editors, think interactive, collaborative, canvas-style workspaces that live on any device, available to the billion or so people who reach the internet only through a smartphone, creating shards of logic that run at zero marginal cost in the cloud. My highest ambition is for Raven to be the default choice both when prototyping ideas in a notebook and when publishing those ideas as explorable explanations.
But I also don’t want people to be limited as a prototype evolves into practice, which means having excellent tools for deployment, performance tuning and managing large codebases. Like TypeScript, Raven can turn its analysis into great tooling, and many errors can be found ahead of time. But rather than building on untyped JavaScript, Raven was designed for flow-based inference, making it easier to get precise results. As one example, Raven discourages if statements in favour of match clauses, which ensure all cases are covered (and narrow types more effectively too). As another, we have no trouble inferring recursive functions, like one that builds a linked list out of tuples:
fn foo(n: Int) {
    if n == 0 {
        return nil
    } else {
        return [n, foo(n-1)]
    }
}
No system I’m aware of can infer code like this without an explicitly defined recursive type.3 Raven can compute the type (T = nil | [Int64, T]), and this will work however complex your code gets, whether producing trees containing linked lists or whatever. Which is great for prototyping – but you can still spec out the recursive type and check that foo conforms, if you want.
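Circling back to match: consuming that inferred type looks something like the sketch below. Treat the syntax as an illustration rather than settled notation – sum is my own example:

fn sum(xs) {
    match xs {
        nil { 0 }
        [x, rest] { x + sum(rest) }
    }
}

show sum(foo(3))

Because the two clauses cover both arms of nil | [Int64, T], nothing is left unhandled, and rest can be narrowed to the list type in the second clause.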
Underneath, there isn’t really such a thing as a user-defined "type" at all. In a definition like bundle Complex(re, im), the bundle keyword is really a macro that generates constructors like fn Complex(re, im) { pack(tag"Complex", re, im) } (alongside hooks for printing and pattern matching). pack is Raven’s primitive tuple type and tags are symbols used to distinguish different tuples. Just as in Smalltalk everything is an object, or in Mathematica everything is an expression, in Raven everything is a pack – even basic numbers. User-defined values are just as good as built-in ones, because the compiler infers structure, noticing when the tag is constant and storing only the contents on the stack.
raven> showPack Complex(1, 2)
Complex(1, 2) = pack(tag"common.Complex", 1, 2)
raven> showPack [1, 2, 3]
[1, 2, 3] = pack(tag"common.List", 1, 2, 3)
raven> showPack 1/3
(1 / 3) = pack(tag"common.core.Float64", bits"0011111111010101010101010101010101010101010101010101010101010101")
Raven does rely on implicit runtime support for things like memory management. But what it needs is predictable and generated on the fly, so you can write code comparable to Rust’s #[no_std]. The language’s own memory allocator is part of the standard library. I wouldn’t sell Raven to Linux kernel maintainers, but it’s capable of low-level work when needed.
But the usual approach to abstraction is flipped around. Rather than a set of CPU-specific primitives upon which more abstract data structures are built, Raven starts with an abstract core and treats the CPU like an optional library. Interfacing with the hardware or operating system is not logically different to using a web API.4 By making the CPU less of a special case, it’s easier to work with other hardware, or even different programming models entirely. It should be more feasible to do compiler-supported autodiff (like Zygote) or probabilistic programming (like Stan or Infer.NET) in Raven than anywhere else.5 Factor graphs, differential equations and logical query languages can all be viewed as alternative "backends" for a single source language, just like WebAssembly, SPIR-V or XLA are today.
So that’s a whirlwind tour of Raven’s ideas and goals. To be clear, the project is still a proof-of-concept rather than a practical tool. But the foundations are mostly laid, which means things are just getting interesting. I’m ready for early poking and feedback on the design, which is why I’ve written about topics like syntax, the type system, and why I’m using reference counting and targeting WebAssembly. And there’s much more to talk about still, including ideas for structured concurrency, the error model and effect handlers.
In time Raven will, of course, be completely open source (it’s already MIT licensed, despite the hidden repo). But for now I’m looking for a small, focused audience I can engage fully with. I also want to figure out sustainable funding so that I can commit to this work. Hence the sponsorship model. It’s early days and I’m still figuring this all out, but wherever the road leads, I hope you’ll join me for the ride – it’s going to be fun.
1. The first thing the compiler does is convert code to SSA form, whose "basic blocks" are equivalent to a set of mutually recursive functions with immutable variables. You don’t have to use mutable locals, but avoiding them only means doing the SSA conversion by hand, so there’s no benefit to abstaining.
2. For example, the fibSequence output above is inferred as a list of machine integers and stored contiguously in memory, rather than as Python-style pointers to arbitrary objects.
3. In fact, similar examples crash the compiler in Mojo and Crystal.
4. In the default configuration everything ends up on the CPU: the hardware is accessible to user code (eg via explicit pointers), and it’s also the backend for the abstract core (eg lists turn into pointers). But these are subtly distinct roles! Raven code might equally compile for a TPU (via XLA), or to logic gates (like Verilog), or to a set of differential equations (like Modelica), and in those cases the CPU "library" would not be available.
5. A good way for the language to differentiate itself.