Refactor Compiler Pipeline
We've added two new compiler features, but haven't done much in the way of project restructuring to accommodate this change. Referencing #33, the new compiler pipeline should be:
- lex
- parse
- desugar (split from macro expansion)
- hoist (move up before expansion)
- expand (split from desugar)
- infer
- gen
The syntax trees should also be moved out into their own module. I think we should implement some traits that represent a compilation step / syntax tree.
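As a rough sketch of that trait idea (the names `Pass`, `Ast`, `Cst`, and `Error` below are placeholders, not passerine's actual types), each step could be a fallible transformation from one tree representation to the next, with the pipeline just chaining steps together:

```rust
// Hypothetical sketch only -- `Pass`, `Ast`, `Cst`, and `Error` are
// illustrative names, not the actual passerine types.

#[derive(Debug)]
pub struct Ast;               // tree produced by `parse`
#[derive(Debug)]
pub struct Cst;               // normalized tree produced by `desugar`
#[derive(Debug)]
pub struct Error(pub String);

/// One step of the compiler pipeline: a fallible tree-to-tree transformation.
pub trait Pass {
    type Input;
    type Output;
    fn run(input: Self::Input) -> Result<Self::Output, Error>;
}

pub struct Desugar;
impl Pass for Desugar {
    type Input = Ast;
    type Output = Cst;
    fn run(_input: Ast) -> Result<Cst, Error> {
        // Normalize surface syntax here (e.g. rewrite sugar into plain calls).
        Ok(Cst)
    }
}

// Later steps (hoist, expand, infer, gen) would implement `Pass` the same
// way, each one feeding its output into the next step's input.
pub fn front_half(ast: Ast) -> Result<Cst, Error> {
    Desugar::run(ast)
}
```

Keeping every step behind a common trait would also make it easy to stop the pipeline early on error, run individual passes in tests, or swap a pass out during the refactor.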
This, along with the goals outlined in #34, will be the target of the 0.9.3 release.
In addition to scoping variables, we also need to scope macros and type definitions. This will be interesting.
This is happening in the `big-refactor` branch.
Regarding macros: I just saw Unseemly and thought it might be a really good place to take some inspiration from, as it offers "typed macros", and seeing it in practice kind of sparked my interest in that idea (syntax aside :wink:).
Another way to do typed macros is to leverage a two-level lambda calculus with a kind-based macro system. I'll give Unseemly a look; I wonder if it's using a similar system under the hood.
Update: Holy cow, Unseemly is pretty cool! I'm a huge fan of how the language is bootstrapped from first principles; the examples are pretty incredible. I like the way it does syntactic extension:
```
extend_syntax
    Expr ::=also forall T . '{
        [
            lit ,{ DefaultToken }, = 'if'
            cond := ( ,{ Expr<Bool> }, )
            lit ,{ DefaultToken }, = 'then'
            then_e := ( ,{ Expr<T> }, )
            lit ,{ DefaultToken }, = 'else'
            else_e := ( ,{ Expr<T> }, )
        ]
    }' conditional -> .{
        '[Expr | match ,[cond], {
            +[True]+ => ,[then_e],
            +[False]+ => ,[else_e], } ]' }. ;
in
    ...
```
There's this really nice mapping between patterns and expressions. I'd love to figure out a better syntax for this idea, and explore the ergonomics a bit.
The link you shared kinda started a rabbit hole; here are some links I've found interesting:
- http://composition.al/blog/2017/07/31/my-first-fifteen-compilers/
- https://en.wikipedia.org/wiki/Catamorphism
- https://github.com/nanopass/
- https://www.youtube.com/watch?v=Os7FE3J-U5Q
- https://www.nand2tetris.org/
- https://www.cis.upenn.edu/~bcpierce/tapl/
- https://github.com/paulstansifer/unseemly/blob/master/src/examples/worked_example.unseemly
I highly recommend the YouTube video.
I propose to move `expand` and `infer` before `desugar`.
There are two benefits gained by moving infer and expand up in the pipeline.
First and foremost, errors derived from macro expansion or ill-typed expressions can be reported in the same way the user sees them (that is, with the original source location and specific error messages).
Secondly, the compilation can fail without doing further passes if the program is syntactically incorrect or ill-typed.
This is how GHC (the Haskell compiler) does it. [reference]
> Probably the most important phase in the frontend is the type checker, which is located at compiler/GHC/Tc/. GHC type checks programs in their original Haskell form before the desugarer converts them into Core code. This complicates the type checker as it has to handle the much more verbose Haskell AST, but it improves error messages, as those messages are based on the same structure that the user sees.
PS: I am not sure where to place `hoist`.
> First and foremost, errors derived from macro expansion or ill-typed expressions can be reported in the same way the user sees them (that is, with the original source location and specific error messages).
Passerine reports the original location of all error messages because it uses a span-based error reporting system. In essence, every artifact produced by the compiler keeps track of the set of spans of source code used to produce it. This allows for accurate error reporting everywhere in the compiler, including at runtime.
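For illustration, a span-based setup boils down to something like the following sketch (the field and method names here are assumptions for this example, not passerine's actual API):

```rust
// Illustrative sketch of span-based error reporting; the names below are
// assumptions for this example, not passerine's actual API.

/// A region of the original source, as byte offsets.
#[derive(Debug, Clone, Copy)]
pub struct Span {
    pub start: usize,
    pub end: usize,
}

impl Span {
    /// Combine two spans into the smallest span covering both.
    /// Useful when a compiler artifact is built from several pieces of source.
    pub fn combine(a: Span, b: Span) -> Span {
        Span {
            start: a.start.min(b.start),
            end: a.end.max(b.end),
        }
    }
}

/// Any compiler artifact paired with the source region it came from.
#[derive(Debug)]
pub struct Spanned<T> {
    pub item: T,
    pub span: Span,
}

/// An error can always point back at the original source,
/// no matter which pass (or the runtime) produced it.
#[derive(Debug)]
pub struct CompileError {
    pub message: String,
    pub span: Span,
}
```

Because every `Spanned` artifact carries its source region along, any pass (or the runtime) can build an error that points back at the code the user actually wrote.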
> Secondly, the compilation can fail without doing further passes if the program is syntactically incorrect or ill-typed.
This, however, is a good thing! My only concern is that operating on the parse tree (for type inference, specifically) may add extra redundancy to those passes. For example, there are a few constructs like:
```
3 . double . print
-- the same as
print (double 3)
```
If type inference were done before desugaring, we'd have to handle both of these cases separately, even though they're both just different ways to write function calls.
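To make the redundancy concrete, here's a hypothetical sketch (the `Pipe` and `Call` variant names are invented for this example) of inference over the raw parse tree; if inference ran after desugaring instead, only the `Call` arm would be needed:

```rust
// Hypothetical parse-tree fragment; `Pipe` stands in for the `x . f` sugar.
enum Expr {
    Number(f64),
    Var(String),
    Call { fun: Box<Expr>, arg: Box<Expr> },
    Pipe { arg: Box<Expr>, fun: Box<Expr> }, // `3 . double` sugar
}

#[derive(Debug)]
enum Type {
    Number,
    // ...
}

fn infer(expr: &Expr) -> Result<Type, String> {
    match expr {
        Expr::Number(_) => Ok(Type::Number),
        Expr::Var(_) => todo!("look up the variable's type"),
        // Both arms below do exactly the same work; if inference ran after
        // desugaring, the `Pipe` arm would not exist at all.
        Expr::Call { fun, arg } => infer_call(fun, arg),
        Expr::Pipe { arg, fun } => infer_call(fun, arg),
    }
}

fn infer_call(_fun: &Expr, _arg: &Expr) -> Result<Type, String> {
    todo!("unify the function's parameter type with the argument's type")
}
```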
> PS: I am not sure where to place `hoist`.
Hoist is a fairly simple step that consistently renames all variables that refer to the same value to the same name, and builds a set of which variables are captured by each function/scope. This step needs to happen before `infer`, because type inference expects 'normalized' values, so to speak. Macro expansion doesn't need normalized values, but moving `infer` before expansion makes it easier to manage hygiene: instead of mangling symbols, we simply have to introduce a new unique symbol.
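A rough sketch of the renaming half of that idea, under the assumption of plain string names and a counter for uniqueness (the real pass would also record which variables each scope captures):

```rust
use std::collections::HashMap;

// Illustrative-only sketch of the renaming half of hoisting: every variable
// that refers to the same binding gets the same unique name, so later passes
// (and macro hygiene) never have to mangle symbols.
#[derive(Debug, Clone)]
enum Expr {
    Var(String),
    Lambda { param: String, body: Box<Expr> },
    Call { fun: Box<Expr>, arg: Box<Expr> },
}

struct Hoister {
    counter: usize,
}

impl Hoister {
    /// Produce a fresh, globally unique name based on the original one.
    fn fresh(&mut self, base: &str) -> String {
        self.counter += 1;
        format!("{}#{}", base, self.counter)
    }

    /// Rename every variable to the unique name of the binding it refers to.
    fn rename(&mut self, expr: Expr, scope: &HashMap<String, String>) -> Expr {
        match expr {
            Expr::Var(name) => {
                // Free variables are left untouched in this sketch.
                let name = scope.get(&name).cloned().unwrap_or(name);
                Expr::Var(name)
            }
            Expr::Lambda { param, body } => {
                let unique = self.fresh(&param);
                let mut inner = scope.clone();
                inner.insert(param, unique.clone());
                Expr::Lambda {
                    param: unique,
                    body: Box::new(self.rename(*body, &inner)),
                }
            }
            Expr::Call { fun, arg } => Expr::Call {
                fun: Box::new(self.rename(*fun, scope)),
                arg: Box::new(self.rename(*arg, scope)),
            },
        }
    }
}
```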
I guess the main idea is that desugar does not convert the parse tree to a small core language; it just reduces the tree to a normalized representation that is easier to work with in later passes. Some compilers would include the equivalent of passerine's `desugar` step in the parser; doing this and removing the desugar step might be the best course of action.
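For example, folding the desugaring of the `.` form into the parser could look roughly like this (illustrative names only): the parser builds a plain call node directly, so there is no sugar left for a later pass to rewrite.

```rust
// Illustrative only: if the parser emits already-normalized nodes, the
// separate desugar pass has nothing left to do for this construct.
enum Expr {
    Number(f64),
    Var(String),
    Call { fun: Box<Expr>, arg: Box<Expr> },
    // no `Pipe` variant: the sugar never reaches the tree
}

/// Parse `lhs . rhs` by immediately building the call `rhs lhs`.
fn parse_pipe(lhs: Expr, rhs: Expr) -> Expr {
    Expr::Call {
        fun: Box::new(rhs),
        arg: Box::new(lhs),
    }
}
```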