Refactor bad code, build a testable specification and write a compiler guide
The Neon Genesis Evolution Plan
"Though the North Sea is far away, one may yet ride the whirlwind there; though the morning is gone, it is not too late at sunset." -- Preface to the Pavilion of Prince Teng (滕王阁序). It is better late than never.
Introduction
The lack of compiler documentation and of a language specification with elaborate, testable examples are long-standing obstacles that hinder the Nim language from gaining the popularity it deserves. Compiler documentation gives developers an overview of the compiler internals and helps them step into the development of the Nim compiler. A language specification clarifies how the language works. Armed with detailed, testable examples, it can prevent regressions and serve as a precise guide for developers, demonstrating that each feature works as described.
This RFC is intended to be progressive and to grow gradually. The work should start with commenting the compiler code. A testable specification can be rolled out at the same time. Finally, the comments and the specification should help form a guide summarizing how the Nim compiler works.
The tasks are divided into three steps: comment the compiler code, build a testable specification and write a compiler guide.
Comment the compiler code
Compiler code without comments tends to be hard to understand. Comments help developers gain insight into why the code is written the way it is and how it is supposed to work. Comments can become obsolete and misleading as time goes by; however, it is worthwhile to update them periodically, since the code and its comments will be read thousands of times.
The Nim CI should have a way to calculate the comment ratio of the code, ideally per pull request. The comment ratio is defined as comment lines / total lines (excluding empty lines).
- [ ] add the comment ratio metric to the Nim CI
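As a sketch of what such a metric could look like (the proc name and the exact counting rules below are assumptions, not part of this RFC), the ratio can be computed per file roughly like this:

```nim
import std/strutils

# Hypothetical sketch: comment ratio = comment lines / total non-empty lines.
# What exactly counts as a comment line (e.g. multiline comments, trailing
# comments) is an open question the CI metric would need to pin down.
proc commentRatio(source: string): float =
  var total, comments = 0
  for rawLine in source.splitLines:
    let line = rawLine.strip()
    if line.len == 0:
      continue                    # empty lines are excluded
    inc total
    if line.startsWith("#"):      # covers both `#` and `##` doc comments
      inc comments
  if total == 0: result = 0.0
  else: result = comments / total

when isMainModule:
  let src = """
# a comment
proc foo() =
  ## a doc comment
  discard
"""
  doAssert commentRatio(src) == 0.5   # 2 comment lines out of 4 non-empty
```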
Contributions to the compiler documentation should take the form of pull requests, each commenting a certain module or piece of functionality in the Nim compiler.
- [x] mConStrStr => https://github.com/nim-lang/Nim/pull/20257
- [ ] todo ...
Build a testable specification
A specification should describe what should work or fail, with abundant testable examples. It should cover as many cases as possible.
As a convention, a `describe` block can be used. The implementation starts from:

```nim
template describe*(x: string, body: typed) =
  block:
    body
```

It may be extended for better documentation generation in the future. The testable specification for the Nim manual should be put under the `tests/spec/manual` directory. A simple example is given below:
```nim
# manual_bool_type.nim
import stdtest/specutils

describe "The internal integer value of a Boolean type":
  describe "0 for false":
    doAssert ord(false) == 0
  describe "1 for true":
    doAssert ord(true) == 1

describe "A Boolean type is an Ordinal type":
  doAssert bool is Ordinal
  doAssert true is Ordinal
  doAssert false is Ordinal

describe "The size of a Boolean type is one byte":
  doAssert sizeof(bool) == 1
  doAssert sizeof(true) == 1
  doAssert sizeof(false) == 1
```
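The specification should cover what must fail as well as what must work. A hedged sketch (using a minimal stand-in for the proposed `stdtest/specutils` helper, and `compiles` to assert that ill-typed code is rejected):

```nim
# Minimal stand-in for the proposed stdtest/specutils `describe` template.
template describe(x: string, body: typed) =
  block:
    body

describe "A bool is not implicitly convertible to int":
  # `compiles` returns false because the let binding is ill-typed.
  doAssert not compiles((let x: int = true; x))

describe "Ordinal operations work on bool":
  doAssert succ(false) == true
  doAssert high(bool) == true
```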
- [ ] Lexical Analysis
- [ ] Encoding
- [ ] Indentation
- [ ] Comments
- [ ] Multiline comments
- [ ] Identifiers & Keywords
- [ ] Identifier equality
- [ ] Keywords as identifiers
- [ ] String literals
- [ ] Triple quoted string literals
- [ ] Raw string literals
- [ ] Generalized raw string literals
- [ ] Character literals
- [ ] Numeric literals
- [ ] Custom numeric literals
- [ ] Operators
- [ ] Other tokens
- [ ] Syntax
- [ ] Associativity
- [ ] Precedence
- [ ] Dot-like operators
- [ ] Grammar
- [ ] Order of evaluation
- [ ] Constants and Constant Expressions
- [ ] Restrictions on Compile-Time Execution
- [ ] Types
- [ ] Ordinal types
- [ ] Pre-defined integer types
- [ ] Subrange types
- [ ] Pre-defined floating-point types
- [ ] Boolean type
- [ ] Character type
- [ ] Enumeration types
- [ ] String type
- [ ] cstring type
- [ ] Structured types
- [ ] Array and sequence types
- [ ] Open arrays
- [ ] Varargs
- [ ] Unchecked arrays
- [ ] Tuples and object types
- [ ] Object construction
- [ ] Object variants
- [ ] cast uncheckedAssign
- [ ] Set type
- [ ] Bit fields
- [ ] Reference and pointer types
- [ ] Nil
- [ ] Mixing GC'ed memory with ptr
- [ ] Procedural type
- [ ] Distinct type
- [ ] Modeling currencies
- [ ] Avoiding SQL injection attacks
- [ ] Auto type
- [ ] Type relations
- [ ] Type equality
- [ ] Subtype relation
- [ ] Convertible relation
- [ ] Assignment compatibility
- [ ] Overload resolution
- [ ] Overloading based on 'var T'
- [ ] Lazy type resolution for untyped
- [ ] Varargs matching
- [ ] iterable
- [ ] Overload disambiguation
- [ ] Named argument overloading
- [ ] Statements and expressions
- [ ] Statement list expression
- [ ] Discard statement
- [ ] Void context
- [ ] Var statement
- [ ] Let statement
- [ ] Tuple unpacking
- [ ] Const section
- [ ] Static statement/expression
- [ ] If statement
- [ ] Case statement
- [ ] When statement
- [ ] When nimvm statement
- [ ] Return statement
- [ ] Yield statement
- [ ] Block statement
- [ ] Break statement
- [ ] While statement
- [ ] Continue statement
- [ ] Assembler statement
- [ ] Using statement
- [ ] If expression
- [ ] When expression
- [ ] Case expression
- [ ] Block expression
- [ ] Table constructor
- [ ] Type conversions
- [ ] Type casts
- [ ] The addr operator
- [ ] The unsafeAddr operator
- [ ] Procedures
- [ ] Export marker
- [ ] Method call syntax
- [ ] Properties
- [ ] Command invocation syntax
- [ ] Closures
- [ ] Creating closures in loops
- [ ] Anonymous procedures
- [ ] Do notation
- [ ] Func
- [ ] Routines
- [ ] Type bound operators
- [ ] Nonoverloadable builtins
- [ ] Var parameters
- [ ] Var return type
- [ ] Future directions
- [ ] NRVO
- [ ] Overloading of the subscript operator
- [ ] Methods
- [ ] Multi-methods
- [ ] Inhibit dynamic method resolution via procCall
- [ ] Iterators and the for statement
- [ ] Implicit items/pairs invocations
- [ ] First-class iterators
- [ ] Converters
- [ ] Type sections
- [ ] Exception handling
- [ ] Try statement
- [ ] Try expression
- [ ] Except clauses
- [ ] Custom exceptions
- [ ] Defer statement
- [ ] Raise statement
- [ ] Exception hierarchy
- [ ] Imported exceptions
- [ ] Effect system
- [ ] Exception tracking
- [ ] EffectsOf annotation
- [ ] Tag tracking
- [ ] Side effects
- [ ] GC safety effect
- [ ] Effects pragma
- [ ] Generics
- [ ] Is operator
- [ ] Type classes
- [ ] Implicit generics
- [ ] Generic inference restrictions
- [ ] Symbol lookup in generics
- [ ] Open and Closed symbols
- [ ] Mixin statement
- [ ] Bind statement
- [ ] Delegating bind statements
- [ ] Templates
- [ ] Typed vs untyped parameters
- [ ] Passing a code block to a template
- [ ] Varargs of untyped
- [ ] Symbol binding in templates
- [ ] Identifier construction
- [ ] Lookup rules for template parameters
- [ ] Hygiene in templates
- [ ] Limitations of the method call syntax
- [ ] Macros
- [ ] Debug example
- [ ] bindSym
- [ ] Post-statement blocks
- [ ] For loop macro
- [ ] Case statement macros
- [ ] Special Types
- [ ] static[T]
- [ ] typedesc[T]
- [ ] typeof operator
- [ ] Modules
- [ ] Import statement
- [ ] Include statement
- [ ] Module names in imports
- [ ] Collective imports from a directory
- [ ] Pseudo import/include paths
- [ ] From import statement
- [ ] Export statement
- [ ] Scope rules
- [ ] Block scope
- [ ] Tuple or object scope
- [ ] Module scope
- [ ] Packages
- [ ] Compiler Messages
- [ ] Pragmas
- [ ] deprecated pragma
- [ ] compileTime pragma
- [ ] noreturn pragma
- [ ] acyclic pragma
- [ ] final pragma
- [ ] shallow pragma
- [ ] pure pragma
- [ ] asmNoStackFrame pragma
- [ ] error pragma
- [ ] fatal pragma
- [ ] warning pragma
- [ ] hint pragma
- [ ] line pragma
- [ ] linearScanEnd pragma
- [ ] computedGoto pragma
- [ ] immediate pragma
- [ ] redefine pragma
- [ ] compilation option pragmas
- [ ] push and pop pragmas
- [ ] register pragma
- [ ] global pragma
- [ ] Disabling certain messages
- [ ] used pragma
- [ ] experimental pragma
- [ ] Implementation Specific Pragmas
- [ ] Bitsize pragma
- [ ] Align pragma
- [ ] Noalias pragma
- [ ] Volatile pragma
- [ ] nodecl pragma
- [ ] Header pragma
- [ ] IncompleteStruct pragma
- [ ] Compile pragma
- [ ] Link pragma
- [ ] passc pragma
- [ ] localPassC pragma
- [ ] passl pragma
- [ ] Emit pragma
- [ ] ImportCpp pragma
- [ ] Namespaces
- [ ] Importcpp for enums
- [ ] Importcpp for procs
- [ ] Wrapping constructors
- [ ] Wrapping destructors
- [ ] Importcpp for objects
- [ ] ImportJs pragma
- [ ] ImportObjC pragma
- [ ] CodegenDecl pragma
- [ ] cppNonPod pragma
- [ ] compile-time define pragmas
- [ ] User-defined pragmas
- [ ] pragma pragma
- [ ] Custom annotations
- [ ] Macro pragmas
- [ ] Foreign function interface
- [ ] Importc pragma
- [ ] Exportc pragma
- [ ] Extern pragma
- [ ] Bycopy pragma
- [ ] Byref pragma
- [ ] Varargs pragma
- [ ] Union pragma
- [ ] Packed pragma
- [ ] Dynlib pragma for import
- [ ] Dynlib pragma for export
- [ ] Threads
- [ ] Thread pragma
- [ ] Threadvar pragma
- [ ] Threads and exceptions
- [ ] Guards and locks
- [ ] Guards and locks sections
- [ ] Protecting global variables
- [ ] Protecting general locations
These examples should generate pretty HTML documentation. They can be linked from the Nim manual or included in the Nim manual in foldable form (folded by default).
Write a compiler guide
A compiler guide helps developers learn the internals of the Nim compiler without wasting time reading through the compiler code. What is a magic function? How are Nim methods implemented? Is it efficient to use `&` to concatenate multiple strings? These are the questions a compiler guide should attempt to answer. The guide should focus on explaining specific terms and the gist of the implementation.
For instance:
The `&` proc is a magic proc used to concatenate strings and chars. The Nim compiler performs some optimizations to make it perform as well as the in-place version.
There are four overloads of `&` in the system module.
```nim
proc `&`*(x: string, y: char): string {.magic: "ConStrStr", noSideEffect.}
proc `&`*(x, y: char): string {.magic: "ConStrStr", noSideEffect.}
proc `&`*(x, y: string): string {.magic: "ConStrStr", noSideEffect.}
proc `&`*(x: char, y: string): string {.magic: "ConStrStr", noSideEffect.}
```
All of them share the same magic, `mConStrStr`, which is needed in the optimization phases. Consider:
```nim
s = "Hello " & name & ", how do you feel" & 'z'
```
Here is the AST of the right-hand side expression:
```nim
StmtList
Infix
Ident "&"
Infix
Ident "&"
Infix
Ident "&"
StrLit "Hello "
Ident "name"
StrLit ", how do you feel"
CharLit 122
```
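The tree above can be reproduced with `dumpTree` from `std/macros`, which prints the parsed tree at compile time. To make the output assertable, a small helper macro (`astOf` is a hypothetical name, not a stdlib proc) can capture the same `treeRepr` as a string:

```nim
import std/[macros, strutils]

# Hypothetical helper: returns the tree representation of an untyped
# expression as a string literal (same shape as `dumpTree` prints).
macro astOf(e: untyped): string =
  newLit(e.treeRepr)

# `name` need not exist: the argument is untyped, so it is never looked up.
const tree = astOf("Hello " & name & ", how do you feel" & 'z')

doAssert "Ident \"&\"" in tree
doAssert "CharLit 122" in tree   # 'z' has ordinal value 122
echo tree
```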
- [x] Is it efficient to use `&` to concatenate multiple strings? => https://ringabout.github.io/neon/concatenate
- [ ] todo ...
I wholeheartedly agree with this. And in fact (if someone tells me how), I'm willing to put up a bounty of up to $1,000 USD, depending on the amount of work done.
> Compiler code without comments tends to be hard to understand. Comments help developers gain the insights of why the code is written like this and how it probably works. Comments could be obsolete and meaningless as time goes by. However, it is worthwhile updating the comments periodically since the code and comments will be read thousands of times.
Just as a forewarning, since this has been a massive sticking point for folks at my workplace: is the intent to go beyond doc comments into comments inside macros/procs/templates? My experience is that comments within functions get outdated first, as somebody updates how a function achieves its result without changing its outcome. Doc comments have a somewhat better rate of staying current and correct. That's why I tend to steer away from comments that aren't doc comments.
I haven't looked at the compiler, but is it an option to have the comments bit only be about doc comments?
Doc comments still need to explain every quirk in the procs because the Nim compiler is too complex. Some procs have hundreds or even thousands of lines. It is painful to go through all that code without comments.
Nim does generate documentation for the compiler procs => https://nim-lang.org/docs/compiler/astalgo.html
> I wholeheartedly agree with this. And in fact (if someone tells me how), I'm willing to put up a bounty of up to $1,000 USD, depending on the amount of work done.

I'd pitch in as well (with a smaller amount I can afford).
> I wholeheartedly agree with this. And in fact (if someone tells me how), I'm willing to put up a bounty of up to $1,000 USD, depending on the amount of work done.
>
> I'd pitch in as well (with a smaller amount I can afford).

I actually missed this one. Given that I'm already throwing money into the ecosystem, I might as well pitch into this bit as well. Which... err... leads me to the question of whether we have a mechanism to deal with such an idea of a "community sponsored bounty", or whether we best handle this all individually.
> if we have a mechanism to deal with such an idea of a "community sponsored bounty" or if we best handle this all individually.
There is. See the bounty for the Interfaces proposal => https://github.com/nim-lang/RFCs/issues/39#issuecomment-1018706344
In terms of documentation, one of the scariest parts of the compiler is anything that does semantic checking (`sem*.nim`), and also `sigmatch`. I dread going there.
I also missed the part about the closure iterator transformation. It's an important area for async, laden with bugs due to the complex interactions with defer/try+except/blocks ...
> The Nim CI should have a measure to calculate the comment ratio of the code.
Current breakdown of the compiler documentation (single-hash comments are not considered documentation): [comment ratio chart]
I'm going to push the setup to set an example for later updates when this plan is accepted.
Another obvious statistic while we are at it: this heatmap shows the correlation of edits between different parts of the project code. The left side shows the total number of edits of each file (at the time of analysis). Each cell in the heatmap represents the percentage of times the bottom file was edited together with the one on the left.

[Heatmap: left side ordered by total number of edits]

[Heatmap: ordered by total number of co-edits with different files]
A compiler description in general should also include guidelines about the relationships between different parts of the compiler: not just who calls whom in what order, but which changes are introduced in the semantic layer, how types defined in one part are used in another, and how to improve certain parts. People come to the project with questions, and it is important to anticipate common ones, such as "how do I improve error message X?", "there is an RFC Z that I really like and want to help implement; where do I start?", "I changed something in place A and now test B breaks; what might be wrong?", and so on.
@haxscramper Just curious - how did you produce those heat maps and charts?
I'm writing a tool for this kind of analysis https://github.com/haxscramper/code_forensics - for now it is a proof of concept, but I'm working on more stats such as an average commit message length
[Heatmaps: files edited together 100 or more, 150 or more, and 75 or more times]
Someone might say it is obvious, but for someone who just came to the project, this kind of structural description can be very valuable.
I prefer massive refactorings over massive documentation efforts. I know that we can do both, but if I have to choose, I take "refactorings".
- [ ] Rewrite the codegen to be based on a stream of tokens, not based on Ropes. (Memory consumption, performance.)
- [ ] Change how `TLineInfo` is stored so that we can save space and support big files at the same time.
- [ ] Investigate which features are unreasonably costly to maintain for the benefits they bring and phase them out for whatever Nim version that is allowed to break things.
But the biggest elephant in the room is probably "phase ordering issues". We need a set of refactorings so that the transformation pipeline is simple and correct. Then "who calls whom" might also be clean enough so that it doesn't need that much documentation.
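To illustrate the rope-versus-token-stream distinction, here is a toy sketch (the compiler's actual `Rope` type in `compiler/ropes` differs from the stdlib `std/ropes` module used here, and the token representation is purely illustrative):

```nim
import std/ropes

# Rope-based emission (current approach): fragments are linked into a
# tree and only flattened when the final text is rendered.
var r: Rope
r.add rope("int x = ")
r.add rope("42;")
doAssert $r == "int x = 42;"

# Token-stream emission (proposed): the generator appends small flat
# tokens; rendering is one linear pass, with no intermediate tree.
var toks: seq[string]
toks.add ["int x = ", "42;"]
var rendered = ""
for t in toks:
  rendered.add t
doAssert rendered == "int x = 42;"
```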
I agree that commenting alone is not the answer; it is better to have a clear code structure. The point "Rewrite bad code; don't write comments" from https://www.youtube.com/watch?v=NLebZ3XT92E&t=308s seems fair.
The refactorings should have been part one. I would like to investigate refactorings that make the compiler code more readable and more accessible to newcomers.
`sigmatch.paramTypesMatch` has a total of >>NINE<< one-character variables (m, f, a, x, y, z, c, r, i), all involved in a complex best-candidate argument search algorithm; that's about a third of the English alphabet. Good to know the algorithm's complexity has such a robust limiting mechanism: at most you can write code that is three times as complex, not more, so I'm not sure we even need to talk about refactoring, to be honest.
@haxscramper I don't understand. A refactoring can be as simple as "rename a variable".
For the reference - it was a joke, although maybe someone can interpret it as an example starting point or example of how code should not be written. I was just blown away by the sheer number of one-char variables in what appears to be one of the very important procedures in one of the most complex parts of the compiler (semantic analysis). This appears to be a common theme in the sem, with proliferation of undescriptive names that make it harder to intuit what the code was meant to do.
> Rewrite the codegen to be based on a stream of tokens, not based on Ropes. (Memory consumption, performance.)
I might work on it in a few days, since it doesn't sound too difficult to me. I suppose it does the reverse of what the lexer does.