Optimizations in the Bytecode Compiler
The compilation of Janet terms into bytecode for its abstract virtual machine provides a nice venue for peephole optimizations: small passes that recognize inefficient sequences of bytecode instructions and replace them with cheaper equivalents during compilation, resulting in more efficient code overall.
Available Optimizations
ldi + add to addim
Here's an example of two functionally equivalent functions:
(fn [x] (+ x 10))
(fn [x] (def y 10) (+ x y))
However, the first function disassembles into two operations, @[(addim 1 0 10) (ret 1)], and the second into three: @[(ldi 1 10) (add 2 0 1) (ret 2)].
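For reference, these listings can be reproduced at the REPL with the core disasm function, whose result includes a :bytecode entry holding the instruction tuples shown above (exact registers may vary between Janet versions):

(def f1 (fn [x] (+ x 10)))
(def f2 (fn [x] (def y 10) (+ x y)))

# Pull the instruction listing out of the disassembly struct.
(pp (get (disasm f1) :bytecode)) # @[(addim 1 0 10) (ret 1)]
(pp (get (disasm f2) :bytecode)) # @[(ldi 1 10) (add 2 0 1) (ret 2)]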
A peephole optimizer could determine that the integer in the ldi instruction is only referred to by the subsequent add instruction, and fold it directly into an addim instruction, so that the two functions compile to identically efficient code.
Of course, equivalent optimizations could be implemented for other numeric operations.
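To sketch the idea (this is only an illustration over the symbolic bytecode shown above, not code from the compiler), a pass could scan adjacent instruction pairs and rewrite the matching ones. The register check below is deliberately naive: it treats any later mention of the register as a use and only handles one operand order, whereas a real pass would want proper liveness analysis:

(defn reg-used?
  "Conservatively report whether reg appears as any operand in instrs."
  [instrs reg]
  (some (fn [ins] (find |(= $ reg) (tuple/slice ins 1))) instrs))

(defn peephole
  "Fold (ldi r v) followed by (add d a r) into (addim d a v)."
  [bytecode]
  (def out @[])
  (var i 0)
  (while (< i (length bytecode))
    (def cur (get bytecode i))
    (def nxt (get bytecode (+ i 1)))
    (if (and nxt
             (= 'ldi (get cur 0))
             (= 'add (get nxt 0))
             (= (get cur 1) (get nxt 3))
             # The loaded register must not appear in any later instruction.
             (not (reg-used? (tuple/slice bytecode (+ i 2)) (get cur 1))))
      (do (array/push out ~(addim ,(get nxt 1) ,(get nxt 2) ,(get cur 2)))
          (+= i 2))
      (do (array/push out cur)
          (++ i))))
  out)

(peephole '@[(ldi 1 10) (add 2 0 1) (ret 2)])
# -> @[(addim 2 0 10) (ret 2)]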
The current compiler is quite dumb and could definitely be improved with several peephole optimizers as well as code inlining. I have been hesitant to dive into much optimization so far for the sake of debugging, but I suppose this kind of peephole optimization should be fairly harmless.
@bakpakin Do you anticipate that, if work proceeds toward more optimizations, it would become infeasible or impractical to keep a compilation mode that performs no optimizations?
If practical, that seems nicer from a debugging perspective, as you mentioned. Not just for the bytecode debugger: if a source-level step debugger were to come about, it would also benefit from an unoptimized compilation mode.
The optimizations that have been added shouldn’t affect debugging much; if they do, that’s a bug. There isn’t any plan to add optimizations that will prevent debugging.
Thanks!