Server maintenance
Once we've merged https://github.com/HaxeFoundation/haxe/pull/8730 we'll be able to poll the connected socket to see if there's a pending request. This should allow us to implement a pattern like so:
```ocaml
let rec loop block =
  match read_from_socket block with
  | Some data ->
    process_data data
  | None ->
    if there_is_more_maintenance_to_do () then begin
      do_some_small_maintenance ();
      loop false
    end else
      loop true
in
loop false
```
This assumes that the socket read function returns `Some data` if there's data, and `None` otherwise.
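For illustration, here is one way such a read function could be sketched on top of `Unix.select`, using the timeout to switch between blocking and non-blocking polling. The function and parameter names mirror the pseudocode above, but the buffer size and the string-based return type are assumptions, not the actual server API:

```ocaml
(* Hypothetical sketch: poll [sock] for a pending request.
   If [block] is true, wait indefinitely; otherwise return immediately. *)
let read_from_socket sock block =
  let timeout = if block then -1.0 else 0.0 in
  match Unix.select [sock] [] [] timeout with
  | [], _, _ ->
    (* Nothing readable right now *)
    None
  | _ ->
    let buf = Bytes.create 4096 in
    let len = Unix.recv sock buf 0 (Bytes.length buf) [] in
    if len > 0 then Some (Bytes.sub_string buf 0 len)
    else None (* 0 bytes read: peer closed the connection *)
```

With `block = false`, the `select` call returns immediately, which is what lets the loop fall through to maintenance work when no request is waiting.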
It is important to keep maintenance tasks small so an incoming request isn't delayed too much. Such tasks could include:
- Do some GC maintenance (https://github.com/HaxeFoundation/haxe/pull/8727).
- Walk a type's fields and check whether `cf_expr_unoptimized` is equal to `cf_expr`, in which case we can unset it (this currently nearly doubles the memory required for typed AST storage).
- Check if we have compilation contexts which haven't been accessed in a long time and discard them.
- Once we have a binary format (https://github.com/HaxeFoundation/haxe/issues/8275), check if we have some data which hasn't been used for a while and could be "demoted" to binary to save memory.
Generally, the goal is to keep memory usage at an acceptable level while not disrupting operations.
This requires some design to abstract the maintenance tasks.
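As a rough sketch of what that abstraction could look like, a simple task queue would let the loop ask "is there more maintenance to do?" and run one small task at a time. All names here are illustrative, not actual compiler API:

```ocaml
(* Hypothetical maintenance-task abstraction. Each task should be
   small enough that an incoming request isn't delayed too much. *)
type maintenance_task = {
  name : string;
  run : unit -> unit;
}

let task_queue : maintenance_task Queue.t = Queue.create ()

let register_task name run =
  Queue.push { name; run } task_queue

let there_is_more_maintenance_to_do () =
  not (Queue.is_empty task_queue)

(* Run exactly one pending task, then return to polling the socket. *)
let do_some_small_maintenance () =
  if not (Queue.is_empty task_queue) then
    (Queue.pop task_queue).run ()
```

Running one task per loop iteration, rather than draining the queue, is what keeps the latency for the next request bounded by the largest single task.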
Something else we can do during this downtime: run `cl_restore` on our cached classes.
Another thing: register a task for each directory when exploring class paths. This would improve startup time, at the risk of missing some types when a toplevel completion is requested very quickly.
This is mostly done, or partially obsolete with the changed caching.
There's still:

> Check if we have compilation contexts which haven't been accessed in a long time and discard them.

which could now be saved to disk instead of being discarded. Should we open a separate issue for this, or is it not worth adding anyway?
Hmm yes, that one might be a good idea still. I'll reopen.