nomennescio
> Well, for me, if it was 50 millis that's a 50% performance reduction. But in your example it's more like 15%, which, if it held true on my computer, is...
Some more statistics in #2577
> > I'll also be doing a compression experiment with a `load-all` image; if we can practically start using that, then run-time loading of vocabs can be avoided altogether...
@mrjbq7 do you have any objections if I put this into the master branch already? We can always adapt things later.
> Well, a few things need to be addressed I guess if you want to eventually merge it:
>
> 1. Where does zstd.c come from? I don't see it...
I think it's important that we're on the same page here; do you understand that compression/decompression is done transparently in the VM loader, which determines whether to decompress by inspecting the header...
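To make the "inspect the header" idea concrete, here is a minimal sketch (not the actual patch) of how a loader can decide between a plain and a zstd-compressed image file. It assumes the compressed file is a single raw zstd frame, so its first four bytes are the zstd frame magic `0xFD2FB528`; the real VM code and on-disk layout may differ.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

#define ZSTD_FRAME_MAGIC 0xFD2FB528u /* little-endian bytes: 28 B5 2F FD */

/* Read an image file; if the header identifies a zstd frame,
   transparently decompress it, otherwise return the raw bytes. */
static void *load_image(const char *path, size_t *out_size)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    size_t raw_size = (size_t)ftell(f);
    rewind(f);

    unsigned char *raw = malloc(raw_size);
    if (!raw || fread(raw, 1, raw_size, f) != raw_size) {
        fclose(f);
        free(raw);
        return NULL;
    }
    fclose(f);

    uint32_t magic = 0;
    if (raw_size >= sizeof magic)
        memcpy(&magic, raw, sizeof magic); /* assumes a little-endian host */

    if (magic != ZSTD_FRAME_MAGIC) {
        /* Plain image: hand back the bytes unchanged. */
        *out_size = raw_size;
        return raw;
    }

    /* Compressed image: decompress into a freshly sized buffer. */
    unsigned long long full = ZSTD_getFrameContentSize(raw, raw_size);
    if (full == ZSTD_CONTENTSIZE_UNKNOWN || full == ZSTD_CONTENTSIZE_ERROR) {
        free(raw);
        return NULL;
    }
    unsigned char *image = malloc((size_t)full);
    size_t n = image ? ZSTD_decompress(image, (size_t)full, raw, raw_size) : 0;
    free(raw);
    if (!image || ZSTD_isError(n)) {
        free(image);
        return NULL;
    }
    *out_size = n;
    return image;
}
```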
> Yes, I also understand it's 1) a lot slower than uncompressed 2) not used by anyone by default 3) not tested with deploy 4) not necessarily the best approach...
> What is the point of implementing this if nothing ever uses it by default, if `save` doesn't save a compressed image, if `deploy` doesn't deploy a compressed image, if...
> Because if this is only so that you can load compressed images, it's not going to get merged.
>
> It would get merged if:
>
> 1. compressed...
Changed the code to `: hello-world ( -- ) "Hello World!" print flush ;` with no change in results. If I load the same vocabulary in the Listener, the output does...