Submit Your Feature Requests and Ideas
If you have ideas, wishes, feature requests, or feedback, please add them in the comments below.
Can I also inject javascript objects from the main process into the QuickJS runtime? I'm building a node express based data server, where I want to allow clients to run javascript queries on that data (https://github.com/SimplyEdit/SimplyStore/)
Hey @poef
Yes, you can - see Data Exchange Between Host and Guest.
You can use `env` to provide strings, numbers, arrays, objects, and functions:
```typescript
const { evalCode } = await createRuntime({
  env: {
    MY_PROCESS_ENV: 'some environment variable provided by the host',
    KV: {
      set: (key: string, value: string) => keyValueStoreOnHost.set(key, value),
      get: (key: string) => keyValueStoreOnHost.get(key),
    },
  },
})
```
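For completeness, the host-side `keyValueStoreOnHost` from the snippet above can simply be a `Map`. This is a minimal sketch of the host side only; the guest would then call `env.KV.set(...)` / `env.KV.get(...)` from inside the sandbox (the sandbox call itself is omitted here):

```typescript
// Host-side key-value store backing the env.KV bridge from the snippet above.
const keyValueStoreOnHost = new Map<string, string>();

// The two functions exposed to the guest via env.KV - the guest only ever
// sees these callables, never the Map itself.
const KV = {
  set: (key: string, value: string) => void keyValueStoreOnHost.set(key, value),
  get: (key: string) => keyValueStoreOnHost.get(key),
};

KV.set('greeting', 'hello from host');
console.log(KV.get('greeting')); // the guest reads the same value via env.KV.get
```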
Read standard input to V8's d8 (/proc/PID/fd/0) with WebAssembly. Right now I'm using QuickJS via os.system() https://github.com/guest271314/native-messaging-d8/blob/quickjs-stdin-read/nm_d8.js#L16, https://github.com/guest271314/native-messaging-d8/blob/quickjs-stdin-read/read_d8_stdin.js. If we can do this with WebAssembly, we can get rid of os.system(), which calls sh.
@guest271314 thanks for your feedback. As far as I understand, if the "regular" readline Node.js module were available inside of the QuickJS runtime, it would solve the issue, right? So, the code inside the sandbox would look like this:
```javascript
const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
  terminal: false,
});

rl.on('line', (line) => {
  console.log(line);
});

rl.once('close', () => {
  // end of input
});
```
> As far as I understand, if the "regular" readline Node.js module were available inside of the QuickJS runtime
Where do Node.js APIs get into your WASM QuickJS build?
The idea is to use the least amount of resources to read d8 shell standard input. QuickJS (quickjs-ng) is around 1.2 MB.
I was thinking I could use WebAssembly/WASI to read stdin to d8 using WebAssembly.compile().
I'm basically trying to do this https://github.com/guest271314/native-messaging-webassembly without a WASM runtime, using the built-in WebAssembly object alone - within d8. I created a different solution for Mozilla SpiderMonkey.
What I meant is that you can provide functions and data from a regular Node.js host application to the wasm guest system. Basically, you would read standard input in the host application and then provide this data to the guest system as a copy. What you are trying to do is disable the isolation of the wasm and allow direct access to the host system - kind of disabling security and extending WASI in the direction of WASIX. In this project, this is not wanted: the JavaScript should run in an isolated sandbox, ensuring that it is not possible to break out and access host functionality directly in an uncontrolled manner.
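For the native-messaging case from earlier in the thread, the host-side bridging described here could look like the following sketch. The host decodes the 4-byte little-endian length prefix that Chromium native messaging uses and would then hand the decoded message to the guest as plain data (the function names are made up for illustration; the sandbox hand-off itself is omitted):

```typescript
// Chromium native messaging frames each message as:
//   a 4-byte little-endian length, followed by `length` bytes of UTF-8 JSON.
// The host decodes a frame and passes the resulting plain object to the
// guest (e.g. via `env`), so the sandbox never touches stdin itself.
function decodeNativeMessagingFrame(buf: Buffer): { message: unknown; rest: Buffer } {
  const length = buf.readUInt32LE(0);
  const body = buf.subarray(4, 4 + length).toString('utf8');
  return { message: JSON.parse(body), rest: buf.subarray(4 + length) };
}

// Encoding helper for the opposite direction (host -> extension).
function encodeNativeMessagingFrame(message: unknown): Buffer {
  const body = Buffer.from(JSON.stringify(message), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32LE(body.length, 0);
  return Buffer.concat([header, body]);
}
```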
> What I meant is that you can provide functions and data from a regular Node.js host application to the wasm guest system.
It doesn't make sense to me to use a 108.7 MB node executable that depends on V8 to read standard input to V8's d8 at 37.9 MB.
I use deno and bun and qjs and tjs the same way that I use node, so I don't think of node as a "regular Node.js application". node is just another JavaScript runtime in the JavaScript toolbox for me.
That's why I chose qjs at 1.2 MB to do the task.
I have been trying to do this using d8's readline(), though have not succeeded yet.
I saw your work and this issue requesting features and decided to place a feature request.
> In this project, this is not wanted: the JavaScript should run in an isolated sandbox, ensuring that it is not possible to break out and access host functionality directly in an uncontrolled manner.
I don't think it is possible to achieve that requirement. I have broken out of too many alleged "sandboxes" to think for a moment that it can't be done in this case, too.
Thanks!
Hey, no worries - it's totally fine to open such issues, even if I can't help here. Can I ask what general use case you would like to achieve?
I don't have a use case for running applications in a "sandbox".
I generally break out of sandboxes that folks try to set up.
For people interested in "sandboxed" code, we already have that with Worker and WebAssembly, in and out of the browser, plus the SharedWorker and Worklet interfaces in the browser.
Runtime Limits
- **Set Limits** - Maximum memory; max CPU time as a decimal between 0-1, where 1 means it is allowed to consume 100% of the core/process it is running on.
- **Observability** - It should be possible to poll the sandbox to find out how much memory is being consumed (either as a value or as a percentage), and likewise for CPU time.
- **Informative Errors** - If the sandbox is destroyed because it exceeds the max memory or uses too much CPU, a clean, informative error should be thrown (according to the new design it might be good to throw this at both the `runtime.runSandboxed` level as well as at the `sandbox.evalCode` level).
I don't think it's currently possible to include anything like Deno's --allow-net feature in Node. However, instead of marshaling (to and from) a fetch replacement that imposes this limitation within JavaScript, it would be desirable to see whether there is any way to impose such a limitation on the sandbox runtime itself. If it were a separate process, there are ways to do this from the OS, but I have not found a compatible solution for worker threads. If there were any possible way to do this, it would be very good.
Hey @digipigeon thanks for your feedback. There was a similar question recently: Limit CPU and memory usage. The idea of polling from outside is interesting, but there is no simple working solution for it in Node. As long as the eval function is running, the host side is essentially blocked. If no eval is executing, it is already possible to get memory information. As the intention is to have a highly controlled and isolated sandbox, even if it were technically possible to allow direct net access, I don't think I will add it (at least not by default). The current focus is to provide an environment that is as close as possible to Node and similar runtimes.
Hi @sebastianwessel, would you consider implementing https://nodejs.org/api/worker_threads.html#performanceeventlooputilizationutilization1-utilization2 as an alternative to accomplish something similar?
@digipigeon Not directly in this library, as the focus is on providing a sandbox, data (de-)serialization, runtime compatibility inside the sandbox, and developer experience (DX).
The developer should be free to choose the method that fits best. My recommendation is to use libraries that are specialized for this, such as the poolifier-web-worker package, which is used in the Server Example here.
For instance, the usage in the browser might differ from that in the backend. In the browser, you might need only one sandbox, while in the backend, as many sandboxes as possible are required.
Can there be fuel metering like in wasmtime?
Since it's a similar concept in terms of running JS in an isolated environment using some engine compiled to WASM.
@aashutoshrathi as quickjs is used, there is the option to do something like this:
```typescript
let interruptCycles = 0

runtime.setInterruptHandler(() => ++interruptCycles > 1024)
```
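The snippet above can be wrapped into a small fuel-style helper. This is a sketch with made-up names; only the interrupt-handler contract (return `true` to interrupt execution) comes from the snippet:

```typescript
// A "fuel tank" in the wasmtime sense: every interrupt-handler tick burns
// one unit, and the handler asks the runtime to interrupt execution once
// the tank is empty (returning true from the handler interrupts).
function createFuelHandler(fuel: number): () => boolean {
  let remaining = fuel;
  return () => --remaining < 0;
}

const outOfFuel = createFuelHandler(1024);
// runtime.setInterruptHandler(outOfFuel) // wiring as in the snippet above
```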
@sebastianwessel and I can use interruptCycles as fuel here?
Kind of - it highly depends on your use case and what you like to achieve I guess.
Personally, I do not see a real-world use case where it makes sense to count such things, because you would need to know up front what is executed in the sandbox in order to find the correct value. It is simply the wrong layer to control such things, imo. When it comes to resource consumption, you would probably need to do it at the WebAssembly level, meaning you would need to configure node/bun/the browser to restrict the wasm resources.
Please make this possible: https://github.com/justjake/quickjs-emscripten/tree/main/examples/cloudflare-workers
@bitnom Interesting. Cloudflare Workers used as a server?
I use Cloudflare workerd, QuickJS-NG, and Bytecode Alliance's javy separately.
Javy depends on QuickJS and compiles JavaScript to WASM.
The Cap'n Proto configuration model of Cloudflare is, in my opinion, unnecessarily complicated when JSON would suffice.
Since Javy supports WASI, I think in theory you should be able to use the socket interface.
Did you consider extracting fs, test runner, TS support, etc., into a separate package to keep the core library lighter and more focused?
> Did you consider extracting fs, test runner, TS support, etc., into a separate package to keep the core library lighter and more focused?
No. At the moment, I’m focusing more on stabilization than optimization. I think the first steps are to separate synchronous and asynchronous processes and (maybe) introduce some kind of dynamic Node module (import from 'node:***') selection.
Regarding size: I guess the required duplication for sync/async handling and Node compatibility modules is much more significant than the few extra lines needed for TypeScript support or the test runner.
Maybe some kind of config and build script could be useful, so that you can selectively include only what you really need and generate a custom, optimized build. That might be the best approach 🤷‍♂️
Re TypeScript support in Node.js: node depends on Amaro, which depends on SWC. Basically, the types are just stripped.
@guest271314 Think we are good with the regular typescript package, as it works in browsers as well. It is also an optional dependency and not shipped by default.
Why use Microsoft TypeScript tsc if it's not needed? bun doesn't use tsc to parse TypeScript syntax. Nor does Facebook's Static Hermes, which can also be compiled to WASM. I don't think Deno does either, except for deno check. You could just strip the types from TypeScript input like those JavaScript runtimes do.
@guest271314 well, because it is simple, functional, and always 100% in sync with the latest TypeScript version.
For example, if you have a TypeScript enum in your code, it won’t work in current Node versions, as TypeScript sometimes requires more than just stripping away types.
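The enum case becomes clear when you look at what tsc emits. Deleting the `enum` keyword would leave no runtime value behind, so a transform has to generate the reverse-mapped object. The block below is the standard tsc output for a two-member numeric enum, written out by hand as a sketch:

```typescript
// Source: enum Color { Red, Green }
// tsc must EMIT this runtime object with forward and reverse mappings;
// merely stripping the `enum` keyword would leave `Color` undefined.
var Color: any;
(function (Color: any) {
  Color[Color["Red"] = 0] = "Red";
  Color[Color["Green"] = 1] = "Green";
})(Color || (Color = {}));

console.log(Color.Red); // 0
console.log(Color[0]);  // "Red"
```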
What is the issue with regular typescript package? In most cases - especially in the backend - it is already in the (dev)dependencies
Not so long ago, Microsoft TypeScript lagged in supporting resizable ArrayBuffer, which had already shipped in the browser, Node.js, and Bun.
It's a dependency.
TypeScript ultimately gets executed by JavaScript runtimes after stripping or "transforming" types.
Bun can execute WASM directly. Deno and Bun can execute AssemblyScript directly.
If the idea is a "sandbox" where TypeScript and JavaScript can be executed, Deno already provides that with the builtin permissions policies and flags.
To me the idea of using QuickJS is to get away from all kinds of dependencies.
Anyway, good luck!
@sebastianwessel Got it, makes sense that stabilization is the priority right now. My interest isn’t really about size or Node compatibility, but more about having a solid single-responsibility core library, namely a JS sandbox with an event loop. Extra features like Node compatibility or a test runner could then live as separate packages (wrappers, plugins, etc).
The issue with the event loop is that it must be inside of the wasm. I believe that a (fake) event loop on the host side is the wrong place and will never become a stable, solid solution.
From the host's perspective, the execution of the wasm is synchronous (blocking). This limitation comes on top of the other restricted possibilities for controlling something inside the sandbox from the host. Here we rely heavily on the implementation of QuickJS itself. The same goes for streaming, networking access, and so on.
As an example: something "simple" like an execution timeout is simply not possible from the host, as there is no "kill wasm" in Node. Also, adding logic that could trigger such a kill switch is hard to implement when the wasm execution is blocking the host event loop.
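One common workaround for the missing kill switch is to move the blocking execution into a worker thread, which the host can terminate. This is a generic sketch, not this library's API; the sandbox call is stood in for by an eval'd code string, and `runWithTimeout` is a made-up helper name:

```typescript
import { Worker } from 'node:worker_threads';

// Running the blocking work in a worker thread keeps the host event loop
// responsive, and worker.terminate() acts as the otherwise-missing
// "kill wasm" switch.
async function runWithTimeout(code: string, ms: number): Promise<'finished' | 'killed'> {
  const worker = new Worker(code, { eval: true });
  let killed = false;
  const timer = setTimeout(() => {
    killed = true;
    void worker.terminate();
  }, ms);
  timer.unref(); // do not keep the host process alive just for the timeout
  await new Promise<void>((resolve) => worker.once('exit', () => resolve()));
  clearTimeout(timer);
  return killed ? 'killed' : 'finished';
}

// A runaway `while (true);` inside the worker is reliably terminated:
// await runWithTimeout('while (true);', 200) // resolves to 'killed'
```

The trade-off is worker startup cost per execution, which is why pooling (e.g. via the poolifier-web-worker package mentioned later in the thread) is usually layered on top.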
There are other interesting projects, like winterjs & co, which implement more or less the full Node.js functionality.