Native constructors
I believe native constructors are possible through v8::FunctionTemplate. It would allow users to create instances from native code. You can SetClassName (which results in new Date().constructor.name === 'Date') and set a prototype on it, among other things.
Ideally, it would map to a golang struct somehow. Maybe that's too hard; I'm not overly concerned with it.
What's the relationship between this and Interceptors? I'll need to do some research to understand the concepts involved here.
I'm not sure, I don't know enough about those either.
I have been looking for inspiration at C++ libraries that provide a higher-level abstraction over v8, like https://github.com/pmed/v8pp.
I wonder if we might be able to use one of them directly here to simplify the code and make it more robust.
What's the use case here? Is it for directly mapping go structs to V8 without copying or JSON encoding them?
I don't think we can get away without copying the memory and still have super-fast V8 access since go memory is managed and moved around on-the-fly.
With accessor interceptors, we could avoid copying them initially, but anytime they are read from V8 we'd have to call back into go and re-copy the latest value.
I'm experimenting with this as an alternative for running arbitrary user code in v8 to handle HTTP requests (and more, eventually.)
Right now we use node.js with isolated-vm.
We have an extensive "base" environment which defines and shims a lot of the Web API (Request, Response, Headers, URL, FetchEvent, etc.) which isn't available in v8 natively.
My thought was to make this all native. For instance, a user could do let u = new URL(path, base) and it would call a golang function as a constructor, and the returned object would be a wrapped golang struct. In v8, the following would be true: u instanceof URL.
URL is not the best of examples; something with cryptography probably makes a lot more sense. The other advantage to native functions is the "base heap" allocated by each context. If you have a massive snapshot (right now our product's snapshot is about 700KB) and a big global object (right now it sits at about 3MB), then you're already using quite a bit of heap. If you run 10+ contexts in the same isolate, then each has its own global object weighing in at 3MB. This is all theory, because I don't know exactly how native function bindings affect the heap.
Of course, the alternative is to do what we already do today: create the constructors and everything in JS and only call out to C++ for specific functions (well, in our case this goes back to node: v8 -> C++ -> node (v8)).
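For illustration, here's a rough sketch of that style using this package's callback API. The exact signatures (NewIsolate, NewContext, Bind, Global().Set, Eval, Create) are assumed from memory, and the net/url-backed _parseURL helper is purely hypothetical:

```go
// Sketch only: "constructor in JS, Go for the heavy lifting" approach.
// Signatures of NewIsolate/NewContext/Bind/Global/Set/Eval/Create are assumed.
package main

import (
	"fmt"
	"net/url"

	v8 "github.com/augustoroman/v8" // assumed import path
)

// parseURL is the Go function that the JS URL shim calls out to.
func parseURL(in v8.CallbackArgs) (*v8.Value, error) {
	if len(in.Args) < 1 {
		return nil, fmt.Errorf("parseURL: missing argument")
	}
	u, err := url.Parse(in.Args[0].String())
	if err != nil {
		return nil, err
	}
	// Copy the parsed fields into a plain JS object -- this is the copying
	// step that true native constructors might let us avoid.
	return in.Context.Create(map[string]string{
		"protocol": u.Scheme,
		"host":     u.Host,
		"pathname": u.Path,
	})
}

func main() {
	ctx := v8.NewIsolate().NewContext()

	// Expose the Go function, then build the constructor itself in JS.
	if err := ctx.Global().Set("_parseURL", ctx.Bind("_parseURL", parseURL)); err != nil {
		panic(err)
	}
	shim := `
		class URL {
			constructor(input) {
				Object.assign(this, _parseURL(String(input)));
			}
		}
		new URL("https://example.com/a/b").pathname;
	`
	res, err := ctx.Eval(shim, "url-shim.js")
	if err != nil {
		panic(err)
	}
	fmt.Println(res) // -> /a/b
}
```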
I think it's safe to categorize this issue as much further along the roadmap.
More generally, this would also be handy to define an API largely through Go types, obviating the need to either maintain JS representations of your objects or to provide some kind of global mapping layer through either objects or bound functions.
A few thoughts:
- V8 <--> native C/C++ code is very fast. However, V8 <--> native go code has substantially higher overhead. This is due to the cgo function-calling overhead as well as the memory incompatibilities -- we generally need to copy between go and C memory because of go's runtime.
- In terms of convenience for exposing a Go API to v8, this can almost already be done now. It basically requires some code to automatically convert an arbitrary go function into a v8.Callback.
That is, create a v8.Callback that does the following (roughly sketched below):
- Use reflection to inspect all of the arguments of the target function. For each argument, write a function that converts a *v8.Value to that type (like the inverse of .Create). Need to think about how to handle too many or too few args. What about pointers?
- Call the target function with the args.
- Use reflection to inspect the return values of the function and convert those to *v8.Values. Special handling around error values. Think about multiple return values (perhaps just disallow them?).
Then enhance .Bind to do this wrapping automatically.
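To make that concrete, here's a minimal sketch of such a wrapper, assuming the v8.Callback / v8.CallbackArgs / (*v8.Context).Create signatures mentioned above; fromValue (the inverse of .Create) is a hypothetical helper and is left as a stub:

```go
// Sketch only: wrapping an arbitrary Go function into a v8.Callback via
// reflection. The v8.Callback, v8.CallbackArgs, and Create signatures are
// assumed; fromValue is a hypothetical *v8.Value -> Go conversion helper.
package v8util

import (
	"fmt"
	"reflect"

	v8 "github.com/augustoroman/v8" // assumed import path
)

var errType = reflect.TypeOf((*error)(nil)).Elem()

// Wrap turns fn (which must be a function) into a v8.Callback.
func Wrap(fn interface{}) v8.Callback {
	fnVal := reflect.ValueOf(fn)
	fnType := fnVal.Type()

	return func(in v8.CallbackArgs) (*v8.Value, error) {
		// Reject too many or too few args for now; variadics, optional args,
		// and pointers would need more thought.
		if len(in.Args) != fnType.NumIn() {
			return nil, fmt.Errorf("expected %d args, got %d", fnType.NumIn(), len(in.Args))
		}

		// Convert each *v8.Value into the Go type the target function expects.
		args := make([]reflect.Value, fnType.NumIn())
		for i := range args {
			argPtr := reflect.New(fnType.In(i))
			if err := fromValue(in.Args[i], argPtr.Interface()); err != nil {
				return nil, fmt.Errorf("arg %d: %v", i, err)
			}
			args[i] = argPtr.Elem()
		}

		// Call the target function.
		out := fnVal.Call(args)

		// Special handling for a trailing error return value.
		if n := len(out); n > 0 && fnType.Out(n-1) == errType {
			if err, _ := out[n-1].Interface().(error); err != nil {
				return nil, err
			}
			out = out[:n-1]
		}

		// Convert the remaining return values; disallow more than one for now.
		switch len(out) {
		case 0:
			return nil, nil
		case 1:
			return in.Context.Create(out[0].Interface())
		default:
			return nil, fmt.Errorf("multiple return values are not supported")
		}
	}
}

// fromValue converts a *v8.Value into the Go value pointed to by dst. It is
// the hypothetical inverse of (*v8.Context).Create; a real version might
// switch on dst's kind or round-trip through JSON.
func fromValue(val *v8.Value, dst interface{}) error {
	_ = val
	return fmt.Errorf("fromValue: not implemented for %T", dst)
}
```

With something like this in place, the enhanced .Bind could accept roughly Wrap(func(a, b int) int { return a + b }) and handle the argument and return-value conversions automatically.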