Continuous deployment of node nightly executable only for GitHub and CDN
What is the problem this feature will solve?
Continuous deployment of node nightly executable only for GitHub and CDN
Node.js evidently does not provide a download for the node executable only (https://github.com/nodejs/node/discussions/42593#discussioncomment-2526986, https://www.reddit.com/r/node/comments/tvoi8x/is_there_a_nodejs_download_link_for_only_the_node/).
What is the feature you are proposing to solve the problem?
A continuous build of only the node nightly executable.
What alternatives have you considered?
In Chromium and Chrome browsers, where the File System Access API is supported, I can get the node executable out of the binary downloads myself. The reason for this issue is that I do not need the rest of the files in the binary or source tarball downloads:
let fileSystemHandle = await showSaveFilePicker({ suggestedName: 'node' }),
  writable,
  writer;
await import('https://cdnjs.cloudflare.com/ajax/libs/pako/2.0.4/pako.min.js');
try {
  // throws, see https://github.com/InvokIT/js-untar/issues/30, handle error;
  // untar is still defined globally
  await import('https://unpkg.com/js-untar/build/dist/untar.js');
} catch (e) {
  console.log(e);
} finally {
  // https://stackoverflow.com/a/65448758
  // trying to avoid fetching files that won't be used
  const request = await fetch('node-v16.14.2-linux-x64.tar.gz');
  // Download gzipped tar file and get ArrayBuffer
  const buffer = await request.arrayBuffer();
  // Decompress gzip using pako; pako returns a Uint8Array
  const decompressed = pako.inflate(buffer);
  // Untar; js-untar returns a list of files
  // (See https://github.com/InvokIT/js-untar#file-object for details)
  const files = await untar(decompressed.buffer);
  writable = await fileSystemHandle.createWritable();
  writer = writable.getWriter();
  const file = files.find(
    ({ name }) => name === 'node-v16.14.2-linux-x64/bin/node'
  ).buffer;
  await writer.write(file);
  await writer.close();
}
Should probably be moved to nodejs/build?
Whatever division you think will actually provide a remedy.
@bnoordhuis This is probably better suited for nodejs/build. Can you kindly transfer the issue there?
I actually thought there would already be a continuous build and download of the official node executable.
Some context:
"... run Node.js, entirely inside your browser" nodejs/node#658
WebContainers run Node.js entirely inside your browser, not on a remote server or local binary.
We currently do not expose a way to use WebContainer outside of StackBlitz.com
I am trying to make that possible, as FOSS.
I can move it to the build repo, but there would be a lot of related work (both one-time and per-release) to support a whole additional set of download artifacts across the platforms. It has been discussed in previous contexts (runtime versus sdk, for example) and has never gotten traction for that reason.
@bnoordhuis there is already a nightly build and download, but the tarball includes more than just the node executable itself. @guest271314 is asking for an additional download artifact which is just the binary, so that they don't have to extract it from what is already available.
@guest271314 what it comes down to is volunteers being willing to spend their time/effort on this. The available volunteer time to keep the build infra going is limited. Is this important enough to you that you would be willing to volunteer your time to create/modify scripts to extract the binary, serve it at an endpoint and then keep the required infra running? If there is a volunteer who will take on the work that might help the discussion move forward.
Some work
download_node.js
let fileSystemHandle = await showSaveFilePicker({ suggestedName: 'node' }),
  writable,
  writer,
  file;
// await import('https://cdnjs.cloudflare.com/ajax/libs/pako/2.0.4/pako.min.js');
let status = await fileSystemHandle.requestPermission({ mode: 'readwrite' });
console.log(status);
class PaxHeader {
  constructor(buffer) {
    this._fields = this.parse(buffer);
  }
  applyHeader(file) {
    // Apply fields to the file
    // If a field is of value null, it should be deleted from the file
    // https://www.mkssoftware.com/docs/man4/pax.4.asp
    this._fields.forEach((field) => {
      let fieldName = field.name;
      let fieldValue = field.value;
      if (fieldName === 'path') {
        // This overrides the name and prefix fields in the following header block.
        fieldName = 'name';
        if (file.prefix !== undefined) {
          delete file.prefix;
        }
      } else if (fieldName === 'linkpath') {
        // This overrides the linkname field in the following header block.
        fieldName = 'linkname';
      }
      if (fieldValue === null) {
        delete file[fieldName];
      } else {
        file[fieldName] = fieldValue;
      }
    });
  }
  parse(buffer) {
    // https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.bpxa500/paxex.htm
    // An extended header shall consist of one or more records, each constructed as follows:
    // "%d %s=%s\n", <length>, <keyword>, <value>
    // The extended header records shall be encoded according to the ISO/IEC 10646-1:2000 standard (UTF-8).
    // The <length> field, <blank>, equals sign, and <newline> shown shall be limited to the portable character set, as
    // encoded in UTF-8. The <keyword> and <value> fields can be any UTF-8 characters. The <length> field shall be the
    // decimal length of the extended header record in octets, including the trailing <newline>.
    let decoder = new TextDecoder();
    let bytes = new Uint8Array(buffer);
    let fields = [];
    while (bytes.length > 0) {
      // Decode bytes up to the first space character; that is the total field length
      let fieldLength = parseInt(
        decoder.decode(bytes.subarray(0, bytes.indexOf(0x20)))
      );
      let fieldText = decoder.decode(bytes.subarray(0, fieldLength));
      let fieldMatch = fieldText.match(/^\d+ ([^=]+)=(.*)\n$/);
      if (fieldMatch === null) {
        throw new Error('Invalid PAX header data format.');
      }
      let fieldName = fieldMatch[1];
      let fieldValue = fieldMatch[2];
      if (fieldValue.length === 0) {
        fieldValue = null;
      } else if (fieldValue.match(/^\d+$/) !== null) {
        // If it is an integer field, parse it as int
        fieldValue = parseInt(fieldValue);
      }
      // Don't parse float values since precision is lost
      let field = {
        name: fieldName,
        value: fieldValue,
      };
      fields.push(field);
      bytes = bytes.subarray(fieldLength); // Cut off the parsed field data
    }
    return fields;
  }
}
class UntarStream {
  constructor(arrayBuffer) {
    this._bufferView = new DataView(arrayBuffer);
    this._position = 0;
  }
  readString(charCount) {
    //console.log("readString: position " + this.position() + ", " + charCount + " chars");
    let charSize = 1;
    let byteCount = charCount * charSize;
    let charCodes = [];
    for (let i = 0; i < charCount; ++i) {
      let charCode = this._bufferView.getUint8(this.position() + i * charSize);
      if (charCode !== 0) {
        charCodes.push(charCode);
      } else {
        break;
      }
    }
    this.seek(byteCount);
    return new TextDecoder().decode(new Uint8Array(charCodes));
  }
  readBuffer(byteCount) {
    let buf;
    if (typeof ArrayBuffer.prototype.slice === 'function') {
      buf = this._bufferView.buffer.slice(
        this.position(),
        this.position() + byteCount
      );
    } else {
      buf = new ArrayBuffer(byteCount);
      let target = new Uint8Array(buf);
      let src = new Uint8Array(
        this._bufferView.buffer,
        this.position(),
        byteCount
      );
      target.set(src);
    }
    this.seek(byteCount);
    return buf;
  }
  seek(byteCount) {
    this._position += byteCount;
  }
  peekUint32() {
    return this._bufferView.getUint32(this.position(), true);
  }
  position(newpos) {
    if (newpos === undefined) {
      return this._position;
    } else {
      this._position = newpos;
    }
  }
  size() {
    return this._bufferView.byteLength;
  }
}
class UntarFileStream {
  constructor(arrayBuffer) {
    this._stream = new UntarStream(arrayBuffer);
    this._globalPaxHeader = null;
  }
  hasNext() {
    return (
      this._stream.position() + 4 < this._stream.size() &&
      this._stream.peekUint32() !== 0
    );
  }
  next() {
    return this._readNextFile();
  }
  _readNextFile() {
    const stream = this._stream;
    let file = {};
    let isHeaderFile = false;
    let paxHeader = null;
    let headerBeginPos = stream.position();
    let dataBeginPos = headerBeginPos + 512;
    // Read header
    file.name = stream.readString(100);
    file.mode = stream.readString(8);
    file.uid = parseInt(stream.readString(8));
    file.gid = parseInt(stream.readString(8));
    file.size = parseInt(stream.readString(12), 8);
    file.mtime = parseInt(stream.readString(12), 8);
    file.checksum = parseInt(stream.readString(8));
    file.type = stream.readString(1);
    file.linkname = stream.readString(100);
    file.ustarFormat = stream.readString(6);
    if (file.ustarFormat.indexOf('ustar') > -1) {
      file.version = stream.readString(2);
      file.uname = stream.readString(32);
      file.gname = stream.readString(32);
      file.devmajor = parseInt(stream.readString(8));
      file.devminor = parseInt(stream.readString(8));
      file.namePrefix = stream.readString(155);
      if (file.namePrefix.length > 0) {
        file.name = file.namePrefix + '/' + file.name;
      }
    }
    stream.position(dataBeginPos);
    // Derived from https://www.mkssoftware.com/docs/man4/pax.4.asp
    // and https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.bpxa500/pxarchfm.htm
    switch (file.type) {
      case '0': // Normal file is either "0" or "\0".
      case '': // In case of "\0", readString returns an empty string, that is "".
        file.buffer = stream.readBuffer(file.size);
        break;
      case '1': // Link to another file already archived
        // TODO Should we do anything with these?
        break;
      case '2': // Symbolic link
        // TODO Should we do anything with these?
        break;
      case '3': // Character special device (what does this mean??)
        break;
      case '4': // Block special device
        break;
      case '5': // Directory
        break;
      case '6': // FIFO special file
        break;
      case '7': // Reserved
        break;
      case 'g': // Global PAX header
        isHeaderFile = true;
        this._globalPaxHeader = new PaxHeader(stream.readBuffer(file.size));
        break;
      case 'x': // PAX header
        isHeaderFile = true;
        paxHeader = new PaxHeader(stream.readBuffer(file.size));
        break;
      default:
        // Unknown file type
        break;
    }
    if (file.buffer === undefined) {
      file.buffer = new ArrayBuffer(0);
    }
    let dataEndPos = dataBeginPos + file.size;
    // File data is padded to reach a 512 byte boundary; skip the padded bytes too.
    if (file.size % 512 !== 0) {
      dataEndPos += 512 - (file.size % 512);
    }
    stream.position(dataEndPos);
    if (isHeaderFile) {
      // A PAX header describes the next entry; read that entry, then apply the header
      file = this._readNextFile();
    }
    if (this._globalPaxHeader !== null) {
      this._globalPaxHeader.applyHeader(file);
    }
    if (paxHeader !== null) {
      paxHeader.applyHeader(file);
    }
    return file;
  }
}
try {
  // https://stackoverflow.com/a/65448758
  // trying to avoid fetching files that won't be used
  const request = (
    await fetch('path/to/node-v17.9.0-linux-x64.tar.gz')
  ).body.pipeThrough(new DecompressionStream('gzip'));
  // Download the gzipped tar file, decompress it with the built-in
  // DecompressionStream (no pako needed), and get the ArrayBuffer
  const buffer = await new Response(request).arrayBuffer();
  // Untar; UntarFileStream iterates the files in the archive
  // (see https://github.com/InvokIT/js-untar#file-object for the file object shape)
  const files = new UntarFileStream(buffer);
  while (files.hasNext()) {
    file = files.next();
    if (/\/bin\/node$/.test(file.name)) {
      break;
    }
  }
  writable = await fileSystemHandle.createWritable();
  writer = writable.getWriter();
  await writer.write(file.buffer);
  await writer.close();
  console.log('Done');
} catch (e) {
  console.error(e);
}
fetch_node.js
(async () => {
  const handle = await showSaveFilePicker({
    suggestedName: 'node',
  });
  await handle.requestPermission({
    mode: 'readwrite',
  });
  const writable = await handle.createWritable();
  await (
    await fetch(
      'https://drive.google.com/uc?export=download&id=1n6UDMebtrlVYvwnMxART9wqS9j7uWa8E&confirm=t'
    )
  ).body.pipeTo(writable);
  console.log('Done');
})();
truncate_node.js
(async () => {
  const fileSystemHandle = await showSaveFilePicker({ suggestedName: 'node' });
  const writable = await fileSystemHandle.createWritable();
  await writable.write({
    type: 'truncate',
    size: 0,
  });
  await writable.close();
})();
script.js
const click = document.querySelector('h1');
var buffer,
  dir,
  status,
  file,
  folder,
  data,
  size,
  length,
  n = 0,
  binary_length = 8000000,
  json_length = 2097152,
  file_name = 'node_',
  binary_path = './node_binary/',
  json_path = './node_json/',
  readable,
  writable;
async function writeJSON() {
  dir = await showDirectoryPicker();
  status = await dir.requestPermission({ mode: 'readwrite' });
  file = await dir.getFileHandle('node', { create: false });
  folder = await dir.getDirectoryHandle('node_json', { create: true });
  console.log(dir, status, file, folder);
  data = await file.getFile();
  ({ size } = data);
  for (let i = 0; i < size; i += json_length) {
    const chunk = new Blob(
      [
        JSON.stringify([
          ...new Uint8Array(await data.slice(i, i + json_length).arrayBuffer()),
        ]),
      ],
      { type: 'application/json' }
    );
    const bin = await folder.getFileHandle(file_name + n++ + '.json', {
      create: true,
    });
    await chunk.stream().pipeTo(await bin.createWritable());
  }
  n = 0;
  return 'Done writing JSON files.';
}
async function readJSON() {
  // Unused helper kept from an earlier revision
  async function check() {
    return fetch(json_path + 'node_' + n++ + '.json')
      .then(({ body: readable }) => ({ readable }))
      .catch((e) => ({ readable: false }));
  }
  dir = await showDirectoryPicker();
  status = await dir.requestPermission({ mode: 'readwrite' });
  file = await dir.getFileHandle('node', { create: true });
  folder = await dir.getDirectoryHandle('node_json', { create: true });
  writable = await file.createWritable();
  while (true) {
    readable = await fetch(json_path + 'node_' + n++ + '.json', {
      cache: 'no-store',
    })
      .then((r) => r.json())
      .then((json) => new Response(new Uint8Array(json)).body)
      .catch((e) => false);
    if (readable === false) {
      break;
    }
    await readable.pipeTo(writable, { preventClose: true });
  }
  await writable.close();
  n = 0;
  return 'Done writing node from JSON files.';
}
async function writeBinary() {
  dir = await showDirectoryPicker();
  status = await dir.requestPermission({ mode: 'readwrite' });
  file = await dir.getFileHandle('node', { create: false });
  folder = await dir.getDirectoryHandle('node_binary', { create: true });
  console.log(dir, status, file, folder);
  data = await file.getFile();
  ({ size } = data);
  for (let i = 0; i < size; i += binary_length) {
    const chunk = data.slice(i, i + binary_length);
    const bin = await folder.getFileHandle(file_name + n++ + '.txt', {
      create: true,
    });
    await chunk.stream().pipeTo(await bin.createWritable());
  }
  n = 0;
  return 'Done writing binary files.';
}
async function readBinary() {
  // Unused helper kept from an earlier revision
  async function check() {
    return fetch(binary_path + 'node_' + n++ + '.txt')
      .then(({ body: readable }) => ({ readable }))
      .catch((e) => ({ readable: false }));
  }
  dir = await showDirectoryPicker();
  status = await dir.requestPermission({ mode: 'readwrite' });
  file = await dir.getFileHandle('node', { create: false });
  folder = await dir.getDirectoryHandle('node_binary', { create: false });
  writable = await file.createWritable();
  while (true) {
    readable = await fetch(binary_path + 'node_' + n++ + '.txt', {
      cache: 'no-store',
    })
      // A 404 does not reject fetch(); check r.ok so the loop terminates
      .then((r) => (r.ok ? r.body : false))
      .catch((e) => false);
    if (readable === false) {
      break;
    }
    await readable.pipeTo(writable, { preventClose: true });
  }
  await writable.close();
  n = 0;
  return 'Done writing node from binary files.';
}
click.onclick = async (e) => {
  console.log(await readBinary());
};
manifest.json
{
  "name": "node-nm",
  "version": "1.0",
  "manifest_version": 3,
  "permissions": [
    "nativeMessaging",
    "tabs",
    "activeTab",
    "scripting",
    "storage"
  ],
  "background": {
    "service_worker": "background.js",
    "type": "module"
  },
  "host_permissions": ["file:///*", "<all_urls>"],
  "web_accessible_resources": [{
    "resources": ["*.html", "*.js", "*.svg", "*.png", "*.php", "*.txt"],
    "matches": ["<all_urls>"],
    "extensions": []
  }],
  "externally_connectable": {
    "matches": [
      "*://*.github.com/*",
      "*://*.youtube.com/*"
    ],
    "ids": [
      "*"
    ]
  },
  "action": {}
}
example.js
#!/home/user/nm-node/node
// Might be good to use an explicit path to node on the shebang line
// in case it isn't in PATH when launched by Chrome
var sendMessage = require('./protocol')(handleMessage);
function handleMessage(req) {
  if (req.message === 'ping') {
    sendMessage({ message: 'pong', body: `hello from nodejs ${process.version} app` });
  }
  if (req.message === 'write') {
    var { exec } = require('child_process');
    exec(req.body, (err, stdout, stderr) => {
      if (err) {
        // node couldn't execute the command
        sendMessage({ message: 'stderr', body: err });
        return;
      }
      sendMessage({ message: 'output', body: stdout });
    });
  }
}
native.messaging.example.json
{
  "name": "native.messaging.example",
  "description": "Native Messaging Host Protocol Example",
  "path": "/home/user/nm-node/example.js",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/"
  ]
}
nm_node.js (Node.js is an expensive Native Messaging host at ~35MB; cf. ~3.5MB for the C++ equivalent, https://github.com/guest271314/captureSystemAudio/blob/master/native_messaging/capture_system_audio/capture_system_audio.cpp)
(async () => {
  self.port = chrome.runtime.connectNative('native.messaging.example');
  port.onMessage.addListener((req) => {
    if (chrome.runtime.lastError) {
      console.log(chrome.runtime.lastError.message);
    }
    handleMessage(req);
  });
  port.onDisconnect.addListener(() => {
    if (chrome.runtime.lastError) {
      console.log(chrome.runtime.lastError.message);
    }
    console.log('Disconnected');
  });
  port.postMessage({
    message: 'ping',
    body: 'hello from browser extension',
  });
  // Chromium Native Messaging and/or node bug:
  // the 2nd message is not logged when <200ms elapse between calls to exec()
  await new Promise((r) => setTimeout(r, 200));
  port.postMessage({
    message: 'write',
    body: 'ls',
  });
})();
function handleMessage(req) {
  console.log(req);
}
$ cd nm-node
$ touch node
$ chmod u+x example.js protocol.js node
$ cp native.messaging.example.json ~/.config/chromium/NativeMessagingHosts
https://github.com/guest271314/node_executable
See https://github.com/simov/native-messaging/blob/master/protocol.js
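For context, the wire format that protocol.js implements is the Chrome native messaging protocol: each JSON message on stdin/stdout is prefixed with a 32-bit little-endian byte length. A minimal sketch of such a helper, for illustration only (this is not the linked implementation, and the file name is hypothetical):
protocol_sketch.js
// Illustrative length-prefixed JSON codec for Chrome native messaging;
// a sketch, not the linked protocol.js
module.exports = function (handleMessage) {
  let buffered = Buffer.alloc(0);
  process.stdin.on('data', (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    // Each message is a 4-byte little-endian length followed by JSON
    while (buffered.length >= 4) {
      const length = buffered.readUInt32LE(0);
      if (buffered.length < 4 + length) break; // wait for the rest of the message
      handleMessage(JSON.parse(buffered.subarray(4, 4 + length).toString()));
      buffered = buffered.subarray(4 + length);
    }
  });
  return function sendMessage(message) {
    const payload = Buffer.from(JSON.stringify(message));
    const header = Buffer.alloc(4);
    header.writeUInt32LE(payload.length, 0);
    process.stdout.write(Buffer.concat([header, payload]));
  };
};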
@guest271314 what it comes down to is volunteers being willing to spend their time/effort on this. The available volunteer time to keep the build infra going is limited. Is this important enough to you that you would be willing to volunteer your time to create/modify scripts to extract the binary, serve it at an endpoint and then keep the required infra running? If there is a volunteer who will take on the work that might help the discussion move forward.
Yes.
It is more about what not to do, or what to do after the fact. That is: a) remove all steps after node is executable; or b) simply copy the node executable to a standalone (executable) file, either without archiving or archived as .tar.gz or .zip. I don't think it is that difficult, given that you are already archiving binary downloads.
Extract the tip-of-tree node executable and upload it here as discrete .txt files, as JSON, or as the binary itself, if GitHub allows ~80MB files. Last time I checked GitHub didn't, nor does gist; I tried with JSON at ~7MB per file: https://github.com/cli/cli/issues/5433.
The Google Drive/fetch() example is straightforward. After each nightly and dist build, do a) or b).
Am I missing some unknown complexity?
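For concreteness, a minimal sketch of what step b) could look like as a post-build step, assuming a default build layout where the compiled executable lands in out/Release/node; the file names and paths here are illustrative, not the actual nodejs/build scripts:
post_build_node_only.js
// Hypothetical post-build step for b); paths are assumptions
import { copyFileSync, chmodSync, createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

// copy just the executable out of the build tree
copyFileSync('out/Release/node', 'dist/node');
chmodSync('dist/node', 0o755);

// optionally publish a gzipped copy alongside it
createReadStream('dist/node')
  .pipe(createGzip())
  .pipe(createWriteStream('dist/node.gz'));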
How do you store the files on the server?
An option that would not affect what you are doing now would be to parse query string parameters, instead of making download links to just the executable, so that any file in releases and nightly could be fetched individually:
let { default: [node_nightly_build] } = await import(
  'https://nodejs.org/download/nightly/index.json',
  { assert: { type: 'json' } }
);
let { version, files } = node_nightly_build;
let get_file = '?get_file=node'; // or '?get_file=node.h'
// files lists the available platforms, e.g. files.find((prop) => /x64/.test(prop))
let url = `https://nodejs.org/download/nightly/${version}/node-${version}-linux-x64.tar.gz${get_file}`;
let node_nightly = await (await fetch(url, { cache: 'no-store' })).arrayBuffer();
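To illustrate, the server side of such a get_file parameter could stream a single member out of the tarball it already hosts. A hypothetical sketch in Node.js; the endpoint layout, file paths, and the deliberately minimal tar parsing are all assumptions, with no error handling:
get_file_server.js
// Hypothetical sketch of a ?get_file= endpoint; not the nodejs.org server
import { createServer } from 'node:http';
import { readFileSync } from 'node:fs';
import { gunzipSync } from 'node:zlib';

function extract(tar, suffix) {
  // Walk the 512-byte tar headers until an entry name ends with the suffix
  for (let offset = 0; offset + 512 <= tar.length; ) {
    const name = tar.subarray(offset, offset + 100).toString().replace(/\0[\s\S]*$/, '');
    if (name === '') break; // end-of-archive blocks
    const size = parseInt(tar.subarray(offset + 124, offset + 136).toString(), 8);
    if (name.endsWith(suffix)) {
      return tar.subarray(offset + 512, offset + 512 + size);
    }
    offset += 512 + Math.ceil(size / 512) * 512; // data is padded to 512 bytes
  }
  return null;
}

createServer((req, res) => {
  const { pathname, searchParams } = new URL(req.url, 'http://localhost');
  const wanted = searchParams.get('get_file');
  const tarball = readFileSync('.' + pathname); // e.g. ./node-v17.9.0-linux-x64.tar.gz
  if (!wanted) {
    res.end(tarball); // no parameter: serve the tarball as-is
    return;
  }
  const member = extract(gunzipSync(tarball), '/' + wanted);
  if (member === null) {
    res.statusCode = 404;
    res.end();
    return;
  }
  res.setHeader('content-type', 'application/octet-stream');
  res.end(member);
}).listen(8080);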
This issue is stale because it has been open many days with no activity. It will be closed soon unless the stale label is removed or a comment is made.