Improvements to the build process for ARM chips or wasm builds
Why are we not utilizing the ARM chip for esbuild?
esbuild can use @esbuild/darwin-arm64 or @esbuild/linux-arm64 to make builds on ARM chips faster. Right now the build uses wasm whenever the architecture is not x64, and that is very slow on ARM machines.
If esbuild can choose the correct native package itself, why are we defaulting to wasm when the architecture is not x64?
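For reference, the matching native package name simply follows the host platform and architecture; this is a rough illustration, not esbuild's actual resolution code:

import os from 'os';

// e.g. "@esbuild/darwin-arm64" on Apple Silicon, "@esbuild/linux-arm64" on ARM Linux
const nativePkg = `@esbuild/${os.platform()}-${os.arch()}`;
console.log(nativePkg);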
We use wasm in our builds because we don't want to use a binary provided by node_modules (and can't on our CI/build infrastructure). This is why all non-x86 architectures use WASM.
Because esbuild-wasm is about 10 times slower, and on top of that, due to the memory issue, you build every file separately in the wasm path.
Maybe we can add semaphore logic there to at least process multiple files together; that would be much faster than building each one on its own.
@Mgrdich That splitting of pages has to happen because the wasm build runs out of memory otherwise -- it only has a 32-bit address space. The native x86 build can build everything in one go, which is an order of magnitude faster, but also doesn't fit into 32 bits :cry:
Yeah, what I mean is that we can still use wasm but only build, let's say, 3 files concurrently. This will not cause memory to run out, and we can still build some things concurrently, which should be faster than the current approach. @martinpitt
For the record: I don't have any objections against adding @esbuild/linux-arm64 and adjusting ./build.js to use the native one, if there is someone (like you) who wants to develop cockpit on ARM. We just insist on also having the wasm fallback, to keep the package buildable on any architecture (as often happens in e.g. Fedora).
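A minimal sketch of what that adjustment could look like, assuming build.js can use top-level await and that @esbuild/linux-arm64 gets added to package.json; the dynamic-import pattern here is an assumption, only the useWasm check exists in build.js today:

// prefer the native esbuild package (which pulls in @esbuild/<platform>-<arch>),
// fall back to esbuild-wasm so the package stays buildable on any architecture
let esbuild;
let useWasm = false;
try {
    esbuild = await import('esbuild'); // uses the native platform binary
} catch {
    esbuild = await import('esbuild-wasm'); // portable fallback
    useWasm = true; // keep the per-page splitting for the wasm path
}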
@martinpitt That would be great, but we can still use wasm with some concurrency without overloading the memory.
https://github.com/cockpit-project/cockpit/blob/21a978fd2ecb9fe002a24b7fe09c7651d449ea9a/build.js#L155-L186
Instead of the snippet above, we could do it like this:
if (useWasm) {
    const numEntries = entryPoints.length;
    const batchSize = 3; // number of concurrent builds per batch

    for (let i = 0; i < numEntries; i += batchSize) {
        const batch = entryPoints.slice(i, i + batchSize);
        const promises = [];
        const contexts = [];

        // Build each entry point in the batch concurrently
        for (const [index, entryPoint] of batch.entries()) {
            // position in the full entryPoints list, so that the "first" and
            // "last" plugins run exactly once overall, not once per batch
            const globalIndex = i + index;
            console.log("building", entryPoint);
            const context = await esbuild.context({
                ...pkgOptions,
                entryPoints: [entryPoint],
                plugins: [
                    ...(globalIndex === 0 ? pkgFirstPlugins : []),
                    ...pkgPlugins,
                    ...(globalIndex === numEntries - 1 ? pkgLastPlugins : []),
                ],
            });
            contexts.push(context);
            promises.push(context.rebuild());
        }

        // wait for the whole batch before starting the next one
        await Promise.all(promises);
        for (const context of contexts)
            await context.dispose();
    }

    // Build all tests in one go after building the entry points
    console.log("building qunit tests");
    const testContext = await esbuild.context({
        ...qunitOptions,
        entryPoints: testEntryPoints,
        plugins: [
            cockpitTestHtmlPlugin({ testFiles: tests }),
        ],
    });
    await testContext.rebuild();
    await testContext.dispose();
}
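For comparison, here is a minimal sketch of the true semaphore variant mentioned above: it keeps at most `limit` rebuilds in flight and starts a new one as soon as any finishes, instead of waiting for whole batches. The `runLimited` helper and `buildOnePage` are hypothetical names, not part of build.js:

async function runLimited(tasks, limit) {
    const executing = new Set();
    for (const task of tasks) {
        const p = task().finally(() => executing.delete(p));
        executing.add(p);
        // once `limit` builds are running, wait for one to finish
        if (executing.size >= limit)
            await Promise.race(executing);
    }
    await Promise.all(executing); // drain the remaining builds
}

// usage: one task per entry point, at most 3 concurrent wasm builds
// await runLimited(entryPoints.map(ep => () => buildOnePage(ep)), 3);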
TBH I'm sceptical -- I tried this back when we moved to esbuild and got a lot of crashes even with moderate parallelism. Even with single-page builds we occasionally get crashes. Of course you are welcome to send a PR and we can try it -- but I think for seriously developing on ARM you really rather want the native version anyway.
@martinpitt
One last question: as far as I know, esbuild is usually used for the dev environment. For example, a bundler like Vite uses esbuild for dev and Rollup for production. Why choose esbuild for the production build?
Well, what can I say... there aren't that many bundlers; esbuild works, is super fast, and is rather nice to work with, so we replaced webpack with it. If there's some new hotness which is better than esbuild, we can certainly consider it, but someone needs to do the work to examine it. We have some plugins which would be a non-trivial amount of work to port to a new bundler, such as the one that builds the translation files.
Some "john39" in our Matrix channel just said (not sure if that's you @Mgrdich
The recent builds of wasm are being provided in 64bit as well, so going 64 bit on all of esbuild would help to raise the memory limit of esbuild.
I just did a very quick test:
--- build.js
+++ build.js
@@ -9,7 +9,7 @@ import process from 'process';
import { getFiles, getTestFiles, all_subdirs } from './files.js';
const production = process.env.NODE_ENV === 'production';
-const useWasm = os.arch() != 'x64';
+const useWasm = true;
// ensure node_modules is present and up to date
child_process.spawnSync('tools/node-modules', ['make_package_lock_json'], { stdio: 'inherit' });
@@ -152,7 +152,7 @@ async function build() {
: [],
];
- if (useWasm) {
+ if (false) {
// build each entry point individually, as otherwise it runs out of memory
// See https://github.com/evanw/esbuild/issues/3006
const numEntries = entryPoints.length;
and with that, ./build.js explodes.
I thought the 32-bit address space limitation was a property of the wasm VM itself, not of esbuild; but I'm happy to be wrong. If we can change something in the build system or npm dependencies, I'm all ears!