
Migrate from Webpack to Vite

Open hyperupcall opened this issue 3 years ago • 27 comments

Description

New Edits

  • I have interactively rebased the previous commits and split them up (with extra comments in the Git commit descriptions) to hopefully make things easier to follow
  • As noted below, performance improves even more than the benchmarks indicate, due to upgrading from Vite 4.2 to 4.3!
  • There are no current issues with these changes (from my testing), so this PR has been undrafted :rocket:

Description

This migrates the build system from Webpack to Vite. The web ecosystem as a whole is moving away from Webpack to newer, faster, and more modern solutions like Vite.

Vite is different from Webpack. In general, Webpack is very lenient about what it accepts. For example, when ECMAScript Modules and CommonJS modules are mixed in a weird/incorrect way, Webpack tries to find some way to correctly resolve the imports, construct a module graph, and output the bundle.

On the other hand, Vite fails fast if there is something wrong with the imports. Internally, Vite uses the esbuild bundler for the development server and the Rollup bundler when bundling for production. Since there are two bundlers, imports and requires must work with both of them.

Many of the changes in this PR make the imports more precise so that both Rollup and esbuild are happy. There are still some issues, as mentioned further down.
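
To illustrate the kind of import precision the stricter bundlers care about, here is a minimal, self-contained sketch (the module name and export are made up for the demo) of how a CommonJS export surfaces when imported from ESM code. This is the behavior Node, esbuild, and Rollup agree on, and one of the patterns Webpack tends to paper over:

```javascript
// Hypothetical sketch: importing a CommonJS module from ESM code.
// The whole `module.exports` object surfaces as the *default* export;
// named imports only work when the bundler/runtime can statically
// detect the export names.
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { pathToFileURL } from 'node:url';

// Write a throwaway CJS module to a temp dir so this sketch is runnable.
const dir = mkdtempSync(join(tmpdir(), 'interop-'));
writeFileSync(join(dir, 'legacy.cjs'), "module.exports = { answer: 42 };");

const mod = await import(pathToFileURL(join(dir, 'legacy.cjs')).href);
console.log(mod.default.answer); // 42
```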

The Webpack npm scripts are replaced with Vite-specific ones: npm run vite:dev and npm run vite:build.

Fixes #871

Some statistics about bundle time and size are shown below for comparison:

Webpack vs Vite Stats

Note that the Vite numbers are actually significantly better since updating to Vite 4.3. Vite 4.3 performs 40-76% better in benchmarks compared to version 4.2!

Docker-Compose

Webpack:

  • command: docker-compose up
  • time: 1 minute 19 seconds

Vite:

  • command: docker-compose up
  • time: 48 seconds

Bundling

Webpack:

  • command: npm run webpack:build
  • time: 58 seconds
  • bundle size: 5.79 MiB (bundle.js)

Vite:

  • command: npm run vite:build
  • time: 17 seconds
  • bundle size: ~4.6 MiB (bundle.js)

Current Issues

  • ~~As mentioned in cde54ed7a8b2710a3732c9c999079ba72a838acf, Vite seemed to have a weird issue with running on port 9229 (it ran on port 9230 instead). As a workaround, port 9230 was also exposed in the docker-compose file. Since running the server through docker-compose seems to be the most documented method of running it, I didn't think this was a big deal, but maybe it should still be fixed.~~ (this has been fixed in eb0f7091c3ec53dacbe243967d94fc579e7d1d4b by modifying the docker-compose.yml file)

For Further Investigation

  • As mentioned in d0375a3399719df77214e10e485971224c31241d, plotly.js takes up about 3.5 MiB of space in the bundle. In the future, this could be addressed.
  • In the future, routes within RouteComponent.tsx could be lazy-loaded (React.lazy) for improved performance. This isn't related to the Vite migration, but the thought came up.
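
For the lazy-loading idea above: React.lazy is built on dynamic import(), so the wrapped module is only fetched and evaluated the first time the component renders, which lets Rollup split it into its own chunk. A minimal stand-in for the mechanism (no React needed; the route path in the comment is hypothetical):

```javascript
// A "lazy" factory that defers loading until first call and caches the result,
// mirroring what React.lazy does around a dynamic import().
const lazy = (loader) => {
  let cached;
  return async () => (cached ??= await loader());
};

// In RouteComponent.tsx this could look like (hypothetical path):
//   const UIOptionsComponent = React.lazy(() => import('./components/UIOptionsComponent'));
// Demo with a stdlib module so this sketch actually runs:
const loadPath = lazy(() => import('node:path'));
const path = await loadPath();
console.log(path.posix.join('a', 'b')); // a/b
```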

Type of change

(Check the ones that apply by placing an "x" instead of the space in the [ ] so it becomes [x])

  • [x] Note merging this changes the node modules
  • [ ] Note merging this changes the database configuration.
  • [x] This change requires a documentation update

Checklist

(Note what you have done by placing an "x" instead of the space in the [ ] so it becomes [x]. It is hoped you do all of them.)

  • [x] I have followed the OED pull request ideas
  • [x] I have removed text in ( ) from the issue request

hyperupcall avatar Apr 03 '23 11:04 hyperupcall

Thanks to @hyperupcall for taking this on and helping OED move forward. I'm sorry for not working on this sooner. The project has been focused on v1.0 release. This change is non-trivial and still seems to have an issue associated with it. Given the v1.0 timeline and the need to verify this change does not cause issues at production sites, I am putting this on hold until after v1.0. It should then receive the attention it deserves and I hope we can move forward with it. If anyone wants to look at the issues noted in the PR then that would be great.

huss avatar Apr 22 '23 16:04 huss

@huss No worries, it would be good to get v1.0 out first. I've been meaning to rebase onto main, tidy up the commits a bit, and fix the last remaining issues, but I have not gotten around to it yet.

hyperupcall avatar Apr 23 '23 02:04 hyperupcall

Epic! So this PR is now ready! As noted in the original post, I interactively rebased the commits to make them easier to follow. As mentioned by @huss, this change is pretty big and will land after v1 is released, so this PR might sit here for a while.

hyperupcall avatar May 19 '23 08:05 hyperupcall

@hyperupcall I very much appreciate your work on this. I take responsibility for how long it has sat. I have been focused on OED v1.0.0 which is finally done (release should be very soon). Unexpected circumstances have slowed things down. I am sorry for this.

I have been thinking about this and wanted to better understand why you closed this PR. Is it because it sat too long? Is it because it now needs more work? Something else? I was hoping to understand because the PR could be worked on/moved forward in the near future. Thanks for any insights.

huss avatar Oct 03 '23 12:10 huss

@huss No, don't worry about it - I totally understand! This is a big change and you want to focus on v1.0.0. Usually I don't let PRs sit for too long, so that's kind of why I closed it :P - kind of a silly reason.

Sorry about the confusion, I'll reopen this - just let me know when you are ready (no rush), and when that happens, I'll rebase (or merge) this over master.

hyperupcall avatar Oct 03 '23 16:10 hyperupcall

@hyperupcall Absolutely no concerns, and your actions were completely justified. I just finished what should be the final merge for v1.0.0 and have cleared all the outstanding pull requests except the package-update ones and one that is waiting. I plan to hold off on the others and let this pull request be prepared for merging into OED. If you are willing to make the changes to resolve any conflicts with development, that would be great. I will begin testing it from my end as soon as that is appropriate and hope it can clear fairly quickly. Again, many thanks for all your efforts and your patience.

huss avatar Oct 03 '23 20:10 huss

That sounds good! I'll take a look and finish this up later today

hyperupcall avatar Oct 14 '23 12:10 hyperupcall

Yay! CI passes (except the CodeQL thingy). I tested it manually and things seem to work. Let me know if it works for you and if there is anything else I can do.

hyperupcall avatar Oct 16 '23 12:10 hyperupcall

Thanks to @hyperupcall for all this work. I'm trying it out by doing docker compose up in the main OED directory. I get this error message:

...
oed-web-1        | NPM install...
oed-web-1        |
oed-dev-server-1 |
oed-dev-server-1 | > [email protected] client:dev
oed-dev-server-1 | > vite --config ./src/vite.config.js --host
oed-dev-server-1 |
oed-dev-server-1 | sh: 1: vite: not found
oed-dev-server-1 exited with code 127

Is it required to install Vite first, or is the install not finding it in the needed container? (or maybe something else I don't understand) It does seem to run ahead before the npm install finishes.

huss avatar Oct 16 '23 19:10 huss

Yes, that would be it - there seems to be a race condition since these commands are run in parallel. Not sure how this didn't come up before, but I'll see how I can fix it tomorrow (wish I could do it today, but I am about to start a work shift).

hyperupcall avatar Oct 16 '23 21:10 hyperupcall

No problem, I have had these types of issues myself. It may be that it installed the first time, so the container and node_modules had vite and it was fine after that. I've learned to do updates and fresh installs for major OED changes to try to catch these issues. Let me know if I can help in any way.

huss avatar Oct 16 '23 23:10 huss

I fixed the race by waiting in the devStartVite.sh script.
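
For reference, a minimal sketch of the waiting approach (the function name and timeout are assumptions for illustration, not the actual devStartVite.sh contents): poll until npm install has produced the binary before starting the dev server.

```shell
#!/bin/sh
# Wait up to $2 seconds (default 60) for the file $1 to exist; return 1 on timeout.
wait_for_file() {
  file=$1
  timeout=${2:-60}
  waited=0
  until [ -e "$file" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
}

# Hypothetical usage: block until the vite binary is installed, then start it.
# wait_for_file node_modules/.bin/vite 120 && exec npm run vite:dev
```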

hyperupcall avatar Oct 17 '23 13:10 hyperupcall

The build issue seems fixed. Thanks. When I build OED I see what is below; when I go to localhost:3000 it gets some of the OED assets. However, I get a 404 about not finding assets and nothing loads. Do you know what is happening?

I went into the DB container and the Postgres setup looks good. I also went into the web container and could run the tests to see some pass. The dev-server seems okay but I'm less certain about what it is doing.

...

oed-web-1        | NPM install finished.
oed-web-1        |
oed-dev-server-1 |
oed-dev-server-1 | > [email protected] client:dev
oed-dev-server-1 | > vite --config ./src/vite.config.js --host
oed-dev-server-1 |
oed-dev-server-1 |   VITE v4.3.8  ready in 517 ms
oed-dev-server-1 |
oed-dev-server-1 |   ➜  Local:   http://localhost:9229/
oed-dev-server-1 |   ➜  Network: http://172.19.0.4:9229/
oed-web-1        | Attempting to create database...
oed-web-1        |
oed-web-1        | -----start of npm run createdb output-----
oed-web-1        |
oed-web-1        | > [email protected] createdb
oed-web-1        | > node ./src/server/services/createDB.js
oed-web-1        |
oed-web-1        | -----end of npm run createdb output-----
oed-web-1        |
oed-web-1        | database creation had no errors so assume schema creation worked.
oed-web-1        |
oed-web-1        | > [email protected] createUser
oed-web-1        | > node ./src/server/services/user/createUser.js "[email protected]" "password"
oed-web-1        |
oed-web-1        | [INFO@2023-10-18T14:03:58.006+00:00] User [email protected] existed so not created
oed-web-1        |
oed-web-1        | User creation had no errors so default user '[email protected]' with password 'password' should exist
oed-web-1        | Creating developer DB function
oed-web-1        |
oed-web-1        | > [email protected] developerdb
oed-web-1        | > node -e 'require("./src/server/util/developer.js").createShiftReadingsFunction()'
oed-web-1        |
oed-web-1        | OED install finished
oed-web-1        | Starting OED in development mode
oed-web-1        |
oed-web-1        | > [email protected] start:dev
oed-web-1        | > nodemon --legacy-watch --inspect=0.0.0.0 ./src/bin/www
oed-web-1        |
oed-web-1        | [nodemon] 2.0.22
oed-web-1        | [nodemon] to restart at any time, enter `rs`
oed-web-1        | [nodemon] watching path(s): src/server/**/* src/bin/**/* src/common/**/*
oed-web-1        | [nodemon] watching extensions: js,mjs,json
oed-web-1        | [nodemon] starting `node --inspect=0.0.0.0 ./src/bin/www`
oed-web-1        | Debugger listening on ws://0.0.0.0:9229/9acc68b7-2602-40ed-ae39-3fd4c5ae4004
oed-web-1        | For help, see: https://nodejs.org/en/docs/inspector
oed-web-1        | [INFO@2023-10-18T14:04:09.063+00:00] Listening on port 3000
oed-web-1        |
oed-web-1        | GET /api/version 304 2.990 ms - -
oed-web-1        | GET /logo.png 404 4.378 ms - 22
oed-web-1        | GET / 404 1.498 ms - 22
oed-web-1        | GET /favicon.ico 200 2.341 ms - 1150

huss avatar Oct 18 '23 14:10 huss

It seems that docker compose is still configured to run the Webpack dev server, so docker-compose up is running both the Vite and Webpack dev servers. Notice that Vite is on port 9229, while Webpack is on 3000.

It appears that the nodemon (start:dev) server already uses 9229 for the debugger. The Vite dev server isn't consistently using the same port: sometimes the port it's configured to use is already in use, so it gets bumped up to the next one (9230 instead of 9229).

Aside from this I am having issues with HMR when making changes to files I get console errors and the page doesn't automatically refresh.

ReferenceError: can't access lexical declaration 'UIOptionsComponent' before initialization
    <anonymous> UIOptionsContainer.ts:39
[hmr] Failed to reload /client/app/components/UIOptionsComponent.tsx. This could be due to syntax errors or importing non-existent modules. (see errors above)
[vite] hot updated: /client/app/components/UIOptionsComponent.tsx (client.ts:462:12)
Source map error: Error: request failed with status 404
Resource URL: null
Source Map URL: react_devtools_backend_compact.js.map
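
For context, "can't access lexical declaration ... before initialization" is the classic symptom of a circular ESM import hitting the temporal dead zone; whether that is the actual cause in UIOptionsContainer.ts is an assumption. A minimal, self-contained reproduction:

```javascript
// Two modules that import each other: evaluating b.mjs needs `a` before
// a.mjs has finished initializing it, so the binding is still in its
// temporal dead zone and a ReferenceError is thrown at load time.
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { pathToFileURL } from 'node:url';

const dir = mkdtempSync(join(tmpdir(), 'cycle-'));
writeFileSync(join(dir, 'a.mjs'),
  "import { b } from './b.mjs'; export const a = 'a' + b;");
writeFileSync(join(dir, 'b.mjs'),
  "import { a } from './a.mjs'; export const b = 'b' + a;");

let errName = 'none';
try {
  await import(pathToFileURL(join(dir, 'a.mjs')).href);
} catch (err) {
  errName = err.name;
}
console.log(errName); // ReferenceError
```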

ChrisMart21 avatar Oct 22 '23 15:10 ChrisMart21

Thank you for the extra data point - I have pushed a few fixes replacing the Webpack runs with Vite. I must have missed them because they were added when I did the recent merge.

I don't think this will fix everything, so I'll come back later today once I'm off the road to see what else needs to be done.

hyperupcall avatar Oct 22 '23 15:10 hyperupcall

I know @hyperupcall plans more work, but I wanted to report what happened when I tried to run the latest version:

  • The first time I did docker compose up I got these messages:

oed-web-1        | NPM install...
oed-web-1        |
oed-dev-server-1 |
oed-dev-server-1 | > [email protected] client:dev
oed-dev-server-1 | > vite --config ./src/vite.config.js --host
oed-dev-server-1 |
oed-dev-server-1 | sh: 1: vite: not found
oed-dev-server-1 exited with code 127
oed-web-1        | npm WARN deprecated [email protected]: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.

but on the second install it was all fine.

  • If I connect to localhost:9229 it seems to partly work (:3000 and :9230 do not) but I get errors in the console similar to:

oed-dev-server-1 | 5:09:15 PM [vite] http proxy error at /api/version:
oed-dev-server-1 | Error: connect ECONNREFUSED 127.0.0.1:3000
oed-dev-server-1 |     at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)

It also seems it cannot get data from the DB.

huss avatar Oct 22 '23 17:10 huss

Again, thanks to @hyperupcall for taking on this significant task. I've been holding off on the package upgrade and merging non-essential PRs so this one could move forward without conflict. I don't want to be pushy, but wanted to ask if you have any idea when there will be further work (or possibly the finishing of this PR)? FYI, I expect more students to be putting in PRs later this semester, and those will be harder to put off.

huss avatar Oct 27 '23 20:10 huss

@huss You're right - I want to get this done ASAP, honestly before the end of this week. It looks like only a few things remain; don't hesitate to ping me if there seems to be no progress. I have time right now to finish investigating the remaining errors.

hyperupcall avatar Oct 30 '23 19:10 hyperupcall

The first time I did docker compose up I got these messages: ... but on the second install it was all fine.

That's so odd, because 6476534a94ae6b0a8efed56a2664daeb362bb1d3 was supposed to fix that.

If I connect to localhost:9229 it seems to partly work (:3000 and :9230 do not) ... It also seems it cannot get data from the DB.

It looks like the database fails to initialize. Because the web service has a depends_on: ['database'], the database failing to initialize means the web container isn't started, which is why Vite is getting proxy errors.
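
Worth noting (general Docker Compose behavior, not something specific to this repo): a plain depends_on only orders container startup; it does not wait for the dependency to be ready. A hypothetical compose fragment gating on a Postgres healthcheck would look roughly like:

```yaml
# Sketch: make `web` wait until Postgres is actually accepting connections,
# not merely until the database container has started. Service names and
# intervals are assumptions for illustration.
services:
  database:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  web:
    depends_on:
      database:
        condition: service_healthy
```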

~~About port 9230, that is no longer used - it was originally used to temporarily work around a configuration issue, but that has now been fixed. So 9229 is used, just like before.~~ Now, port 8085 is used.

I'll take a look at port 3000 - it's supposed to work the same way as before, where it only works after you do npm run client:build. It's not supposed to be used when starting the development server.
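
As a reference for the port discussion above, a hypothetical vite.config.js fragment (the port number and proxy target are assumptions based on this thread, not the actual config) that pins the dev server instead of letting it silently bump to the next free port:

```javascript
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    port: 8085,        // the port mentioned above (assumption)
    strictPort: true,  // fail fast instead of bumping 9229 -> 9230 style
    proxy: {
      // Forward API calls to the Express server so the dev client can reach it.
      '/api': 'http://localhost:3000',
    },
  },
});
```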

hyperupcall avatar Oct 30 '23 19:10 hyperupcall

FWIW I tried switching to the development branch, and I get the same "attempting to create database" error:

oed-web-1       | Attempting to create database...
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] ERROR:  cannot change return type of existing function
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] DETAIL:  Row type defined by OUT parameters is different.
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] HINT:  Use DROP FUNCTION meter_line_readings_unit(integer[],integer,timestamp without time zone,timestamp without time zone,reading_line_accuracy,integer,integer) first.
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] STATEMENT:  CREATE OR REPLACE FUNCTION date_trunc_up(interval_precision TEXT, ts TIMESTAMP) RETURNS TIMESTAMP LANGUAGE SQL IMMUTABLE AS $$ SELECT CASE WHEN ts = date_trunc(interval_precision, ts) THEN ts ELSE date_trunc(interval_precision, ts + ('1 ' || interval_precision)::INTERVAL) END $$; CREATE OR REPLACE FUNCTION shrink_tsrange_to_real_readings(tsrange_to_shrink TSRANGE, meter_ids INTEGER[]) RETURNS TSRANGE AS $$ DECLARE readings_max_tsrange TSRANGE; BEGIN SELECT tsrange(min(start_timestamp), max(end_timestamp)) INTO readings_max_tsrange FROM (readings r INNER JOIN unnest(meter_ids) meters(id) ON r.meter_id = meters.id); RETURN tsrange_to_shrink * readings_max_tsrange; END; $$ LANGUAGE 'plpgsql'; CREATE OR REPLACE FUNCTION shrink_tsrange_to_meters_by_day(tsrange_to_shrink TSRANGE, meter_ids INTEGER[]) RETURNS TSRANGE AS $$ DECLARE readings_max_tsrange TSRANGE; BEGIN SELECT tsrange(min(lower(time_interval)), max(upper(time_interval))) INTO readings_max_tsrange FROM daily_readings_unit dr INNER JOIN unnest(meter_ids) meters(id) ON dr.meter_id = meters.id; RETURN tsrange(date_trunc_up('day', lower(tsrange_to_shrink)), date_trunc('day', upper(tsrange_to_shrink))) * readings_max_tsrange; END; $$ LANGUAGE 'plpgsql'; CREATE MATERIALIZED VIEW IF NOT EXISTS daily_readings_unit AS SELECT r.meter_id AS meter_id, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (sum( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (sum( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM 
least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) END AS reading_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (max(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (max(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) END as max_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (min(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (min(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - 
greatest(r.start_timestamp, gen.interval_start) ) ))) END as min_rate, tsrange(gen.interval_start, gen.interval_start + '1 day'::INTERVAL, '()') AS time_interval FROM ((readings r INNER JOIN meters m ON r.meter_id = m.id) INNER JOIN units u ON m.unit_id = u.id) CROSS JOIN LATERAL generate_series( date_trunc('day', r.start_timestamp), date_trunc_up('day', r.end_timestamp) - '1 day'::INTERVAL, '1 day'::INTERVAL ) gen(interval_start) GROUP BY r.meter_id, gen.interval_start, u.unit_represent ORDER BY gen.interval_start, r.meter_id; CREATE MATERIALIZED VIEW IF NOT EXISTS hourly_readings_unit AS SELECT r.meter_id AS meter_id, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (sum( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (sum( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) END AS reading_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (max(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 
'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (max(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) END as max_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (min(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (min(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) END as min_rate, tsrange(gen.interval_start, gen.interval_start + '1 hour'::INTERVAL, '()') AS time_interval FROM ((readings r INNER JOIN meters m ON r.meter_id = m.id) INNER JOIN units u ON m.unit_id = u.id) CROSS JOIN LATERAL generate_series( date_trunc('hour', r.start_timestamp), date_trunc_up('hour', r.end_timestamp) - '1 hour'::INTERVAL, '1 hour'::INTERVAL ) gen(interval_start) GROUP BY r.meter_id, gen.interval_start, u.unit_represent ORDER BY gen.interval_start, r.meter_id; CREATE EXTENSION IF NOT EXISTS btree_gist; CREATE INDEX if not exists idx_daily_readings_unit ON daily_readings_unit USING GIST(time_interval, meter_id); CREATE OR REPLACE FUNCTION meter_line_readings_unit ( meter_ids 
INTEGER[], graphic_unit_id INTEGER, start_stamp TIMESTAMP, end_stamp TIMESTAMP, point_accuracy reading_line_accuracy, max_raw_points INTEGER, max_hour_points INTEGER ) RETURNS TABLE(meter_id INTEGER, reading_rate FLOAT, min_rate FLOAT, max_rate FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$ DECLARE requested_range TSRANGE; requested_interval INTERVAL; requested_interval_seconds INTEGER; unit_column INTEGER; frequency INTERVAL; frequency_seconds INTEGER; current_meter_index INTEGER := 1; current_meter_id INTEGER; current_point_accuracy reading_line_accuracy; BEGIN SELECT unit_index INTO unit_column FROM units WHERE id = graphic_unit_id; WHILE current_meter_index <= cardinality(meter_ids) LOOP current_point_accuracy := point_accuracy; current_meter_id := meter_ids[current_meter_index]; requested_range := shrink_tsrange_to_real_readings(tsrange(start_stamp, end_stamp, '[]'), array_append(ARRAY[]::INTEGER[], current_meter_id)); IF (current_point_accuracy = 'auto'::reading_line_accuracy) THEN IF (upper(requested_range) = 'infinity') THEN current_point_accuracy := 'daily'::reading_line_accuracy; ELSE requested_interval := upper(requested_range) - lower(requested_range); requested_interval_seconds := (SELECT * FROM EXTRACT(EPOCH FROM requested_interval)); SELECT reading_frequency INTO frequency FROM meters WHERE id = current_meter_id; frequency_seconds := (SELECT * FROM EXTRACT(EPOCH FROM frequency)); IF ((requested_interval_seconds / frequency_seconds <= max_raw_points) OR (frequency_seconds >= 86400)) THEN current_point_accuracy := 'raw'::reading_line_accuracy; ELSIF ((requested_interval_seconds / 3600 <= max_hour_points) AND (frequency_seconds <= 3600)) THEN current_point_accuracy := 'hourly'::reading_line_accuracy; ELSE current_point_accuracy := 'daily'::reading_line_accuracy; END IF; END IF; END IF; IF (current_point_accuracy = 'raw'::reading_line_accuracy) THEN RETURN QUERY SELECT r.meter_id as meter_id, CASE WHEN u.unit_represent = 
'quantity'::unit_represent_type THEN ((r.reading / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)) / 3600)) * c.slope + c.intercept) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN ((r.reading * 3600 / u.sec_in_rate) * c.slope + c.intercept) END AS reading_rate, cast('NaN' AS DOUBLE PRECISION) AS min_rate, cast('NaN' AS DOUBLE PRECISION) as max_rate, r.start_timestamp, r.end_timestamp FROM (((readings r INNER JOIN meters m ON m.id = current_meter_id) INNER JOIN units u ON m.unit_id = u.id) INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = unit_column) WHERE lower(requested_range) <= r.start_timestamp AND r.end_timestamp <= upper(requested_range) AND r.meter_id = current_meter_id ORDER BY r.start_timestamp ASC; ELSIF (current_point_accuracy = 'hourly'::reading_line_accuracy) THEN RETURN QUERY SELECT hourly.meter_id AS meter_id, hourly.reading_rate * c.slope + c.intercept as reading_rate, hourly.min_rate * c.slope + c.intercept AS min_rate, hourly.max_rate * c.slope + c.intercept AS max_rate, lower(hourly.time_interval) AS start_timestamp, upper(hourly.time_interval) AS end_timestamp FROM (((hourly_readings_unit hourly INNER JOIN meters m ON m.id = current_meter_id) INNER JOIN units u ON m.unit_id = u.id) INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = unit_column) WHERE requested_range @> time_interval AND hourly.meter_id = current_meter_id ORDER BY start_timestamp ASC; ELSE RETURN QUERY SELECT daily.meter_id AS meter_id, daily.reading_rate * c.slope + c.intercept as reading_rate, daily.min_rate * c.slope + c.intercept AS min_rate, daily.max_rate * c.slope + c.intercept AS max_rate, lower(daily.time_interval) AS start_timestamp, upper(daily.time_interval) AS end_timestamp FROM (((daily_readings_unit daily INNER JOIN meters m ON m.id = current_meter_id) INNER JOIN units u ON m.unit_id = u.id) INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = 
unit_column)
        WHERE requested_range @> time_interval AND daily.meter_id = current_meter_id
        ORDER BY start_timestamp ASC;
    END IF;
    current_meter_index := current_meter_index + 1;
  END LOOP;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION group_line_readings_unit (
  group_ids INTEGER[],
  graphic_unit_id INTEGER,
  start_stamp TIMESTAMP,
  end_stamp TIMESTAMP,
  point_accuracy reading_line_accuracy,
  max_hour_points INTEGER
) RETURNS TABLE(group_id INTEGER, reading_rate FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$
DECLARE
  meter_ids INTEGER[];
  requested_range TSRANGE;
  requested_interval INTERVAL;
  requested_interval_seconds INTEGER;
  meters_min_frequency INTERVAL;
BEGIN
  SELECT array_agg(DISTINCT gdm.meter_id) INTO meter_ids
  FROM groups_deep_meters gdm
  INNER JOIN unnest(group_ids) gids(id) ON gdm.group_id = gids.id;
  IF (point_accuracy = 'auto'::reading_line_accuracy OR point_accuracy = 'raw'::reading_line_accuracy) THEN
    requested_range := shrink_tsrange_to_real_readings(tsrange(start_stamp, end_stamp, '[]'), meter_ids);
    IF (upper(requested_range) = 'infinity') THEN
      point_accuracy := 'daily'::reading_line_accuracy;
    ELSE
      requested_interval := upper(requested_range) - lower(requested_range);
      requested_interval_seconds := (SELECT * FROM EXTRACT(EPOCH FROM requested_interval));
      IF (requested_interval_seconds / 3600 <= max_hour_points) THEN
        point_accuracy := 'hourly'::reading_line_accuracy;
      ELSE
        point_accuracy := 'daily'::reading_line_accuracy;
      END IF;
      IF (point_accuracy = 'hourly'::reading_line_accuracy) THEN
        SELECT min(reading_frequency) INTO meters_min_frequency
        FROM (meters m INNER JOIN unnest(meter_ids) meters(id) ON m.id = meters.id);
        IF (EXTRACT(EPOCH FROM meters_min_frequency) > 3600) THEN
          point_accuracy = 'daily'::reading_line_accuracy;
        END IF;
      END IF;
    END IF;
  END IF;
  RETURN QUERY
  SELECT gdm.group_id AS group_id, SUM(readings.reading_rate) AS reading_rate, readings.start_timestamp, readings.end_timestamp
  FROM meter_line_readings_unit(meter_ids, graphic_unit_id, start_stamp, end_stamp, point_accuracy, -1, -1) readings
  INNER JOIN groups_deep_meters gdm ON readings.meter_id = gdm.meter_id
  INNER JOIN unnest(group_ids) gids(id) ON gdm.group_id = gids.id
  GROUP BY gdm.group_id, readings.start_timestamp, readings.end_timestamp
  ORDER BY readings.start_timestamp ASC;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION meter_bar_readings_unit (
  meter_ids INTEGER[],
  graphic_unit_id INTEGER,
  bar_width_days INTEGER,
  start_stamp TIMESTAMP,
  end_stamp TIMESTAMP
) RETURNS TABLE(meter_id INTEGER, reading FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$
DECLARE
  bar_width INTERVAL;
  real_tsrange TSRANGE;
  real_start_stamp TIMESTAMP;
  real_end_stamp TIMESTAMP;
  unit_column INTEGER;
  num_bars INTEGER;
BEGIN
  bar_width := INTERVAL '1 day' * bar_width_days;
  real_tsrange := shrink_tsrange_to_meters_by_day(tsrange(start_stamp, end_stamp), meter_ids);
  real_start_stamp := lower(real_tsrange);
  real_end_stamp := upper(real_tsrange);
  num_bars := floor(extract(EPOCH FROM real_end_stamp - real_start_stamp) / extract(EPOCH FROM bar_width));
  real_start_stamp := real_end_stamp - (num_bars * bar_width);
  real_end_stamp := real_end_stamp - bar_width;
  SELECT unit_index INTO unit_column FROM units WHERE id = graphic_unit_id;
  RETURN QUERY
  SELECT dr.meter_id AS meter_id,
    SUM(dr.reading_rate * 24) * c.slope + c.intercept AS reading,
    bars.interval_start AS start_timestamp,
    bars.interval_start + bar_width AS end_timestamp
  FROM (((((daily_readings_unit dr
  INNER JOIN generate_series(real_start_stamp, real_end_stamp, bar_width) bars(interval_start)
    ON tsrange(bars.interval_start, bars.interval_start + bar_width, '[]') @> dr.time_interval)
  INNER JOIN unnest(meter_ids) meters(id) ON dr.meter_id = meters.id)
  INNER JOIN meters m ON m.id = meters.id)
  INNER JOIN units u ON m.unit_id = u.id AND u.unit_represent != 'raw'::unit_represent_type)
  INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = unit_column)
  GROUP BY dr.meter_id, bars.interval_start, c.slope, c.intercept;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION group_bar_readings_unit (
  group_ids INTEGER[],
  graphic_unit_id INTEGER,
  bar_width_days INTEGER,
  start_stamp TIMESTAMP,
  end_stamp TIMESTAMP
) RETURNS TABLE(group_id INTEGER, reading FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$
DECLARE
  bar_width INTERVAL;
  real_tsrange TSRANGE;
  real_start_stamp TIMESTAMP;
  real_end_stamp TIMESTAMP;
  meter_ids INTEGER[];
BEGIN
  SELECT array_agg(DISTINCT gdm.meter_id) INTO meter_ids
  FROM groups_deep_meters gdm
  INNER JOIN unnest(group_ids) gids(id) ON gdm.group_id = gids.id;
  RETURN QUERY
  SELECT gdm.group_id AS group_id, SUM(readings.reading) AS reading, readings.start_timestamp, readings.end_timestamp
  FROM meter_bar_readings_unit(meter_ids, graphic_unit_id, bar_width_days, start_stamp, end_stamp) readings
  INNER JOIN groups_deep_meters gdm ON readings.meter_id = gdm.meter_id
  INNER JOIN unnest(group_ids) gids(id) on gdm.group_id = gids.id
  GROUP BY gdm.group_id, readings.start_timestamp, readings.end_timestamp;
END;
$$ LANGUAGE 'plpgsql';
```
oed-web-1       |
oed-web-1       | -----start of npm run createdb output-----
oed-web-1       |
oed-web-1       |
oed-web-1       | > [email protected] createdb
oed-web-1       | > node ./src/server/services/createDB.js
oed-web-1       |
oed-web-1       |
oed-web-1       | -----end of npm run createdb output-----
oed-web-1       |
oed-web-1       |
oed-web-1       | FAILURE: creation of database failed so stopping install. Use --continue_on_db_error if you want install to continue
oed-web-1 exited with code 3
```

Is it possible that changes to the database like df70d758e66a1dc61c098ec6766edce0a3ccdf2a or 0e3ace8f1a6e1b38c8b229d896fdc20444920c91 affected things? It could also be that my database is corrupted, so I will have to investigate that.

hyperupcall avatar Oct 30 '23 21:10 hyperupcall

It appears that the nodemon (`start:dev`) server already uses 9229 for the debugger. The Vite dev server isn't consistently using the same port: sometimes the port it's configured to use is 'in use', so it gets bumped up to the next one (9230 instead of 9229).

Oops, I missed that - ~~I'll integrate those changes when I get off shift~~ Pushed those changes. Since nodemon defaults to 9229, we now change our default dev server port to 8085.
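For reference, a minimal sketch of pinning the dev server port in a Vite config (the 8085 value matches the default mentioned above; the real config would also carry the project's plugins and aliases, and enabling `strictPort` is my assumption - it makes Vite error out rather than silently bump to the next free port):

```javascript
// vite.config.js - sketch only, not the project's actual config.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    port: 8085,       // keep clear of nodemon's debugger default (9229)
    strictPort: true  // fail fast instead of silently using the next free port
  }
});
```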

@ChrisMart21 by any chance was the server not restarted when you pulled in the latest changes / switched branches? That might have been why HMR failed

hyperupcall avatar Oct 30 '23 23:10 hyperupcall

@huss I've been thinking: since this PR contains many changes that aren't just removing Webpack / adding Vite, it might be better to hold off on this PR for now and make separate PRs for those things (like fixes that indirectly help Vite, or formatting inconsistencies). That way, people can submit PRs now without having to worry that things will change under their feet. It might also be possible to add Vite side-by-side with Webpack to smooth the transition, although I haven't tried that before. What do you think?

hyperupcall avatar Nov 02 '23 08:11 hyperupcall

@hyperupcall Do I understand correctly that you are proposing to create a PR for the other (non-Vite) changes and then see about a Vite PR? I think it is okay to separate out the work, especially if it helps you move it along. I would try to review any PR in a timely fashion so it can clear. Thanks for thinking about this, and let me know if I have the wrong idea.

huss avatar Nov 02 '23 12:11 huss

@huss Yes! I don't want this to be blocking other things, especially if it's other students doing their first Open Source contribution. I've also been busy lately, so I think this strategy would reduce uncertainty on both ends. Most of the work is already done; I can just cherry-pick the bigger changes and turn them into new PRs. Do you think this PR should be closed now, or after I submit most of the new PRs?

hyperupcall avatar Nov 02 '23 13:11 hyperupcall

Thanks to @hyperupcall for continued work. I tried to install this and got:

```
oed-web-1 | For help, see: https://nodejs.org/en/docs/inspector
oed-web-1 | /usr/src/app/src/server/routes/unitReadings.js:12
oed-web-1 | import * as moment from 'moment'
oed-web-1 | ^^^^^^
oed-web-1 |
oed-web-1 | SyntaxError: Cannot use import statement outside a module
oed-web-1 |     at Object.compileFunction (node:vm:352:18)
oed-web-1 |     at wrapSafe (node:internal/modules/cjs/loader:1031:15)
oed-web-1 |     at Module._compile (node:internal/modules/cjs/loader:1065:27)
oed-web-1 |     at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
oed-web-1 |     at Module.load (node:internal/modules/cjs/loader:981:32)
oed-web-1 |     at Function.Module._load (node:internal/modules/cjs/loader:822:12)
oed-web-1 |     at Module.require (node:internal/modules/cjs/loader:1005:19)
oed-web-1 |     at require (node:internal/modules/cjs/helpers:102:18)
oed-web-1 |     at Object.<anonymous> (/usr/src/app/src/server/app.js:23:33)
oed-web-1 |     at Module._compile (node:internal/modules/cjs/loader:1101:14)
oed-web-1 | [nodemon] app crashed - waiting for file changes before starting...
```

I tried changing the import to be similar to the other moment imports:

```js
import moment from 'moment';
```

but it gave the same error. Any ideas?

Also, the `npm ci` build failed on GitHub. Not sure exactly why, but it seems the package files are inconsistent. I wonder if this is a first-time issue due to the changes. Something to figure out at some point.

huss avatar Dec 04 '23 15:12 huss

@huss When running under Node.js, the `import` syntax is not enabled by default (the client-side code has no problem with it because Webpack already understands it). To enable it and prevent the error, one must add `"type": "module"` to package.json. But it's not that simple: adding that key/value pair breaks various other things. I haven't looked at that file in particular, but it sounds like it should be using CommonJS (`const moment = require('moment')`).

For now though, I'm only using this PR so people can better see the progress on the migration. Before I merged from development, there were maybe +16,000 changed lines, and now there are only +3,600 changed lines. I am hoping I can soon reduce it to something even closer to zero. The next steps would be to figure out if Babel should be removed/fixed, and to make it so the Moment/Lodash imports work with both Vite and Webpack (I think it would be good to have Vite and Webpack side-by-side for a gradual transition).

In any case, I'll ping you when this is ready for review again. I wouldn't want you to be spending time reviewing this when it's not quite ready :)

hyperupcall avatar Dec 05 '23 03:12 hyperupcall

@hyperupcall Thanks for the note. I know I let this sit for a long time in the past so I think I am wanting to move on it whenever I see an update. I think it is correct that I wait until you let us know that this is ready. Thanks for everything.

huss avatar Dec 05 '23 13:12 huss

Superseded by #1262

hyperupcall avatar Jun 11 '24 06:06 hyperupcall