
Refactoring the install mechanism

Open ibilon opened this issue 9 years ago • 3 comments

Currently when installing a library it gets downloaded, extracted, and dependencies are recursively installed.

This isn't without issues: if something goes wrong in the middle, things are left in a broken state (like #244), or you end up with several versions of the same library (#218).

Because of that, you have no idea how many libraries will get installed or how much space they'll use (similar to why people want an upgrade summary, #163).

I propose we switch to a more "classical" approach, like the one found in most (all?) Linux package managers and similar tools:

  • First you resolve the required libraries; for that we need access to haxelib.json (#124), or to store the dependency field in the db
  • Second you present it all to the user: which libraries (name and version) they will get, how much download that is, and how much space they'll take
  • Third you download them; this could even be done in parallel to increase speed. Resuming a download after a failed attempt is easy since nothing has been installed yet (#133)
  • Last you extract everything to disk, and only if there was no error do you actually install (move) it, avoiding a broken half-installed state
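The last step above can be sketched as a staged install. This is a minimal illustration, not haxelib's actual code: `download` and `extract` are stand-in helpers that just create placeholder files, standing in for real HTTP download and zip extraction. The key point is that everything happens in a staging directory, and the final install directory is only touched once every package succeeded.

```python
import os
import shutil
import tempfile

def download(name, dest):
    # Stand-in for a real download: writes a placeholder "archive" file.
    path = os.path.join(dest, name + ".zip")
    with open(path, "w") as f:
        f.write("archive of " + name)
    return path

def extract(archive, dest):
    # Stand-in for real zip extraction: makes one directory per archive.
    out = os.path.join(dest, os.path.basename(archive)[:-len(".zip")])
    os.mkdir(out)
    shutil.move(archive, os.path.join(out, "contents"))
    return out

def install(packages, install_dir):
    """Download and extract into a staging area, then move into place
    only once every package succeeded, so a failure midway never
    leaves install_dir half-populated."""
    staging = tempfile.mkdtemp(prefix="staging-")
    try:
        archives = [download(p, staging) for p in packages]
        extracted = [extract(a, staging) for a in archives]
        # All-or-nothing: this loop is only reached if no step failed.
        for src in extracted:
            shutil.move(src, os.path.join(install_dir, os.path.basename(src)))
    finally:
        shutil.rmtree(staging, ignore_errors=True)
```

If any download or extraction raises, the `finally` block discards the staging area and the real library path is left exactly as it was, which is the property issues like #244 ask for.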

It sounds like a big change, but the download, dependency resolution, and install parts are already there; it's mostly a change in how they are sequenced. The new part is getting the full recursive dependency list before actually installing anything.
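Getting that recursive list up front is just a graph walk over the dependency metadata. A minimal sketch, where `get_dependencies(name)` stands in for whatever the server ends up exposing (the haxelib.json contents from #124, or a dependency field stored in the db):

```python
from collections import deque

def resolve(roots, get_dependencies):
    """Breadth-first walk of the dependency graph, so the full set of
    libraries to install is known before anything is downloaded."""
    planned = []
    queue = deque(roots)
    while queue:
        name = queue.popleft()
        if name in planned:
            continue  # already planned; also keeps cycles from looping forever
        planned.append(name)
        queue.extend(get_dependencies(name))
    return planned
```

With the complete `planned` list in hand, the client can show the summary (names, versions, sizes) and get confirmation before downloading anything.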

Also a detail, but we may want to display the entire list of installed files only in verbose mode.

ibilon avatar Dec 19 '15 21:12 ibilon

I like the proposal. It looks to me like we should store dependencies in a Version db object, return that array in a VersionInfos structure from the SiteApi.infos method, and then use it in the haxelib client. Additionally, we could also display dependencies/dependants on the website.
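For illustration only, the extended VersionInfos shape could look something like the following. This is a Python sketch of a Haxe structure, and every field name here is an assumption; the actual dependency format is exactly what #238 still needs to settle:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dependency:
    # Hypothetical shape, pending the format discussion in #238.
    name: str
    version: str  # an empty string could mean "any version"

@dataclass
class VersionInfos:
    # Hypothetical extension of the structure returned by SiteApi.infos,
    # with a dependencies array alongside the existing fields.
    name: str
    version: str
    dependencies: List[Dependency] = field(default_factory=list)
```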

However, since it's an addition to the public API, we should decide on the dependency format first (#238).

Another question is: what's the process for migrating our server databases if we add that new field? cc @jasononeil @back2dos

As for providing haxelib.json without downloading the whole zip file (#124), I think it's a separate matter.

nadako avatar Dec 21 '15 12:12 nadako

There is no process in place. The database has been basically the same for a decade. The only change that was made (with Haxe 3) was to store the version components separately.

This seems like a very complex change, particularly considering that in the future some dependencies will be located in VCSs. I don't think the two go very well together.

back2dos avatar Dec 21 '15 16:12 back2dos

I don't know; sure, we probably (?) can't calculate the size once VCSs come into play, but we could still implement some kind of transaction system that rolls back the whole command if something goes wrong. It should be done after the basic cleanup/refactoring though.
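Such a transaction system doesn't need size information at all; it only needs each step to register its own undo action. A minimal sketch of that idea (not an existing haxelib API, just an illustration of the rollback mechanics):

```python
class Transaction:
    """Each completed step registers an undo action; if a later step
    raises, the completed steps are undone in reverse order and the
    original error is re-raised."""
    def __init__(self):
        self.undo_stack = []

    def run(self, step, undo):
        step()  # if this raises, its undo is never registered
        self.undo_stack.append(undo)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            for undo in reversed(self.undo_stack):
                undo()  # roll back in reverse order of execution
        return False  # propagate the original exception
```

This works just as well for a VCS checkout (undo = delete the cloned directory) as for a zip install (undo = remove the extracted files), which is why it sidesteps the size-calculation problem.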

nadako avatar Dec 21 '15 20:12 nadako