
[Feature Request] Add library folders through the UI & migration scripts for breaking changes

Open · shawly opened this issue 2 years ago • 7 comments

Add library folders through the UI

I'd like to suggest adding libraries through the UI. With #63 we already get a "Library mappings" tab, so we could also add an "Add library" button that opens a file browser for the gameyfin.sources folder(s). There would then be no need to restart Gameyfin when adding or removing libraries, and gameyfin.sources would essentially only contain the root folder(s) where the actual libraries are stored.

The reason I'd prefer adding them through the UI: to add all my libraries with the new platform support, I'd have to maintain a list of 26 folders, which is cumbersome through gameyfin.sources (or rather the GAMEYFIN_SOURCES variable, since I use Docker).

Folders listed in gameyfin.sources would then act as the roots for libraries, so the .gameyfin folder with the DB and caches would have to be migrated. IMHO the default path for the .gameyfin dir should be $HOME/.config/gameyfin on Linux, %APPDATA%/gameyfin on Windows, and ~/Library/Application Support/gameyfin on macOS.
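To make the proposal concrete, here's a minimal sketch of how the per-OS default could be resolved. The class and method names are hypothetical, not Gameyfin's actual API:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: resolve the default .gameyfin data directory per OS,
// as proposed above. Not Gameyfin's actual implementation.
class DefaultPaths {
    static Path defaultDataDir() {
        String os = System.getProperty("os.name").toLowerCase();
        String home = System.getProperty("user.home");
        if (os.contains("win")) {
            // %APPDATA%\gameyfin on Windows
            String appData = System.getenv("APPDATA");
            return Paths.get(appData != null ? appData : home, "gameyfin");
        } else if (os.contains("mac")) {
            // ~/Library/Application Support/gameyfin on macOS
            return Paths.get(home, "Library", "Application Support", "gameyfin");
        }
        // $XDG_CONFIG_HOME/gameyfin, falling back to ~/.config/gameyfin, on Linux
        String xdg = System.getenv("XDG_CONFIG_HOME");
        return (xdg != null && !xdg.isEmpty())
                ? Paths.get(xdg, "gameyfin")
                : Paths.get(home, ".config", "gameyfin");
    }
}
```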

I'd like to implement this, but I'm not yet sure how to elegantly handle migration of existing libraries or the .gameyfin folder.

Proposed solution for migration

Like the DB migration with Flyway, Gameyfin should record its version in a migration table, e.g. gameyfin_version_history, with basically the same structure as the flyway_schema_history table. With that we can add migration scripts (I'd suggest Groovy for flexibility) which are executed, similar to Flyway, whenever a new script is detected but no entry for that version is found in the DB.
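A rough sketch of that mechanism, with a Set standing in for the gameyfin_version_history table (all names are illustrative, and version ordering is simplified to lexicographic comparison for brevity):

```java
import java.util.*;

// Illustrative sketch of the proposed version-history mechanism: a set of
// already-applied versions (standing in for gameyfin_version_history) and a
// runner that executes only migrations not yet recorded, Flyway-style.
class MigrationRunner {
    interface Migration {
        String version();   // e.g. "2.0.0"
        void apply();       // file operations, DB changes, etc.
    }

    private final Set<String> appliedVersions;  // would be read from the DB

    MigrationRunner(Set<String> appliedVersions) {
        this.appliedVersions = appliedVersions;
    }

    /** Runs pending migrations in (naive lexicographic) order and records them. */
    List<String> runPending(List<Migration> migrations) {
        List<String> executed = new ArrayList<>();
        migrations.stream()
                .sorted(Comparator.comparing(Migration::version))
                .filter(m -> !appliedVersions.contains(m.version()))
                .forEach(m -> {
                    m.apply();
                    appliedVersions.add(m.version()); // INSERT into history table
                    executed.add(m.version());
                });
        return executed;
    }
}
```

A real implementation would need proper semantic-version ordering and transactional recording, but the core loop is the same as Flyway's: detect scripts, skip recorded versions, apply the rest in order.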

With that we can implement breaking changes like adding support for libraries through the UI, and ship migration scripts that handle file operations like copying existing .gameyfin folders to the user home. I think this will also help with future feature work, since development doesn't have to be constrained by the current architecture.

What do you think @grimsi ?

shawly avatar Oct 23 '22 10:10 shawly

Yes, that feature is currently my third highest priority (after implementing the missing tests and an account system since that is currently the most requested feature).

This would make Gameyfin feel more like Jellyfin/Plex in terms of UX, since they all allow library management in the UI. I think we can even completely remove the gameyfin.sources configuration; Jellyfin, for example, just opens the file browser in the root folder of your system (or the container, if you're using Docker). This would make setting up Gameyfin even easier, since then the only settings you have to set before the first start are the API user and key for IGDB (and those will also be moved into the setup UI in the future).

The default paths should be ok, however we should make sure there are no permission issues (especially when running in a Docker container).

Tbh I never really thought about migration scripts but I think it makes sense. I probably would have just released v2.0.0 and told the users they would have to add their libraries manually after the upgrade, because I simply have no experience with migrations using custom scripts. But it looks like you already know what you're doing and your solution sounds pretty straightforward.

grimsi avatar Oct 23 '22 10:10 grimsi

I probably would have just released v2.0.0 and told the users they would have to add their libraries manually after the upgrade, because I simply have no experience with migrations using custom scripts.

That is indeed the easier solution. But users with huge libraries would have to wait quite a long time for their games to be reindexed, and if the IGDB rate limiting kicks in and the scan effectively fails, they'd have to restart the scan from scratch. I experienced the rate-limiting issue a few times with only two libraries of around 260 games.

Having the possibility to migrate through breaking changes makes the user experience much more comfortable.

But it looks like you already know what you're doing and your solution sounds pretty straightforward.

Alright! I'll give it a try and implement the migration solution when I have time. It's probably a bigger task to get this implemented properly. Adding platform support also took more time than anticipated.

shawly avatar Oct 23 '22 10:10 shawly

That is indeed the easier solution. But users with huge libraries would have to wait quite a long time for their games to be reindexed, and if the IGDB rate limiting kicks in and the scan effectively fails, they'd have to restart the scan from scratch. I experienced the rate-limiting issue a few times with only two libraries of around 260 games.

I also experience this from time to time, and I've spent some time trying to fix it but just can't find a solution. According to the IGDB docs, there is a limit of 4 requests per second (and 8 open in total). Gameyfin already uses only 2 requests per second (and 4 in total), since anything higher triggers the rate limiter quite consistently. To me it looks like the IGDB rate limiter isn't implemented properly, or they changed the limits without updating the docs. If you're experiencing these problems more often, you can override the gameyfin.igdb.api.max-concurrent-requests and gameyfin.igdb.api.max-requests-per-second properties.
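For readers unfamiliar with what those two properties limit, here's a minimal blocking sketch of client-side throttling under both constraints (max concurrent requests plus max requests per second). This is an illustration, not Gameyfin's actual implementation:

```java
import java.util.concurrent.Semaphore;

// Hedged sketch of dual throttling: a semaphore caps in-flight requests,
// a one-second window caps request rate. Not Gameyfin's real code.
class IgdbThrottle {
    private final Semaphore concurrent;
    private final int maxPerSecond;
    private long windowStart = System.currentTimeMillis();
    private int sentThisWindow = 0;

    IgdbThrottle(int maxConcurrent, int maxPerSecond) {
        this.concurrent = new Semaphore(maxConcurrent);
        this.maxPerSecond = maxPerSecond;
    }

    /** Blocks until a request slot is available under both limits. */
    void acquire() throws InterruptedException {
        concurrent.acquire();
        synchronized (this) {
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {      // new one-second window
                windowStart = now;
                sentThisWindow = 0;
            }
            if (sentThisWindow >= maxPerSecond) { // window full: wait it out
                Thread.sleep(1000 - (now - windowStart));
                windowStart = System.currentTimeMillis();
                sentThisWindow = 0;
            }
            sentThisWindow++;
        }
    }

    /** Call when the HTTP response arrives. */
    void release() {
        concurrent.release();
    }
}
```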

Alright! I'll give it a try and implement the migration solution when I have time. It's probably a bigger task to get this implemented properly. Adding platform support also took more time than anticipated.

Take your time and enjoy your free time 😄

grimsi avatar Oct 23 '22 11:10 grimsi

My guess is that the parallelStream used in the LibraryService when processing the games might be causing this issue, but it's just a theory.

Maybe implementing the LibraryService#scanGameLibrary method (or at least the part with the parallelStream) with Reactor would circumvent this issue? Though more rate-limiting problems might arise, because users can now refresh games manually, and with platform support it is technically possible to start multiple scans at once.

The best solution might be to implement a queue so it's not possible to send too many requests at once. I might be wrong, though, since I'm not that familiar with the reactive libraries. I've only touched the reactive stuff when fiddling around with creating a Discord bot; it's really cool, but I still have trouble wrapping my head around implementing async functions. Making reactive code readable and understandable also isn't easy, at least for me. :smile:
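The queue idea doesn't have to involve Reactor at all; a bounded worker pool gives the same guarantee with plain java.util.concurrent. This is a hypothetical sketch (the class name and wiring are not Gameyfin's):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the shared-queue idea: all scans (library scans, manual refreshes,
// per-platform scans) submit their IGDB lookups to one bounded worker pool, so
// the number of in-flight requests can never exceed the limit no matter how
// many scans run concurrently. Names are illustrative.
class IgdbRequestQueue {
    private final ExecutorService workers;

    IgdbRequestQueue(int maxConcurrentRequests) {
        // N workers = at most N IGDB calls in flight; excess work queues up.
        this.workers = Executors.newFixedThreadPool(maxConcurrentRequests);
    }

    <T> Future<T> submit(Callable<T> igdbCall) {
        return workers.submit(igdbCall);
    }

    void shutdown() {
        workers.shutdown();
    }
}
```

The same effect is achievable in Reactor with `flatMap(fn, concurrency)`, but the point is the single shared bottleneck, not the library used to build it.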

shawly avatar Oct 23 '22 12:10 shawly

Maybe implementing the LibraryService#scanGameLibrary method (or at least the part with the parallelStream) with Reactor might circumvent this issue?

This shouldn't be an issue since the WebClient responsible for calling the IGDB API is a singleton. But I will investigate your suspicion, maybe you're correct and I messed something up in the implementation.

grimsi avatar Oct 23 '22 12:10 grimsi

Apologies for just jumping in here without a ton of knowledge about the internals of the project, but is Gameyfin not caching its requests? I'm working on a similar project myself and I plan to cache every request to prevent API rate limiting. I'd imagine the data doesn't change too often, so busting the cache only every couple of weeks should do the trick; reimporting under a new library structure would then only hit the cache rather than the actual API. It would also have the side benefit of making only one request per game even if you had multiple platforms, depending on how you actually search for the games.
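The cache-with-expiry idea described above could look something like this minimal sketch (all names hypothetical, keyed responses with a time-to-live so re-imports hit the cache instead of the API):

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a TTL cache for API responses: entries expire after a fixed
// duration (e.g. a couple of weeks), so repeated lookups for the same game
// within that window never touch the API. Purely illustrative.
class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtMillis) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(Duration ttl) {
        this.ttlMillis = ttl.toMillis();
    }

    /** Returns the cached value, or loads and caches it if absent or stale. */
    V get(K key, Function<K, V> loader) {
        long now = System.currentTimeMillis();
        Entry<V> e = entries.get(key);
        if (e == null || e.expiresAtMillis() <= now) { // miss or expired
            V v = loader.apply(key);
            entries.put(key, new Entry<>(v, now + ttlMillis));
            return v;
        }
        return e.value();
    }
}
```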

freitagdavid avatar Nov 21 '22 18:11 freitagdavid

No, it's actually storing everything in a local database, since the datasets almost never change. It's only refreshed if you explicitly tell Gameyfin to refresh the metadata for a game. There is no extra cache if you delete a library and re-import it, but that should happen very rarely anyway, so while caching every dataset would be possible, I don't think it's necessary.

grimsi avatar Nov 21 '22 19:11 grimsi