Replace OfflinePlayer or improve upon it
According to Paper, using OfflinePlayer triggers the response below, even if you surround the code in a try/catch. Since I'd rather not assume which changes you'd prefer or prioritize, I think an alternative should be implemented to minimize or mitigate this verbose exception being printed. It's rarely useful, given that the user in-game is already notified when no associated user can be found, or the lookup inevitably identifies the correct user anyway.
```
[02:43:45] [Region Scheduler Thread #20/INFO]: Simon70 issued server command: /oe leyeffer
[02:43:45] [Folia Async Scheduler Thread #0/WARN]: Couldn't find profile with name: leyeffer
com.mojang.authlib.exceptions.MinecraftClientHttpException: Couldn't find any profile with name leyeffer
	at com.mojang.authlib.minecraft.client.MinecraftClient.readInputStream(MinecraftClient.java:103) ~[authlib-6.0.58.jar:?]
	at com.mojang.authlib.minecraft.client.MinecraftClient.get(MinecraftClient.java:56) ~[authlib-6.0.58.jar:?]
	at com.mojang.authlib.yggdrasil.YggdrasilGameProfileRepository.findProfileByName(YggdrasilGameProfileRepository.java:116) ~[canvas-1.21.8.jar:?]
	at net.minecraft.server.players.GameProfileCache.lookupGameProfile(GameProfileCache.java:89) ~[canvas-1.21.8.jar:1.21.8-DEV-e040cc5]
	at net.minecraft.server.players.GameProfileCache.get(GameProfileCache.java:156) ~[canvas-1.21.8.jar:1.21.8-DEV-e040cc5]
	at org.bukkit.craftbukkit.CraftServer.getOfflinePlayer(CraftServer.java:2111) ~[canvas-1.21.8.jar:1.21.8-DEV-e040cc5]
	at OpenInv-5.1.15.jar/com.lishid.openinv.util.PlayerLoader.matchExact(PlayerLoader.java:138) ~[OpenInv-5.1.15.jar:?]
	at OpenInv-5.1.15.jar/com.lishid.openinv.util.PlayerLoader.match(PlayerLoader.java:149) ~[OpenInv-5.1.15.jar:?]
	at OpenInv-5.1.15.jar/com.lishid.openinv.command.OpenInvCommand.getTarget(OpenInvCommand.java:128) ~[OpenInv-5.1.15.jar:?]
	at OpenInv-5.1.15.jar/com.lishid.openinv.command.PlayerLookupCommand$1.run(PlayerLookupCommand.java:66) ~[OpenInv-5.1.15.jar:?]
	at OpenInv-5.1.15.jar/com.github.jikoo.openinv.lib.nahu.scheduler-wrapper.implementation.folia.FoliaWrappedScheduler.lambda$runTaskAsynchronously$2(FoliaWrappedScheduler.java:101) ~[OpenInv-5.1.15.jar:?]
	at io.papermc.paper.threadedregions.scheduler.FoliaAsyncScheduler$AsyncScheduledTask.run(FoliaAsyncScheduler.java:226) ~[canvas-1.21.8.jar:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1090) ~[?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614) ~[?:?]
	at java.base/java.lang.VirtualThread.run(VirtualThread.java:456) ~[?:?]
```
Frankly, this is a bit of a Paper issue. I assume Spigot is affected too, as it appears to originate from Mojang code, though I'll admit I didn't check whether Paper added it. Forcing OI and others to rework previously-functioning, API-based player lookups into NMS-based ones just to check whether a player exists without tripping alarms is a regression. The only API-based method that won't cause this now is iterating over every single offline player, which is the nuclear option I want to move away from in general. They shouldn't be printing a stack trace like this during a normal API method call; or, if that's unavoidable without modifying a high-flux area of Mojang's code, they should add new APIs to cover the cases that cause an unpreventable stack trace. That said, user cache access via the API has always been poor to nonexistent.
As a temporary fix, you can safely add a log filter to ignore these, but that's not good practice.
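For anyone who wants the stopgap: Log4j 2 (which the server console uses) supports regex-based filters in its configuration. A minimal sketch, assuming the server is launched with a custom config via `-Dlog4j.configurationFile`; the appender name and surrounding layout are illustrative, not copied from any real server config:

```xml
<!-- Illustrative log4j2 fragment: drop the noisy profile-lookup warning
     before it reaches the console appender. -->
<Console name="Console" target="SYSTEM_OUT">
    <RegexFilter regex=".*Couldn't find profile with name.*"
                 onMatch="DENY" onMismatch="NEUTRAL"/>
</Console>
```

Note this suppresses the message globally, not just for OpenInv's lookups, which is part of why it's poor practice.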
A full replacement will likely just be #267, creating our own full user cache. I've done some toying with it (though I don't think I pushed that branch because I wasn't entirely sure how I wanted it to look), and maintaining our own name+UUID table is inexpensive and should work fine. Unfortunately, the only easy way to do fuzzy matching as we were requires fetching all entries from the database. The more elegant solution of doing matching on the SQLite side requires loading the Spellfix1 extension, which in turn introduces the need for per-system code and binary loading. It gets messy very quickly.
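For concreteness, the fetch-all-then-match approach mentioned above can be sketched in plain Java. The class and method names here are hypothetical, and a real implementation would pull the name list from the SQLite table rather than taking a collection:

```java
import java.util.Collection;
import java.util.List;

public class FuzzyNameMatcher {

    // Classic dynamic-programming Levenshtein edit distance.
    static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) {
            prev[j] = j;
        }
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
            }
            int[] swap = prev;
            prev = curr;
            curr = swap;
        }
        return prev[b.length()];
    }

    // Fetch-all-then-match: pick the cached name closest to the query.
    static String bestMatch(String query, Collection<String> cachedNames) {
        String best = null;
        int bestDistance = Integer.MAX_VALUE;
        for (String name : cachedNames) {
            int d = distance(query.toLowerCase(), name.toLowerCase());
            if (d < bestDistance) {
                bestDistance = d;
                best = name;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // prints leyeffer1
        System.out.println(bestMatch("leyeffer", List.of("Simon70", "leyeffer1")));
    }
}
```

This is O(names × query length × name length) per lookup, which is the cost you'd be dodging by pushing the matching into SQLite via Spellfix1.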
I had been tempted previously to add results of fuzzy-matching to a separate table for ease, but I ended up dropping the idea. Some fuzzy matches are definitely worth preserving, like not knowing the exact numbers at the end of a name and omitting them, but a lot are typos that aren't made twice. Maybe it would be more useful if entries were timestamped so we could drop older/unused results faster? That's a bit of an implementation detail that could be added later though. The initial raw name+UUID would be the most important part.
Yeah, I've been outright replacing the method with PlayerProfiler or some other advanced alternative in my plugins to produce more or less the same result. I can try figuring something out on my own and PRing it, if that works for you.
Sorry about the slow reply. If you have a stand-in that's already compatible with Paper and Spigot that would be ideal. If not, I think I'd like to push out what I had been working on a while back (assuming I don't hate what I did when I look at it again) and get your opinion on the abstraction, because then we'd be free to add Spigot-/Paper-/our own DB-backed implementations after the fact.
Okay, so I don't think it's worth pushing my previous implementation, because there were a ton of unknowns I needed to experiment with on the SQLite backend side and I never actually hooked it up to the real lookup process. In terms of API, it basically boiled down to an interface with methods to add a profile, push pending profiles for shutdown, fetch an exact profile, and fetch the best matching profile. I also had a small record for profiles that hashed on name alone, so identically-named entries collide for ease of lookup. I'll try to squeeze that in this coming week, but boy howdy is my capstone consuming a lot of time.
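To make the shape concrete, the description above might look something like the following; every identifier here is hypothetical, since the actual branch was never pushed:

```java
import java.util.Optional;
import java.util.UUID;

// Hypothetical record: equality and hashing intentionally consider the name
// only, so identically-named entries collide in hash-based collections.
record CachedProfile(String name, UUID id) {
    @Override
    public boolean equals(Object o) {
        return o instanceof CachedProfile other && name.equalsIgnoreCase(other.name);
    }

    @Override
    public int hashCode() {
        return name.toLowerCase().hashCode();
    }
}

// Hypothetical interface mirroring the four operations described above.
interface ProfileCache {
    void add(CachedProfile profile);                  // record or update a name+UUID pair
    void flush();                                     // push pending entries, e.g. during shutdown
    Optional<CachedProfile> findExact(String name);   // exact (case-insensitive) lookup
    Optional<CachedProfile> findClosest(String name); // best fuzzy match, if any
}
```

Returning `Optional` keeps "no such player" an ordinary result instead of an exception, which is the whole point of sidestepping the Paper lookup path.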