Dependency solver does not detect infinite loops during dependency resolution
- Poetry version: 1.3.2
- Python version: 3.9
- OS version and name: Ubuntu 22, inside a py39 conda environment
pyproject.toml (just the relevant part):

```toml
python = "~3.8.0 || ~3.9.0"
torch = [
    {version = "1.9.0+cu111", markers = "python_version <= '3.6' and platform_machine == 'x86_64' and platform_system == 'Linux'"},
    {version = "1.9.0+cu111", markers = "python_version > '3.6' and platform_machine == 'x86_64' and platform_system == 'Linux'"},
    {version = "~1.9.0", markers = "python_version > '3.6' and platform_machine == 'aarch64' and platform_system == 'Linux'"},
]
torchvision = [
    {version = ">=0.7.0,<1.0", markers = "python_version <= '3.6' and platform_machine == 'x86_64' and platform_system == 'Linux'"},
    {version = "0.10", markers = "python_version > '3.6' and platform_machine == 'x86_64' and platform_system == 'Linux'"},
    {version = "0.10.0+4.9.253tegra", markers = "python_version > '3.6' and platform_machine == 'aarch64' and platform_system == 'Linux'"},
]
```
- [x] I am on the latest stable Poetry version, installed using a recommended method.
- [x] I have searched the issues of this repo and believe that this is not a duplicate.
- [x] I have consulted the FAQ and blog for any relevant entries or release notes.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option) and have included the output below.
Issue
Dependency resolution with `poetry lock` enters an infinite loop that is silent unless you pass `-vvv` and watch what is going on:
```
1: derived: not torchvision (==0.10.0+4.9.253.tegra)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on numpy (*)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on torch (1.10.0+4.9.253-tegra)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on pillow (>=5.3.0)
1: derived: not torchvision (==0.10.0+4.9.253.tegra)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on numpy (*)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on torch (1.10.0+4.9.253-tegra)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on pillow (>=5.3.0)
1: derived: not torchvision (==0.10.0+4.9.253.tegra)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on numpy (*)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on torch (1.10.0+4.9.253-tegra)
1: fact: torchvision (0.10.0+4.9.253.tegra) depends on pillow (>=5.3.0)
```
After running `poetry lock` with just torch in pyproject.toml, followed by `poetry add torchvision`, I get:

```
$ poetry add torchvision
Using version ^0.14.1 for torchvision

Updating dependencies
Resolving dependencies... (0.0s)

Because no versions of torchvision match >0.14.1,<0.15.0
 and torchvision (0.14.1) depends on torch (1.13.1), torchvision (>=0.14.1,<0.15.0) requires torch (1.13.1).
So, because robovision-algorithms-solov2 depends on both torch (1.9.0+cu111) and torchvision (^0.14.1), version solving failed.
```
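In other words, the two constraints are mutually unsatisfiable: torch is pinned to 1.9.0+cu111 while torchvision ^0.14.1 pins torch to exactly 1.13.1. A minimal sketch of the conflict (ignoring PEP 440 subtleties beyond stripping the local `+cu111` segment):

```python
def release(version):
    """Parse 'X.Y.Z+local' into a comparable tuple, dropping the local part."""
    return tuple(int(part) for part in version.split("+")[0].split("."))

pinned_torch = "1.9.0+cu111"   # the torch pin in pyproject.toml
torchvision_needs = "1.13.1"   # the torch version required by torchvision 0.14.1

# The two requirements can never be satisfied together,
# hence "version solving failed".
assert release(pinned_torch) != release(torchvision_needs)
```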
So somehow Poetry picked a version of torchvision that is not compatible with the version of torch it had already resolved.
The dependency resolver should be able to detect cycles like this and terminate with an error, even if it can't determine the cause of the cycle. I'm not familiar with the dependency solver algorithm, but perhaps it would be enough to detect that the same fact has been derived twice and bail?
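That bail-out idea could look something like the following sketch. This is hypothetical, not Poetry's actual PubGrub implementation; `derive_next` stands in for one step of the real solver:

```python
def solve(derive_next, max_steps=10_000):
    """Run derivation steps, bailing out if the same fact is derived twice.

    `derive_next` is a hypothetical stand-in for one resolution step; it
    returns a hashable description of the derived fact, or None when
    resolution has finished.
    """
    seen = set()
    for _ in range(max_steps):
        fact = derive_next()
        if fact is None:
            return "solved"
        if fact in seen:
            # The same derivation appeared twice: report instead of looping.
            raise RuntimeError(f"resolution loop detected: {fact!r} derived twice")
        seen.add(fact)
    raise RuntimeError("step limit exceeded")
```

A real solver may legitimately revisit a package at different versions, so the "fact" would need to capture the full derivation state, but even a coarse check like this would turn the silent loop above into an error.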
@jonathanpeppers FYI
We've moved this issue to the Backlog milestone. This means that it is not going to be worked on for the coming release. We will reassess the backlog following the current release and consider this item at that time. To learn more about our issue management process and to have better expectation regarding different types of issues you can read our Triage Process.
Btw: how do we profile memory allocations in MAUI on iOS and Mac Catalyst? The Xamarin Profiler does not work for MAUI applications.
I wrote a thing to be able to do this on Android:
https://github.com/jonathanpeppers/Mono.Profiler.Android
I built `libmono-profiler-log.so` from dotnet/runtime, which is not shipped or redistributed anywhere. It has the same old hooks into the Mono runtime that the Xamarin Profiler used.
Right now, this is the only way for apps running on Mono to get memory information. They are investigating whether this is something that can be added to `dotnet-trace` and `dotnet-dsrouter` in the future.
@jonathanpeppers do you know if it can be done for iOS also? Thanks
@PureWeen I saw that you added this to the Backlog (I still have an issue opened in June 2022 sitting in the Backlog). If this is really a leak, it is a big one, because it means that every page leaks on iOS and Mac Catalyst. I encountered this while trying to debug a problem for one of my clients: the app has some really big pages, and after a while I receive applicationDidReceiveMemoryWarning in the AppDelegate and then the application crashes.
I also modified the sample and tried it in release mode, and I see the same issue. Windows and Android don't have the problem, as they GC SecondPage.
@danardelean you might try Instruments, but it will likely only tell you if memory is growing and not what objects.
I saw an interesting sample here:
https://github.com/ivan-todorov-progress/maui-collection-view-memory-leak-bug
You can try their attached property `MemoryTracker.IsTracked="True"` to find out if an object is leaking.
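The core of such a tracker is just a collection of weak references: after forcing a GC, any reference that still resolves points at a leaked object. A minimal analogue of the idea in Python (the linked sample itself is C#/XAML; names here are illustrative):

```python
import gc
import weakref


class MemoryTracker:
    """Track objects by weak reference and report which are still alive."""

    def __init__(self):
        self._refs = []

    def track(self, obj):
        # A weak reference does not keep the object alive by itself.
        self._refs.append(weakref.ref(obj))

    def alive_count(self):
        # Force a full collection first, like GC.Collect() in the sample.
        gc.collect()
        return sum(1 for ref in self._refs if ref() is not None)


class Page:  # stand-in for a MAUI page
    pass


tracker = MemoryTracker()
page = Page()
tracker.track(page)
assert tracker.alive_count() == 1  # still referenced: alive
del page
assert tracker.alive_count() == 0  # unreferenced and collected: no leak
```

If `alive_count()` keeps growing after navigating away from pages, something is still holding strong references to them.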
@danardelean you should review my recent PRs and the tests I'm adding, lol.
@rolfbjarne does the debugger "work" with breakpoints in the finalizer? Maybe that is just being collected and the debugger is not being notified/break/something?
It could be one of these actual memory leaks:
- https://github.com/dotnet/maui/pull/13260
- https://github.com/dotnet/maui/pull/13327
- https://github.com/dotnet/maui/pull/13333
- https://github.com/dotnet/maui/pull/13400
- https://github.com/dotnet/maui/pull/13530
- https://github.com/dotnet/maui/pull/13550
- https://github.com/dotnet/maui/pull/13656
I will retest this one after #13656 is merged.
> @rolfbjarne does the debugger "work" with breakpoints in the finalizer? Maybe that is just being collected and the debugger is not being notified/break/something?
The debugger should work in finalizers, but I'm untrusting of debuggers by nature, and would use some other method to report if objects are collected (Console.WriteLine tends to work well).
@rolfbjarne If you look at the updated sample, I am using a label with the number of instances still in memory. Just to be sure this is not an error of the interpreter on iOS and macOS, I ran the sample in release mode and the instances are still not freed, so it would seem the memory leak is real.
In the sample above, I just changed:

```csharp
private async void OnCounterClicked(object sender, EventArgs e)
{
    GC.Collect();
    GC.WaitForPendingFinalizers();
    await Shell.Current.GoToAsync($"/{nameof(SecondPage)}");
}
```
And I consistently only see 1 or 2 pages alive on Windows and Android.
On iOS and Mac Catalyst the number increases by one each time you navigate, which would indicate `Page`s are leaking.
The page is really simple:

```xml
<ContentPage ...>
    <VerticalStackLayout>
        <Label
            x:Name="lblInstances"
            Text="Welcome to .NET MAUI!"
            VerticalOptions="Center"
            HorizontalOptions="Center" />
    </VerticalStackLayout>
</ContentPage>
```
So this must be related to all iOS/MacCatalyst pages, or perhaps shell navigation?
I can also reproduce it if I remove the Shell route and just do `Navigation.PushAsync(new SecondPage());`.
So this must be something that happens for all `Page`s.
> @danardelean you might try Instruments, but it will likely only tell you if memory is growing and not what objects.
> I saw an interesting sample here:
> https://github.com/ivan-todorov-progress/maui-collection-view-memory-leak-bug
> You can try their attached property `MemoryTracker.IsTracked="True"` to find out if an object is leaking.
I updated to 8.0.0-preview.3.8149. I am using BindableLayout in a StackLayout and added `MemoryTracker.IsTracked="True"` to check for leaking objects. I test by going to the page and back many times. On iOS, the alive-object count always increments until the app crashes.
I also tried it on Xamarin, and there the alive-object count decreased after a few tries.
Hello lovely human, thank you for your comment on this issue. Because this issue has been closed for a period of time, please strongly consider opening a new issue linking to this issue instead to ensure better visibility of your comment. Thank you!