Results: 17 comments of Dilip Parasu

Just by looking at it, I see one fan for the CPU and one for the GPU. Lemme know if looks are deceiving 🧐 And I am not removing the back cover, because warranty....

Turbo mode is able to ramp up my fans! But I don't see the overclock being applied. I tried running a CUDA-accelerated AI model training and monitored the GPU...

There is a 100 MHz overclock on both the GPU core and memory clocks on the Windows side, but not on the Linux side. And I don't think that is gonna do a huge...

I am now getting this error; lemme fix it and I'll let you know ``` [ 6062.793714] facer: version magic '5.15.5-zen1-1-zen SMP preempt mod_unload ' should be '5.15.6-zen2-1-zen SMP preempt mod_unload...
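For context, that `version magic` error means the `facer` module was built against a different kernel release than the one currently running. As a rough illustration (this is not the kernel's actual C code), the check amounts to strict string equality on the embedded vermagic strings:

```python
# Rough sketch of the kernel's vermagic check: a module only loads if its
# embedded version-magic string matches the running kernel's exactly.
def vermagic_matches(module_vermagic: str, kernel_vermagic: str) -> bool:
    # Effectively whole-string equality, so even a minor kernel bump
    # (5.15.5-zen1 -> 5.15.6-zen2) counts as a mismatch.
    return module_vermagic.strip() == kernel_vermagic.strip()

built_against = "5.15.5-zen1-1-zen SMP preempt mod_unload"
running = "5.15.6-zen2-1-zen SMP preempt mod_unload"
print(vermagic_matches(built_against, running))  # False: rebuild needed
```

The practical fix is rebuilding the module against the running kernel headers (for example via DKMS, if the project supports it) after every kernel update.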

Couldn't get benchmark mode in Geeks3D working properly, so I tested with Unigine. Here are the results. Just fan at 100% ![Default(Fan_100)](https://user-images.githubusercontent.com/88489071/145323353-ffd4db4c-fc7d-466c-a67a-ed0234caa999.png) Turbo Mode ON ![Default(Turbo)](https://user-images.githubusercontent.com/88489071/145323385-1b6a2194-911e-42a0-9d6a-35b546ff026d.png) During testing, I...

I used PRIME render offload to offload it to the GPU, and I had the GPU monitored the whole time. I did see the GPU being used. As for why Unigine...
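For anyone reproducing this: PRIME render offload on the NVIDIA driver is controlled by two documented environment variables. A minimal sketch (the benchmark binary name is a placeholder; substitute your own):

```shell
# PRIME render offload: run one app on the NVIDIA dGPU while the
# desktop stays on the iGPU. These env vars are the documented
# NVIDIA mechanism for GLX applications.
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia

# Placeholder for the benchmark binary, e.g.:
# ./unigine-heaven

# In another terminal, confirm the dGPU is actually in use:
# nvidia-smi --query-gpu=utilization.gpu --format=csv -l 1

echo "$__NV_PRIME_RENDER_OFFLOAD $__GLX_VENDOR_LIBRARY_NAME"
```

Some distros ship a `prime-run` wrapper that sets the same variables for you.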

If you have some time, try making your own custom template. https://github.com/FriendsOfFlarum/upload/wiki/Custom-Templates

Thanks for merging the PR! (https://github.com/huggingface/deep-rl-class/pull/19) Closing the issue now :)

This can also be used to implement ONNX-to-PyTorch conversion (https://github.com/nebuly-ai/nebullvm/blob/main/nebullvm/operations/conversions/converters.py#L126). Or is it implemented elsewhere?
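For reference, a minimal sketch of how that conversion path can be wrapped. The `onnx2torch.convert` entry point is the library's real API; the helper name and the lazy import are my own, so the snippet only requires the package when actually called:

```python
def load_onnx_as_torch(onnx_path: str):
    """Convert an ONNX model file into a torch.nn.Module via onnx2torch."""
    # Imported lazily so onnx2torch is only a hard dependency when called.
    from onnx2torch import convert
    return convert(onnx_path)

# Usage (assuming 'model.onnx' exists and onnx2torch is installed):
# torch_model = load_onnx_as_torch("model.onnx")
# torch_model.eval()
```

`convert` also accepts an in-memory `onnx.ModelProto`, so the same wrapper pattern works if the model is already loaded rather than on disk.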

I've started working on it (by directly importing the onnx2torch module). Not sure if I can assign myself to this, because it's gonna take me some time to understand the converters API...