Results 111 comments of matbee

> Hi! I noticed this issue and wanted to share that I've implemented inverse kinematics support, including:
>
> * Forward kinematics via DH parameters.
> * Analytical Jacobian computation....
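For context, a minimal sketch of what "forward kinematics via DH parameters" typically means, using the standard Denavit-Hartenberg convention; this is an illustration of the general technique, not the implementation referenced in the quoted comment:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform (4x4, row-major lists)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain per-joint transforms; dh_params is a list of (theta, d, a, alpha)."""
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for params in dh_params:
        T = mat_mul(T, dh_transform(*params))
    return T

# Two-link planar arm, both link lengths a = 1, both joints at theta = 0:
# the end effector sits at x = 2, y = 0.
T = forward_kinematics([(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)])
```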

From some research, the reason it's not supported on SDXL is that the "sd-image-variations" model was only finetuned on SD 1.5. I believe there's been an SDXL (or SD...

> [@pkooij](https://github.com/pkooij) Hi, that might be a silly question. I have a pair of (follower and leader) arms, and I want to update to the so101 version. But how...

What exactly was done to install mkl support manually?

Any idea when these PRs can land?

> > Hi @zRzRzRzRzRzRzR
> >
> > But as you mentioned in your paper, you already have an image-to-video version of CogVideoX
> >
> > [image]
>
> Yes, the above reply...

I'm also having this issue; it happened on my desktop while trying to install, with UEFI GPT/MBR conflicting with my system. It occurred while I was trying to make a bootable Windows...

I've got a (hopefully) working win10e version here: https://github.com/matbee-eth/WindowsAgentArena/tree/win10e-support It does seem like the issue is related to GPT/MBR or the VirtIO drivers, but I'm unable to figure out why. Probably...

Ollama integration isn't the way; OpenAI API support is the way. Simply allow a configurable model, base URL, and API key. This will support any reputable local/self-hosted LLM service.
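A minimal sketch of what that configuration could look like, built only from the three settings named above (the class and method names here are hypothetical, not from any particular project):

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    """Settings for any OpenAI-compatible endpoint: model, base URL, API key."""
    model: str
    base_url: str
    api_key: str

    def chat_request(self, messages):
        """Build the URL, headers, and JSON body for a chat-completions POST."""
        url = f"{self.base_url.rstrip('/')}/chat/completions"
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }
        body = {"model": self.model, "messages": messages}
        return url, headers, body

# Pointing base_url at a local service (Ollama exposes an OpenAI-compatible
# endpoint under /v1) or any hosted provider requires no code changes:
cfg = LLMConfig(model="llama3",
                base_url="http://localhost:11434/v1",
                api_key="not-needed-locally")
url, headers, body = cfg.chat_request([{"role": "user", "content": "hi"}])
```

Because the request shape is the same everywhere, swapping providers is just a matter of changing the three fields.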

Couldn't you simply use: `CUDA_HOME=/usr/local/cuda-11.7/ python ....`
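If the override needs to apply to more than a single command, the same idea can be expressed as exported variables for the whole shell session (a sketch assuming the toolkit is installed at the usual `/usr/local/cuda-11.7` path):

```shell
# Select a specific CUDA toolkit without changing the system default.
export CUDA_HOME=/usr/local/cuda-11.7
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
# nvcc --version   # would now report the 11.7 toolkit, if installed
```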