Nick Petalas

18 comments by Nick Petalas

> ```
> use {'ms-jpq/chadtree', branch = 'chad', run = 'python3 -m chadtree deps' }
> ```
>
> See packer [docs](https://github.com/wbthomason/packer.nvim#quickstart). You can find there how to use `branch`...

@alexcrichton @s1gtrap @peter9477 Hello, could you help me out with a vaguely related issue please? If not, I can open a new question. I have an HtmlCanvasElement, an associated CanvasRenderingContext2d and...

Not on screen but yes, I want to maintain a 'live' reference to the underlying buffer, and modify it by calling methods on the associated context. I think you're right...

> Yea I highly doubt you'd get away with that for the time being (interesting proposal over at [whatwg/html#5173](https://github.com/whatwg/html/issues/5173)). It's not even a rust thing, just a matter of limited...

> Use half precision, it boost your performance around 50%

I have tried removing the `--no-half` and `--precision full` args, but it barely makes a difference in my case, ~16.6 vs...
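For a rough sense of what half precision should buy on its own, here is a minimal throughput sketch, assuming a CUDA-capable GPU and a recent PyTorch; the matmul workload, tensor sizes, and step count are arbitrary placeholders, not the webui's actual sampler loop:

```python
import time
import torch

def bench(dtype: torch.dtype, steps: int = 100) -> float:
    """Return iterations/second for a dummy matmul workload in the given dtype."""
    a = torch.randn(4096, 4096, device="cuda", dtype=dtype)
    b = torch.randn(4096, 4096, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(steps):
        _ = a @ b
    torch.cuda.synchronize()
    return steps / (time.perf_counter() - start)

if __name__ == "__main__":
    # On tensor-core GPUs fp16 should run well ahead of fp32; if the two
    # numbers come out close, half precision isn't the bottleneck, which
    # would match the "barely makes a difference" observation above.
    print(f"fp32: {bench(torch.float32):.1f} it/s")
    print(f"fp16: {bench(torch.float16):.1f} it/s")
```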

@aliencaocao I've updated the PR; it now upgrades fully to cu117 using xformers 0.0.16rc395. This all seems compatible, but performance is still not great. @ninele7 mentioned [maybe building with cuda 11.8](https://github.com/ninele7/xfromers_builds/pull/1#issuecomment-1363609348)...
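As a quick sanity check that the venv is really picking up the cu117 torch build and the intended xformers wheel (the versions in the comments are just examples), something like this can be run from the webui's environment; it is a generic sketch, not part of the PR:

```python
import torch
import xformers

# Print the versions that are actually importable from the active environment.
print("torch   :", torch.__version__)             # e.g. 1.13.1+cu117
print("cuda    :", torch.version.cuda)            # CUDA version torch was built against
print("cudnn   :", torch.backends.cudnn.version())
print("xformers:", xformers.__version__)          # e.g. 0.0.16rc395
print("gpu     :", torch.cuda.get_device_name(0))
```

Recent xformers releases also ship `python -m xformers.info`, which prints a similar report, including whether memory-efficient attention is usable on the installed GPU.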

> > [ninele7/xfromers_builds#1 (comment)](https://github.com/ninele7/xfromers_builds/pull/1#issuecomment-1363609348)
>
> I have been using xformers on cuda 11.8 since this repo existed, with cu117 torch packages. No issues so far.

@aliencaocao good to know,...

> Yes i am building myself, but only for 0.14.0 as I am facing some errors when building 0.15.0 and newer.

Currently using the official 0.16 wheels built on 11.7...

> I dont feel its slow. I am on 3080ti.

I mean it's all relative, but knowing 25+ it/s is possible, 11 it/s is slow imo; basically not getting the full...

> My current setup IS your branch lol. I do ML myself so I already have torch+cu117 installed before this repo even existed. I use torch 1.13.1+cu117 though.

Sorry, I...