Deleted user

Results 142152 comments of Deleted user

@zpcore

```python
import torch_xla, diffusers, builtins, imageio, os, PIL.Image, controlnet_aux, sys, torch
os.environ.pop('TPU_PROCESS_ADDRESSES')
reader = imageio.get_reader('/kaggle/input/controlnet/pose.mp4', 'ffmpeg')
openpose = controlnet_aux.DWposeDetector(det_config='yolox_l_8xb8-300e_coco.py', pose_config='dwpose-l_384x288.py')
poses = [openpose(PIL.Image.fromarray(reader.get_data(_)).resize((512, 768))) for _ in builtins.range(16)]  # reader.count_frames()
```
...

@zpcore @JackCaoG I tried the code in https://www.kaggle.com/code/chaowenguoback/stablediffusion/notebook?scriptVersionId=201780362

```bash
%%bash
python3 -m pip install -U imageio[ffmpeg] controlnet-aux openmim
mim install mmengine mmcv mmdet mmpose
curl -O https://raw.githubusercontent.com/huggingface/controlnet_aux/master/src/controlnet_aux/dwpose/dwpose_config/dwpose-l_384x288.py
curl -O https://raw.githubusercontent.com/huggingface/controlnet_aux/master/src/controlnet_aux/dwpose/yolox_config/yolox_l_8xb8-300e_coco.py
```
...

I want to run Stable Diffusion specifically. I cannot find any Stable Diffusion code in https://github.com/pytorch/xla/tree/master/examples. I want a minimal working example for Stable Diffusion in Hugging Face...

Could you please give me a minimal working example for Stable Diffusion that uses all the TPU cores of a Kaggle v3-8, not just one TPU core?

@zpcore I do not need to train the model; I need to use the TPU to generate pictures. Right now I can only run on 1 TPU core. I need to use...
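Since the goal above is inference across all eight cores of a Kaggle v3-8 rather than training, the usual torch_xla pattern is to spawn one process per core and have each process handle its own slice of the frames. The slicing itself is plain Python; below is a minimal sketch of that sharding logic (the helper name `shard_indices` is my own, not part of torch_xla):

```python
def shard_indices(num_items, num_workers, worker_id):
    """Return the item indices assigned to one worker.

    Round-robin split: item i goes to worker (i % num_workers).
    Inside an xmp.spawn worker, worker_id would typically come from
    torch_xla.core.xla_model.get_ordinal().
    """
    return [i for i in range(num_items) if i % num_workers == worker_id]

# 16 video frames split across the 8 cores of a v3-8:
shards = [shard_indices(16, 8, core) for core in range(8)]
# Each core ends up with exactly 2 frames, e.g. core 0 gets [0, 8].
```

Each process would then run the pipeline only on its own shard, which is what keeps all eight cores busy instead of one.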

@Inbal-Tish this was added in your change: https://github.com/wix/react-native-calendars/pull/2607 cc @ethanshar

I've read those conventions, but I just wanted to make things a bit clearer about the types already present in the repo. I know that `cstdint` is C++11-only...

I haven't been able to fix it, but I noticed a few interesting things. The method is `nquads` (meaning four parameters), but the SKOS definition has three arguments, not four. I tried to...
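For context on the three-versus-four mismatch: N-Quads is N-Triples plus an optional fourth term naming the graph, so a three-term SKOS statement is still a valid quad once the graph slot is filled in (or left as the default graph). A tiny illustrative sketch (the helper `to_quad` is hypothetical, not from the library under discussion):

```python
def to_quad(triple, graph=None):
    """Pad an (s, p, o) triple to an (s, p, o, g) quad.

    In N-Quads the fourth term (the graph label) is optional, so a
    plain triple maps to a quad with graph=None, i.e. the default graph.
    """
    s, p, o = triple
    return (s, p, o, graph)

to_quad(("ex:concept", "skos:prefLabel", '"Cat"'))
# -> ("ex:concept", "skos:prefLabel", '"Cat"', None)
```

So a quad-oriented method receiving a three-argument SKOS definition is not necessarily a bug by itself; the question is whether the caller supplies the graph term.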

The bug is reproducible on the git version too.