hysts

Results 32 comments of hysts

Hi, @patrickvonplaten Thanks for the reply. In the training loop, `noise` and `timesteps` are sampled using the global seed. https://github.com/huggingface/diffusers/blob/92b6dbba1a25ed27b0bae38b089715132c7e6bcc/examples/train_unconditional.py#L125-L130 So every time this condition https://github.com/huggingface/diffusers/blob/92b6dbba1a25ed27b0bae38b089715132c7e6bcc/examples/train_unconditional.py#L162 is met, the global seed...
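A minimal sketch of the issue being described (the `draw` helper is a hypothetical stand-in for the `noise`/`timesteps` sampling in the training loop, not code from `diffusers`): any extra draw from the global RNG, e.g. by a validation or sampling step, shifts every subsequent training-loop sample.

```python
import torch

def draw():
    # stand-in for the per-step noise / timesteps sampling,
    # which uses the global RNG seeded by torch.manual_seed
    return torch.randn(2), torch.randint(0, 1000, (1,))

# run A: two training steps, nothing else touches the RNG
torch.manual_seed(123)
a1 = draw()
a2 = draw()

# run B: same seed, but something (e.g. a validation/sampling pass)
# consumes the global RNG between the two training steps
torch.manual_seed(123)
b1 = draw()
_ = torch.randn(4)  # extra RNG consumption
b2 = draw()

# the first draws match; the second diverge, so training with and
# without validation follows different random trajectories
```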

Let me add a concrete toy example.

1. In the case without val:

```python
torch.manual_seed(123)
# train loop
for epoch in range(3):
    print(f'--- {epoch=} ---')
    # mini batches
    for i...
```

Sure. I'll look into it and make a PR.

@alvitawa You can also download the pretrained models from Hugging Face Model Hub. https://huggingface.co/THUDM/CogView2

Hi, @younesmch I think both the gaze vector and the head pose are in the camera coordinate system. So I think we can get the gaze point on the screen with the...
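One plausible way to get the gaze point on the screen from quantities in the camera coordinate system is a ray-plane intersection. The following is a hedged sketch, not code from the repository: the function name, the eye position, and the screen plane parameters are all assumptions for illustration.

```python
import numpy as np

def gaze_point_on_screen(eye_center, gaze_vector, plane_normal, plane_point):
    """Hypothetical helper: intersect the gaze ray with the screen plane.

    All vectors are assumed to be expressed in the camera coordinate system.
    """
    denom = np.dot(plane_normal, gaze_vector)
    if abs(denom) < 1e-8:
        return None  # gaze is parallel to the screen plane
    t = np.dot(plane_normal, plane_point - eye_center) / denom
    if t < 0:
        return None  # the screen is behind the eye along the gaze direction
    return eye_center + t * gaze_vector

# toy example: screen on the plane z = 0, eye 0.5 m in front of it,
# looking straight back toward the screen
p = gaze_point_on_screen(
    eye_center=np.array([0.0, 0.0, 0.5]),
    gaze_vector=np.array([0.0, 0.0, -1.0]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
    plane_point=np.zeros(3),
)
```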

@younesmch @tomilles I experimented to see how the predicted gaze point on the screen shifts depending on the head pose, and the following are the results: ![](https://user-images.githubusercontent.com/25161192/151332841-e2b93c1f-56e2-431a-95f9-9b74fc82b314.png) I took 200...

@younesmch I forgot to mention, but I used a model pretrained on the ETH-XGaze dataset, which covers a much wider range of gaze and head directions, in the above experiment. The...

Hi, @ffletcherr Oh, that's wonderful! Thank you very much for the information. Sorry for not updating anything on this issue. I've been busy recently and haven't had time to do...

Hi, @Hyrtsi When using the `python:3.9.13-slim` docker image, you can install `ptgaze` without problems with the following steps:

```bash
apt update
apt install -y git build-essential cmake
git clone https://github.com/hysts/pytorch_mpiigaze_demo
cd...
```

Oh, forget what I said earlier. It seems the installation actually failed. I'll look into it, but it may take some time.