Joosep Pata

Results 115 comments of Joosep Pata

The current timing on a machine with 1xA100 is as follows:

![image](https://github.com/jpata/particleflow/assets/69717/2b376025-0c32-43b5-b66f-5b71313750ae)

This is using the model `mlpf_21M_attn2x6x512_bs40_relu_tt_qcd_zh400k_checkpoint25_1xa100_fp32_fused.onnx` and 10 events in a single thread. Note that this is not...
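A minimal per-event timing harness in this spirit (stdlib only; `run_event` is a placeholder standing in for a call such as `onnxruntime.InferenceSession.run` on one event — the dummy model below is purely illustrative):

```python
import statistics
import time

def time_inference(run_event, events):
    """Time run_event on each event; return per-event wall-clock times in seconds."""
    timings = []
    for event in events:
        t0 = time.perf_counter()
        run_event(event)  # in practice: session.run(None, {"input": event})
        timings.append(time.perf_counter() - t0)
    return timings

# Usage with a dummy callable standing in for the ONNX session
events = [list(range(100)) for _ in range(10)]
timings = time_inference(lambda ev: sum(ev), events)
print(f"mean per-event time: {statistics.mean(timings):.2e} s over {len(timings)} events")
```

Single-threaded measurements like the one above are easy to reason about; with `onnxruntime`, thread counts would be pinned via `SessionOptions` before constructing the session.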

Now we can try this out in pytorch: https://github.com/jpata/particleflow/issues/235

Here's a paper where the idea is described: http://arxiv.org/pdf/0712.4250.pdf. Here's the Higgs boson discovery plot, where this is used to draw the error bars and also to infer the statistical uncertainty...
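For reference, the frequentist central interval for a Poisson-distributed bin count (the Garwood construction, which I believe is what such discovery plots typically use for the data-point error bars) can be computed from chi-squared quantiles; a SciPy sketch, not the paper's exact recipe:

```python
from scipy.stats import chi2

def poisson_central_interval(n, cl=0.6827):
    """Garwood central confidence interval for a Poisson mean, given n observed counts."""
    alpha = 1.0 - cl
    # Lower edge is 0 when no events are observed
    lo = 0.0 if n == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * n)
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (n + 1))
    return lo, hi

lo, hi = poisson_central_interval(0)
# n=0 gives an asymmetric bar: [0, ~1.84] at 68.27% CL
```

This is why empty bins in such plots still carry a nonzero upper error bar.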

PyTorch 2.3.0 was released, which seems to include improved built-in FlashAttention support on ROCm: https://github.com/pytorch/pytorch/releases/tag/v2.3.0 We need to wait for the new rocm/pytorch tag: https://hub.docker.com/r/rocm/pytorch/tags Also, I haven't seen any update...
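For context, the fused FlashAttention kernels compute ordinary scaled dot-product attention, just without materializing the full attention matrix; a NumPy reference of that computation (illustration only, not the PyTorch API):

```python
import numpy as np

def sdpa(q, k, v):
    """Reference scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)   # (batch, Lq, Lk)
    scores -= scores.max(axis=-1, keepdims=True)     # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # rows sum to 1
    return weights @ v                               # (batch, Lq, d_v)

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(2, 4, 8)) for _ in range(3))
out = sdpa(q, k, v)
```

In PyTorch this corresponds to `torch.nn.functional.scaled_dot_product_attention`, which dispatches to the fused kernel when the backend supports it.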

This is the current information from the LUMI team. ``` Thank you for sending this improvement request. We are very aware of issues associated with ROCm environment not being updated...

Things are moving at LUMI but at a glacial pace: "We will be taking the system offline for maintenance starting on Monday, 19 August, 2024. LUMI won't be accessible as...

FYI @etiennedreyer, I made the issue here so there's a bit more of a public record. I think it's ddsim, not Key4HEP, at this point.

Added an initial model card here: https://huggingface.co/jpata/particleflow https://huggingface.co/jpata/particleflow/blob/main/clic/clusters/v1.6/README.md

ONNX export from pytorch currently doesn't work because the MLPF forward function expects pytorch-geometric-style inputs; the padding is done internally if an attention- or GNN-LSH-based model is used. ``` def...

Actually, never mind: changing the forward function as follows worked:
```
def forward(self, element_features, batch_idx):
    # unfold the Batch object
    if self.ssl:
        input_ = element_features.float()[:, : self.input_dim]
        VICReg_embeddings = element_features.float()[:, self.input_dim...
```
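The padding that an export-friendly forward signature sidesteps can be illustrated standalone: ragged per-event element lists become one dense `(batch, max_elems, n_feat)` tensor plus a validity mask (NumPy sketch; the names here are illustrative, not the repo's API):

```python
import numpy as np

def pad_events(events):
    """Pad a list of (n_elems_i, n_feat) arrays to (batch, max_elems, n_feat) + mask."""
    max_elems = max(ev.shape[0] for ev in events)
    n_feat = events[0].shape[1]
    padded = np.zeros((len(events), max_elems, n_feat), dtype=np.float32)
    mask = np.zeros((len(events), max_elems), dtype=bool)
    for i, ev in enumerate(events):
        padded[i, : ev.shape[0]] = ev   # copy real elements
        mask[i, : ev.shape[0]] = True   # mark them as valid
    return padded, mask

# Two events with 3 and 7 elements, 5 features each
events = [np.ones((3, 5)), np.ones((7, 5))]
padded, mask = pad_events(events)
# padded.shape == (2, 7, 5); mask.sum() == 10
```

With inputs in this dense form, the traced graph no longer depends on pytorch-geometric batch objects, which is what makes the ONNX export tractable.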