calculate-flops.pytorch

calflops is designed to calculate FLOPs, MACs, and parameters of various neural networks, such as Linear, CNN, RNN, GCN, and Transformer models (BERT, LLaMA, and other large language models).
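
For orientation, a minimal usage sketch of the library's main entry point, `calculate_flops` (the AlexNet model and input shape are illustrative choices):

```python
import torchvision.models as models
from calflops import calculate_flops

# Any torch.nn.Module works; AlexNet is just a convenient example.
model = models.alexnet()
batch_size = 1
flops, macs, params = calculate_flops(model=model,
                                      input_shape=(batch_size, 3, 224, 224),
                                      output_as_string=True,
                                      output_precision=4)
print("AlexNet  FLOPs: %s  MACs: %s  Params: %s" % (flops, macs, params))
```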

22 issues, sorted by recently updated

I tried to calculate the FLOPs of EfficientNet, but there were some errors. Can you help me?

```
File "/.conda/envs/test/lib/python3.8/site-packages/calflops/pytorch_ops.py", line 361, in newFunc
    flops, macs = funcFlopCompute(*args, **kwds)
  File...
```

At https://github.com/MrYxJ/calculate-flops.pytorch/blob/29ca58443fc15252888de377b842fa0308a5cfe8/calflops/pytorch_ops.py#L248, the upsample FLOPs are computed but the result is never assigned back to `flops`. Does this become an issue and lead to incorrect numbers for subsequent layers?
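
For illustration only, a hypothetical sketch of the pattern this issue describes (not the actual calflops source): a bare expression statement computes the scaled count but discards it, so the function returns the unscaled value.

```python
def upsample_flops_buggy(flops: int, scale_factor: int, ndim: int) -> int:
    # Bug pattern: the product is computed, then discarded, because the
    # expression statement never assigns back to `flops`.
    flops * int(scale_factor ** ndim)
    return flops  # still the unscaled value

def upsample_flops_fixed(flops: int, scale_factor: int, ndim: int) -> int:
    flops *= int(scale_factor ** ndim)  # fix: assign the result back
    return flops
```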

What is the difference between setting the parameter `forward_mode` of `calculate_flops()` to 'generate' and setting it to 'forward' (the default)?
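
As far as the documented behavior goes, 'forward' profiles a single `model.forward()` pass while 'generate' profiles a `model.generate()` call instead; a minimal sketch, assuming a Hugging Face causal LM (the model name and sequence length are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from calflops import calculate_flops

name = "gpt2"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Default mode: counts the ops of one forward() pass over the dummy batch.
flops_fwd, macs_fwd, _ = calculate_flops(model=model,
                                         input_shape=(1, 128),
                                         transformer_tokenizer=tokenizer,
                                         forward_mode="forward")

# 'generate' mode: counts the ops of model.generate(), i.e. autoregressive
# decoding, which runs the network once per generated token.
flops_gen, macs_gen, _ = calculate_flops(model=model,
                                         input_shape=(1, 128),
                                         transformer_tokenizer=tokenizer,
                                         forward_mode="generate")
```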

You can add a CITATION.cff file to the root of a repository to let others know how you would like them to cite your work.

I am trying to run:

```python
from calflops import calculate_flops_hf

model_name = "EleutherAI/pythia-70m"
batch_size = 1
max_seq_length = 10
flops, macs, params = calculate_flops_hf(model_name, input_shape=(batch_size, max_seq_length))
```

Here is the...

For models with two inputs, can I wrap them like this?

```python
class Wrapper(nn.Module):
    def __init__(self, model):
        super(Wrapper, self).__init__()
        self.model = model

    def forward(self, x):
        class_labels = torch.eye(1000, device=device)[torch.randint(0, 1000,...
```
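
A completed version of that wrapper might look like the following sketch (hypothetical: the second input is a random one-hot label tensor, mirroring the truncated snippet), so the profiler only has to drive a single-argument `forward()`:

```python
import torch
import torch.nn as nn
from calflops import calculate_flops

class Wrapper(nn.Module):
    """Wraps a two-input model so it can be profiled with one dummy input."""
    def __init__(self, model, num_classes=1000):
        super().__init__()
        self.model = model
        self.num_classes = num_classes

    def forward(self, x):
        # Hypothetical second input: random one-hot class labels, as in the
        # truncated snippet above.
        idx = torch.randint(0, self.num_classes, (x.shape[0],), device=x.device)
        class_labels = torch.eye(self.num_classes, device=x.device)[idx]
        return self.model(x, class_labels)

# Usage (shapes are illustrative):
# flops, macs, params = calculate_flops(model=Wrapper(two_input_model),
#                                       input_shape=(1, 3, 224, 224))
```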

I was trying to calculate the FLOPs of InstructPix2Pix from Hugging Face using the method described in the README, but I got this error:

```
ValueError: Library `diffusers` is not supported...
```

Hi, the input shape of a point cloud is not the same as that of an image. What should I pass as input?
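
One possible route, sketched under the assumption that `calculate_flops` accepts ready-made tensors through its `args` parameter (the point count and layout here are made up): build the point-cloud tensor yourself instead of relying on `input_shape`'s image-style dummy input.

```python
import torch
import torch.nn as nn
from calflops import calculate_flops

# Stand-in for a real point-cloud network: any module taking (B, N, 3).
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 40))

# Assumed layout: batch of 1 cloud, 1024 points, 3 coordinates per point.
points = torch.randn(1, 1024, 3)

flops, macs, params = calculate_flops(model=model, args=[points])
```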

Though FLOPs is around twice MACs, isn't the proportion of each submodule supposed to be equal?
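
For reference, a worked example (hand-computed, not library output) of why the two metrics should give each submodule the same share: every multiply-accumulate counts as one MAC but two FLOPs, so the ratio is a constant 2 per layer.

```python
import torch.nn as nn

def linear_macs(layer: nn.Linear, batch: int = 1) -> int:
    # One multiply-accumulate per (input feature, output feature) pair.
    return batch * layer.in_features * layer.out_features

for layer in (nn.Linear(128, 256), nn.Linear(256, 512)):
    macs = linear_macs(layer)
    flops = 2 * macs  # each MAC is one multiply plus one add
    print(f"{layer}: MACs={macs}, FLOPs={flops}, FLOPs/MACs={flops // macs}")
# The FLOPs/MACs ratio is 2 for every such layer, so each submodule's share
# of the total is identical under either metric; a differing proportion
# would point at ops counted as FLOPs but not as MACs (or vice versa).
```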