Query about speed of inference on semantic segmentation.
Hi, thanks for your nice work. I find inference with the semantic segmentation code a little slow: the speed is less than 1 task/s on a single V100. Is this normal, or is there anything I can do about it? Thanks :-)
Same here
@zwbx Hi, did you figure out the main reason? I think it is because CoTTA applies multi-scale augmentation (scales like [0.5, 1.0, 1.5, ...]) when running inference on each image.
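For anyone wondering why this makes inference so slow, here is a minimal sketch (not the repo's exact code; the scale list, flip setting, and function name are assumptions) of multi-scale + flip test-time augmentation of the kind CoTTA relies on for segmentation. Each image is resized to several scales, optionally flipped, passed through the model, and the predictions are averaged, so one "task" costs many forward passes:

```python
# Hedged sketch of multi-scale + flip test-time augmentation for segmentation.
# Not the exact CoTTA implementation; scale list and flip choice are assumed.
import torch
import torch.nn.functional as F

SCALES = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]  # assumed augmentation scales
USE_FLIP = True

@torch.no_grad()
def multiscale_flip_inference(model, image):
    """image: (1, 3, H, W) tensor; returns class probabilities averaged over augmentations."""
    _, _, H, W = image.shape
    prob_sum, n = 0.0, 0
    for s in SCALES:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        variants = [scaled]
        if USE_FLIP:
            variants.append(torch.flip(scaled, dims=[3]))  # horizontal flip
        for k, x in enumerate(variants):
            logits = model(x)                      # (1, C, h, w)
            if k == 1:                             # undo the flip on the prediction
                logits = torch.flip(logits, dims=[3])
            logits = F.interpolate(logits, size=(H, W), mode="bilinear",
                                   align_corners=False)
            prob_sum = prob_sum + logits.softmax(dim=1)
            n += 1
    return prob_sum / n  # 14 forward passes per image with the settings above
```

With 7 scales and flipping, every image needs 14 forward passes instead of 1, which would roughly explain throughput below 1 task/s on a single V100.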
Yes, there is nothing wrong.