
rfc: align corners in resampling primitive

Open isuruf opened this issue 2 years ago • 6 comments

Link to a rendered document.

isuruf avatar May 18 '23 06:05 isuruf

Hi @isuruf, thank you for the RFC. I have several questions:

  1. Are there any other frameworks that support this functionality?
  2. What level of performance benefit does this feature bring to oneDNN?
  3. Does the align_corner feature affect only the linear algorithm, or nearest neighbor as well?
  4. What other sampling algorithms should the oneDNN team anticipate? So far we anticipate none.

Based on 3) and 4), another option might be to extend the existing list of algorithms - it's less flexible, but requires no API change. It would be good to write that down, too.

Thanks.

dzarukin avatar May 18 '23 16:05 dzarukin

Are there any other frameworks that support this functionality?

Yes, PyTorch does, and OpenCV too.

What level of performance benefit does this feature bring to oneDNN?

There's no particular performance benefit. This is a new feature addition.

Does the align_corner feature affect only the linear algorithm, or nearest neighbor as well?

It's only for the linear algorithm.

What other sampling algorithms should the oneDNN team anticipate? So far we anticipate none.

Another option would be to remove half-pixel centers to match the TensorFlow 1.x series.

I've added all of these to the RFC doc.

isuruf avatar May 25 '23 06:05 isuruf

Are there any other frameworks that support this functionality?

Yes, PyTorch does, and OpenCV too.

Well, there are a bunch of AI solutions out there. If this functionality is to be built for a specific one, that definitely decreases its priority and chances for implementation/promotion, since each new feature comes with a maintenance and validation cost.

What level of performance benefit does this feature bring to oneDNN?

There's no particular performance benefit. This is a new feature addition.

So... the feature is built to achieve something. If this "something" can't be measured, why do it? Framework fallback code should be enough then.

Thanks.

dzarukin avatar May 25 '23 20:05 dzarukin

There's no particular performance benefit. This is a new feature addition.

So... the feature is built to achieve something. If this "something" can't be measured, why do it? Framework fallback code should be enough then.

We have several requests from framework developers on this topic, as fallback code for GPU creates some duplication and does not handle blocked formats. Unless this mode adds complexity beyond changing the grid, I believe it has value.

4. What other sampling algorithms should the oneDNN team anticipate? So far we anticipate none.

One more thing that recently landed on my desk is the nearest-exact mode in PyTorch's interpolate.

vpirogov avatar May 25 '23 20:05 vpirogov

@isuruf, could you please also look at the differences between PyTorch's nearest/nearest-exact algorithms and oneDNN's implementation?

vpirogov avatar Jun 01 '23 21:06 vpirogov

@vpirogov, PyTorch's nearest-exact algorithm and oneDNN's nearest are the same. PyTorch's nearest is slightly different.

oneDNN's nearest (PyTorch's nearest-exact) is implemented as

#include <math.h> /* for roundf */

static inline float linear_map(dim_t y, dim_t y_max, dim_t x_max) {
    return ((y + 0.5f) * x_max / y_max) - 0.5f;
}
static inline dim_t nearest_idx(dim_t y, dim_t y_max, dim_t x_max) {
    return (dim_t)roundf(linear_map(y, y_max, x_max));
}

PyTorch's nearest would be implemented as

static inline dim_t nearest_legacy_idx(dim_t y, dim_t y_max, dim_t x_max) {
    /* integer division truncates, which is floor for non-negative
     * indices, so no rounding call is needed */
    return y * x_max / y_max;
}

isuruf avatar Jun 18 '23 06:06 isuruf