
Use bicubic filtering to reverse dimetric projection

MayeulC opened this issue 4 years ago · 4 comments

Related: #720, #985.

This is the current implementation:

https://github.com/SFTtech/openage/blob/439d224aac94c4d6114e8d0405abf7fd17db924d/openage/convert/processor/export/texture_merge.py#L165-L173

As written in the chat, I see several issues with this implementation:

  • A homemade, inefficient matrix multiplication algorithm (while the images are quite small, I'd still leave that to optimized libraries).
  • No antialiasing: the code takes values from the nearest neighbours instead of interpolating them depending on their proximity (bicubic, bilinear, etc.).
  • A permutation index is reconstructed from scratch each time the function is called.

There are several libraries that implement this more efficiently; I chose Pillow because it is already used elsewhere in the project. It doesn't have the greatest choice of antialiasing algorithms, nor the most efficient implementation, but it's probably enough. The example code below is based on the same Stack Overflow question and this tutorial, though I ignored a few things (like the matrix inversion) and improved a few others with the help of these two Wikipedia pages.

I suggest something similar to the following code:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

file = "45c0011e-e279-11e6-9ed5-30f00391cc87.png"

img = Image.open(file)

def rot_mat(angle_rad):
    return np.array([[np.cos(angle_rad),-np.sin(angle_rad), 0],
                     [np.sin(angle_rad),np.cos(angle_rad),  0],
                     [                0,               0,   1]])

def aoe2_transf(scale=1.118):
    # The bottom row must be [0, 0, 1] so that the homogeneous
    # translations composed below actually take effect.
    return np.array([[  1,  -1,     0],
                     [0.5, 0.5, scale],
                     [  0,   0,     1]])

# unsure, intuitively deduced from the above
def aoe2_inv_transf(scale=1.118):
    return np.array([[ 0.5, 1,       0],
                     [-0.5, 1, 1/scale],
                     [   0, 0,       1]])


def trans_mat(x,y):
    return np.array([[1,0,x],
                     [0,1,y],
                     [0,0,1]])

# Inverse transform should be something along those lines, with a 512x512 img size:
# trans =    trans_mat(-512,256) @ aoe2_transf() @ trans_mat(256,-256)

# We combine those transformations, starting by centering the image
trans = trans_mat(256,256) @ aoe2_inv_transf() @ trans_mat(-512,0)

# PERSPECTIVE only uses the first 8 coefficients of the flattened 3x3
# matrix; with a [0, 0, 1] bottom row it reduces to an affine transform.
transformed = img.transform(
    (1000,512),
    Image.PERSPECTIVE,
    (trans).flatten(),
    resample=Image.BICUBIC)

transformed.save("transformed_bicubic.png")
plt.imshow(transformed)

Here is an example with the image from #720

[Images: transformed_nearest, transformed_bilinear, transformed_bicubic]

Nearest neighbour, bilinear and bicubic transforms, respectively. I hope that helps make the case for bicubic filtering.

As seen in these pictures, though, I still haven't found a satisfactory projection matrix, as the edges (most notably the corner on the right) are cut off.
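One way to chase down the cut edges, assuming the fixed homogeneous matrices above (with the [0, 0, 1] bottom row, trans is invertible; PIL's coefficients map output coordinates to input coordinates, so the inverse tells us where the source corners land on the output canvas):

# Push the four input corners through the inverse transform and check
# that each resulting column lands inside the 1000x512 output canvas.
corners = np.array([[  0,   0, 1],
                    [511,   0, 1],
                    [  0, 511, 1],
                    [511, 511, 1]]).T
print(np.linalg.inv(trans) @ corners)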

MayeulC · Aug 25 '20 16:08

I'm confused why your example images are in dimetric projection, because the converter should take the dimetric tiles from the non-HD and DE versions, and then reverse their projection so we get the flat texture. But surely we can improve the current algorithm, indeed :)

TheJJ · Aug 25 '20 18:08

Yeah, I initially didn't want to dig deep into the converter to extract the original dimetric-projected images, so I just generated a dimetric projection to work with. I have now done that.

Changing the transformation matrix (as well as the output image size) is enough to get the opposite transformation. As commented out above, the reverse transform is:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

file = "input_dimetric.png"
img = Image.open(file)

def rot_mat(angle_rad):
    return np.array([[np.cos(angle_rad),-np.sin(angle_rad), 0],
                     [np.sin(angle_rad),np.cos(angle_rad),  0],
                     [                0,               0,   1]])

def aoe2_to_dimetric():
    return np.array([[  1,  -1, 0],
                     [0.5, 0.5, 0],
                     [  0,   0, 1]])

def translation_mat(x,y):
    return np.array([[1,0,x],
                     [0,1,y],
                     [0,0,1]])

w = 481  # edge length of the flat output texture

transformation_matrix = translation_mat(w,0) @ aoe2_to_dimetric() @ rot_mat(-np.pi/2) @ translation_mat(-w,0)
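# With w = 481 this composes (up to floating-point noise from the
# rotation matrix) to:
#   [[ 1.0, 1.0,   0.0],
#    [-0.5, 0.5, 240.5],
#    [ 0.0, 0.0,   1.0]]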

# Note that this could be a PERSPECTIVE transform if we were to lose the "[:6]".
transformed = img.transform(
    (w,w),
    Image.AFFINE,
    (transformation_matrix).flatten()[:6],
    resample=Image.BICUBIC)

transformed.save("flattened.png")
plt.imshow(transformed)

Example on SWGB "metal" assets

(I suppose this is fair use?). The first image is the flattened version from the current converter. The second one is flattened with the above script.

[Image: source dimetric textures]

[Images: flattened_base (current converter), flattened (above script)]

Notable things (don't hesitate to flip back and forth between the images):

  • There seems to be a slight bug in the current converter: the diagonal pixels are slightly larger than in my version, and it deforms lines a bit.
  • There are some irregularities in the current converter output that are smoothed out.
  • For a lossless back-and-forth conversion, the texture should be upscaled before the transformation, and stored upscaled.
  • If going into enhance-over-the-original territory in the future, more gains can likely be achieved (though not matching the original pixel-per-pixel) by getting rid of the dithering (Gaussian blur? can easily be tuned with an FFT) before transforming, together with some MLAA.
  • My version with Pillow isn't perfect, because the bicubic interpolator needs neighbours at the edge of the image. It takes alpha pixels as the neighbours, which keeps the edge pixels from being exact (there is a vertical bar with a lot of alpha on the right). SciPy allows specifying the padding for missing pixel values (see the sketch after this list), but with Pillow we'd have to pad the image ourselves beforehand (this is a texture, so preferably with samples from the other side, which could be done when reconstructing the dimetric image).
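To illustrate the SciPy route (a sketch, not tested against the converter: the 2x2 matrix and offset are my (row, col) reordering of the affine coefficients worked out above, and mode="grid-wrap" needs a recent SciPy):

import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open("input_dimetric.png").convert("RGBA"))
w = 481

# scipy.ndimage maps output (row, col) -> input via `matrix @ pos + offset`;
# this is the same affine transform as in the script above, with the
# x/y axes swapped into (row, col) order.
matrix = np.array([[0.5, -0.5],
                   [1.0,  1.0]])
offset = np.array([0.5 * w, 0.0])

flat = np.stack([
    ndimage.affine_transform(
        img[..., c].astype(np.float64), matrix, offset=offset,
        output_shape=(w, w),
        order=3,            # cubic spline, comparable to bicubic
        mode="grid-wrap")   # missing neighbours wrap to the opposite edge
    for c in range(4)
], axis=-1)

Image.fromarray(np.clip(np.rint(flat), 0, 255).astype(np.uint8)).save("flattened_scipy.png")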

I would open a PR, but I am unsure what relation the TextureAtlas has with Pillow's image formats. I'll dig more...

MayeulC · Aug 25 '20 20:08

I have many objections to this, but maybe we can work something out here because I, too, would like to use numpy for the projection calculation :D It just didn't work so far because the projection method of Genie seems to be slightly off.

> There seems to be a slight bug in the current converter: the diagonal pixels are slightly larger than in my version, and it deforms lines a bit.

I can assure you that the current converter method is correct. We know from the flat HD textures how the flat AoC textures are supposed to look, and the pixel positions match up.

The reason we cannot just multiply with the transformation matrix is that there is a shift at the diagonal that we have to account for by ceiling the position result:

https://github.com/SFTtech/openage/blob/439d224aac94c4d6114e8d0405abf7fd17db924d/openage/convert/processor/export/texture_merge.py#L170-L171

Otherwise the texture would be 1 pixel less wide and high than it should be. You can see it in your result, where the lowest row and the right-most column are not colored. I don't know whether this is an error in the original calculation method that Ensemble used or a result of the uneven resolution of 481x481, but we must account for it to get the right result.

> There are some irregularities in the current converter output that are smoothed out.
>
> For a lossless back-and-forth conversion, the texture should be upscaled before the transformation, and stored upscaled.
>
> If going into enhance-over-the-original territory in the future, more gains can likely be achieved (though not matching the original pixel-per-pixel) by getting rid of the dithering (Gaussian blur? can easily be tuned with an FFT) before transforming, together with some MLAA.

The irregularities are from the original texture and should stay. I am very hesitant to include any "enhancements" of textures in the converter as opposed to external tools. Chances are very high that the textures will look off in the final game.

> No antialiasing: the code takes values from the nearest neighbours instead of interpolating them depending on their proximity (bicubic, bilinear, etc.).

There is no interpolation at all in the converter code, not even nearest neighbour. It is a 1-to-1 translation of pixel positions.


Also, I would like to not use Pillow at all for these operations and solely use numpy, like we do now. We need to input an array into the libpng service, and converting back and forth between Pillow images and arrays sounds too expensive to me.
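For the nearest-position lookup at least, staying numpy-only is straightforward: the per-pixel loop collapses into one fancy-indexing expression, and the texture never leaves array form. A rough sketch; the coordinate formulas come from the affine matrix in the script above rather than from the converter, so treat the offsets as placeholders that would need tuning against texture_merge.py:

import numpy as np

def flatten_dimetric(dimetric, w=481):
    """Reverse the dimetric projection with pure numpy fancy indexing.

    dimetric: RGBA array of shape (>= w + 1, >= 2*w - 1, 4).
    Returns a (w, w, 4) flat texture, ready for the libpng service.
    """
    ys, xs = np.mgrid[0:w, 0:w]

    # Half-pixel rows are ceiled, mirroring the diagonal-shift
    # correction described above; clamp to stay inside the source.
    src_x = np.clip(xs + ys, 0, dimetric.shape[1] - 1)
    src_y = np.clip(np.ceil((ys - xs + w) / 2).astype(np.intp),
                    0, dimetric.shape[0] - 1)

    return dimetric[src_y, src_x]

Bicubic sampling on top of this would still need scipy.ndimage or hand-rolled interpolation weights; numpy alone only gives a nearest-pixel gather.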

heinezen · Aug 26 '20 12:08

I might be way off here, but I think the slope data could provide some insight into the original transforms?

I haven't bothered to look into how they are generated, I just got it working and fast enough. But they're basically LUTs to look up several offsets in the source SLP for each target pixel (which is a PITA because then I couldn't do alpha blending like a normal person...).
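Purely to illustrate the idea (the dict-of-offsets layout and names below are made up for the sketch; the real format is described in the geniedoc link), applying such a LUT boils down to gathering and blending a few source pixels per target pixel:

import numpy as np

def apply_slope_lut(source, lut, out_shape):
    """Blend source SLP pixels into a slope frame via a LUT (sketch).

    source: (h, w, 4) RGBA array of the flat frame.
    lut: hypothetical {(ty, tx): [(sy, sx), ...]} mapping each target
         pixel to the source offsets it samples from.
    """
    target = np.zeros(out_shape + (4,), dtype=np.float32)
    for (ty, tx), offsets in lut.items():
        samples = np.array([source[sy, sx] for sy, sx in offsets],
                           dtype=np.float32)
        # naive blend: plain average of the looked-up pixels
        target[ty, tx] = samples.mean(axis=0)
    return target.astype(np.uint8)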

The loqmaps might be easier, though, without all the annoying blending and lighting: https://github.com/aap/geniedoc/blob/master/loqmaps.txt

heinezen probably knows this better than me though. :-P

sandsmark · Aug 29 '20 18:08