
[2530] AdaptiveImage: control skipping/requiring transformation via OSGi config

Open · Stefan-Franck opened this issue 1 year ago · 17 comments

| Q | A |
| --- | --- |
| Fixed Issues? | Fixes #2530 |
| Patch: Bug Fix? | No |
| Minor: New Feature? | Yes |
| Major: Breaking Change? | No |
| Tests Added + Pass? | Yes |
| Documentation Provided? | Yes (code comments and/or markdown) |
| Any Dependency Changes? | No |
| License | Apache License, Version 2.0 |

Lets users configure via OSGi which image types are never transformed (currently: svg and gif) and which image types are always transformed (currently: none). This makes it easy to disable transformation for additional image types, but it also makes it possible to require transformation for jpg/jpeg, ensuring custom compression settings are applied. Currently, editors in AEM have to crop jpg images by a single pixel just to get them delivered in compressed form; otherwise, web performance and green-web goals are not met. The default values for these settings preserve backward compatibility, so the system behaves as before.
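Illustratively, the two lists could be supplied through an OSGi configuration file like the one below. The property names are hypothetical placeholders for this sketch, not the actual names introduced by the PR:

```json
{
  "neverTransformImageTypes": ["svg", "gif"],
  "alwaysTransformImageTypes": ["jpg", "jpeg"]
}
```

With defaults of `["svg", "gif"]` and `[]`, such a configuration would reproduce the current behaviour out of the box.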

Stefan-Franck avatar Jun 23 '23 17:06 Stefan-Franck

As discussed with @kwin, I updated the change so that gif and svg are always exempt from transformation (as the library cannot handle them). The field is still configurable so the list can be extended (e.g. adding webp to the files spooled directly).

Stefan-Franck avatar Jun 25 '23 16:06 Stefan-Franck

Hi @Stefan-Franck, I would like to understand your use case for forcing a transformation a bit better. I don't see why I would need to transform an image (again) that is already a perfect match. You mention "compression", but why don't you compress the images on the author instance already (as part of the asset update workflow or in Asset Compute) at your preferred compression level?

(Personally I am not a fan of the AdaptiveImageServlet at all, I have seen it to cause OOMs too often.)

joerghoh avatar Jun 26 '23 17:06 joerghoh

On a side note: can you add a test case to validate the "enforce transformation" case? Thanks!

joerghoh avatar Jun 26 '23 17:06 joerghoh

Hi @joerghoh and thanks for the fast reply!

I'm completely with you: a perfect match does not require an additional transformation. But in reality, more often than not, images at the highest quality and sometimes the highest resolution are uploaded to the DAM, and that impacts delivery. It is also not easy to identify and optimize them when you are working in large installations with thousands of images. What happens is a slow degradation of page weight, with more and more images adding 1 MB or more to it, impacting performance (and Core Web Vitals rankings) as well as green-web initiatives.

That's why the solution is implemented as an option: out of the box, it works without additional transformation. The behaviour only changes if you deliberately opt in.

I will add a test case in the course of this week.

Stefan-Franck avatar Jun 27 '23 07:06 Stefan-Franck

@Stefan-Franck But isn't what you are proposing just a workaround, one that potentially doesn't even address the real problem?

In your case you would need a set of "web-optimized" renditions with a special emphasis on size; even if a rendition with the correct dimensions is present, a special size-optimized rendition should be created.

(In my opinion the AdaptiveImageServlet is not helping the GreenWeb initiative either ...)

joerghoh avatar Jun 27 '23 08:06 joerghoh

But where does that special web-optimized rendition come from? That sounds to me like a manual process for thousands of assets that could instead be optimized on delivery. If somebody has such a process in place, they can use it. But if not, enforcing compression for all compressible assets is a much more reliable approach. Furthermore, the compressed assets should be cached for a rather long time, so the rendering cost of the AdaptiveImageServlet should not be incurred in most cases.

Stefan-Franck avatar Jun 27 '23 08:06 Stefan-Franck

Kudos, SonarCloud Quality Gate passed!

- 0 Bugs
- 0 Vulnerabilities
- 0 Security Hotspots
- 0 Code Smells
- No coverage information
- 0.0% duplication

sonarqubecloud[bot] avatar Jul 08 '23 08:07 sonarqubecloud[bot]

@joerghoh added three test cases:

  1. transformation can be disabled for configured asset types
  2. direct spooling remains active for an asset without forced transformation
  3. with forced transformation, the same asset is delivered compressed

Stefan-Franck avatar Jul 08 '23 08:07 Stefan-Franck

@joerghoh - As you can see from the test cases, the full-quality Adobe logo weighs 250 kB while the compressed one weighs only 130 kB: a reduction of almost 50% with the default settings (compression quality around 0.82). We sometimes use compression factors of 0.5-0.6, so whether compression is applied makes a considerable difference to page weight. Since you cannot guarantee that uploaded images adhere to any compression standard (only to image dimensions in general, which depend on many other factors), you will always end up with non-optimized images in the repository.

Being able to easily optimize on delivery would help editors a lot - they currently crop every image on every usage by 1px just to make sure the image is compressed, which is tedious and error-prone. The change ensures the system behaves as before, and only if a client wants compression activated (or webp images delivered as-is) do they apply the corresponding configurations.
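To make the effect of the quality setting concrete, here is a minimal, self-contained Java sketch (not code from this PR) that re-encodes the same image at two JPEG quality levels using the standard `javax.imageio` API; the synthetic gradient image stands in for an uploaded asset:

```java
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class JpegQualityDemo {

    // Re-encode an image as JPEG at the given quality (0.0 - 1.0).
    static byte[] encodeJpeg(BufferedImage image, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writer.setOutput(new MemoryCacheImageOutputStream(out));
        writer.write(null, new IIOImage(image, null, null), param);
        writer.dispose();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Synthetic image with gradients plus high-frequency detail,
        // standing in for a real uploaded asset.
        BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 600; y++) {
            for (int x = 0; x < 800; x++) {
                img.setRGB(x, y, ((x * 255 / 800) << 16) | ((y * 255 / 600) << 8) | ((x ^ y) & 0xFF));
            }
        }
        int high = encodeJpeg(img, 0.95f).length;
        int low = encodeJpeg(img, 0.6f).length;
        System.out.println("quality 0.95: " + high + " bytes, quality 0.6: " + low + " bytes");
    }
}
```

The exact byte counts depend on the image content, but the lower quality setting yields a markedly smaller file for detailed images, which is the effect a forced transformation would apply by default.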

Stefan-Franck avatar Aug 03 '23 12:08 Stefan-Franck

@Stefan-Franck thanks for the feedback, I will pass it on to our assets team. While I don't have any data to contradict your observation, I think you should be able to configure the processing profiles in a way that gives you a size-optimized rendition. There should be no need for a process like this that generates renditions on the fly.

joerghoh avatar Aug 03 '23 13:08 joerghoh

That is also an interesting approach: the image component would then include the compressed rendition instead of the original one, which would work well. So we could decide when the load occurs: on upload (what about bulk uploads when the new design agency drops another 1,000 images?) or on delivery (what about many parallel requests and generally longer TTFBs during delivery?).

Stefan-Franck avatar Aug 03 '23 13:08 Stefan-Franck

Currently the algorithm in https://github.com/adobe/aem-core-wcm-components/blob/68e429d7b072b1d01545836aa629e5eeaeadffce/bundles/core/src/main/java/com/adobe/cq/wcm/core/components/internal/servlets/AdaptiveImageServlet.java#L518 doesn't distinguish between original and post-processed renditions, so as long as you cannot skip originals the approach with processing profiles won't work reliably.
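To illustrate the distinction, here is a simplified, hypothetical sketch of rendition selection (not the actual AdaptiveImageServlet code): unless originals can be skipped, an oversized original competes with the size-optimized processed renditions, which is the limitation described above.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class RenditionPicker {

    // Minimal stand-in for an asset rendition: a name and its pixel width.
    record Rendition(String name, int width) {}

    // Picks the narrowest rendition that is at least the requested width,
    // optionally skipping the "original". If originals cannot be skipped,
    // the servlet may spool a huge original even when a size-optimized
    // processed rendition would have been preferable.
    static Optional<Rendition> pick(List<Rendition> renditions, int requestedWidth, boolean skipOriginal) {
        return renditions.stream()
                .filter(r -> !skipOriginal || !"original".equals(r.name()))
                .filter(r -> r.width() >= requestedWidth)
                .min(Comparator.comparingInt(Rendition::width));
    }

    public static void main(String[] args) {
        List<Rendition> renditions = List.of(
                new Rendition("original", 4000),
                new Rendition("web-1280", 1280),
                new Rendition("thumb-319", 319));
        // The processed 1280px rendition wins over the 4000px original.
        System.out.println(pick(renditions, 1280, false).orElseThrow().name());
        // With originals skipped, nothing satisfies a 2000px request.
        System.out.println(pick(renditions, 2000, true));
    }
}
```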

kwin avatar Aug 03 '23 13:08 kwin

@kwin In my ideal world you would not need the AdaptiveImageServlet at all, because the assets you deliver are already available with the required dimensions and quality.

joerghoh avatar Aug 03 '23 13:08 joerghoh

I am trying to understand the issue. Is the problem that the renditions we produce during asset processing are not as efficient as what the AdaptiveImageServlet creates, assuming the same format, dimensions, etc.?

Or is the original asset not as efficient as it could be?

bobvanmanen avatar Aug 04 '23 15:08 bobvanmanen

@bobvanmanen - the AdaptiveImageServlet takes, if possible, the original asset from the DAM. As it spools the original directly, no compression is applied, so AEM delivers unnecessarily large, uncompressed images via this servlet. At my current customer, editors actually crop all images by 1 pixel just to ensure compression is applied. The solution in this PR allows defining image types that will always be compressed, regardless of editorially applied transformations such as the cropping.

@joerghoh suggested the alternative approach of having the correct renditions in the DAM up front, so they can be delivered directly without transformations. This would allow an even faster delivery on request, as the processing load is shifted from delivery time to upload time. I'm a bit wary of larger bulk uploads or design changes (where you need a new image width for all assets), and it would also take a lot of storage space in AEM. To implement this approach, we would need to change the DAM Update Asset workflow to generate compressed and correctly resized (proportionally, not stretched or padded) images according to the design. Those assets could be spooled directly, if the AdaptiveImageServlet resolves them properly (which does not work well according to @kwin).
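The decision logic proposed in this PR can be sketched roughly as follows; this is an illustrative model with hypothetical names, not the actual PR code:

```java
import java.util.Locale;
import java.util.Set;

public class TransformationPolicy {

    // Hypothetical configuration values mirroring the PR description:
    // svg and gif are never transformed; nothing is force-transformed by default.
    private final Set<String> neverTransform;
    private final Set<String> alwaysTransform;

    public TransformationPolicy(Set<String> neverTransform, Set<String> alwaysTransform) {
        this.neverTransform = neverTransform;
        this.alwaysTransform = alwaysTransform;
    }

    public enum Decision { SPOOL_ORIGINAL, TRANSFORM }

    // Decides whether an asset is spooled as-is or re-encoded. The final
    // branch models today's behavior: transform only when the editor applied
    // modifications (e.g. the 1px crop mentioned above).
    public Decision decide(String extension, boolean editorialModifications) {
        String ext = extension.toLowerCase(Locale.ROOT);
        if (neverTransform.contains(ext)) {
            return Decision.SPOOL_ORIGINAL;   // the imaging library cannot handle these
        }
        if (alwaysTransform.contains(ext)) {
            return Decision.TRANSFORM;        // always apply the compression settings
        }
        return editorialModifications ? Decision.TRANSFORM : Decision.SPOOL_ORIGINAL;
    }

    public static void main(String[] args) {
        TransformationPolicy policy =
                new TransformationPolicy(Set.of("svg", "gif"), Set.of("jpg", "jpeg"));
        System.out.println(policy.decide("svg", true));   // SPOOL_ORIGINAL
        System.out.println(policy.decide("jpg", false));  // TRANSFORM
        System.out.println(policy.decide("png", false));  // SPOOL_ORIGINAL
    }
}
```

With `alwaysTransform` left empty, the sketch degenerates to the pre-PR behavior, which is how the defaults preserve backward compatibility.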

Stefan-Franck avatar Aug 05 '23 21:08 Stefan-Franck

@joerghoh @bobvanmanen - any thoughts?

Stefan-Franck avatar Sep 28 '23 14:09 Stefan-Franck

Quality Gate passed

Issues
- 0 New issues

Measures
- 0 Security Hotspots
- No data about Coverage
- 0.0% Duplication on New Code

See analysis details on SonarCloud

sonarqubecloud[bot] avatar Feb 27 '24 18:02 sonarqubecloud[bot]