Convert Zend Locale to PHP native code
Summary (*)
Zend_Locale_Data loads its data from XML files and then caches the parsed results in a memory store. This causes dozens or hundreds of requests to the cache per page load (I'll just say Redis).
This data almost never changes, and when it does, it changes alongside a code change. I sincerely doubt anyone is using the XML files directly.
This would be a performance and scalability improvement. We've encountered performance issues using Redis ElastiCache, and I believe they come from the sheer volume of requests.
Examples (*)
Add a breakpoint wherever you see `if (!self::$_cacheDisabled && ($result = self::$_cache->load($id))) {` (and/or in the cache backend's `load` method itself), then go to a PDP, the checkout, or anywhere else that uses a lot of locale data, and you can watch all of the cache requests go by.
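For context, the lookup flow in question looks roughly like this. This is a simplified sketch, not verbatim Zend_Locale_Data code; names are condensed and the XML-reading helper is a hypothetical stand-in:

```php
<?php
// Simplified sketch of the lookup flow in Zend_Locale_Data (not verbatim
// library code; names and structure are condensed for illustration).
class Zend_Locale_Data_Sketch
{
    /** @var Zend_Cache_Core|null */
    protected static $_cache;
    protected static $_cacheDisabled = false;

    public static function getContent($locale, $path, $value = false)
    {
        $id = 'Zend_LocaleC_' . $locale . '_' . $path . '_' . $value;

        // The line to break on: every distinct $id is one round trip
        // to the cache backend (Redis in our setup).
        if (!self::$_cacheDisabled && ($result = self::$_cache->load($id))) {
            return unserialize($result);
        }

        // Cache miss: parse the CLDR XML files and store the result for next time.
        $result = self::_readXmlFiles($locale, $path, $value);
        self::$_cache->save(serialize($result), $id);

        return $result;
    }

    // Hypothetical stand-in for the real XML parsing logic.
    protected static function _readXmlFiles($locale, $path, $value)
    {
        return array();
    }
}
```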
Proposed solution
Rewrite this as a number of PHP classes or arrays. What would be really nice is if it used a strategy pattern, but to keep it simple, it could just be the array representation of the same data.
With opcode caching, that data gets loaded straight from shared memory and never incurs a round trip to the cache backend or disk; a rough sketch of what that could look like is below.
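The class name, file layout, and data in this sketch are hypothetical; a real generator would emit the full CLDR data:

```php
<?php
// Hypothetical sketch: locale data precompiled into plain PHP array files
// (generated once from the CLDR XML) plus a thin loader. OPcache keeps the
// compiled array literals in shared memory, so after the first hit on a worker
// there is no Redis or disk round trip at all.
class Locale_Data_Php
{
    /** @var array per-request map of already-included locales */
    private static $loaded = array();

    public static function get($locale, $path)
    {
        if (!isset(self::$loaded[$locale])) {
            // e.g. data/locale/en_US.php, a generated file containing "return array(...);"
            self::$loaded[$locale] = include __DIR__ . '/data/locale/' . $locale . '.php';
        }

        return isset(self::$loaded[$locale][$path]) ? self::$loaded[$locale][$path] : null;
    }
}

// Example generated file, data/locale/en_US.php (fragment only):
// return array(
//     'symbols' => array('decimal' => '.', 'group' => ','),
//     'days'    => array('sun' => 'Sunday', 'mon' => 'Monday' /* ... */),
// );
```

The array files could be regenerated from the existing XML whenever the CLDR data is updated, so the XML stays the source of truth but is never read at runtime.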
I don't think this is a BC change because the methods themselves are not changing, just what they do internally. The XML files could remain, but we would not use them.
I think even putting the XML in a heredoc in a PHP method would work to have it cached. Not the cleanest method, but the fastest.
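A minimal sketch of that heredoc variant, again with hypothetical names and only a fragment of the real XML:

```php
<?php
// Hypothetical sketch of the "XML in a heredoc" idea: the XML string literal
// lives inside a PHP class, so OPcache keeps it in shared memory and the
// existing SimpleXML-based parsing could stay largely untouched. A real
// version would have one such method (or class) per locale.
class Locale_Xml_Inline
{
    public static function getXml()
    {
        // Only a tiny fragment of the real CLDR data, for illustration.
        $xml = <<<'XML'
<ldml>
  <numbers>
    <symbols>
      <decimal>.</decimal>
      <group>,</group>
    </symbols>
  </numbers>
</ldml>
XML;

        return simplexml_load_string($xml);
    }
}
```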
@joshua-bn the performance issue you describe is true for vanilla Magento; depending on the request there might even be thousands of calls to the cache backend. In OpenMage we have already added a runtime cache in Zend_Locale_Data, so the data is loaded from the cache backend only once per request. I'm open to improving it further, but I would say the bottleneck is not there any more. Please run your performance tests again with the most recent OpenMage.
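Conceptually the runtime cache is just a per-request static array in front of the cache backend, roughly like this (a simplified sketch, not the exact OpenMage implementation):

```php
<?php
// Simplified illustration of a per-request runtime cache in front of the cache
// backend (not the exact OpenMage implementation). Each distinct id hits Redis
// at most once per request; repeated lookups are served from the static array.
class Runtime_Cached_Loader
{
    /** @var array id => unserialized value, lives only for the current request */
    private static $runtime = array();

    public static function load(Zend_Cache_Core $cache, $id)
    {
        if (isset(self::$runtime[$id])) {
            return self::$runtime[$id];
        }

        $result = $cache->load($id);
        if ($result !== false) {
            self::$runtime[$id] = unserialize($result);
            return self::$runtime[$id];
        }

        return false;
    }
}
```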
@tmotyl yeah, I am using the improvements from OpenMage. They greatly reduce the number of cache calls. Still, they don't reduce them to nothing.
On the PDP, I am counting 8 calls. On checkout, about 5. On category PLP, 9. This isn't terrible, but why do it if we don't have to?
Probably a micro-optimization in the grand scheme though.