
Cache is not working: Failed to read cache clean timestamp

Open frontal1660 opened this issue 6 years ago • 33 comments

Hello,

I have a huge problem with the ngx_pagespeed cache. I've been working on it for 7 weeks with no success.

I'm giving you some details but feel free to ask for more if needed.

99% of error.log is filled with:

2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:01 [warn] 2023#3040: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:02 [warn] 2022#3054: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 03:24:05 [warn] 2024#2233: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.

Error.log also contains (which is quite alarming):

2017/12/11 05:56:01 [error] 2024#2231: [ngx_pagespeed 1.13.35.1-0] PageSpeed Serf fetch failure rate extremely high; only 4 of 9 recent fetches fully successful; is fetching working?

Nginx is compiled with:

nginx -V
nginx version: nginx/1.13.7
built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5)
built with OpenSSL 1.0.2g 1 Mar 2016
TLS SNI support enabled

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_gzip_static_module --with-http_ssl_module --with-http_stub_status_module --with-http_v2_module --with-pcre-jit --add-module=/usr/local/src/nginx-ps/ps --add-module=/usr/local/src/nginx-ps/ngx_brotli --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

Extract from pagespeed config:

pagespeed FileCachePath "/dev/shm/ngx_cache";
pagespeed FileCacheSizeKb 2304000;
pagespeed FileCacheCleanIntervalMs 3600000;
pagespeed FileCacheInodeLimit 500000;
pagespeed LRUCacheKbPerProcess 16384;
pagespeed LRUCacheByteLimit 32768;
pagespeed CreateSharedMemoryMetadataCache "/dev/shm/ngx_cache" 1024000;
pagespeed DefaultSharedMemoryCacheKB 1024000;

Extract from nginx config:

user www-data;

/dev/shm/ permissions:

l /dev/shm/
total 0
drwxr-xr-x 3 www-data www-data 60 Dec 9 01:04 ngx_cache

/dev/shm/ngx_cache permissions (I tried 777 in order to check if the problem comes from here):

l /dev/shm/ngx_cache
total 0
drwxrwxrwx 3 www-data www-data 60 Dec 9 01:04 shm_metadata_cache

I use ngx_pagespeed on two sites (with the same configuration) and both have the same problem. I did NOT have this cache problem before September 2017 (not really sure, but that might be the case).

Would you please help me to find a solution? I am so stuck.

Thanks a lot.

frontal1660 avatar Dec 12 '17 02:12 frontal1660

Hi @frontal1660 Have you tried touch /dev/shm/ngx_cache/cache.flush ?

Lofesa avatar Dec 12 '17 08:12 Lofesa

Hi @Lofesa

Thank you for this quick answer.

cache.flush did NOT exist in the directory.

Now I have:

bash# ls -la /dev/shm/ngx_cache/
total 0
drwxrwxrwx 3 www-data www-data 60 Dec 9 01:04 shm_metadata_cache
-rwxrwxrwx 1 www-data www-data 0 Dec 12 14:13 cache.flush

I kept the 777 permissions for now.

I have just restarted Nginx, PHP and Memcached. Usually the errors appear after a few dozen minutes. I'll check and come back to you.

frontal1660 avatar Dec 12 '17 14:12 frontal1660

Bad news. It did not solve the problem.

I still have many lines like that in error.log:

2017/12/12 16:37:15 [warn] 1868#2733: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 16:37:15 [warn] 1868#2733: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 16:37:15 [warn] 1868#2733: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.
2017/12/12 16:37:15 [warn] 1868#2733: [ngx_pagespeed 1.13.35.1-0] Failed to read cache clean timestamp /dev/shm/ngx_cache/!clean!time!. Doing an extra cache clean to be safe.

On the other server I use the /var/cache/pagespeed directory but I have the same issue.

One more thing I didn't say: my two servers are dedicated OVH servers.

frontal1660 avatar Dec 12 '17 15:12 frontal1660

Hi @frontal1660 Try changing the file name to !clean!time!. On the other hand, it sounds strange to me that no v3 directory exists. Have you configured the file cache directives?

Lofesa avatar Dec 13 '17 09:12 Lofesa

Hi @Lofesa

Thanks for your answer.

Regarding the file cache directives, the following lines exist in my pagespeed config file:

pagespeed FileCachePath "/var/cache/pagespeed";
pagespeed FileCacheSizeKb 2304000;
pagespeed FileCacheCleanIntervalMs 3600000;
pagespeed FileCacheInodeLimit 500000;
pagespeed LRUCacheKbPerProcess 16384;
pagespeed LRUCacheByteLimit 32768;
pagespeed CreateSharedMemoryMetadataCache "/dev/shm/ngx_cache" 102400;
pagespeed DefaultSharedMemoryCacheKB 102400;

Does that answer your questions about the file cache directives?

Actually I don't understand your comments. What file do you want me to change? And what about the v3 directory? Could you be more explicit?

I've just checked the /dev/shm/ngx_cache/ directory and the cache.flush file you told me to create is not there anymore. It has been deleted, but not by me (pagespeed?).

Here is what I have in the directory:

bash# ls -la /dev/shm/ngx_cache/
total 0
drwxrwxrwx 3 www-data www-data 60 Dec 9 01:04 shm_metadata_cache

I want to say that I have read every page of the official pagespeed documentation and searched the Internet for hours with no success.

I've been using ngx_pagespeed for a long time with no problems and I'm thinking of disabling it... what a shame, isn't it? My performance is so bad!

frontal1660 avatar Dec 13 '17 11:12 frontal1660

FileCachePath & CreateSharedMemoryMetadataCache should be set up with paths that match [1]. This looks misconfigured, and fixing that may eliminate the log messages you observed.

The offending line is probably:

pagespeed CreateSharedMemoryMetadataCache "/dev/shm/ngx_cache" 102400;

You can remove it, or configure a corresponding FileCachePath.

[1] https://www.modpagespeed.com/doc/system#shm_cache.

Otto
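[Editorial note] For illustration, a directive pair that satisfies the "paths must match" constraint might look like the sketch below; the path and size values are taken from the configs quoted elsewhere in this thread, not a recommendation:

```nginx
# The shared-memory metadata cache must name a path that is also
# declared as a FileCachePath (same string, including any trailing
# slash and quoting), per the system docs linked above.
pagespeed FileCachePath "/var/cache/pagespeed/";
pagespeed CreateSharedMemoryMetadataCache "/var/cache/pagespeed/" 102400;
```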


oschaaf avatar Dec 13 '17 12:12 oschaaf

Thanks a lot @oschaaf.

I feel very optimistic regarding this solution.

I will implement the corrections as soon as possible and come back to you.

Once again, thanks for your responsiveness.

frontal1660 avatar Dec 13 '17 12:12 frontal1660

Once again bad news :(

@oschaaf : your trick did not work.

My second server is based on this pagespeed config (extract with more details) and also has the same problem in error.log:

pagespeed FetchHttps enable;
pagespeed RespectVary on;
pagespeed MaxCombinedCssBytes -1;
pagespeed Domain https://www.my-domain.com;

pagespeed FileCachePath /var/cache/pagespeed;
pagespeed FileCacheSizeKb 2304000;
pagespeed FileCacheCleanIntervalMs 3600000;
pagespeed FileCacheInodeLimit 500000;
pagespeed LRUCacheKbPerProcess 16384;
pagespeed LRUCacheByteLimit 32768;
pagespeed CreateSharedMemoryMetadataCache "/var/cache/pagespeed/" 102400;
pagespeed DefaultSharedMemoryCacheKB 102400;

pagespeed MemcachedThreads 1;
pagespeed MemcachedServers "localhost:11211";

As you can see, the FileCachePath and CreateSharedMemoryMetadataCache directives DO match.

If you want me to extract more config files, feel free to ask me.

Thanks again for your time.

COMMENT: I've just noticed that both paths are not identical... a / is missing. I'm going to add it right now.

frontal1660 avatar Dec 13 '17 12:12 frontal1660

Well, actually it's not an exact match, with regard to the trailing slash.


oschaaf avatar Dec 13 '17 12:12 oschaaf

Also there are no quotes. I'm adding them right now.

Here are my updates:

pagespeed FileCachePath "/var/cache/pagespeed/";
pagespeed CreateSharedMemoryMetadataCache "/var/cache/pagespeed/" 102400;

I'll keep you informed.

frontal1660 avatar Dec 13 '17 12:12 frontal1660

I am sorry to say that the problem is still here.

About 20 minutes after my update, the lines in error.log reappeared. I am desperate.

frontal1660 avatar Dec 13 '17 14:12 frontal1660

Are all the directory permissions set up so your nginx child processes can read/write your file-cache? E.g. is the file-cache directory owned by the same username that is running the nginx processes?
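[Editorial note] As a sketch of that check: the snippet below tries the same kind of write PageSpeed performs. It uses a temporary directory so it is self-contained; in practice you would point it at your real FileCachePath (e.g. /var/cache/pagespeed) and run it as the worker user, e.g. via sudo -u www-data.

```shell
# Demonstration of the writability check PageSpeed's file cache needs.
# The temp dir stands in for the real FileCachePath.
cache_dir="$(mktemp -d)"

check_writable() {
  # Try to create the same marker file PageSpeed maintains.
  if touch "$1/!clean!time!" 2>/dev/null; then
    echo "writable"
  else
    echo "not writable"
  fi
}

check_writable "$cache_dir"
rm -rf "$cache_dir"
```

If this prints "not writable" when run as the nginx worker user against the real cache path, the warnings in this thread are expected.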

jmarantz avatar Dec 13 '17 14:12 jmarantz

Hi @jmarantz

Thank you for your answer. A directory permissions issue was also my first guess.

Here is some information. You may be able to find something wrong; I could NOT.

/etc/nginx/nginx.conf:
user www-data;

/etc/php/7.0/fpm/pool.d/www.conf:
user = www-data
group = www-data
listen.owner = www-data
listen.group = www-data
listen.mode = 0660

bash# ps aux | grep nginx
root 10792 0.0 0.0 181724 4268 ? Ss 13:56 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
www-data 10793 0.1 0.2 704824 83324 ? Sl 13:56 0:09 nginx: worker process
www-data 10794 0.1 0.2 704416 69136 ? Sl 13:56 0:10 nginx: worker process
www-data 10795 0.1 0.2 706992 74728 ? Sl 13:56 0:15 nginx: worker process
www-data 10796 0.1 0.1 702948 58500 ? Sl 13:56 0:14 nginx: worker process
www-data 10797 0.1 0.1 703868 54368 ? Sl 13:56 0:09 nginx: worker process
www-data 10798 0.1 0.2 701636 66236 ? Sl 13:56 0:10 nginx: worker process
www-data 10799 0.1 0.1 702232 64232 ? Sl 13:56 0:12 nginx: worker process
www-data 10800 0.1 0.1 702100 47648 ? Sl 13:56 0:09 nginx: worker process

bash# ps aux | grep php
www-data 658 0.2 0.1 1374220 60988 ? S 16:23 0:01 php-fpm: pool www
root 1750 0.0 0.2 1361852 94036 ? Ss Dec12 0:02 php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)
www-data 20351 0.1 0.1 1441276 62568 ? S 14:59 0:11 php-fpm: pool www
www-data 23942 0.1 0.1 1370320 57676 ? S 15:23 0:06 php-fpm: pool www
www-data 23977 0.1 0.1 1374284 61364 ? S 15:23 0:05 php-fpm: pool www
www-data 23995 0.1 0.1 1363988 51900 ? S 15:23 0:07 php-fpm: pool www
www-data 24010 0.1 0.1 1364240 54356 ? S 15:23 0:06 php-fpm: pool www
www-data 24075 0.1 0.2 1445356 65892 ? S 15:24 0:07 php-fpm: pool www
www-data 24076 0.1 0.1 1374424 60824 ? S 15:24 0:05 php-fpm: pool www
www-data 24084 0.1 0.1 1440988 61840 ? S 15:24 0:07 php-fpm: pool www
www-data 24085 0.1 0.1 1441048 62448 ? S 15:24 0:05 php-fpm: pool www
www-data 24086 0.2 0.2 1378644 67048 ? S 15:24 0:08 php-fpm: pool www
www-data 24087 0.1 0.1 1443332 64312 ? S 15:24 0:07 php-fpm: pool www
www-data 24102 0.1 0.1 1441040 62516 ? S 15:24 0:06 php-fpm: pool www
www-data 24158 0.1 0.1 1441072 61852 ? S 15:24 0:06 php-fpm: pool www
www-data 24162 0.1 0.1 1441148 61928 ? S 15:24 0:05 php-fpm: pool www
www-data 24251 0.1 0.1 1374708 63144 ? S 15:25 0:07 php-fpm: pool www
www-data 24285 0.1 0.1 1366380 54180 ? S 15:25 0:06 php-fpm: pool www

bash# ls -la /var/cache/pagespeed/
total 12K
drwxr-xr-x 3 www-data www-data 4.0K Dec 13 13:56 .
drwxr-xr-x 11 root root 4.0K Nov 11 01:42 ..
drwxrwxrwx 3 www-data www-data 4.0K Dec 13 13:56 shm_metadata_cache

bash# ls -la /var/cache/pagespeed/shm_metadata_cache
total 12K
drwxrwxrwx 3 www-data www-data 4.0K Dec 13 13:56 .
drwxr-xr-x 3 www-data www-data 4.0K Dec 13 13:56 ..
drwxrwxrwx 3 www-data www-data 4.0K Dec 13 13:56 snapshot

I'm stuck.

frontal1660 avatar Dec 13 '17 15:12 frontal1660

I'm not sure what is up but https://groups.google.com/forum/m/#!topic/ngx-pagespeed-discuss/JBvQU0unpag looks very similar

oschaaf avatar Dec 13 '17 16:12 oschaaf

Hi @frontal1660 I meant: change the file name from cache.flush to !clean!time!. When the file cache is working, a v3 directory exists under the /var/cache/pagespeed/ path; under that v3 directory are sub-directories named after the sites, with their paths and files... these files are served by pagespeed.

It seems that the user running nginx cannot write to these directories.

Lofesa avatar Dec 13 '17 17:12 Lofesa

@oschaaf :

  • I have already read this thread many times but it does not apply. There is no suitable answer. I found other links but none of them is interesting :(

@Lofesa :

  • I do not have this directory (v3), but do I really have to create it? I will do it, but I do not think that it's "my job"... if this directory is useful for pagespeed then pagespeed should create it :)
  • I will create the !clean!time! file ("create" because cache.flush was auto-deleted), but without much conviction. I would like to know the source of the problem.
  • Regarding directory/file permissions, please check my previous post

bash# ls -la /var/cache/pagespeed
total 4
drwxrwxr-x 4 www-data www-data 100 Dec 13 19:39 .
drwxr-xr-x 11 root root 4096 Sep 30 23:54 ..
-rwxrwxrwx 1 www-data www-data 0 Dec 13 19:39 !clean!time!
drwxrwxrwx 3 www-data www-data 60 Dec 13 13:43 shm_metadata_cache
drwxrwxrwx 2 www-data www-data 40 Dec 13 19:39 v3

frontal1660 avatar Dec 13 '17 18:12 frontal1660

Yes, the v3 directory should be created by pagespeed; the fact that it is not created makes me think of permissions. In your last post, did you create the v3 directory, or did pagespeed? Do you have SELinux enabled?

Lofesa avatar Dec 13 '17 20:12 Lofesa

@Lofesa I have some news and I still have a problem.

Your questions and comments:

  • I created the "v3" directory and the "!clean!time!" file myself... not pagespeed... following your advice.
  • I also think it's a permission problem, but where is the issue? I don't know. See my config files if needed.
  • SELinux is NOT enabled and not used (Ubuntu 16.04.5 LTS)

Creating your file (!clean!time!) and/or directory (v3) had an impact but did not solve the problem:

  • I no longer have any messages in error.log, BUT my cache-miss graph on the pagespeed_admin Console page shows a 43.91% rate! I have had the same rate since at least the beginning of October 2017. Sometimes it goes over 56%!

Does anyone have an idea? :(

frontal1660 avatar Dec 13 '17 22:12 frontal1660

So now it looks like the permission problem is gone, but you see a different problem: the cache hit ratio seems lower than you would expect. Correct?

Perhaps points 1 & 2 from the following faq entry are worth checking out: https://www.modpagespeed.com/doc/faq#varying-results

oschaaf avatar Dec 13 '17 22:12 oschaaf

@oschaaf :

Thank you for your research.

Indeed it seems that the permission problem is gone, but how? I had to create:

  • a v3 directory which stays empty
  • a !clean!time! file which has a very "strange" name, don't you think?

The fact that I had to tune my pagespeed installation this way does not comfort me. I think this tuning is too specific, and my first question would be: has anyone ever done something like that (creating this directory and file by hand)?

Actually the cache-miss problem (Console graph problem) is not really new. I have had it since at least the beginning of October (average cache-miss rate: 50%).

Regarding your link, I have once again already read this page :( It is a little bit complicated and:

  • I am not sure I am in one of the discussed cases
  • I am not sure I fully understand what it says

I will try to apply some of the advice given on the page.

My two sites are both served over HTTPS, so it may be useful to specify: pagespeed MapOriginDomain http://localhost https://www.my-domain.com;

I can add few things:

  • I don't use any CDN
  • I use Nginx Brotli module
  • I use Memcached

What do you think?

frontal1660 avatar Dec 13 '17 23:12 frontal1660

@oschaaf

Bad news... still

I had to restart my server for maintenance. After the reboot I checked two things:

  • whether the v3 directory and the !clean!time! file were still in the /var/cache/pagespeed directory: they do NOT exist anymore... so it means I would have to recreate this file/directory at every reboot... not good news, and not normal behaviour I guess
  • whether the error.log file was "clean": it is NOT... the pagespeed errors came back!!! Maybe because of the missing file/directory.

I decided NOT to recreate the file/directory, but to implement the MapOriginDomain directive instead to see what happens next.

Once again I say the same thing: I've been using pagespeed on something like 10 servers for almost 2 years with no problems, and I have this big problem only on my two latest servers, with the latest Nginx and Pagespeed versions.

@oschaaf, @Lofesa, @jmarantz: I really thank you for your help, but I don't know what to add. I feel desperate... again.

frontal1660 avatar Dec 14 '17 02:12 frontal1660

I think manually messing with !clean!time! is really trying to solve a symptom of something else. I have another wild guess though: are you out of space in your /dev/shm partition?

Also note -- I don't really think the low apparent cache hit rate is necessarily a problem. PageSpeed may -- based on some combination of enabled filters and site URL patterns -- attempt to do a lot of lookups for properties of an HTML page that rarely get hits. E.g. if your JavaScript inserted query-params with high entropy.

But the !clean!time! messages are annoying and it's probably worth fixing.

jmarantz avatar Dec 14 '17 02:12 jmarantz

@jmarantz

1 - I totally agree with you when you speak about symptoms. It is not the root cause.

2 - Actually (thanks to @oschaaf) I only use /var/cache/pagespeed for both pagespeed directives:

pagespeed FileCachePath "/var/cache/pagespeed/";
pagespeed CreateSharedMemoryMetadataCache "/var/cache/pagespeed/" 102400;

3 - Regarding the size/space of the partitions:

  • /var/cache/pagespeed/ belongs to /var, which is a 16 GB partition with 15 GB of available space
  • /dev/shm is a 16 GB partition with 15.5 GB of available space

4 - By the way, even after a certain period of time, I have files in my /var/cache/pagespeed directory, but "du -m /var/cache/pagespeed" tells me that these files take less than 1 MB... which might be strange...

bash# du -m /var/cache/pagespeed/
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache/pagespeed/,2Fmetadata_cache/1/64/8904/128
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache/pagespeed/,2Fmetadata_cache/1/64/8904
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache/pagespeed/,2Fmetadata_cache/1/64
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache/pagespeed/,2Fmetadata_cache/1
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache/pagespeed/,2Fmetadata_cache
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache/pagespeed
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar/cache
1 /var/cache/pagespeed/shm_metadata_cache/snapshot/,2Fvar
1 /var/cache/pagespeed/shm_metadata_cache/snapshot
1 /var/cache/pagespeed/shm_metadata_cache
1 /var/cache/pagespeed/

5 - Regarding the cache hit rate, you might be right, but a rate between 50 and 56% does not seem very good. I've just checked on an older server (6 months older than the two servers experiencing issues) and my "cache missed graph" indicates a value of 7.19%.

6 - Again you're right... these messages are really annoying and I'd really like to find out what the REAL problem is.

7 - Important: as I said before, I implemented @oschaaf's advice (pagespeed MapOriginDomain http://localhost https://www.my-domain.com;) and... the !clean!time! messages in error.log are definitely back...

8 - I do not consider myself a newbie, but I have to admit that I feel useless in this case.

9 - Does anyone know what triggers this message in the pagespeed source code?

frontal1660 avatar Dec 14 '17 02:12 frontal1660

The message is triggered here: https://github.com/pagespeed/mod_pagespeed/blob/338c37915355c0a4dd8726643cd90718bc394d6f/pagespeed/kernel/cache/file_cache.cc#L403

However, the module tries to create !clean!time! here: https://github.com/pagespeed/mod_pagespeed/blob/338c37915355c0a4dd8726643cd90718bc394d6f/pagespeed/kernel/cache/file_cache.cc#L360

Looking at the code, I think the warning actually will always be emitted at least once, because FileCache::ShouldClean runs before FileCache::CleanWithLocking which is responsible for creating the file. But maybe I'm missing something...
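[Editorial note] Paraphrasing that flow as pseudocode (one reading of the linked file_cache.cc, not the actual C++):

```
on cache activity:
    if ShouldClean():                    # tries to read !clean!time!
        # read fails on a fresh cache -> the warning in this thread
        warn "Failed to read cache clean timestamp ..."
        schedule CleanWithLocking()

CleanWithLocking():
    perform the clean
    write !clean!time!                   # file exists only after the first clean
```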

oschaaf avatar Dec 14 '17 10:12 oschaaf

Just stumbled on this too. Everything is working but the error:

Failed to read cache clean timestamp /var/cache/pagespeed/!clean!time!. Doing an extra cache clean to be safe.

... is being triggered.

Indeed, it looks like the code referenced by @oschaaf is responsible; in a nutshell, it expects the file to be there.

I think an easy solution would be having PageSpeed touch the !clean!time! file on startup, to make sure it's there from the get-go.

dvershinin avatar Mar 09 '20 08:03 dvershinin

I am having the same problem. I noticed that the !clean!time! file does not get written at all when using the pagespeed MemcachedServers directive. So it looks like the cache manager is looking for a file that never got created when memcached is configured for caching.

project0 avatar Oct 27 '20 23:10 project0

@dvershinin and @Project0 : are you getting the message repeatedly? Or is it just coming out once? If it's coming out just once I think that's not a problem, and you should just ignore it. If it's repeatedly coming out then I could see it becoming an issue for log-file growth.

jmarantz avatar Oct 28 '20 00:10 jmarantz

If it's repeatedly coming out then I could see it becoming an issue for log-file growth.

Indeed, that's what is happening. It's usually thrown several times in a row (I guess per worker). I am now creating this file manually at startup (docker entrypoint) to remove the noise from our logs. That has driven me crazy for years :D
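[Editorial note] A minimal sketch of such an entrypoint hook, assuming /var/cache/pagespeed as the cache path (the helper name is made up):

```shell
#!/bin/sh
# Pre-create PageSpeed's cache-clean timestamp marker so the first
# ShouldClean check finds it and the warning is never logged.
prepare_pagespeed_cache() {
  dir="$1"
  mkdir -p "$dir"
  # Marker file that PageSpeed otherwise creates only after its first clean.
  touch "$dir/!clean!time!"
}

# In a docker entrypoint you might then run, for example:
#   prepare_pagespeed_cache /var/cache/pagespeed
#   exec nginx -g 'daemon off;'
```

The directory and file must end up owned by (or writable to) the nginx worker user, so run this before dropping privileges.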

project0 avatar Oct 28 '20 07:10 project0

@jmarantz It is coming out once, but as per @Project0, I'm not surprised it may also be multiplied by the number of workers.

While it won't pollute the logs too much, I think it is still a bug, because the message is misleading.

  • Either the file should be pre-created upon NGINX startup, or:
  • Keep some in-memory record that the cleanup was never run after startup, then skip the file check

dvershinin avatar Oct 28 '20 08:10 dvershinin

I don't think it will be multiplied by the number of workers because the first one that runs will write that file.

It also should not happen once for each time nginx is restarted; the file will be there after the restart, as will the contents of the cache, unless it's on tmpfs and the system is restarted.

You are right that it would be nicer not to print that spurious message, so it's perfectly fine to have the startup script touch that file if it's not present. However that's not something PageSpeed itself does.

jmarantz avatar Oct 28 '20 12:10 jmarantz