Print performance issue using CUPS with the Lexmark UPD driver
Users are reporting a significant delay in printing with the new printer and driver: jobs take 15-20 seconds to start printing.
**To Reproduce**
1. Install the driver from lexmark.com: https://support.lexmark.com/content/support/en_us/support/download.DRI1000645.html
2. Create the print queue using LPD or Socket
3. Print a test page
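For reference, the queues are created roughly along these lines (the queue names, printer address, and PPD path below are placeholders, not our exact values):

```shell
# Placeholder names/addresses; substitute the real printer host and the Lexmark UPD PPD.
# Socket (AppSocket/JetDirect, port 9100) queue:
lpadmin -p UPD-Socket -E -v socket://192.0.2.10:9100 -P /path/to/Lexmark-UPD.ppd

# LPD queue:
lpadmin -p UPD-LPD -E -v lpd://192.0.2.10/lp -P /path/to/Lexmark-UPD.ppd

# Print a test page and time how long the job takes to be submitted:
time lp -d UPD-Socket /usr/share/cups/data/testprint
```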
**Expected behavior**
No delay in printing.
**System Information:**
- OS: Ubuntu 22.04 / Red Hat 8.4
- CUPS version: 2.4.1
**Additional context**
From the logs we found a delay of 18-24 seconds.
**UPD with LPD:**

```
D [07/Jan/2025:11:15:28 -0500] [Client 20] HTTP_WAITING Closing on EOF
D [07/Jan/2025:11:15:28 -0500] [Client 20] Closing connection.
D [07/Jan/2025:11:15:28 -0500] cupsdSetBusyState: newbusy="Active clients and dirty files", busy="Active clients and dirty files"
D [07/Jan/2025:11:15:28 -0500] PID 88047 (/usr/lib/cups/cgi-bin/printers.cgi) exited with no errors.
D [07/Jan/2025:11:15:28 -0500] [Client 13] Closing because Keep-Alive disabled.
D [07/Jan/2025:11:15:28 -0500] [Client 13] Closing connection.
D [07/Jan/2025:11:15:28 -0500] cupsdSetBusyState: newbusy="Dirty files", busy="Active clients and dirty files"
D [07/Jan/2025:11:15:28 -0500] [Client 13] HTTP_WAITING Closing on EOF
D [07/Jan/2025:11:15:28 -0500] [Client 13] Closing connection.
D [07/Jan/2025:11:15:28 -0500] cupsdSetBusyState: newbusy="Dirty files", busy="Dirty files"
I [07/Jan/2025:11:15:46 -0500] Saving job.cache...
D [07/Jan/2025:11:15:46 -0500] cupsdSetBusyState: newbusy="Not busy", busy="Dirty files"
D [07/Jan/2025:11:16:41 -0500] cupsdAcceptClient: skipping getpeercon()
D [07/Jan/2025:11:16:41 -0500] [Client 13] Accepted from localhost (Domain)
D [07/Jan/2025:11:16:41 -0500] Report: clients=1
D [07/Jan/2025:11:16:41 -0500] Report: jobs=12
D [07/Jan/2025:11:16:41 -0500] Report: jobs-active=0
D [07/Jan/2025:11:16:41 -0500] Report: printers=18
D [07/Jan/2025:11:16:41 -0500] Report: stringpool-string-count=11288
```
**UPD with Socket:**

```
D [07/Jan/2025:12:00:05 -0500] [Client 20] HTTP_WAITING Closing on EOF
D [07/Jan/2025:12:00:05 -0500] [Client 20] Closing connection.
D [07/Jan/2025:12:00:05 -0500] cupsdSetBusyState: newbusy="Active clients and dirty files", busy="Active clients and dirty files"
D [07/Jan/2025:12:00:05 -0500] PID 101519 (/usr/lib/cups/cgi-bin/printers.cgi) exited with no errors.
D [07/Jan/2025:12:00:05 -0500] [Client 13] Closing because Keep-Alive disabled.
D [07/Jan/2025:12:00:05 -0500] [Client 13] Closing connection.
D [07/Jan/2025:12:00:05 -0500] cupsdSetBusyState: newbusy="Dirty files", busy="Active clients and dirty files"
D [07/Jan/2025:12:00:06 -0500] [Job 621] Unloading...
D [07/Jan/2025:12:00:30 -0500] cupsdAcceptClient: skipping getpeercon()
D [07/Jan/2025:12:00:30 -0500] [Client 13] Accepted from localhost:631 (IPv4)
I [07/Jan/2025:12:00:30 -0500] Saving job.cache...
D [07/Jan/2025:12:00:30 -0500] cupsdSetBusyState: newbusy="Not busy", busy="Dirty files"
D [07/Jan/2025:12:00:30 -0500] [Client 13] GET /printers/LASRTN01?which_jobs=completed HTTP/1.0
D [07/Jan/2025:12:00:30 -0500] cupsdSetBusyState: newbusy="Active clients", busy="Not busy"
D [07/Jan/2025:12:00:31 -0500] [Client 13] Authorized as svc.ftcupsadm using Basic
D [07/Jan/2025:12:00:31 -0500] [CGI] argv[0] = "/usr/lib/cups/cgi-bin/printers.cgi"
```
Also attaching the full logs: CUPS_LOG.txt
- Job 618 - UPD with LPD
- Job 621 - UPD with Socket
We modified the CUPS configuration in /etc/cups/cupsd.conf to enable Keep-Alive for longer connection persistence:

```
KeepAlive On
KeepAliveTimeout 30
```
However, there was no improvement in performance.
We also captured the log at LogLevel debug2.
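(For reference, debug2 logging was enabled roughly as follows, assuming the default log location:)

```shell
# Raise cupsd's log level to debug2 (cupsctl writes LogLevel into /etc/cups/cupsd.conf)
sudo cupsctl LogLevel=debug2

# Then watch the scheduler log while reproducing the delay:
sudo tail -f /var/log/cups/error_log
```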
Here is the relevant snippet:
```
d [16/Jan/2025:09:06:40 -0500] cupsdRemoveSelect(fd=14)
d [16/Jan/2025:09:06:40 -0500] cupsdEndProcess(pid=59557, force=0)
D [16/Jan/2025:09:06:40 -0500] [Client 13] Closing because Keep-Alive disabled.
D [16/Jan/2025:09:06:40 -0500] [Client 13] Closing connection.
D [16/Jan/2025:09:06:40 -0500] cupsdSetBusyState: newbusy="Dirty files", busy="Active clients and dirty files"
d [16/Jan/2025:09:06:40 -0500] cupsdAddSelect(fd=13, read_cb=0x556bf8ab2e00, write_cb=(nil), data=0x556bf930bb40)
d [16/Jan/2025:09:06:40 -0500] select_timeout: JobHistoryUpdate=0
d [16/Jan/2025:09:06:40 -0500] [Client 13] cupsdReadClient error=0, used=0, state=HTTP_WAITING, data_encoding=HTTP_ENCODE_LENGTH, data_remaining=2147479179, request=(nil)(), file=-1
D [16/Jan/2025:09:06:40 -0500] [Client 13] HTTP_WAITING Closing on EOF
D [16/Jan/2025:09:06:40 -0500] [Client 13] Closing connection.
D [16/Jan/2025:09:06:40 -0500] cupsdSetBusyState: newbusy="Dirty files", busy="Dirty files"
d [16/Jan/2025:09:06:40 -0500] cupsdRemoveSelect(fd=13)
d [16/Jan/2025:09:06:40 -0500] select_timeout: JobHistoryUpdate=0
d [16/Jan/2025:09:06:41 -0500] select_timeout: JobHistoryUpdate=0
d [16/Jan/2025:09:06:41 -0500] select_timeout(0): 17 seconds to write dirty config/state files
I [16/Jan/2025:09:06:58 -0500] Saving job.cache...
D [16/Jan/2025:09:06:58 -0500] cupsdSetBusyState: newbusy="Not busy", busy="Dirty files"
d [16/Jan/2025:09:06:58 -0500] cupsdCheckJobs: 0 active jobs, sleeping=0, reload=0, curtime=1737036418
d [16/Jan/2025:09:06:58 -0500] select_timeout: JobHistoryUpdate=0
d [16/Jan/2025:09:06:58 -0500] select_timeout(0): 86400 seconds to do nothing
d [16/Jan/2025:09:07:00 -0500] cupsdAcceptClient(lis=0x556bf9205920(10)) Clients=0
D [16/Jan/2025:09:07:00 -0500] cupsdAcceptClient: skipping getpeercon()
D [16/Jan/2025:09:07:00 -0500] [Client 13] Accepted from 10.227.205.77:631 (IPv4)
d [16/Jan/2025:09:07:00 -0500] cupsdAddSelect(fd=13, read_cb=0x556bf8ab2e00, write_cb=(nil), data=0x556bf930bb40)
d [16/Jan/2025:09:07:00 -0500] select_timeout: JobHistoryUpdate=0
d [16/Jan/2025:09:07:00 -0500] [Client 13] cupsdReadClient error=0, used=0, state=HTTP_WAITING, data_encoding=HTTP_ENCODE_LENGTH, data_remaining=0, request=(nil)(), file=-1
```
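The stall is visible directly in the log timestamps; here is a quick sketch for measuring the gap between two entries (assumes GNU coreutils `date -d`):

```shell
# Convert a CUPS error_log timestamp ("16/Jan/2025:09:06:41 -0500") to epoch seconds.
to_epoch() {
  date -d "$(echo "$1" | sed -E 's#([0-9]{2})/([A-Za-z]{3})/([0-9]{4}):#\1 \2 \3 #')" +%s
}

# Last activity before the stall vs. the next "Saving job.cache..." entry:
start=$(to_epoch "16/Jan/2025:09:06:41 -0500")
end=$(to_epoch "16/Jan/2025:09:06:58 -0500")
echo "gap: $((end - start)) seconds"   # 17 s, matching "17 seconds to write dirty config/state files"
```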
@michaelrsweet Could you please take a look at why this delay is happening, or share any suggestions you may have?

@tillkamppeter @michaelrsweet Please let me know if more information is needed to investigate this. We are looking for the cause of this delay. Is there something we should improve in our driver code?