serve_forever hangs after calling close_all
On Linux it hangs; on Windows it works fine.
You should provide some more info. How do you stop the server? Via a signal? I suggest you put some print statements in the close_all method in order to figure out where it hangs.
I use the threaded FTP server class. I start the server in one thread and then stop it via close_all() in another thread. close_all() executes fully, but the code after serve_forever() is never reached. This happens under Linux; Windows does not hang.
I guess epoll has some issue?
Code version is newest, Dec 1
Please paste the code you're using.
```python
class FtpServer(threading.Thread, metaclass=Singleton):
    authorizer = None
    server = None
    handler = None

    def __init__(self):
        self.closed = False
        self.running = True
        self.authorizer = DummyHashAuthorizer()
        dtp_handler = ThrottledDTPHandler
        dtp_handler.timeout = 60
        dtp_handler.read_limit = 0
        dtp_handler.write_limit = 100 * 1024
        dtp_handler.auto_sized_buffers = False
        self.handler = MyFTPHandler
        self.handler.timeout = 60
        self.handler.use_gmt_times = False
        self.handler.use_sendfile = True
        self.handler.authorizer = self.authorizer
        self.handler.dtp_handler = dtp_handler
        self.handler.abstracted_fs = MyFilesystem
        self.handler.passive_ports = range(30000, 31000)
        threading.Thread.__init__(self)
        self.setDaemon(True)

    def run(self, *args, **kwargs):
        while not self.closed:
            try:
                if self.running:
                    self.server = ThreadedFTPServer(('0.0.0.0', 21), self.handler)
                    self.server.max_cons = 100
                    self.server.max_cons_per_ip = 3
                    self.server.serve_forever()
                    print('stopped')  # #### NEVER EXECUTED UNDER LINUX ####
                time.sleep(0.5)
            except Exception as error:
                logger.error(traceback.format_exc())

    def stop(self):
        self.running = False
        self.server.close_all()

    def restart(self):
        self.running = True

    def close(self):
        self.closed = True
        self.stop()
```
On the main thread:

```python
ftp = FtpServer()
ftp.stop()
```
~~I tried a temporary solution for this issue: in ioloop.py, class Epoll(), delete the line `self._poller.close()` in the close() method.~~
It still hangs sometimes.
I came upon the same problem and discovered an error in the docstring of serve_forever():
```python
def serve_forever(self, timeout=None, blocking=True, handle_exit=True):
    """Start serving.

    - (float) timeout: the timeout passed to the underlying IO
      loop expressed in seconds (default 1.0).
```
1.0 being the default is not quite true because it's set to "None" right here and never changed later.
So when I call serve_forever() without a timeout, calling close_all() waits indefinitely. When I specify a timeout, e.g. serve_forever(timeout=10), calling close_all() returns as expected and the server is killed after a few seconds.
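For reference, here is a sketch of that workaround applied to the run() loop posted earlier; the only change is the explicit timeout passed to serve_forever():

```python
    def run(self, *args, **kwargs):
        # Same loop as in the FtpServer class above; the explicit timeout
        # makes the underlying poller wake up at least once per second, so
        # it can notice that close_all() was called from stop().
        while not self.closed:
            try:
                if self.running:
                    self.server = ThreadedFTPServer(('0.0.0.0', 21), self.handler)
                    self.server.max_cons = 100
                    self.server.max_cons_per_ip = 3
                    self.server.serve_forever(timeout=1.0)
                    print('stopped')  # reached once close_all() completes
                time.sleep(0.5)
            except Exception:
                logger.error(traceback.format_exc())
```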
@muffl0n but then how do you deal with running forever, short of a manual shutdown?
@muffl0n Oh... I get it now. I'll try your solution later... Thanks.
@muffl0n after several hours of testing, it seems that adding the timeout parameter works very well. Thanks.
I'm always happy to help! :) I'm still curious what is right: the code or the documentation. @giampaolo, maybe you could help us here?
Sorry for joining late guys.
So I understand the problem is that you want to shut down the server via code; you do so by calling close_all, but close_all hangs unless you specify a timeout != None for serve_forever (note: the docstring stating the default is 1.0 is clearly wrong).
Also I understand you use a threaded server, and I suppose this does not happen if you use the plain "async" server.
Is this correct?
I don't know whether it happens with the plain async server; I use a threaded server. I mean that serve_forever hangs unless I specify a timeout, but close_all can be executed fully.
At first I thought the timeout was the server's total running time. Obviously I was wrong: it is the timeout that controls how quickly the server can be killed.
The timeout parameter is passed directly to the underlying IO loop (which uses select() / epoll() / whatever syscall) and it causes it to wait / hang until something "happens" to the connected file descriptors (read or write events).
As such, I suppose, close_all() closes all fds, but the underlying select / epoll syscall will keep hanging.
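Roughly, the behaviour can be pictured with this simplified stand-in loop (not pyftpdlib's actual ioloop code; event dispatching is reduced to a placeholder recv()):

```python
import select

def poll_loop(socks, stop_event, timeout=None):
    """Simplified stand-in for an IO loop; NOT pyftpdlib's actual code."""
    while not stop_event.is_set():
        # With timeout=None, select() blocks until one of the fds fires;
        # if nothing ever fires after shutdown, this call never returns
        # and stop_event is never re-checked.
        # With a finite timeout (e.g. 1.0), select() returns at least once
        # per second, so the loop can notice stop_event and exit cleanly.
        readable, _, _ = select.select(socks, [], [], timeout)
        for sock in readable:
            sock.recv(1024)  # placeholder for real event dispatching
```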
Perhaps it makes sense to have timeout default to 1 sec or something for the ThreadedFTPServer class.
Question: are you on Windows?
CentOS. I also use the same method to stop and restart an HTTP server in Python. It seems the HTTP library works well and can stop immediately, although I don't know why. ☺ For sure, setting the timeout to a specific value is useful.
I'm using Gentoo. +1 for setting the default to 1 or some value other than None. Having no timeout at all feels pretty weird.
Done.
Awesome! Thank you! :)
good job
;-)
A week later, the problem comes back. After issuing close_all, the threaded FTP server prints:

```
[W 2016-10-23 04:00:05] thread <Thread(('xxx', 62906), started daemon 140644651616000)> didn't terminate; ignoring it
```

and hangs again.
So that means one of the threads kept hanging (e.g. it got stuck in a time.sleep() call or something) and as such the main thread couldn't exit either.
I would argue that in this case pyftpdlib is doing the right thing.
As for what concerns your code, you can use os._exit(0) after close_all.
That will just kill the interpreter. Not very nice, but when dealing with threads it's a commonly used solution.
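A sketch of how that could look in the stop() method of the FtpServer class posted earlier, assuming it is acceptable for the whole process to exit:

```python
import os

    def stop(self):
        self.running = False
        self.server.close_all()
        # Last resort suggested above: terminate the interpreter outright,
        # including any worker threads that refused to shut down. Note that
        # this skips atexit handlers and any other cleanup.
        os._exit(0)
```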
Emm... I don't want to kill the process because there are other threads that must keep serving. The HTTP server library never hangs after close_all in my environment. Is the difference in threading model between the HTTP server and pyftpdlib what causes the problem? Or is the sub-thread timeout not taking effect?
You should identify what thread is hanging and where exactly.
I suppose you're using ThreadedFTPServer instead of the async FTPServer because you have blocking operations. Is that blocking operation an HTTP request you make using httplib? In that case what I would recommend is setting a timeout for httplib (say 5 secs), then ThreadedFTPServer.join_timeout = 8 or something.
That way the HTTP request will eventually time out and raise an exception in the running thread, but it will "unblock" the thread so that it can be terminated.
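A rough sketch of that suggestion; the HTTP call, its hostname and the exact timeout values are purely illustrative, and join_timeout is the attribute mentioned above:

```python
from pyftpdlib.servers import ThreadedFTPServer
import http.client  # "httplib" on Python 2

def blocking_http_call():
    # Hypothetical stand-in for the blocking HTTP request mentioned above:
    # give it a hard timeout so a hung request cannot keep an FTP worker
    # thread alive forever.
    conn = http.client.HTTPConnection('example.com', timeout=5)
    conn.request('GET', '/')
    return conn.getresponse().read()

# Let the threaded server wait a bit longer than the HTTP timeout when
# joining its worker threads during close_all().
ThreadedFTPServer.join_timeout = 8
```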
After a few days of investigation, I guess there is some issue in ioloop. I use ThreadedFTPServer to serve FTP. I do nothing blocking in the original FTP functions. httplib is used in my project but it is isolated from the FTP code. httplib never hangs in its sub-threads; every thread terminates after the timeout. I compared 2500 log files from 8 servers over one year and found a recurring pattern.
A thread may not be terminated after the user performs one of these command sequences:

```
Login -> CWD -> ... -> CWD  -> no disconnect after timeout
Login -> CWD -> ... -> STOR -> no disconnect after timeout
Login -> CWD -> ... -> DELE -> no disconnect after timeout
```

It is not easy to reproduce this situation; maybe it depends on an unstable internet connection.
Then, if you run close_all(), you will get the warning 'thread didn't terminate; ignoring it'.
I believe this is being caused by issue #48 in the threading module.