Implement async input()
Currently, asyncio doesn't provide any helper to read data asynchronously from
sys.stdin, like input(). Twisted implements twisted.internet.stdio:
https://twistedmatrix.com/trac/browser/trunk/twisted/internet/stdio.py
https://twistedmatrix.com/trac/browser/trunk/twisted/internet/_posixstdio.py
https://twistedmatrix.com/trac/browser/trunk/twisted/internet/process.py#L106
https://twistedmatrix.com/trac/browser/trunk/twisted/internet/_win32stdio.py
See also https://code.google.com/p/tulip/issues/detail?id=147
Original issue reported on code.google.com by [email protected] on 2 Nov 2014 at 10:55
Hi,
This seems suspiciously similar to BaseSubprocessTransport and
SubprocessProtocol. It seems like it could be implemented in a very similar
manner using sys.stdin and sys.stdout. Thoughts?
Original comment by [email protected] on 17 Nov 2014 at 9:19
You can use loop.add_reader(fd, cb) to add a callback function on the stdin file descriptor and change the tty settings to cbreak. Or you can make a coroutine that reads from the curses screen getch() method with the nodelay option. I'm sure there is a more robust/complicated way to do this, maybe involving StreamReader.
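For illustration, here is a rough sketch of that add_reader() approach (POSIX only; the queue and the stop condition are just placeholders for real handling):

```python
import asyncio
import os
import sys
import termios
import tty

async def read_keys():
    # Put the terminal in cbreak mode so characters arrive immediately,
    # then let the event loop call us back whenever stdin is readable.
    loop = asyncio.get_event_loop()
    queue = asyncio.Queue()
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    tty.setcbreak(fd)
    loop.add_reader(fd, lambda: queue.put_nowait(os.read(fd, 32)))
    try:
        while True:
            data = await queue.get()   # wait without blocking the loop
            if data == b'q':           # arbitrary stop condition for the demo
                break
            print('read:', data)
    finally:
        loop.remove_reader(fd)
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

asyncio.run(read_keys())
```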
I implemented something similar as part of this project. It provides the following coroutines:
- `get_standard_streams(*, use_stderr=False, loop=None)`: return two streams corresponding to `stdin` and `stdout` (or `stderr`)
- `ainput(prompt=None, *, loop=None)`: asynchronous equivalent to `input()`
Everything is implemented in stream.py. It should work even if sys.stdin and sys.stdout don't have a file descriptor (inside IDLE for instance).
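For context, something along these lines can be built on connect_read_pipe()/connect_write_pipe(); the sketch below is only an illustration of the idea (POSIX, descriptor-backed stdin/stdout), not the project's actual code, and it leans on the internal asyncio.streams.FlowControlMixin helper to get a drain()-capable write protocol:

```python
import asyncio
import sys

async def stdio_streams():
    # Wrap sys.stdin/sys.stdout in asyncio streams (POSIX only).
    loop = asyncio.get_event_loop()
    reader = asyncio.StreamReader()
    await loop.connect_read_pipe(
        lambda: asyncio.StreamReaderProtocol(reader), sys.stdin)
    transport, protocol = await loop.connect_write_pipe(
        asyncio.streams.FlowControlMixin, sys.stdout)
    writer = asyncio.StreamWriter(transport, protocol, reader, loop)
    return reader, writer

async def ainput(prompt=''):
    # Asynchronous take on input(): write the prompt, then await one line.
    reader, writer = await stdio_streams()
    writer.write(prompt.encode())
    await writer.drain()
    return (await reader.readline()).decode().rstrip('\n')
```

With this, `line = await ainput('> ')` behaves like input() without blocking the loop; note that calling stdio_streams() twice would wire up the pipes twice, which is why some form of caching is useful.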
https://gist.github.com/nathan-hoad/8966377 - is it good, @vxgmichel?
I vote for implementing stream wrappers for generic file objects.
Something simple and stupid, like:
- `(reader, writer) = asyncio.wrap_fileobject(fileobj)` (uses `fileobj.read()`, `fileobj.write()`, `fileobj.flush()` internally)
- `(reader, writer) = asyncio.wrap_file_descriptor(fd)` (uses `os.read(fd)`, `os.write(fd)` internally)
- `(reader, writer) = asyncio.wrap_streaming_socket(socket_obj)` (uses `socket_obj.send()`, `socket_obj.recv()` internally)
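To make this concrete, one possible shape for the descriptor case is sketched below; the wrap_* names above are this proposal, not existing asyncio API, and the sketch only covers the read side:

```python
import asyncio
import os

def wrap_read_fd(fd, *, chunk_size=4096, loop=None):
    # Hypothetical reader half of the proposed asyncio.wrap_file_descriptor():
    # feed a StreamReader from a raw descriptor via loop.add_reader().
    loop = loop or asyncio.get_event_loop()
    reader = asyncio.StreamReader()

    def _on_readable():
        data = os.read(fd, chunk_size)
        if data:
            reader.feed_data(data)
        else:
            reader.feed_eof()
            loop.remove_reader(fd)

    loop.add_reader(fd, _on_readable)
    return reader
```

The write side would need loop.add_writer() or connect_write_pipe(), and the plain file-object variant would have to fall back to an executor when there is no usable descriptor.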
@socketpair There are a few differences between the example in your link and the way I wrote it:
- I use `sys.stdout` instead of `os.fdopen(0, 'wb')` (I'm not sure what is best though)
- I subclass `StreamReader` and `StreamWriter` to avoid closing the stream in the `__del__` method
- I use an executor if `stdin` and `stdout` don't support the file interface (e.g. in IDLE)
- I use a caching mechanism to avoid creating new streams, though it's probably a bad idea :)
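The executor fallback mentioned above can be as small as this sketch, which simply pushes the blocking call into a thread:

```python
import asyncio

async def ainput_fallback(prompt=''):
    # Works even when sys.stdin has no usable file descriptor (e.g. in IDLE):
    # run the blocking input() call in the default thread pool executor.
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, input, prompt)
```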
About the wrappers you described, I'm not sure it's a good idea to create high-level streams from low-level objects (file objects, descriptors, sockets). For instance, in order to open a new socket connection, you can either use:
- `loop.create_connection`: return (transport, protocol) from a socket (or host and port)
- `asyncio.open_connection`: return streams from a host and port
Same thing for subprocesses:
- `loop.connect_read_pipe`, `loop.connect_write_pipe`: return (transport, protocol) from a pipe
- `loop.subprocess_exec`: return (transport, protocol) from a command
- `asyncio.create_subprocess_exec`: return a high-level process object from a command
So I would expect file streams to work the same:
- `loop.connect_read_file`, `loop.connect_write_file`: return (transport, protocol) from a file descriptor
- `asyncio.open_file`: return streams from a file name
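For comparison, this is roughly the layering that asyncio.open_connection already uses (a simplified sketch, not the exact stdlib source); a hypothetical asyncio.open_file would presumably mirror the same shape on top of connect_read_file/connect_write_file:

```python
import asyncio
from asyncio import StreamReader, StreamReaderProtocol, StreamWriter

async def open_connection_sketch(host, port):
    # High-level streams layered on the (transport, protocol) pair
    # returned by the low-level loop.create_connection() call.
    loop = asyncio.get_event_loop()
    reader = StreamReader()
    protocol = StreamReaderProtocol(reader)
    transport, _ = await loop.create_connection(lambda: protocol, host, port)
    writer = StreamWriter(transport, protocol, reader, loop)
    return reader, writer
```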
@vxgmichel Unfortunately, there are cases where the file descriptor already exists, for example with systemd's socket activation, or xinetd.
Also, if stdout/stdin is a pipe, input and output may block easily, so wrapping them in an asyncio stream is convenient.
FWIW an issue with my gist is that it will break print() calls for sufficiently large output, because stdout is... surprise surprise, non-blocking. Even if you decide to only have a non-blocking stdin you'll hit issues, because stdin and stdout are actually the same object for TTYs, as per this issue: https://github.com/python/asyncio/issues/147.
Also, os.fdopen(1, 'wb') and sys.stdout are interchangeable and there's no reason to use one over the other.