Could ic also print timing information?
I often use ic to print intermediate values of computations that might take a little bit of time. It would be great if there were an optional mode where ic would print not only the value of the computation, but also the amount of time taken. For example, something like:
ic| test.py:174 in <module>
f(300): 3.9 <1m11.744s>
@kach this is a wonderful suggestion and definitely useful
a natural place to put this would be in the context, along with filename and line number
what timing info would you like to see, exactly? would a timestamp accurate to milliseconds suffice for your needs?
in the interim, one thing you can do is write a custom prefix function that includes a timestamp in your desired format: https://github.com/gruns/icecream?tab=readme-ov-file#configuration
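for example, a rough, untested sketch (configureOutput() accepts a callable prefix, per that README section; timestampPrefix is just a made-up name):

import time
from icecream import ic

def timestampPrefix():  # made-up helper: prefix each line with the current unix time
    return '%.3f |> ' % time.time()

ic.configureOutput(prefix=timestampPrefix)
ic('world')  # prints something like: 1712345678.123 |> 'world': 'world'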
I would simply like to see the time taken to evaluate the expression, similar to the %time line magic in Jupyter notebooks (this is different from a timestamp!). It would be nice if this information were displayed in a reasonable human-readable format, choosing resolution/precision depending on the order of magnitude of the time interval being displayed. For example, depending on how long the computation takes, it might say 1h3m25s (long computation) or it might say 33.5ms (very short computation).
Thanks for considering this request! :)
awesome request
but! the problem isn't capturing the timing, which can be done with a context manager. like:
with ic():
    foo()
---
ic 22ms| foo(): 1
the problem is how to capture the return value of foo() in the above. context managers can't implicitly pick up the return values of expressions evaluated inside the with block
maybe there's some sort of hack that can be done with
with ic.profile():
    ic(foo())
where ic() detects that it's inside an ic.profile() context manager and throws an exception carrying the return value of foo(), which ic.profile() then catches
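a very rough, untested sketch of what i mean (every name below is made up; none of this exists in icecream today, and it ignores the foo(): source text that the real ic() already knows how to reconstruct):

import time
from contextlib import contextmanager

class _IcProfileResult(Exception):  # made-up carrier for the value passed to ic()
    def __init__(self, value):
        self.value = value

_profiling = False  # would live on the IceCreamDebugger instance instead of a global

@contextmanager
def profile():  # stand-in for a hypothetical ic.profile()
    global _profiling
    _profiling = True
    start = time.monotonic()
    try:
        yield
    except _IcProfileResult as result:  # thrown by ic() below
        elapsed = time.monotonic() - start
        print('ic %.0fms| %r' % (elapsed * 1000, result.value))
    finally:
        _profiling = False

def ic(value):  # stand-in for the real ic()
    if _profiling:
        raise _IcProfileResult(value)  # smuggle the value out to profile()
    print('ic| %r' % value)
    return value

with profile():
    ic(sum(range(10**6)))  # prints something like: ic 28ms| 499999500000

downside: the exception aborts the rest of the with block, so this only really works when ic(foo()) is the only statement inside it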
i'll look into this. lmk your thoughts!
hat tip to @salabim
I prefer (2), because I think the most common use case is to want to opt-in for a particular call to ic() rather than all calls globally.
If I may, I think the kwarg should be called “print_timing” or something — to me, “profile” has the connotation of running multiple times and computing statistical analyses on the result. But it doesn’t matter too much either way.
I don't think something like
ic(foo())
ic(foo(), profile=True)
could provide timing information, as arguments are always evaluated before being passed to the function. So there's no way to get the timing this way, IMHO. A lambda could help, but is far from intuitive.
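For illustration, a lambda-based workaround could look something like this (ic_timed is a hypothetical helper, not part of icecream):

import time

def ic_timed(fn):  # hypothetical: time a zero-argument callable and print its result
    start = time.monotonic()
    value = fn()
    elapsed = time.monotonic() - start
    print('ic| %r <%.3fs>' % (value, elapsed))
    return value

ic_timed(lambda: sum(range(10**7)))  # e.g. ic| 49999995000000 <0.214s>

It gets the timing right because evaluation is deferred until ic_timed() calls the lambda, but ic(lambda: foo()) reads much worse than ic(foo()).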
@salabim 100%; updated comment above 🙏
See an implementation on www.salabim.org/peek/
I like this version:
with ic.timing():
    ic(foo())
Imagine that when you enter the context manager you start the timer, and then each time you call ic() within the context manager you print the time on that timer. So if you have multiple ic() calls, you should see the cumulative time.
with ic.timing():
    ic(5 + 1)          # 6 <1ms>
    ic(time.sleep(1))  # --> "None <1.001s>"
    ic(time.sleep(1))  # --> "None <2.001s>"
(You can implement this by maintaining a global stack of "timers" which store the start time of the current context manager. When you enter a context manager, you push the current time onto the stack. When you exit, you pop. When you call ic(), you peek the stack, and if it is non-empty, then you print the time delta since the time at the top of the stack.)
@kach i like this version, too 👍
with ic.timing():
    ic(foo())
and this implementation sgtm
> (You can implement this by maintaining a global stack of "timers" which store the start time of the current context manager. When you enter a context manager, you push the current time onto the stack. When you exit, you pop. When you call ic(), you peek the stack, and if it is non-empty, then you print the time delta since the time at the top of the stack.)
would you have time to take a stab at this with a PR?
I don't have time to understand the icecream codebase and figure out how to add this feature. But here is "starter code" for the solution I described above:
from contextlib import contextmanager
import time

timers = []

@contextmanager
def timing():  # maintain a global stack of timer start times
    timers.append(time.time())
    try:
        yield
    finally:
        timers.pop()  # pop even if the block raises

def ftime(dt):  # format a duration in seconds to be human-readable
    if dt < 1e-6:
        return f'{dt * 1e9:.2f}ns'
    if dt < 1e-3:
        return f'{dt * 1e6:.2f}µs'
    if dt < 1:
        return f'{dt * 1e3:.2f}ms'
    if dt < 60:
        return f'{dt:.2f}s'
    if dt < 3600:
        return f'{int(dt // 60):d}m {dt % 60:.2f}s'
    return f'{int(dt // 3600):d}h {int((dt % 3600) // 60):d}m {dt % 60:.2f}s'

def ic(x):  # stand-in for the real ic()
    if len(timers) > 0:
        dt = time.time() - timers[-1]  # time since the innermost timing() started
        print(f'{x} <{ftime(dt)}>')
    else:  # not in any context manager
        print(f'{x}')

## example
ic("apple")           # apple
with timing():
    time.sleep(0.5)
    ic("banana")      # banana <~500ms>
    with timing():
        ic("cherry")  # cherry <a few µs>
    ic("durian")      # durian <~500ms>
ic("elderberry")      # elderberry
It shouldn't be too hard to incorporate this, for someone who is familiar with the codebase?
@kach i don't have time to implement this right now. startup life 🙃. but if you, or anyone else, has time to take a stab here, please let 'er rip! 🙌 it would be awesome to better support granular timing in icecream
@salabim do you have time to add this to icecream? or port the functionality over from http://www.salabim.org/peek/ or other?