Type coercion
@yukinarit Just found this library while looking into beartype, which I want to use; I figured there must be non-pydantic libraries with good (de)serialization options that interop with type validation libraries, rather than re-inventing them.
Personal preference here is to follow the "unix principle" and prefer interop with existing validation libraries, rather than re-implementing runtime type checking. It could be a very powerful combo to use pyserde with beartype, or possibly pydantic v2 when that comes out (which will be completely validation-focused, only supporting JSON serialization via JSON Schema, going fwd).
If you like the idea, I will be testing out this interop and can report back and/or contribute examples to the docs?
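To make the idea concrete, here's a minimal stdlib-only sketch of what the combo could look like. The `typechecked` decorator is a hypothetical, flat-types-only stand-in for what beartype would do properly, and `asdict` + `json.dumps` stands in for pyserde's serialization side:

```python
import json
from dataclasses import dataclass, asdict, fields
from typing import get_type_hints

def typechecked(cls):
    """Hypothetical stand-in for a runtime validator like beartype:
    check declared field types after __init__ runs (flat types only)."""
    hints = get_type_hints(cls)
    original_init = cls.__init__

    def __init__(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        for f in fields(cls):
            value = getattr(self, f.name)
            expected = hints[f.name]
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{cls.__name__}.{f.name}: expected {expected.__name__}, "
                    f"got {type(value).__name__}"
                )

    cls.__init__ = __init__
    return cls

@typechecked
@dataclass
class Point:
    x: int
    y: int

p = Point(x=1, y=2)           # validated at construction time
print(json.dumps(asdict(p)))  # serialization side, like pyserde's to_json
```

The point being: validation and serialization stay orthogonal, so each can come from a library that does that one thing well.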
Hi @tbsexton!
Thanks for the good suggestion. I looked into beartype and like it so much 👍
I'd really appreciate it if you could investigate pyserde + beartype
Hey @yukinarit, I know it's been a bit, but I definitely haven't dropped this. You might be interested in this thread. Most of the ways I could think of to make things "easier to reason about in terms of extensibility" would be to possibly use some kind of dispatch.
I'm gathering that part of what makes serde.rs elegant is that it leans heavily on Rust's super nice traits system, and plum could be a decently nice way to approximate that. Anyway, doing coercion/checking in a sane way is probably going to get massively easier with that DOOR API the beartype dev mentions in that thread, regardless of whether you want to do ad hoc polymorphism a la beartype, classes, etc.
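As a rough stdlib analogy of that dispatch idea (plum generalizes this to full multiple dispatch on every argument), `functools.singledispatch` already lets each type register its own serializer, so new types extend behavior without touching existing code:

```python
from datetime import date
from functools import singledispatch

# Each type registers its own serializer; adding support for a new type
# is just another @register, never an edit to existing branches.

@singledispatch
def to_builtin(value):
    raise TypeError(f"no serializer registered for {type(value).__name__}")

@to_builtin.register
def _(value: int) -> int:
    return value

@to_builtin.register
def _(value: date) -> str:
    return value.isoformat()

@to_builtin.register
def _(value: list) -> list:
    return [to_builtin(v) for v in value]

print(to_builtin([1, date(2024, 1, 2)]))  # [1, '2024-01-02']
```

This is the trait-flavored extensibility angle: open registration instead of a closed if/elif chain.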
Hi @tbsexton
Thanks for the updates. Extensibility like a trait in Rust sounds really cool. Plum looks cool too! But isn't dynamic dispatch expensive, especially if the object is fairly large? 🤔 pyserde generates most of the (de)serialization code at import time for speed. Do you have any idea to address the performance issue?
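The import-time code generation idea looks roughly like this (a toy sketch of the general technique, not pyserde's actual generated code): build a specialized serializer function once at decoration time, so per-call work is just plain attribute access.

```python
from dataclasses import dataclass, fields

def gen_serializer(cls):
    # Build specialized source code once, at decoration (i.e. import) time.
    lines = ["def to_dict(obj):", "    return {"]
    for f in fields(cls):
        lines.append(f"        {f.name!r}: obj.{f.name},")
    lines.append("    }")
    namespace = {}
    exec("\n".join(lines), namespace)
    cls.to_dict = namespace["to_dict"]  # plain function -> becomes a method
    return cls

@gen_serializer
@dataclass
class Config:
    host: str
    port: int

print(Config("localhost", 8080).to_dict())  # {'host': 'localhost', 'port': 8080}
```

No per-call reflection: the loop over fields ran exactly once, before any object was serialized.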
@yukinarit hey just seeing your new release! Congrats on the beartype refactoring, you should totally let @leycec know about another cool application 😊 plum also refactored to use beartype, and jaxtyping supports it natively.
Now that cyclopts exists and seems to support runtime type-checking decorators, I am psyched to write all the weird serialization cli tools my team has always needed. Thanks! 🙏♥️
Super hotness. Thanks so much for pinging me on, @rtbs-dev – my favourite NIST galaxy brain who is changing the world through the mystifying power of... probabilistic ML!?! So. That is a thing too now, huh? ChatGPT's "Creative Level" probably just got a whole lot more "creative." I await your upcoming publications with my knuckles in my mouth.
@yukinarit: And thank you for choosing @beartype. I promise to let you down only occasionally. Please ping me if you need anything whatsoever for pyserde. I really love that pyserde could be the @beartype-backed alternative to Pydantic that I have been begging the Universe for. Congratulations on the massive release that will redefine dataclasses as we know them.
Also, this:
Plum looks cool too! But isn't dynamic dispatch expensive, especially if the object is fairly large? 🤔
This is a great question. Let's ping @wesselb, because he is the super-smart Plum guy. But even without @wesselb, I'm sorta confident that I have a tenuous grip on Plum's runtime efficiency:
- Plum is @beartype-based. So, it's fast. Stupid fast. Err... smart fast?
- Technically, Plum exhibits O(k) time complexity for each k-overloaded function – I think, anyway. Thankfully, k is usually small. Pragmatically, this means that Plum exhibits O(1) time complexity. For all intents and purposes, it's perceptually instantaneous.
Let's pretend I know what I'm talking about here. :smiling_face_with_tear:
Hi @rtbs-dev! Thank you for suggesting beartype to me! :+1:
Hi @leycec! Great work! beartype is one of my most favorite python packages :heart:
Hey all! Thanks for tagging me. :) I'm definitely putting pyserde on my list of packages to further explore!
@leycec, yes, all type checking within Plum uses @beartype and therefore is super fast! Moreover, Plum attempts to cache things as much as possible, so repeated invocations should be blazingly fast.
For a function with k methods, I believe that the resolution algorithm (which is only run the first time a method is called, due to caching) technically runs in O(k^2) time. It is possible to construct pathological scenarios where this can become slow. However, I've never encountered such a scenario in practice, which is also the reason why we've not attempted to optimise this runtime complexity.
In practice, the initial dispatch overhead from running the resolution algorithm should more than likely be negligible, and repeated invocations should be extremely fast due to caching.
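The resolve-then-cache pattern can be sketched like this (a toy model, not Plum's actual implementation): the first call per argument type pays a linear scan over the k registered methods, and every later call hits the cache.

```python
# Toy sketch of resolve-then-cache dispatch: scan the k registered
# methods once per argument type, then serve repeat calls from a cache.

class Dispatcher:
    def __init__(self):
        self.methods = []     # list of (type, implementation) pairs
        self.cache = {}       # argument type -> resolved implementation
        self.resolutions = 0  # counts how often the slow path runs

    def register(self, typ):
        def deco(fn):
            self.methods.append((typ, fn))
            return fn
        return deco

    def __call__(self, value):
        key = type(value)
        impl = self.cache.get(key)
        if impl is None:
            self.resolutions += 1
            # O(k) linear scan over registered methods (first call only)
            for typ, fn in self.methods:
                if isinstance(value, typ):
                    impl = fn
                    break
            else:
                raise TypeError(f"no method for {key.__name__}")
            self.cache[key] = impl
        return impl(value)

describe = Dispatcher()

@describe.register(int)
def _(v):
    return f"int: {v}"

@describe.register(str)
def _(v):
    return f"str: {v}"

for _ in range(3):
    describe(42)             # resolved once, then served from cache

print(describe.resolutions)  # 1
```

Amortized, that's the O(1)-in-practice behavior described above.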
O(k^2) time.
:fearful: :scream: :scream_cat:
In practice, the initial dispatch overhead from running the resolution algorithm should more than likely be negligible, and repeated invocations should be extremely fast due to caching.
Oh, thank Big-O. Amortized O(1) time through extreme memoization, huh? My heart was racing and I'm pretty sure I almost died there before you pulled me back from the brink. I'm grateful. Now, I can sleep with only mild palpitations.
Seriously, though. Thank you for all your amazing work with Plum over these increasingly many years, @wesselb. I promise @beartype 0.18.0 will start doing things that help Plum. Deep dict[..., ...] type-checking: let's gooooooooo.
😨 😱 🙀
Haha, yeah, maybe we should take this seriously and come up with something better. I think we can do better!
Seriously, though. Thank you for all your amazing work with Plum over these increasingly many years, @wesselb. I promise @beartype 0.18.0 will start doing things that help Plum. Deep dict[..., ...] type-checking: let's gooooooooo.
Ahh, thanks @leycec! 😊 A huge thanks to you for your incredible work on @beartype, which makes everyone's lives just so much better. :) I can't wait for deep type checking! 🚀