
algebird-serialization

Open sritchie opened this issue 12 years ago • 34 comments

This module should contain thrift mappings for all Algebird case classes. Out of this we'll publish scrooge and java jars of these structs.

sritchie avatar Feb 14 '13 09:02 sritchie

@sritchie how do you see this working? I can see defining a bunch of thrift structs that mirror the existing algebird case classes, and making sure all the logic for working with them is moved into the corresponding monoids... or I guess I could see using implicits to make Rich* wrapper classes and put behavior there... but is there some fancier way to use Scrooge that would let it instantiate objects that we could actually define methods on directly?

avibryant avatar Jun 06 '13 04:06 avibryant

I don't think so, re: the fancy interactions with Scrooge. My idea here was to define the thrift mappings as you've described, then write Bijections between the case classes and thrift classes.

Using Scrooge and moving logic out of the case classes into the Monoids is a better idea :) That, or we forgo thrift and its versioning woes and go with the Bufferable solution.
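
Roughly what I have in mind, as a sketch (the thrift-side struct below is a hypothetical stand-in for what Scrooge would generate; all behavior stays on the Monoid):

```scala
import com.twitter.algebird.DecayedValue
import com.twitter.bijection.Bijection

// Hypothetical stand-in for a Scrooge-generated struct mirroring DecayedValue.
case class DecayedValueThrift(value: Double, scaledTime: Double)

object DecayedValueBijection {
  // Pure data mapping; the DecayedValue monoid keeps all the logic.
  implicit val decayedValueThrift: Bijection[DecayedValue, DecayedValueThrift] =
    Bijection.build[DecayedValue, DecayedValueThrift] { dv =>
      DecayedValueThrift(dv.value, dv.scaledTime)
    } { t =>
      DecayedValue(t.value, t.scaledTime)
    }
}
```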

sritchie avatar Jun 06 '13 19:06 sritchie

Bufferable makes me uneasy, to be honest; feels like a step backwards. Having a good separate spec for the structures and their serialization - that could even be used from other languages, in theory - would be nice. Having looked at it a bit, though: with Thrift we're going to run into problems with the lack of recursive types... we'd do a little better with protobuf (or Avro, probably).

avibryant avatar Jun 06 '13 20:06 avibryant

I'm going to wake this issue up - I think if we can get this right (and that probably does not involve using Thrift) we'll remove a good amount of pain from using these data structures in Summingbird.

sritchie avatar Oct 24 '16 19:10 sritchie

Looks like more discussion on this happened here as well: #463

sritchie avatar Oct 24 '16 20:10 sritchie

I'd just like to mention that we are probably better off using thrift / protobuf / avro / etc. than using our own serialization format. Not because we couldn't come up with a good format, but because ensuring that we never accidentally introduce an incompatible change to the format is hard, and these libraries do a pretty good job of giving you rules for what you can / can't do if you want to maintain long term compatibility.

That said, it's a burden on pretty much everyone, whichever one of the above we pick. If your company is using avro and we pick protobuf, now you've got to use both avro and protobuf. I don't know if there are any alternatives that don't involve code generation libraries, but I think it would help if we could use some sort of serialization library that thinks explicitly about compatibility over time.

If not, then I think we'll need some very strict tests around compatibility and we'll need to keep old serialized copies of these objects around for unit tests. I think it's difficult to get right, unfortunately.

isnotinvain avatar Oct 24 '16 22:10 isnotinvain

the problem with picking something else is that then we have version hell, and not everyone uses that other thing.

If we just have a tested system with golden-data committed we can handle the conversion to-and-from bytes fairly easily.

The main thing is that we would then have tests to see if we changed anything.

If you want thrift, you all have it: just define an injection from algebird -> thrift and never change your thrift.
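
Roughly what I mean by golden data, as a sketch (the Injection instance, resource path, and values here are made up):

```scala
import com.twitter.algebird.DecayedValue
import com.twitter.bijection.Injection
import org.scalatest.funsuite.AnyFunSuite

class DecayedValueGoldenDataTest extends AnyFunSuite {
  // Assume some Injection[DecayedValue, Array[Byte]] is in scope (hypothetical).
  def inj: Injection[DecayedValue, Array[Byte]] = ???

  test("bytes written by an old release still deserialize") {
    // Golden bytes produced by a previous release, committed under src/test/resources.
    val in = getClass.getResourceAsStream("/golden/decayed-value-v1.bin")
    val golden = Iterator.continually(in.read()).takeWhile(_ != -1).map(_.toByte).toArray
    assert(inj.invert(golden).get == DecayedValue(42.0, 1234.0))
  }
}
```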

johnynek avatar Oct 24 '16 22:10 johnynek

It's not the correctness of the bijection, it's the correctness over time as the schema changes (if it changes). It's pretty easy to miss an unexpected corner case. And introducing even one incompatible change causes a lot of headache for users because they often don't find out until months later, when they realize a chunk of their old data isn't compatible anymore.

I think if we build our own serialization format, we'll need some very strict tests. We will also have to make sure that we check in new golden data every time we make a change to the relevant code. Or we can do what other libraries do and introduce versioned data formats, where we write what version something was serialized with into the output. There are definitely options, but none of them are super straightforward.
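
The versioned-format idea could be as simple as a one-byte prefix (hypothetical helper, just to illustrate):

```scala
import java.nio.ByteBuffer

// Hypothetical helper: prefix every serialized payload with a format version
// so a reader can dispatch on it (or fail loudly on a version it doesn't know).
object VersionedFormat {
  val CurrentVersion: Byte = 1

  def write(payload: Array[Byte]): Array[Byte] =
    ByteBuffer.allocate(1 + payload.length).put(CurrentVersion).put(payload).array()

  def read(bytes: Array[Byte]): (Byte, Array[Byte]) = {
    val version = bytes(0)
    require(version >= 1 && version <= CurrentVersion, s"unknown format version: $version")
    (version, bytes.drop(1))
  }
}
```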

Maybe something we could try is a shaded copy of protobuf / thrift / avro -- keep that detail entirely private but use their schema compatibility features.

isnotinvain avatar Oct 24 '16 22:10 isnotinvain

yes, the shaded copy of protobuf is what I was going to suggest for one approach. I do agree with @johnynek though that it could be hellish.

I think we should very slowly opt data structures into this system. We can be fairly strict - if we ever change the representation, we have to make a new data structure.

sritchie avatar Oct 24 '16 23:10 sritchie

The thing is, I don't think this solves the problem. You still need the same testing. You can just as easily break backwards compatibility by changing the shape of your protobuf.

johnynek avatar Oct 24 '16 23:10 johnynek

Yeah, but protobuf has itself been tested. You only need to test that you follow the right rules for changing the protobuf schema.

isnotinvain avatar Oct 24 '16 23:10 isnotinvain

yeah, but unless we trust humans to do that, we still need to build the golden data sets.

johnynek avatar Oct 24 '16 23:10 johnynek

Libraries like Lucene use versioned serialization, and support reading / writing old versions in all newer versions. That's a little bit of a separate problem though.

isnotinvain avatar Oct 24 '16 23:10 isnotinvain

actually, I don't think serialization itself is that hard (at all). The challenge here is testing and not breaking.

People will make mistakes, so if we do this project, I don't see a ton of value unless we have strong tests on it.

johnynek avatar Oct 24 '16 23:10 johnynek

@johnynek you actually only need a test that the protobuf schema hasn't changed incorrectly. So instead of a golden data set, you just need a copy of the old schema

isnotinvain avatar Oct 24 '16 23:10 isnotinvain

if you build a schema migration checker.

johnynek avatar Oct 24 '16 23:10 johnynek

yes exactly
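
Something along these lines, as a rough sketch against protobuf's descriptor API (probably stricter than strictly necessary, since protobuf only cares about field numbers and wire types, not names):

```scala
import scala.jdk.CollectionConverters._ // assumes Scala 2.13+
import com.google.protobuf.DescriptorProtos.FileDescriptorProto

object SchemaCompatCheck {
  // One entry per field: (MessageName.fieldName, field number, declared type).
  def fieldSignature(fd: FileDescriptorProto): Set[(String, Int, String)] =
    (for {
      msg   <- fd.getMessageTypeList.asScala
      field <- msg.getFieldList.asScala
    } yield (s"${msg.getName}.${field.getName}", field.getNumber, field.getType.name)).toSet

  // Every field in the old (committed) schema must survive unchanged in the current one;
  // new fields are fine, removals and renumberings are not.
  def isBackwardCompatible(old: FileDescriptorProto, current: FileDescriptorProto): Boolean =
    fieldSignature(old).subsetOf(fieldSignature(current))
}
```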

isnotinvain avatar Oct 24 '16 23:10 isnotinvain

Whatever we decide, I should state quickly my reason for waking this up!

I felt a few years ago, and I still feel now, that serialization is the primary blocker for folks leaning in to using these more advanced data structures. Adding Kryo was a HUGE help for Cascalog/Scalding.... unfortunately folks have to come back down to earth after their initial exploration and figure out how to store data The Right Way.

We should be providing really tight serializations for all of the data structures that we're all using in production.

The reason I don't really like using the existing frameworks is that it's hard to publish/share the schemas. Better to just rely on algebird to serialize/deserialize and maintain compatibility here. If you really want to store it your way, great! Write your own serializer. Totally fine. But we want all of the powerful data structures we've built to work out-of-the-box with excellent defaults when used with Summingbird / Scalding.

sritchie avatar Oct 24 '16 23:10 sritchie

I'm concerned the translation to an intermediate data structure will carry a pretty high perf cost, given that the real issue is data stability (which I think still requires golden data to guard against accidental changes to the underlying schemas/library, even in the protobuf case).

johnynek avatar Oct 24 '16 23:10 johnynek

I would like to wake this issue again, as this is the biggest impediment to introducing algebird to a larger audience at my current $work. Data scientists (read: python programmers / matlab ninjas) hesitate to reach for some of these data structures because they inevitably run into serialization/persistence issues. One of the reasons HLL is so widely used is that it comes with toByte/fromByte methods. Can we at least provide examples of how to serialize CMS, QTree, etc. in any framework (thrift, avro, etc.) so that people can translate them to their in-house serialization framework? Right now they don't even know where to start.
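
For reference, the HLL round-trip I mean looks roughly like this (a sketch; method names are from com.twitter.algebird.HyperLogLog as I remember them, and exact signatures may differ across algebird versions):

```scala
import com.twitter.algebird.{HLL, HyperLogLog, HyperLogLogMonoid}

object HllRoundTrip {
  // 12 bits => 2^12 registers, roughly 1.6% standard error.
  val monoid = new HyperLogLogMonoid(12)

  def example(): Unit = {
    val hll: HLL = monoid.sum(Seq("a", "b", "c").map(s => monoid.create(s.getBytes("UTF-8"))))
    val bytes: Array[Byte] = HyperLogLog.toBytes(hll) // store these however you like
    val back: HLL = HyperLogLog.fromBytes(bytes)
    assert(back.approximateSize.estimate == hll.approximateSize.estimate)
  }
}
```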

MansurAshraf avatar Apr 26 '17 01:04 MansurAshraf

@MansurAshraf I think everyone likes the idea. I think we are all just trying to get the other one to do it. :)

johnynek avatar Apr 26 '17 01:04 johnynek

@johnynek algebird-internal already has all the good stuff, we just need to convince @isnotinvain to open source it.

MansurAshraf avatar Apr 26 '17 01:04 MansurAshraf

yeah, or maybe @benpence @piyushnarang @sriramkrishnan etc...

johnynek avatar Apr 26 '17 01:04 johnynek

Open sourcing the algebird internal stuff should not be hard. I'm +1 to that.

isnotinvain avatar Apr 26 '17 02:04 isnotinvain

Great! This could be something useful indeed.

johnynek avatar Apr 26 '17 03:04 johnynek

Thanks @isnotinvain, any idea on timelines?

MansurAshraf avatar Apr 27 '17 20:04 MansurAshraf

I'm asking the CDL team what they think (I switched teams), but I think we should be able to do this.

isnotinvain avatar Apr 28 '17 16:04 isnotinvain

There's also good discussion in this thread from before. We never really picked how we wanted to handle this. We can just make our thrift stuff available, but it will add a thrift dependency to algebird, it won't help folks who want to use protobuf or avro, and without a schema compatibility checker it'd be good to have some more tests too. There was also talk of shading in this thread as well. Any thoughts on those issues?

isnotinvain avatar Apr 28 '17 16:04 isnotinvain

My 2 cents are that you guys should provide the existing thrift serializers as reference implementations, so that you are not responsible for schema evolution etc. This still gives other people a way to port them to their own binary format such as avro, proto, etc. In my case we don't use thrift, so I will have to implement the same serializers in Avro & Proto for different teams, but if I have a battle-tested thrift implementation to base my work on, I would be much more confident that I implemented the serializers correctly.

Let me know what you think

MansurAshraf avatar Apr 28 '17 18:04 MansurAshraf

Yeah, that seems fine. Ideally I guess we'd have all three (thrift, avro, proto) in algebird sub-modules if / when they get contributed. (don't know if your $work will let you do that).

isnotinvain avatar Apr 28 '17 18:04 isnotinvain