event_sourcery
Improve modelling of events and custom event types
Right now, the Event class (after it's persisted) includes all the information we know about an event. Custom event classes only change the type of that class; there's no nice way to change how you access the data in an event, or to validate an event before it's instantiated/saved.

Current state example:
aggregate_id = SecureRandom.uuid
ItemAdded = Class.new(EventSourcery::Event)
event = ItemAdded.new(aggregate_id: aggregate_id, body: { name: 'Test' })
event.body.fetch('name') # => "Test"
event.type # => "item_added"
event.class # => ItemAdded
# how we save events
event_store.sink(event)
# how projectors etc read events
event = event_store.get_next_from(0).last
event.id # => 1
event.body # => { "name" => "Test" }
event.type # => "item_added"
Proposed changes
This issue is a proposal to improve that situation by making a distinction between an event descriptor/recorded event and an event.
aggregate_id = SecureRandom.uuid
class ItemAdded
  def initialize(name:)
    raise ArgumentError, "Invalid name!" if name.nil?
    @name = name
  end

  attr_reader :name

  def upcased_name
    name.upcase
  end
end
event = ItemAdded.new(name: 'Test')
event.type # => "item_added"
event.name # => "Test"
# change to the API to save events here
event_store.sink(aggregate_id, [event])
class MyAggregate
  include EventSourcery::AggregateRoot

  def add_item(name)
    apply_event ItemAdded, name: name
  end

  apply ItemAdded do |event|
    event.class # => ItemAdded
    # Aggregates don't handle RecordedEvents. This feels OK because certain
    # things can't be known up front (before saving), like the global sequence
    # ID. Most other things could be, except created_at, since we set that at
    # the db level (though that could be changed).
    @items << Item.new(event.name)
  end
end
class MyProjector
  include EventSourcery::Postgres::Projector

  process ItemAdded do |recorded_event|
    recorded_event.event.name # => "Test"
    recorded_event.event.class # => ItemAdded
    table.insert(item_id: recorded_event.aggregate_id, name: recorded_event.event.name)
  end
end
# reading events (this is what projectors etc would receive)
recorded_event = event_store.get_next_from(0).last
recorded_event.type # => "item_added"
recorded_event.event.class # => ItemAdded
recorded_event.event.type # => "item_added"
recorded_event.event.name # => "Test"
recorded_event.event.upcased_name # => "TEST"
By making the event only about the event data (type & body) we can easily provide custom accessors or add validation as we see fit without it being awkward (I can point to some examples in our current projects where we do this if you're interested in how awkward it is). We'd need to add a serialisation strategy to support dumping the event classes to and loading them from JSON.
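The serialisation strategy mentioned above could be as simple as a type registry plus dump/load hooks on each event class. A minimal sketch — all names here (`to_h`, `from_h`, `EVENT_TYPES`, `dump_event`, `load_event`) are assumptions for illustration, not EventSourcery's API:

```ruby
require 'json'

class ItemAdded
  def initialize(name:)
    @name = name
  end

  attr_reader :name

  def type
    'item_added'
  end

  # Serialise the event body to a plain hash for JSON storage.
  def to_h
    { 'name' => name }
  end

  # Rebuild the event from a deserialised body hash.
  def self.from_h(hash)
    new(name: hash.fetch('name'))
  end
end

# The registry maps the persisted type string back to the event class.
EVENT_TYPES = { 'item_added' => ItemAdded }.freeze

def dump_event(event)
  JSON.generate('type' => event.type, 'body' => event.to_h)
end

def load_event(json)
  data = JSON.parse(json)
  EVENT_TYPES.fetch(data.fetch('type')).from_h(data.fetch('body'))
end

event = load_event(dump_event(ItemAdded.new(name: 'Test')))
event.name # => "Test"
```

A real implementation would likely derive the type string from the class name and populate the registry automatically, but the round-trip shape would be the same.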
I think this would be a nice improvement... Interested in your thoughts!
My initial thoughts centre on the added complexity in the processors. There's a lot of added noise here with the extra argument, nested object navigation and more complex class structure. Is the balance right?
I wonder if there's a solution that maintains the simplicity of the event processors, but also provides a way to define event attributes and validation. Perhaps choosing a type/validation library would help.
Re: adding a type/validation library… yeah I imagine we’d use something to define attribute :blah, String in custom events like Virtus/Dry Types etc. The interface would be the same though.
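For context, the `attribute :blah, String` style of DSL can be hand-rolled in a few lines; in practice Virtus or dry-types would provide it, with coercion on top. Everything below is an illustrative sketch, not EventSourcery's or either library's actual code:

```ruby
# A minimal attribute DSL: declares typed, read-only attributes and
# type-checks them on initialisation.
module Attributes
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def attribute(name, type)
      (@attributes ||= {})[name] = type
      attr_reader name
    end

    def attributes
      @attributes || {}
    end
  end

  def initialize(args)
    self.class.attributes.each do |name, type|
      value = args.fetch(name)
      raise TypeError, "#{name} must be a #{type}" unless value.is_a?(type)
      instance_variable_set(:"@#{name}", value)
    end
  end
end

class ItemAdded
  include Attributes
  attribute :name, String
end

event = ItemAdded.new(name: 'Test')
event.name # => "Test"
```

Whichever library is chosen, the custom event class interface (`ItemAdded.new(name: 'Test')`, `event.name`) stays the same.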
You could draw a shopping cart aggregate like this: [diagram not included]
I think having the custom event classes only about the type & body makes sense conceptually. It's only after an event is saved that some parts of data are available, like the global sequence ID and in our case created_at is set by the db so that's only available after persisting.
@orien any suggestions on how the changes to projectors could be improved?
While I appreciate the intent of this change, I can't say I'm a fan of the approach. My first question whenever I review suggested changes like these is whether or not this will make it easier for people to pick up event sourcing and understand what's going on, and I think this fails that litmus test.
Specifically, having aggregates and downstream processes (projectors/reactors) deal with different kinds of classes seems like a recipe for confusion.
By way of a quick fix, I think I could maybe consider putting aggregate_id and uuid on the Event class rather than EventDescriptor. That would mean the downstream processes could also handle the events rather than the event descriptors, since it's only really those attributes that the projector/reactor would be interested in. I can understand there might be reasons that's infeasible though.
More generally, I'd suggest asking for a clearer definition what's broken now and whether this is the only way to achieve that. What other options are there? If validation is the issue, can we handle that with external classes? If access is the issue, can we create access objects that wrap around the events?
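The "access objects that wrap around the events" option above could stay entirely in application code, leaving EventSourcery's Event class untouched. A minimal sketch (the accessor class is hypothetical, and OpenStruct stands in for a persisted event):

```ruby
require 'ostruct'

# Wraps a persisted event and exposes typed, named accessors over its
# raw body hash, without changing the Event class itself.
class ItemAddedAccessor
  def initialize(event)
    @event = event
  end

  def name
    @event.body.fetch('name')
  end

  def upcased_name
    name.upcase
  end
end

# Stand-in for an event as returned by the event store.
event = OpenStruct.new(body: { 'name' => 'Test' })
accessor = ItemAddedAccessor.new(event)
accessor.upcased_name # => "TEST"
```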
Well put @vonconrad.
Further to the quick fix, if we were to provide the event_descriptor as the second argument to the process block, processors will be freed up to ignore it unless needed.
class MyProjector
  include EventSourcery::Postgres::Projector

  process ItemAdded do |event, event_descriptor|
    event.name # => "Test"
    event.class # => ItemAdded
    table.insert(item_name: event.name, created_at: event_descriptor.created_at)
  end

  # when the event_descriptor is not needed
  process ItemAdded do |event|
    table.insert(item_name: event.name)
  end
end
But as @vonconrad points out, the event should contain all the domain data, including the aggregate_id.
The driver behind this proposed change is to fix the modelling issues I see with custom event types in EventSourcery. The Event and its metadata are coupled together, and that makes defining an event schema, type coercions or validations quite awkward.
Some examples of this awkwardness in our apps:
Separate body class in application code
In this example we're defining a separate class for event body attributes, type coercion and validations. This example is an attempt to break the coupling of event metadata and the event itself within the current constraints of event sourcery's custom types.
class AuthorActivated < Event
  class Body < EventBody
    attribute :activated_at, Types::Form::Time

    validations do
      required(:activated_at).filled(:time?)
    end
  end
end
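For readers unfamiliar with the pattern, here is a rough standalone sketch of what such an EventBody base class does: declare attributes with coercions, require them on init. The real application code delegates to dry-types/dry-validation; the hand-rolled DSL below is an assumption for illustration only:

```ruby
require 'time'

# Illustrative stand-in for an EventBody base class: each attribute has a
# coercion, and missing attributes fail fast on construction.
class EventBody
  def self.attribute(name, coercion)
    (@coercions ||= {})[name] = coercion
    attr_reader name
  end

  def self.coercions
    @coercions || {}
  end

  def initialize(attrs)
    self.class.coercions.each do |name, coercion|
      raw = attrs[name]
      raise ArgumentError, "#{name} is required" if raw.nil?
      instance_variable_set(:"@#{name}", coercion.call(raw))
    end
  end
end

class AuthorActivatedBody < EventBody
  # Coerce form input (a string) into a Time, mimicking Types::Form::Time.
  attribute :activated_at, ->(v) { v.is_a?(Time) ? v : Time.parse(v) }
end

body = AuthorActivatedBody.new(activated_at: '2017-06-16 15:30:16 +0800')
body.activated_at.class # => Time
```

The awkwardness being described is that this whole second class exists only to work around the Event/metadata coupling.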
I think the custom Event class (AuthorActivated) should have those attributes defined on itself and there should be no need for that separate class (in application code). So the goal is for this to become something like:
class AuthorActivated
  # include Dry::Types/Struct / Virtus
  attribute :activated_at, Types::Form::Time

  validations do
    required(:activated_at).filled(:time?)
  end
end
Event body accessors (& type coercions)
class RecordIngested < EventSourcery::Event
  define_method :transaction_id, -> { Integer(body['transaction_id']) }
end
RecordIngested.new(body: { 'transaction_id' => 1 }).transaction_id # => 1
Here we're adding accessor methods to the Event (metadata) class to access things inside the event body. This leads to some places in code accessing things like event.transaction_id and other things being accessed like event.body.fetch('item_name'). So that code would become something like this:
class RecordIngested
  def initialize(transaction_id:, item_name:)
    @transaction_id = Integer(transaction_id)
    @item_name = item_name
  end

  attr_reader :transaction_id, :item_name
end

# or

class RecordIngested
  # include Dry::Struct / Dry::Types / Virtus
  attribute :transaction_id, Integer
  attribute :item_name, String
end
event = RecordIngested.new(transaction_id: 1, item_name: 'Test')
event.transaction_id # => 1
event.item_name # => "Test"
Taking a step back
Do we agree that this is a problem? Are there other ways to solve these issues that I'm not thinking of?
Feedback to proposed solution
By way of a quick fix, I think I could maybe consider putting aggregate_id and uuid on the Event class rather than EventDescriptor
I don't think that aggregate_id should be included in the custom event class; I think it should be type & body only. Here's an attempt to explain why... If you think of aggregates as streams of events, you can imagine the same "event" (e.g. ItemAddedToCart(item_id: 1)) happening to different streams, so I think the representation of the event should be the same in both cases. The cart it happened to (aggregate_id) is a core piece of metadata.
So, is this the list of requirements?
- Typed attributes with more structured access instead of body.fetch('my_attr').
- Validation of attributes before save.
- Coercion of attribute values on init. (Do we really need this?)
I agree that we should keep the external API simple. Perhaps something similar to the following:
class AuthorActivated < EventSourcery::Event
  attribute :activated_at, Time

  # Simplest thing that would work is a method hook. Just raising here, but it
  # could instead raise a collection of errors. Could go more sophisticated if
  # we use Dry types etc...
  def validate
    activated_at.present? or raise InvalidEvent, 'activated_at is required'
  end
end
a = AuthorActivated.new(aggregate_id: '33e3c451-e211-4efe-b556-e7f0b7a0b5b3', activated_at: Time.now)
a.aggregate_id # => '33e3c451-e211-4efe-b556-e7f0b7a0b5b3'
a.activated_at # => "2017-06-16 15:30:16 +0800"
a.type # => 'author_activated'
a.created_at # => "2017-06-16 15:31:16 +0800"
We could have a build class method that news up the event and calls validate, or just have the event sink call validate before saving. Can we not just have code in the Event class that populates the body attribute on init? If we keep these immutable, that should be fairly straightforward.
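The build class method idea could be sketched like this, standalone and without EventSourcery (InvalidEvent, the keyword signature, and the plain-Ruby validate are all assumptions for illustration):

```ruby
InvalidEvent = Class.new(StandardError)

class AuthorActivated
  # Construct the event, run validation, and only return it if valid.
  def self.build(**args)
    event = new(**args)
    event.validate
    event
  end

  def initialize(activated_at: nil)
    @activated_at = activated_at
  end

  attr_reader :activated_at

  def validate
    activated_at or raise InvalidEvent, 'activated_at is required'
  end
end

event = AuthorActivated.build(activated_at: Time.now)
# AuthorActivated.build(activated_at: nil) # raises InvalidEvent
```

The alternative mentioned — having the event sink call validate before saving — keeps construction cheap but means an invalid event can exist in memory until save time.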
I'm comfortable with metadata attributes being exposed at the same level as attributes. Not sure how much of an issue name clashes between attributes and metadata would be in practice.