Not really an issue - but a design pondering
Having tried to force DDD onto Rails in the past, I'm wondering if you also bumped into the following impedance mismatch (from the code, it looks like you did).
The idea is that instead of doing a static SomeRepo.save and statically swapping a data provider, you should do some_repo.save, where some_repo is an instance of a repository, instantiated and injected per X, where X might be:
- session (instantiate at request start, throw away at request end; best practice)
- request (every time you want to do something to a domain object)
- application lifetime (singleton, dangerous)
This lets the controller's contract require only an instance that quacks like a repository, so the dependency isn't hard-coded as a static one. It makes testing easier, keeps puppies happy, etc.
This almost always requires incorporating IoC, which by popular opinion "isn't needed in a dynamic language such as Ruby".
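For illustration, here is a minimal sketch of that contract; CreateNote, Note and NoteRepository are hypothetical names I'm using for the example, not part of curator:

```ruby
# Minimal sketch: the consumer only depends on an injected instance that
# quacks like a repository, never on a concrete repository class.
class CreateNote
  def initialize(note_repository)
    @note_repository = note_repository   # injected per request/session
  end

  def call(attributes)
    note = Note.new(attributes)
    @note_repository.save(note)          # duck typing: only #save is required
    note
  end
end

# In tests, anything with the same interface can be injected:
class FakeNoteRepository
  attr_reader :saved

  def initialize
    @saved = []
  end

  def save(note)
    @saved << note
  end
end

service = CreateNote.new(FakeNoteRepository.new)
```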
Does this prompt any thoughts?
Thanks
In general, instances are better than static/class methods. We did change the data stores to be instances and may want to do it for the Repositories as well in the future. I see nothing wrong with dependency injection (IoC) in dynamic languages.
However, I was thinking about testing a different way. Rather than using mocks/stubs, what if you just switch the data store to in-memory for the majority of tests? Then you're testing against the real repositories, but not persisting the data.
You'll still want some integration level tests that hit the real data store, but these will be far fewer than your typical unit/functional tests. In general, I prefer to test with real classes as much as possible, since mocks/stubs by definition don't behave the same as the real classes.
When it comes to testing, if there's a way I can use the real classes, but get the speed/isolation benefits of not actually persisting, I'm in favor of that.
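As a rough sketch of that idea (this is not curator's actual API; InMemoryDataStore and the data_store writer are assumptions made for illustration):

```ruby
# Hypothetical in-memory data store, keyed by collection and id. Tests swap
# this in so the real repository code runs without persisting anything.
class InMemoryDataStore
  def initialize
    @records = Hash.new { |hash, collection| hash[collection] = {} }
  end

  def save(collection, key, attributes)
    @records[collection][key] = attributes
  end

  def find_by_key(collection, key)
    @records[collection][key]
  end
end

# e.g. in spec_helper.rb, assuming the repository exposes a data_store writer:
NoteRepository.data_store = InMemoryDataStore.new
```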
Thoughts?
Thanks for the detailed answer Paul, sounds reasonable. I guess the main reasons not to swap to in-memory stores are:
- if you intend to swap a SQLite store for SQLite's in-memory mode, well, not many other kinds of data stores offer such an option.
- if you intend to inject an in-memory implementation behind an abstract store contract, then this in-memory implementation carries the risk of someone leaking logic into it, and of becoming a maintenance hurdle.
And lastly, from an idealistic point of view, a feature change should cause as small a code change as possible - ideally touching one unit (module/file/etc.) and one unit test. Changing the storage, or anything up to a logical module, will break all of the unit tests in the chain.
So I guess, to conclude: I have been in both camps, many times. Sometimes it depends on which side of the bed I wake up on. Sometimes I even just write a dirty test and clear out all of the data afterwards (if the store is fast enough, e.g. Redis).
Just wanted to learn from someone else who just bumped into this - thanks! :+1:
One of the goals of curator is a good separation of layers in your application. Your code interacts with the repositories, but shouldn't care how the data is actually persisted. Your code just hands objects to the repository, and retrieves objects from find methods.
The repository is in charge of serialization/deserialization and handing off to the data store. The data store is in charge of actually persisting the data somewhere.
If everything is working correctly, you should be able to change the data store without changing any of your app code. The interface to the Repository will stay the same, but the actual implementation of persistence will be different. So you should be able to use an in-memory curator data store for testing and still be confident that once you switch to something like Riak, all of your code will work the same way.
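A rough sketch of that layering (hypothetical names and a hypothetical data store interface, not curator's actual classes):

```ruby
# The repository owns (de)serialization and delegates persistence to whatever
# data store it was given, so application code never touches the store directly.
class NoteRepository
  def initialize(data_store)
    @data_store = data_store            # Riak, Mongo, in-memory, ...
  end

  def save(note)
    @data_store.save(:notes, note.id, serialize(note))
  end

  def find_by_id(id)
    attributes = @data_store.find_by_key(:notes, id)
    attributes && deserialize(attributes)
  end

  private

  def serialize(note)
    { :id => note.id, :title => note.title, :body => note.body }
  end

  def deserialize(attributes)
    Note.new(attributes)                # assumes Note accepts an attributes hash
  end
end
```

Swapping the data store passed to the repository changes how data is persisted, while every caller of save and find_by_id keeps working unchanged.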
Obviously, there might be bugs that only manifest themselves with a specific data store, but those will be fixed as they are found.
Does this make sense?