biggy
BiggyList in-memory approach.
I am still not happy with the in-memory flag on BiggyList. It breaks Clean Code and SOLID principles in a few places. Additionally, if you are using JSON you end up with two copies of the dataset.
I know your scenario is that you load data from one source, synchronise it with the in-memory list, and then dump it into a different data source. However, this scenario can be achieved by other means as well.
I was thinking that we could create a new IDataStore implementation for in-memory storage and add functionality to all stores for transferring data from one to another. Another possible solution is to still create a separate in-memory store, but always use it inside BiggyList and treat JSON/Postgres/SQLite/Azure only as additional persistence storage.
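Just to make the first idea concrete, here is a very rough sketch. The Load/SaveAll shape below is a simplified assumption, not the actual Biggy IDataStore interface, and InMemoryStore/TransferTo are hypothetical names:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified store shape, assumed for illustration only.
public interface IDataStore<T>
{
    List<T> Load();
    void SaveAll(List<T> items);
}

// Hypothetical in-memory store: the list itself is the "backing store".
public class InMemoryStore<T> : IDataStore<T>
{
    private List<T> _items = new List<T>();

    public List<T> Load() => _items.ToList();

    public void SaveAll(List<T> items) => _items = items.ToList();
}

public static class DataStoreExtensions
{
    // The "load from source 1, dump into a different source" scenario,
    // expressed as a transfer between any two stores.
    public static void TransferTo<T>(this IDataStore<T> source, IDataStore<T> target)
    {
        target.SaveAll(source.Load());
    }
}
```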
All of the above is why I held off on that for now. I don't disagree, but I want to spend some time firming up the core functionality and API before we go hog-wild with many additional features. I think the in-memory implementation (however we end up doing it) is important, but should be thought out carefully.
Agreed on the SOLID/Clean violations though.
One of the goals of re-implementing Biggy was that we wanted to simplify things, and stop trying to cram every possible feature into the basic repo. At least, not until a nice, sweet API had been solidified.
Also, Rob was particularly bothered by the growing level of abstraction being laid on top of what should be a fairly simple idea. Obviously, Rob has withdrawn from active involvement, but I tend to agree with him on a lot of this.
I'm going to be stuck working the day job all day today. I wouldn't mind trying out a few options (in a feature branch or branches) and playing with them for a while before committing to a direction on this.
All of that said, it wouldn't be difficult to create an InMemoryList from which the actual BiggyList could be derived. After all, all BiggyList does is keep an internal list synced with the backing store.
Which is actually why it seemed easy enough to add an InMemory flag . . .
See what you come up with in a feature branch . . .
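As a starting point for such a branch, a minimal sketch of that split might look something like this, reusing the simplified IDataStore<T> from the sketch above (all names and signatures here are assumptions, not the current API):

```csharp
using System.Collections;
using System.Collections.Generic;

// Hypothetical base class: just a plain list in memory, no persistence at all.
public class InMemoryList<T> : IEnumerable<T>
{
    protected readonly List<T> _items = new List<T>();

    public virtual void Add(T item) => _items.Add(item);
    public virtual bool Remove(T item) => _items.Remove(item);

    public IEnumerator<T> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

// Sketch of a derived BiggyList: the same list, but every change
// is flushed to whatever backing store was injected.
public class BiggyList<T> : InMemoryList<T>
{
    private readonly IDataStore<T> _store;

    public BiggyList(IDataStore<T> store)
    {
        _store = store;
        _items.AddRange(_store.Load());
    }

    public override void Add(T item)
    {
        base.Add(item);
        _store.SaveAll(_items);
    }

    public override bool Remove(T item)
    {
        var removed = base.Remove(item);
        if (removed) _store.SaveAll(_items);
        return removed;
    }
}
```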
I agree that this is not the top priority feature and I think we can keep this issue as a reminder.
Off topic: is there any roadmap/list of features which you want to implement?
I've been thinking about that. Not so much about more features, as:
- Sweet, easy-to-use API
- Solidify existing featureset, test extensively
- Stable API, so we can get up on Nuget asap
- Keep the perf high, especially on reads
I can see adding a "filter" to the list, such that one can load/reload/work with a subset of data within the list.
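Roughly, what I have in mind for that filter is something like the following. The type and the predicate-based constructor are hypothetical, reusing the simplified IDataStore<T> from the earlier sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical: a list that only materializes records matching a predicate,
// and can re-apply that predicate on Reload().
public class FilteredList<T>
{
    private readonly IDataStore<T> _store;
    private readonly Func<T, bool> _filter;
    private List<T> _items = new List<T>();

    public FilteredList(IDataStore<T> store, Func<T, bool> filter)
    {
        _store = store;
        _filter = filter;
        Reload();
    }

    // Pull everything from the store, keep only the matching subset in memory.
    public void Reload() => _items = _store.Load().Where(_filter).ToList();

    public IReadOnlyList<T> Items => _items;
}

// Usage (Customer and IsActive are made-up example types):
// var active = new FilteredList<Customer>(store, c => c.IsActive);
```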
Don't get me wrong - I'm always open to good feature ideas, but I think we want to keep the structure from bloating. In the original project, we started seeing feature-creep, and it is one reason we went "off the grid" for a bit - to pare things back down.
Also really important to me is to use it in some pseudo-real-world-like scenarios. It SEEMS easy to work with in our contrived tests and silly demos, but does it really add value in the context of an application?
I want to try that out and see (a test application, but not a contrived demo).
I think staying simple is important, and I really want to try to stick to at least the core of Rob's original vision.
It's really tempting to start adding features, but we really don't want to solve "already-solved-problems" here (meaning, we're not trying to build a complete ORM, or re-implement EF, etc).
I'm totally open to input though. Are there features YOU would like to see?