LiteDB.FSharp
Update to LiteDB 5.0
All tests will pass once a bug in LiteDB is fixed.
- Remove UsingCustomJsonConverter
- Make more types internal or private
- Remove `Linq.fs`; it is not related to this repo
- Fix DateTime serialization and deserialization #46 (may be fixed)
- ~~Remove full search, as it is not recommended in LiteDB 5.0~~
Hi there @humhei, thanks a lot for the PRs! I will be reviewing/merging them soon so we can get a new version 😄
Since full search is useful when querying with union cases, I have added it back to LiteDB.FSharp. This PR should not be merged until the bug in LiteDB is fixed.
Hi @humhei, I have merged all your PRs, published the library as v2.16, and simplified the repository structure a lot!
Before rewriting the code for the LiteDB 5 upgrade, I think we should discuss how the next version, v3, should look.
I see that LiteDB v5 supports a SQL-like language which may return different shapes and values than the predefined collection types, so in v3 I am thinking of maybe removing the automatic serialization/deserialization altogether and building a new conversion model similar to Thoth.Json.
> I see that LiteDB v5 supports a SQL-like language which may return different shapes and values than the predefined collection types

Sorry, I didn't understand this line well. As far as I know, a SQL-like syntax query still returns a `BsonDocument` that only includes base types, the same as in LiteDB v4, while using a `BsonArray` inside it to represent collection types.
> so in v3 I am thinking of maybe removing the automatic serialization/deserialization altogether and building a new conversion model similar to Thoth.Json

I am not familiar with Thoth.Json, but after reading the documents roughly, it seems Thoth.Json still has automatic serialization/deserialization, named `AutoDecoder` and `AutoEncoder`.
> As far as I know, a SQL-like syntax query still returns a `BsonDocument` that only includes base types, the same as in LiteDB v4?
Well, say you have a document that looks like this:

```json
{
    "_id": 1,
    "name": "John",
    "age": 20
}
```
and you have a type in F#:

```fsharp
type Person = { id: int; name: string; age: int }
```
Now you query the `persons` collection and only return `name` from the collection like this:

```sql
SELECT $.name FROM persons
```
The result would be that each document has the shape

```json
{
    "name": "John"
}
```
and this `BsonDocument` cannot be deserialized into `Person` anymore because of the missing fields. This means each query results in a new projection that doesn't necessarily look like the original type, which is why I am proposing a mechanism to deserialize the documents using a `Decoder<'T>` like Thoth's, but over BSON instead of JSON.
Thanks for the explanation, it's awesome!
> so in v3 I am thinking of maybe removing the automatic serialization/deserialization altogether and building a new conversion model similar to Thoth.Json
>
> `type Person = { id: int; name: string; age: int }`
>
> `SELECT $.name FROM persons`
In my opinion (maybe immature), the automatic serialization/deserialization doesn't conflict with this SQL statement. We can consider `SELECT $.name` as the deserialization part for the person's `name`, which means we can use an expression `<@ fun person -> person.name @>` to get the `name` field from the `BsonDocument`s, then deserialize `person.name` (here a `string`, but it may be a more complex type such as a record) with `FSharpJsonConverter`, and then get all the names without deserializing the whole `Person` shape.
Some additional info on how LiteDB deals with it (it uses a `Select` expression on `IQueryable`):

```fsharp
// https://github.com/mbdavid/LiteDB/blob/master/LiteDB/Client/Database/LiteQueryable.cs#L182
colPerson.Query().Select(fun person -> person.name).Where(fun personName -> personName = "John")
```
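The quotation idea above can be sketched with FSharp.Core's quotation patterns: read the projected property's name out of `<@ fun person -> person.name @>` and hand it to the query translator. The `fieldName` helper below is a hypothetical illustration, not part of LiteDB.FSharp; it only handles the simplest single-property projection.

```fsharp
open FSharp.Quotations
open FSharp.Quotations.Patterns

type Person = { id: int; name: string; age: int }

/// Extract the property name from a simple projection quotation such as
/// <@ fun p -> p.name @>. A record field access compiles to a PropertyGet,
/// so matching Lambda(_, PropertyGet(...)) recovers the field's name.
let fieldName (projection: Expr<'record -> 'field>) : string option =
    match projection with
    | Lambda(_, PropertyGet(_, prop, _)) -> Some prop.Name
    | _ -> None

let projectedField = fieldName <@ fun (p: Person) -> p.name @>
```

A query layer could use that name to emit `SELECT $.name FROM persons` and then deserialize only the projected value, which is the approach described above.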
> I see that LiteDB v5 supports a SQL-like language which may return different shapes and values than the predefined collection types so in v3 I am thinking of maybe removing the automatic serialization/deserialization altogether and building a new conversion model similar to Thoth.Json
@Zaid-Ajaj if it helps, I think having no reflection in the serialization/deserialization path might help with perf. Also, I feel like explicit conversion is less "magical", which for me is a good thing.
- Not sure if it is possible, but potentially it might make the library friendly for AoT compilation, similar to https://github.com/rejemy/UltraLiteDB
@twop Serialization/deserialization is actually plenty fast, but my problem is the fact that we use record types to define shapes both for reading and writing, where in reality writing might be a subset (i.e. without the ID) and reading might be a superset (i.e. fetching from multiple collections).