
Named query parameters implemented

Open anton7r opened this issue 3 years ago • 33 comments

I've integrated my pgxe library into scany.

~~Note: What this PR still lacks, however, is the possibility of doing named queries that do not return anything (NamedExec()).~~

~~Also, test cases for the new API methods need to be added and the documentation needs to be improved.~~

~~The query parser currently only supports : as the delimiter; if we want to support more delimiters, I need to update the code.~~

anton7r avatar Dec 03 '21 21:12 anton7r

By caching the index of a field we make querying values by field name faster. However, where we only query one field the optimization won't help performance, as seen in the _2-suffixed benchmarks.

The _3-suffixed benchmarks query 3 fields; the _4-suffixed benchmarks query 3 fields 5 times.

The reason we don't want to use reflect.Value.FieldByName is that it is O(n^2), while the IndexMap implementations are roughly O(n).
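
For context, here is a rough sketch of the two lookup strategies these benchmarks compare (illustrative only, not the PR's actual code):

package sketch

import "reflect"

// lookupByName uses reflect's name-based lookup on every call, which has to
// walk the struct's fields each time.
func lookupByName(v reflect.Value, name string) reflect.Value {
	return v.FieldByName(name)
}

// lookupByIndex uses a name-to-index map built once per struct type, so each
// call is a map access plus a direct field access.
func lookupByIndex(v reflect.Value, index map[string]int, name string) reflect.Value {
	return v.Field(index[name])
}
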

goos: windows
goarch: amd64
pkg: github.com/georgysavva/scany/queryparser/utils
cpu: AMD Ryzen 7 2700X Eight-Core Processor         
BenchmarkGetNamedField-16      	 6453988	       184.1 ns/op	      32 B/op	       3 allocs/op
BenchmarkIndexMap-16           	 3876772	       309.1 ns/op	      88 B/op	       3 allocs/op
BenchmarkIndexMapV2-16         	 4150629	       291.5 ns/op	      88 B/op	       3 allocs/op
BenchmarkGetNamedField_2-16    	 6405780	       186.7 ns/op	      32 B/op	       3 allocs/op
BenchmarkIndexMap_2-16         	 3698179	       316.2 ns/op	      88 B/op	       3 allocs/op
BenchmarkIndexMapV2_2-16       	 4254351	       283.4 ns/op	      88 B/op	       3 allocs/op
BenchmarkGetNamedField_3-16    	 2653401	       451.5 ns/op	      64 B/op	       7 allocs/op
BenchmarkIndexMap_3-16         	 2809712	       463.2 ns/op	      96 B/op	       5 allocs/op
BenchmarkIndexMapV2_3-16       	 2951157	       404.9 ns/op	      96 B/op	       5 allocs/op
BenchmarkGetNamedField_4-16    	  545352	      2290 ns/op	     320 B/op	      35 allocs/op
BenchmarkIndexMap_4-16         	  922863	      1353 ns/op	     224 B/op	      21 allocs/op
BenchmarkIndexMapV2_4-16       	  922898	      1297 ns/op	     224 B/op	      21 allocs/op
PASS
ok  	github.com/georgysavva/scany/queryparser/utils	17.691s

anton7r avatar Dec 04 '21 18:12 anton7r

Performance is no longer a concern

goos: windows
goarch: amd64
pkg: github.com/georgysavva/scany/queryparser/utils
cpu: AMD Ryzen 7 2700X Eight-Core Processor         
BenchmarkGetNamedField-16      	 6281607	       187.7 ns/op	      32 B/op	       3 allocs/op
BenchmarkIndexMap-16           	 3786922	       352.3 ns/op	      88 B/op	       3 allocs/op
BenchmarkIndexMapV2-16         	 6816387	       175.7 ns/op	      24 B/op	       2 allocs/op
BenchmarkGetNamedField_2-16    	 6460810	       188.0 ns/op	      32 B/op	       3 allocs/op
BenchmarkIndexMap_2-16         	 3744350	       323.1 ns/op	      88 B/op	       3 allocs/op
BenchmarkIndexMapV2_2-16       	 6756135	       180.1 ns/op	      24 B/op	       2 allocs/op
BenchmarkGetNamedField_3-16    	 2528263	       473.2 ns/op	      64 B/op	       7 allocs/op
BenchmarkIndexMap_3-16         	 2742052	       437.9 ns/op	      96 B/op	       5 allocs/op
BenchmarkIndexMapV2_3-16       	 4174024	       289.0 ns/op	      32 B/op	       4 allocs/op
BenchmarkGetNamedField_4-16    	  524474	      2358 ns/op	     320 B/op	      35 allocs/op
BenchmarkIndexMap_4-16         	  844498	      1393 ns/op	     224 B/op	      21 allocs/op
BenchmarkIndexMapV2_4-16       	 1000000	      1175 ns/op	     160 B/op	      20 allocs/op
PASS
ok  	github.com/georgysavva/scany/queryparser/utils	17.387s

anton7r avatar Dec 04 '21 18:12 anton7r

Hello! Thank you for your effort and for drafting this PR. I've gone through the code, and it seems to me that you are inserting data into the SQL query text directly, so you are escaping the data yourself. That can be dangerous (SQL injection) and error-prone. We shouldn't take this job away from the underlying database library.

Instead, the new named-query logic should compile the SQL query in a different way:

"INSERT INTO users (id, name, age) (:id, :name, :age)", userStruct ---> "INSERT INTO users (id, name, age) ($1, $2, $3)", userID, userName, userAge 

So it replaces our named placeholders with the default placeholders for the database library and passes data from the struct argument as the corresponding positional arguments to the database library function call.

Here is an example of how it's implemented for the sqlx library: https://github.com/jmoiron/sqlx/blob/a62bc6088664eb854c3d5ebbd08a1ffd59662ef1/named.go#L441
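
Roughly, that compile step could look something like the sketch below (compileNamed and its map argument are illustrative only; a real implementation would pull values out of the struct via the field-index mapping and would also need to skip strings and comments):

package sketch

import (
	"fmt"
	"strings"
	"unicode"
)

// compileNamed rewrites ":name" placeholders into PostgreSQL-style positional
// placeholders ($1, $2, ...) and returns the argument values in matching order.
func compileNamed(query string, args map[string]interface{}) (string, []interface{}, error) {
	var out strings.Builder
	var positional []interface{}
	for i := 0; i < len(query); i++ {
		if query[i] != ':' {
			out.WriteByte(query[i])
			continue
		}
		// Consume the parameter name after ':' (letters, digits, '_' and '.').
		j := i + 1
		for j < len(query) {
			c := rune(query[j])
			if !unicode.IsLetter(c) && !unicode.IsDigit(c) && c != '_' && c != '.' {
				break
			}
			j++
		}
		name := query[i+1 : j]
		val, ok := args[name]
		if !ok {
			return "", nil, fmt.Errorf("missing named parameter %q", name)
		}
		positional = append(positional, val)
		fmt.Fprintf(&out, "$%d", len(positional))
		i = j - 1
	}
	return out.String(), positional, nil
}

For example, compileNamed(`INSERT INTO users (id, name, age) VALUES (:id, :name, :age)`, map[string]interface{}{"id": 1, "name": "foo", "age": 30}) would return `INSERT INTO users (id, name, age) VALUES ($1, $2, $3)` together with []interface{}{1, "foo", 30}.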

georgysavva avatar Dec 10 '21 10:12 georgysavva

Sure. When I wrote the lexer 6 months ago, I just wanted to learn how they work.

It is not only error prone, but it also doesn't support every data type.

anton7r avatar Dec 10 '21 18:12 anton7r

Having looked at the way sqlx has done it, I noticed that it is not done in the most optimal way, as it converts the entire struct into a map[string]interface{}, which would be slower on larger structs.

Instead of doing exactly what sqlx has done, I would probably take the approach of making a map which stores the indices of already-mapped fields, where the field name is the key and the value is the index the field is mapped to.

anton7r avatar Dec 10 '21 19:12 anton7r

Allocating that much memory for the strings still sounds kind of bad in my opinion.

Maybe I could use the "ordinal" value of the field as the key, which would reduce memory usage but might at the same time increase the processing time a bit.

anton7r avatar Dec 10 '21 19:12 anton7r

@georgysavva should NamedGet and NamedSelect actually be GetNamed and SelectNamed, since that would probably be easier when autocompleting in IDEs?

anton7r avatar Dec 11 '21 19:12 anton7r

Yes, we don't need to stick to the sqlx implementation completely; I was talking more about the overall design. GetNamed and SelectNamed look better to me. A few more observations:

  1. Right now you hardcode the $ sign at the dbscan package level, but this sign is only valid for PostgreSQL, so it should be set in the pgxscan package. For sqlscan we don't know which database is actually being used, so it should be configurable with no default.
  2. I see that the named parameters in the SQL query are camel-cased like the struct fields. I think they should look like the column names in the SELECT statement. For example:
type User struct {
    FirstName string
    Post struct {
        Title string
    }
}

For such a struct, valid SQL queries using scany would be:

SELECT first_name as "first_name", post as "post.title" FROM users
----
INSERT INTO users(first_name, post) VALUES( :first_name, :post.title)

The default columns scany would look for are "first_name" and "post.title". The same names should be used as parameters in named queries (snake case, separated by a dot for nested structs). More specifically, I think we should reuse the getColumnToFieldIndexMap() function to map struct fields to columns (named parameters in our case).

georgysavva avatar Dec 13 '21 06:12 georgysavva

If we do not give the sqlscan interface a default value for the compile delimiter, would we then need to get rid of the package-level API that it has?

anton7r avatar Dec 13 '21 17:12 anton7r

Sorry, I don't quite understand your message. Could you please elaborate?

The way I see it: in the dbscan package, we have an optional field in the API object for the delimiter value which isn't set by default, and if someone tries to use the named-queries logic with no delimiter set, dbscan should return an error. In the sqlscan package, we don't set any default delimiter on the dbscan API object. In the pgxscan package, we set '$' as the delimiter on the dbscan API object by default.

georgysavva avatar Dec 14 '21 06:12 georgysavva

A few more observations:

  1. The delimiter isn't always a constant character, so the substitution logic must be smarter. Consider: for MySQL it's always '?', but for PostgreSQL it's '$1', '$2', etc. See the sqlx code for details: https://github.com/jmoiron/sqlx/blob/a62bc6088664eb854c3d5ebbd08a1ffd59662ef1/named.go#L373-L392 (a small sketch follows after this list).
  2. I think all this named-query compile logic should live in the dbscan API object. There is no need to introduce separate packages like lexer and internal utils; just put it in a separate named.go file inside the dbscan package.
  3. In the case of named parameters for structs, we should use the existing getColumnToFieldIndexMap() function.
  4. In the pgxscan/sqlscan packages we should also introduce a fourth named method, QueryNamed(...) (pgx/sql.Rows, error), because a user might want to benefit from our named-parameters logic but handle the rows themselves.
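
To make point 1 concrete, here is a small hypothetical helper (not scany's API) showing why the placeholder has to depend on both the dialect and the parameter's position:

package sketch

import "strconv"

// bindVar renders the n-th positional placeholder for a given dialect.
// MySQL reuses '?' for every parameter, while PostgreSQL needs numbered
// '$1', '$2', ... placeholders, so the substitution logic has to track the
// parameter's position rather than insert a single fixed character.
func bindVar(dialect string, n int) string {
	switch dialect {
	case "postgres":
		return "$" + strconv.Itoa(n)
	case "mysql":
		return "?"
	default:
		return "?"
	}
}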

georgysavva avatar Dec 20 '21 07:12 georgysavva

I don't think we should use getColumnToFieldIndexMap() with the same casing as the database column names etc., because having the named params camel-cased increases readability in my opinion.

INSERT INTO users(first_name, post) VALUES( :first_name, :post.title)

INSERT INTO users(first_name, post) VALUES( :FirstName, :Post.Title)

But I can for sure use getColumnToFieldIndexMap() if I just add a camel-cased option to it, if it doesn't have one already.

anton7r avatar Dec 20 '21 23:12 anton7r

The point here is that we should have the same name mapping in both directions: SQL -> Go and Go -> SQL. It is just simpler for users to reason about names that way. The name you use for the column alias to scan data from the database is the same one you use to pass data into the database. Consider this:

type User struct {
    ID string
    FullName string
    ...
}

type Post struct {
    ID string
    UserID string
    Title string
    ...
}

type UserPost struct {
   User *User
   Post *Post
}

up := &UserPost{User: &User{FullName: "foo"}, Post: &Post{Title: "bar"}}
GetNamed(ctx, db, up /* dst */, `SELECT u.full_name AS "user.full_name", p.title AS "post.title", ... FROM users u JOIN posts p ON u.id = p.user_id WHERE u.full_name = :user.full_name AND p.title = :post.title`, up /* arg */)

The struct always translates to the same names: "user.full_name", "post.title", and so on, and you use them for both column aliases and named parameters.

georgysavva avatar Dec 21 '21 07:12 georgysavva

I understand your point, but in this case simple may not be the best solution if we want scany to be a high-productivity tool. We also do not want to overcomplicate the library, as that brings its own new set of problems.

So should we then add an option to change the named-query casing, defaulting to the way that you want, so users can configure it however they like? We could also ask users which way they like best.

anton7r avatar Dec 21 '21 18:12 anton7r

Also, it seems like getColumnToFieldIndexMap() may need some caching, as it does the expensive reflect field-name computations.

anton7r avatar Dec 21 '21 21:12 anton7r

  1. Users already have two ways to override the default mapping: firstly, via struct tags; secondly, by providing a custom option WithFieldNameMapper() to the API object to override the default mapping (snake case) with camel case or whatever they want. Those overriding methods apply to both cases: scanning into a destination and named parameters.
  2. The sqlx library uses exactly the same approach: it expects the same mapping for both destination-type scanning and named parameters. That means users who come from sqlx will find our implementation familiar.
  3. You are right, getColumnToFieldIndexMap() needs some caching. It's a separate task that's on my list. Now that we have the global API object, it can be used to store the cached data.

georgysavva avatar Dec 22 '21 05:12 georgysavva

  1. Oh yeah, that seems to be the case with sqlx; I had just forgotten about it since I have not used it in a while.
  2. The caching is already done in my implementation using cmap. I could check whether or not it is faster than Go's native sync.Map, as they serve a very similar purpose, and it would be preferable to use native implementations in order to reduce the number of 3rd-party dependencies.

anton7r avatar Dec 22 '21 14:12 anton7r

As I have separated lexer.Compile() and the function that binds the field names to args, it would currently be pretty simple to add an option to prepare queries beforehand. But if we don't want to add prepared queries, we should combine them to reduce unnecessary allocations.
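
Conceptually (with hypothetical names, not the PR's actual API), that separation enables a two-phase flow: compile the query text once into a template that records the parameter order, then bind arguments cheaply on every execution:

package sketch

import "fmt"

// preparedQuery is a hypothetical template produced by a one-time compile
// step: sql already contains positional placeholders, and fieldNames records
// which named parameter feeds each position.
type preparedQuery struct {
	sql        string   // e.g. "INSERT INTO users (name) VALUES ($1)"
	fieldNames []string // e.g. ["name"], in placeholder order
}

// bind resolves the recorded parameter names against per-call arguments, so
// repeated executions skip re-parsing the query text.
func (p *preparedQuery) bind(args map[string]interface{}) ([]interface{}, error) {
	out := make([]interface{}, len(p.fieldNames))
	for i, name := range p.fieldNames {
		v, ok := args[name]
		if !ok {
			return nil, fmt.Errorf("missing named parameter %q", name)
		}
		out[i] = v
	}
	return out, nil
}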

anton7r avatar Dec 22 '21 15:12 anton7r


var typeMap = cmap.Cmap{}

// getStructIndex tries to find the struct's index in typeMap; if it is not found, it creates it
func getStructIndex(e reflect.Value) structIndex {
	t := e.Type()

	val, found := typeMap.Load(t)
	if found {
		return val.(structIndex)
	}

	return indexStruct(t)
}

// indexStruct creates an index of the struct
func indexStruct(t reflect.Type) structIndex {
	indexMap := structIndex{
		mappings: map[string]int{},
	}

	for i := 0; i < t.NumField(); i++ {
		indexMap.mappings[t.Field(i).Name] = i
	}

	typeMap.Store(t, indexMap)
	return indexMap
}

This is basically how the caching of struct reflection works in getStructIndex(), which works pretty similarly to getColumnToFieldIndexMap().

anton7r avatar Dec 22 '21 20:12 anton7r

Why does getColumnToFieldIndexMap() return an integer array?

anton7r avatar Dec 22 '21 20:12 anton7r

  1. The standard library also has a concurrent map, sync.Map{}; you can check that one as well.

> The caching is already done in my implementation using cmap. I could check whether or not it is faster than Go's native sync.Map, as they serve a very similar purpose, and it would be preferable to use native implementations in order to reduce the number of 3rd-party dependencies.

  2. Yeah, adding prepared named queries would be a desirable thing, go ahead!

  3. getColumnToFieldIndexMap() returns a map where the values are indexes of struct fields. Every struct field has an index, and since we can have nested structs and we visit all of them, we get an array to uniquely identify and find the field. Consider this example:


type User struct {
    Name string
    Post *Post
}

type Post struct {
    Title string
    Date time.Time
}

getColumnToFieldIndexMap(&User{}) 
// returns map:
// "name" -> [0]
// "post" -> [1]
// "post.title" [1, 0]
// "post.date" -> [1, 1]

georgysavva avatar Dec 23 '21 11:12 georgysavva

Prepared queries still need tests and documentation; other than that, it should function as intended.

anton7r avatar Dec 23 '21 23:12 anton7r

Yeah, it is safe to say that we should go with sync.Map from the standard library, as its docs pretty clearly state that it is intended for this sort of scenario.

https://pkg.go.dev/sync#Map
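
For comparison, the same caching pattern built on the standard library's sync.Map could look roughly like this (sketch only; structIndex stands in for whatever index type ends up in the PR):

package sketch

import (
	"reflect"
	"sync"
)

// structIndex is a stand-in for the cached per-type field index.
type structIndex struct {
	mappings map[string]int
}

// typeCache maps reflect.Type -> structIndex; sync.Map fits this
// write-once, read-many access pattern.
var typeCache sync.Map

// getStructIndex returns the cached index for a struct type, computing and
// storing it on first use.
func getStructIndex(t reflect.Type) structIndex {
	if cached, ok := typeCache.Load(t); ok {
		return cached.(structIndex)
	}
	idx := structIndex{mappings: make(map[string]int, t.NumField())}
	for i := 0; i < t.NumField(); i++ {
		idx.mappings[t.Field(i).Name] = i
	}
	typeCache.Store(t, idx)
	return idx
}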

anton7r avatar Jan 22 '22 19:01 anton7r

any updates?

fuadarradhi avatar Feb 16 '22 01:02 fuadarradhi

Hi @fuadarradhi. I am waiting for the final notice from @anton7r that the PR is done and ready for review. It seems to still be a WIP.

georgysavva avatar Feb 16 '22 06:02 georgysavva

This is ready feature-wise, but some of my changes may still need better documentation and tests.

anton7r avatar Feb 19 '22 20:02 anton7r

@anton7r understood. I will review it soon. Thank you for your work!

georgysavva avatar Feb 21 '22 05:02 georgysavva

Similar PR for sqlx, which uses https://github.com/muir/sqltoken: https://github.com/jmoiron/sqlx/pull/775

Personally I'm not a huge fan of having a separate ExecNamed() etc. interface for this; it's a lot of "API clutter", and I would prefer it if the regular Exec() worked with named parameters. I've been using sqltoken in my own library for a while and the performance impact is very small (though not zero). We could maybe add a knob to enable/disable this, if desired.


More importantly, will your current implementation work with things like:

-- :site refers to the ID
select * from sites where id = :site

Will it replace the :site in the comment? I didn't run the code, but reading through it, I think it might replace :site here. This is an even bigger issue in strings (select * from sites where id = ':site') or if you use some word for which there isn't a parameter (resulting in an error).

I also spent quite some time trying to find failures in sqltoken and couldn't find any, so I think that's probably a better way to go(?) It would also allow things like rebinding ? placeholders to $1.

arp242 avatar Apr 01 '22 18:04 arp242

Hello! @anton7r Thank you for your work and such a huge contribution. I am sorry, but I need to put this feature on hold for now; I don't have enough time to dig deeper for a review, etc.

georgysavva avatar Apr 20 '22 03:04 georgysavva

> Similar PR for sqlx, which uses https://github.com/muir/sqltoken: jmoiron/sqlx#775
>
> Personally I'm not a huge fan of having a separate ExecNamed() etc. interface for this; it's a lot of "API clutter", and I would prefer it if the regular Exec() worked with named parameters. I've been using sqltoken in my own library for a while and the performance impact is very small (though not zero). We could maybe add a knob to enable/disable this, if desired.
>
> More importantly, will your current implementation work with things like:
>
> -- :site refers to the ID
> select * from sites where id = :site
>
> Will it replace the :site in the comment? I didn't run the code, but reading through it, I think it might replace :site here. This is an even bigger issue in strings (select * from sites where id = ':site') or if you use some word for which there isn't a parameter (resulting in an error).
>
> I also spent quite some time trying to find failures in sqltoken and couldn't find any, so I think that's probably a better way to go(?) It would also allow things like rebinding ? placeholders to $1.

Indeed, it will replace the :site in the comment; however, I'd like to know in which circumstances it is beneficial to leave comments inside the query string when they could be placed outside of it.

The sqltoken library could definitely be used, but I would have to dive deeper into it to learn more about its pros and cons. I would also prefer that sqltoken have a callback function in order to avoid (in this case) unnecessary array creation and modification operations.

anton7r avatar Jul 28 '22 17:07 anton7r