Karel Minarik
Hmm, this is even more mysterious, since you're not executing the API concurrently... Is that really the case? Is it really just a plain loop? Can you also try with...
So I have now tried this isolated script, and it doesn't trigger any race condition:

```golang
//+build ignore

package main

import (
	"log"
	"os"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/estransport"
)

func main() {
	// ...
```
Same story when I add a little goroutine around it:

```golang
//+build ignore

package main

import (
	"log"
	"os"
	"sync"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/estransport"
)

func main() {
	log.SetFlags(0)
	var wg sync.WaitGroup
	// ...
```
> I think the issue is mutating the *http.Request after passing it to the Transport. That would indeed trigger a race condition, but the strange thing is that it's hard...
No success with `7.6`. I know I did a _lot_ of testing for the retry feature during development, so it would be pretty strange if it manifested even in...
I'll have a look into it. You're starting Elasticsearch with `9200` published, but configuring the client to use `9201`, is that correct?
Can you try the same on your side but with the `master` branch? Elasticsearch version isn't perhaps that important, but try with `8.0.0-SNAPSHOT`. What I'm doing is first launching this...
Yes, unfortunately, I can't replicate it with either `master` or `7.6`, no matter how many runs I do... Which version of Go are you using? I'm using `go1.14 darwin/amd64`.
Thanks for the reproduction and the patch, @pengux! I've left a couple of comments on the pull request,
I think at some point in time ActiveModel required/recommended `has_attribute` -- thanks for the fix; I'll have a look into the wider context.