Michael Camara Pendon
@SergerGood - I have not validated your fix yet, so I will keep this one open for now. But I will revisit it ASAP and cover it in the next release.
For more information, I am referencing this [discussion](https://twitter.com/mike_pendon/status/1636489405098491910) from Twitter.
I have investigated this, and it seems to me that it works as expected. But you, as a user, must be aware of the caveat. **In short, this is not...
Referencing: https://github.com/mikependon/RepoDB/issues/380
We will create the simulation in the PostgreSQL database; we are hopeful that the 2100-parameter limit is not present there so we can simulate your use case. We will post the...
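For context, SQL Server caps a single parameterized command at roughly 2100 parameters, and with batched inserts each row consumes one parameter per column, so the column count bounds the usable batch size. A back-of-the-envelope check (plain Python, not RepoDB code; the helper name is made up for illustration):

```python
# SQL Server limits a single command to ~2100 parameters.
# With parameterized multi-row inserts, each row in a batch
# consumes one parameter per column, so the largest usable
# batch size is bounded by 2100 // column_count.
SQL_SERVER_MAX_PARAMS = 2100

def max_batch_size(column_count: int) -> int:
    """Largest batch size that stays within the parameter cap."""
    return SQL_SERVER_MAX_PARAMS // column_count

print(max_batch_size(120))  # 17 rows per batch for a 120-column table
print(max_batch_size(10))   # 210 rows per batch for a 10-column table
```

PostgreSQL's wire protocol has a much higher per-statement parameter ceiling, which is why the hope above is that the 2100 limit does not apply there.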
If you use `BulkInsert` (equivalent to `BinaryBulkImport`), you are leveraging the real bulk operations, and these caches are skipped. In the screenshot below, you will notice that we adjusted the...
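To illustrate the distinction above, here is a minimal C# sketch of the two call paths (the `Customer` class, table, and connection string are hypothetical; the bootstrap call may differ by RepoDB version). `InsertAll` issues batched, parameterized inserts and caches a compiled command per batch size, while `BulkInsert` streams the rows through a real bulk operation and skips those caches:

```csharp
using System.Collections.Generic;
using Microsoft.Data.SqlClient;
using RepoDb;

public class Customer
{
    public long Id { get; set; }
    public string Name { get; set; }
}

public class Program
{
    public static void Main()
    {
        // Bootstrap RepoDB for SQL Server (API name may vary by version).
        GlobalConfiguration.Setup().UseSqlServer();

        var customers = new List<Customer>
        {
            new Customer { Name = "John Doe" },
            new Customer { Name = "Jane Doe" }
        };

        using var connection =
            new SqlConnection("Server=.;Database=TestDb;Integrated Security=SSPI;");

        // InsertAll: batched, parameterized inserts. RepoDB compiles and
        // caches a command per batch size, which is the memory under discussion.
        connection.InsertAll(customers);

        // BulkInsert: streams the rows via a real bulk operation
        // (SqlBulkCopy underneath), so the per-batch-size caches are skipped.
        connection.BulkInsert(customers);
    }
}
```

On PostgreSQL the analogous call is `BinaryBulkInsert`/`BinaryBulkImport`, which rides on the binary COPY protocol instead of `SqlBulkCopy`.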
Hmmm - interestingly, it seems I can replicate your issue in PGSQL. I modified the program and enabled the 100 max batch size with 120 columns in a table. Below is...
The `BinaryBulkInsert` only requires 40 MB as it does not cache anything.  Project: [InsertAllMemoryLeaksPostgreSql-BinaryBulkInsert.zip](https://github.com/mikependon/RepoDB/files/11006687/InsertAllMemoryLeaksPostgreSql-BinaryBulkInsert.zip)
Makes sense, and it is possible of course. I am only interested in having all the existing Integration Tests pass after the PR. Sure, you can make a PR of...
The reason for this is that the type of the column is an INT, and there is no way for us to identify the value passed unless we changed the...