[Security] Bump fstream from 1.0.11 to 1.0.12
Bumps fstream from 1.0.11 to 1.0.12. This update includes a security fix.
Vulnerabilities fixed
Sourced from The GitHub Security Advisory Database.
Moderate severity vulnerability that affects fstream: versions of fstream prior to 1.0.12 are vulnerable to Arbitrary File Overwrite.
Affected versions: < 1.0.12
Commits
- 4235459 1.0.12
- 6a77d2f Clobber a Link if it's in the way of a File
- See full diff in compare view
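The gist of the fix in commit 6a77d2f, sketched in Python purely for illustration (fstream itself is JavaScript, and this is not its actual code): before writing a file, remove any symlink already occupying the target path so the write cannot be redirected to an attacker-chosen location.

import os

def write_clobbering_links(path, data):
    # A symlink left "in the way" would redirect the write to the
    # link's target, enabling arbitrary file overwrite. Remove it
    # first so the data lands at the path itself.
    if os.path.islink(path):
        os.unlink(path)
    with open(path, "wb") as f:
        f.write(data)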
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your README
Additionally, you can set the following in the .dependabot/config.yml file in this repo (a minimal example follows the list):
- Update frequency (including time of day and day of week)
- Automerge options (never/patch/minor, and dev/runtime dependencies)
- Pull request limits (per update run and/or open at any time)
- Out-of-range updates (receive only lockfile updates, if desired)
- Security updates (receive only security updates, if desired)
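A minimal sketch of such a file, assuming the v1 .dependabot/config.yml schema (the package manager, schedule, and automerge rule below are illustrative, not this repo's actual settings):

version: 1
update_configs:
  - package_manager: "javascript"
    directory: "/"
    update_schedule: "weekly"
    automerged_updates:
      - match:
          dependency_type: "development"
          update_type: "semver:patch"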
Finally, you can contact us by mentioning @dependabot.
@oneturkmen Thanks for opening this very detailed issue with a nice reproducer. We'll look into this.
Just to double-check, are you running on Linux? (It seems so based on the use of ulimit.)
Yup, it's Linux
@oneturkmen, did you try setting the following DuckDB configs and checking whether it spills to disk?
conn.execute("SET memory_limit = '3GB';")
conn.execute("SET max_memory = '3GB';")
I noticed that without these settings explicitly set, larger-than-memory operations caused OOM exceptions for me.
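For anyone trying this: memory_limit and max_memory are aliases for the same cap, and spilling also needs a spill location. A sketch under those assumptions, using the DuckDB Python client (the /tmp path is illustrative):

import duckdb

con = duckdb.connect()  # in-memory database

# Cap memory so larger-than-memory operators spill instead of growing.
con.execute("SET memory_limit = '3GB';")

# An in-memory database has no default spill location; point
# temp_directory at a writable path to enable offloading to disk.
con.execute("SET temp_directory = '/tmp/duckdb_spill';")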
I am experiencing the same issue when trying to use spilling to disk while running on the Windows Subsystem for Linux (WSL2). When I explicitly set memory_limit to a low number in the database config and provide a persistent database, I see a block start to get written, but then the process errors out with "Out of Memory Error: could not allocate block of size 10.6 MiB...".
@michaelkovatt I tried your solution as well as configuring the same settings in the config parameter of the connection definition as shown below, but they did not work:
duckdb.connect(database='persistent_duckdb_storage/queries.db', config={'memory_limit': '100MB', 'max_memory': '100MB'})
DuckDB throwing an OOM error when there is plenty of disk space available, even though I have explicitly set memory limits and provided a persistent database, has stumped me. I wonder if this is a Linux-specific error?
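For readers debugging similar setups, one way to confirm the limits actually took effect is to query the live settings; a sketch assuming the Python client (current_setting() and duckdb_settings() are built-in):

import duckdb

con = duckdb.connect(
    database='persistent_duckdb_storage/queries.db',
    config={'memory_limit': '100MB'},
)

# Confirm the limit that is actually in effect.
print(con.execute("SELECT current_setting('memory_limit')").fetchone())

# Or inspect the relevant settings, including the spill location.
print(con.execute(
    "SELECT name, value FROM duckdb_settings() "
    "WHERE name IN ('memory_limit', 'temp_directory')"
).fetchall())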
@szarnyasg is there any update on this yet?
Thanks for reporting! This should now be fixed in https://github.com/duckdb/duckdb/pull/12730. The issue reported here was specifically about the union_by_name setting in the CSV reader, which caused too many files to be opened and too much data to be kept cached when reading many files with this setting.
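For context, union_by_name unifies schemas across files by column name rather than position, which is why the reader may hold many files at once. A minimal sketch of the kind of query affected, assuming the Python client (the glob path is hypothetical):

import duckdb

con = duckdb.connect()

# union_by_name=true aligns columns across files by name instead of
# position; before the fix, scanning many files this way held too many
# handles open and cached too much data.
rows = con.execute(
    "SELECT * FROM read_csv_auto('data/*.csv', union_by_name=true)"
).fetchall()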