Andrew Gaul
> Is it possible to get the archive ID?

#170 fixes the missing archive ID. @uskudnik could you merge this?
Related to #18.
Amazon announced strong consistency today (https://aws.amazon.com/s3/consistency/). I will try to re-run this sometime this month.
@snarfed Sounds great! Please investigate this and #7. It may require some additional work to use a non-jclouds provider for the various operations.
> The caveat is that if you want to lint a specific set of non-consecutive shas, you'll have to write a for loop:

The only difference is whether the loop...
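For illustration, a minimal sketch of the kind of loop described above, assuming a hypothetical `lint-commit` command that lints a single sha; it is a placeholder, not the project's actual tooling:

```python
import subprocess

# Hypothetical sketch: lint a specific, non-consecutive set of shas by
# invoking the linter once per commit. "lint-commit" is a placeholder for
# whatever lint entry point the project actually provides.
shas = ["1111111", "3333333", "7777777"]
for sha in shas:
    subprocess.run(["lint-commit", sha], check=True)
```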
On my system this check fails because `check_CVE_2018_3615` relies on the value of `cpu_flush_cmd`, which is not populated due to:

```
wrmsr: pwrite: Operation not permitted
```

[StackOverflow suggests](https://stackoverflow.com/a/50474653/2800111)...
This still fails for me with 6e799e8b013c6543c5d1fef3f7d69ce172a9ff52 and Linux 5.3.12-300.fc31.x86_64: `SUMMARY: CVE-2017-5753:OK CVE-2017-5715:OK CVE-2017-5754:OK CVE-2018-3640:OK CVE-2018-3639:OK CVE-2018-3615:KO CVE-2018-3620:OK CVE-2018-3646:OK CVE-2018-12126:OK CVE-2018-12130:OK CVE-2018-12127:OK CVE-2019-11091:OK CVE-2019-11135:OK CVE-2018-12207:OK`
With 0cd7e1164f1ebcbcc13484fe1b1218f1154ecbb2 spectre-meltdown-checker reports:

```
* L1 data cache invalidation
* FLUSH_CMD MSR is available: UNKNOWN (your kernel is locked down (Fedora/Red Hat), please reboot without secure boot and retry)
...
```
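As background on why lockdown interferes here: userspace MSR access goes through the `/dev/cpu/N/msr` device, where the file offset selects the MSR number, reads use `pread`, and writes use `pwrite` — the call that fails in the error above. A minimal read-only sketch in Python (my illustration, not the checker's actual code; it reads IA32_ARCH_CAPABILITIES rather than writing anything):

```python
import os
import struct

# Illustrative sketch (not spectre-meltdown-checker's actual code): userspace
# MSR access goes through /dev/cpu/N/msr, with the file offset selecting the
# MSR number. Reads use pread; writes use pwrite, which is the call the
# locked-down kernel refuses above ("wrmsr: pwrite: Operation not permitted").
MSR_IA32_ARCH_CAPABILITIES = 0x10A  # speculation-related capability bits

fd = os.open("/dev/cpu/0/msr", os.O_RDONLY)  # requires root and the msr module
try:
    value, = struct.unpack("<Q", os.pread(fd, 8, MSR_IA32_ARCH_CAPABILITIES))
    print(f"IA32_ARCH_CAPABILITIES = {value:#018x}")
except OSError as e:
    # EIO typically means the CPU does not implement this MSR
    print(f"MSR read failed: {e}")
finally:
    os.close(fd)
```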
@akiradeveloper I have a similar S3 implementation, [S3Proxy](https://github.com/andrewgaul/s3proxy), which uses s3-tests. You can see how I configured it via Travis here:

- https://github.com/andrewgaul/s3proxy/blob/master/.travis.yml
- https://github.com/andrewgaul/s3proxy/blob/master/src/test/resources/run-s3-tests.sh
- https://github.com/andrewgaul/s3-tests/tree/4347002946bbdc25daaea52184d69069fcc3b317

The most important...
This will be a DDoS vector for the public-facing service.