What about multipart upload in S3? We can split the input stream into parts and upload them via the S3 low-level API.
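Roughly, with boto3 (the helper name and part size here are mine; the only hard constraint is that every part except the last must be at least 5 MiB):

```python
import boto3

def upload_stream_multipart(stream, bucket, key, part_size=8 * 1024 * 1024):
    """Split a readable stream into parts and push them via the S3 low-level API."""
    s3 = boto3.client("s3")
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]
    parts = []
    try:
        part_number = 1
        while True:
            chunk = stream.read(part_size)
            if not chunk:
                break
            resp = s3.upload_part(
                Bucket=bucket, Key=key, PartNumber=part_number,
                UploadId=upload_id, Body=chunk,
            )
            parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
            part_number += 1
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key, UploadId=upload_id,
            MultipartUpload={"Parts": parts},
        )
    except Exception:
        # Abort so the bucket is not left holding orphaned parts.
        s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
        raise
```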
1. It’s a “fake responsibility”: serializing types unsupported by JSON shouldn’t be the library’s job. We can’t cover all the cases and shouldn’t try.
2. Explicit is better than implicit....
I see two options for supporting bytes (both sketched below):

1. Low level: add a flag (e.g. `raw=True`) to the get/set methods that allows bypassing serialization.
2. High level: add a decorator (e.g. `memoize_bytes`)...
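To make the two shapes concrete, a toy sketch under stated assumptions: `TinyCache` and its dict backend are invented purely for illustration, and neither the `raw` flag nor `memoize_bytes` exists in the library today.

```python
import functools
import json

class TinyCache:
    """Toy backend, not the real cache: illustrates option 1 (a `raw` bypass flag)."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, raw=False):
        # raw=True stores bytes verbatim; otherwise values go through JSON.
        self._store[key] = value if raw else json.dumps(value).encode("utf-8")

    def get(self, key, raw=False):
        data = self._store.get(key)
        if data is None or raw:
            return data          # bytes come back untouched
        return json.loads(data)  # normal entries are deserialized

def memoize_bytes(cache):
    """Option 2: a hypothetical decorator dedicated to bytes-returning functions."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = f"{fn.__name__}:{args!r}"
            result = cache.get(key, raw=True)
            if result is None:
                result = fn(*args)
                cache.set(key, result, raw=True)
            return result
        return wrapper
    return decorator

cache = TinyCache()
cache.set("blob", b"\x00\xff", raw=True)
assert cache.get("blob", raw=True) == b"\x00\xff"
```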
My bad, I meant `__bytes__: True`.

```python
test_data = (
    {"__bytes__": True},
    {"__bytes__": True, "bytes": "AAA"},
    {"__bytes__": True, "bytes": "!"},
    {"__bytes__": True, "bytes": "Cg=="},
)
for case in test_data:
    jsonstring...
```
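Presumably the truncated loop round-trips each case; a guess at the rest, assuming a marker scheme where `{"__bytes__": True, "bytes": "<base64>"}` decodes to bytes (`bytes_hook` is my stand-in, not the library's deserializer):

```python
import base64
import json

def bytes_hook(obj):
    # Assumed scheme: a dict marked with "__bytes__" carries base64 in "bytes".
    if obj.get("__bytes__") is True:
        return base64.b64decode(obj["bytes"])
    return obj

for case in test_data:  # test_data from the snippet above
    jsonstring = json.dumps(case)
    try:
        print(case, "->", repr(json.loads(jsonstring, object_hook=bytes_hook)))
    except Exception as exc:
        # e.g. the first case has no "bytes" key, and "AAA" has broken padding
        print(case, "->", f"{type(exc).__name__}: {exc}")
```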
@sh4nks are there any updates?
I’ve obtained a CVE ID (CVE-2021-33026) for this issue.
Pylibmc also uses pickle by default. @sh4nks maybe we should do the (de-)serialization on our own? https://github.com/lericson/pylibmc/blob/8e783a69cc69fb04b9faad6c61ab43193569ab0f/src/_pylibmcmodule.c#L1245-L1284
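For instance, something along these lines, so pylibmc only ever sees bytes and its pickle fallback is never exercised (the wrapper names and the choice of JSON are mine):

```python
import json
import pylibmc

mc = pylibmc.Client(["127.0.0.1"], binary=True)

def cache_set(key, value, time=0):
    # Serialize explicitly; pylibmc stores the bytes verbatim and never pickles.
    mc.set(key, json.dumps(value).encode("utf-8"), time=time)

def cache_get(key):
    raw = mc.get(key)
    return None if raw is None else json.loads(raw)

cache_set("user:1", {"name": "alice"})
print(cache_get("user:1"))  # {'name': 'alice'}
```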
> Just like your database, the cache should be properly secured and behind authentication.
> So when you have a properly secured system and application, it's a non-issue.

Security...
> Can't you just remove those limits from nginx?
> What's it doing deciding these things anyway for this app?
> Just remove the limit.
> So I would question...
Also consider the following case: we have several indexes (e.g. chunk_1...chunk_N) and we want to reindex some of them (after a schema or morphology change) or add a new one (resharding)....