amazon-redshift-utils
SimpleReplay: Change the string append logic to use StringIO for faster performance
Issue
If the S3 audit logs contain queries close to 16 MB (the Redshift maximum statement size) spanning many lines ("\n"), query extraction takes a very long time to complete. In my case, I provided two files in S3 of 250 MB each, and extraction took almost 7 hours and 20 minutes. On further debugging, I noticed that the string append logic slows down whenever it hits a 16 MB query: parsing starts out fast, then performance steadily degrades, and the moment the 16 MB query is finished it is fast again for the next query, then slowly degrades again. For a 22-hour window (the usual recommendation for the replay window), extraction alone could take 4 to 5 days for my use case, which is not ideal.
Description of changes:
On googling, I found that Python string concatenation with += is not well suited to appending a large number of string chunks. So I changed the code to use StringIO. With this change, query extraction for the same set of files took about 45 seconds.
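A minimal sketch of the pattern, assuming a loop that rebuilds each query from audit-log lines; the names (extract_query, log_lines) are illustrative, not the actual SimpleReplay identifiers:

import io

def extract_query(log_lines):
    # Before: repeated `+=` re-copies the accumulated text on each append,
    # which gets very slow on large (up to 16 MB) multi-line queries.
    #   query_text = ""
    #   for line in log_lines:
    #       query_text += line
    #   return query_text

    # After: buffer the chunks in StringIO and materialize the string once.
    buffer = io.StringIO()
    for line in log_lines:
        buffer.write(line)
    return buffer.getvalue()

Collecting the chunks in a list and calling "".join(chunks) once at the end would give similar performance; StringIO keeps the call sites closest to the original += code.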
I tested my changes by running both the current utility and my modified version on the same log files (the same 2 hours of logs) and comparing the outputs; they match exactly. As there are no existing test cases, I was not able to validate further against the code base.
If you find any bugs or improvements, feel free to fork/edit/improve this further.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.