Prometheus
[BUG] File Size (Parsing Errors)
Describe the bug: When uploading a 2 MB file, I got stuck on the parsing screen.
Expected behavior: To obfuscate the file.
To Reproduce: Upload a file 2 MB in size.
Additional context: Nope, just wish it worked with medium-sized files.
(Update: still on the parsing screen; it's now been 24 minutes.)
This is a known issue, and I am working on fixing it. My parser (actually mostly the tokenizer) is really slow.
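For context: one common reason Lua tokenizers are slow is building each token with repeated `..` concatenation, which copies the whole buffer on every step and makes scanning a long token quadratic. I don't know whether that is the actual bottleneck in Prometheus; the sketch below (function names are made up) just illustrates the pattern and the usual fix.

```lua
-- Illustrative only -- not Prometheus' actual tokenizer code.
-- Slow: each `..` copies the entire accumulated string, so
-- scanning one long token costs O(n^2).
local function readIdentSlow(source, i)
    local ident = ""
    while source:sub(i, i):match("[%w_]") do
        ident = ident .. source:sub(i, i)
        i = i + 1
    end
    return ident, i
end

-- Faster: match the whole run with one anchored pattern call
-- and take a single substring.
local function readIdentFast(source, i)
    local _, last = source:find("^[%w_]+", i)
    if not last then return nil, i end
    return source:sub(i, last), last + 1
end

-- Quick check that both agree:
print(readIdentSlow("foo_bar = 1", 1)) --> foo_bar   8
print(readIdentFast("foo_bar = 1", 1)) --> foo_bar   8
```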
This is also a problem in other obfuscators such as ByteLuaObfuscator (another really good one); it can only handle 500 KB before erroring out of memory.
Also, do you know when you will push a fix? We're still waiting for the random strings mentioned in another ticket. I'm not sure what you would call this... a ticket, a problem, an issue? It's not really an issue, it was more of a suggestion, so I will call it a ticket. (Sorry if that came out rude, I didn't mean it like that.)
I am currently not able to work on Prometheus a lot, because I am in an internship where I have to work full time, but I try to address these issues as fast as possible.
That sucks.
Also, do you think there's a way around this 500 KB limit?
The only way that I think would be possible would be to rewrite the parser and tokenizer.
Or to split the entire file into multiple chunks that require each other.
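If it helps, here is a rough sketch of what the chunk-splitting workaround could look like (the file names and the greet function are made up, not Prometheus output): each piece of the big script becomes a module small enough to obfuscate on its own, and the chunks require each other at runtime.

```lua
---- part1.lua (one chunk of the original script) ----
local M = {}
function M.greet(name)
    print("Hello, " .. name)
end
return M
```

```lua
---- main.lua (entry chunk; requires the other chunks) ----
local part1 = require("part1")
part1.greet("world")
```

The catch is that locals shared across chunk boundaries have to be passed through module returns, so this works best when the original file already has fairly independent sections.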
That could work, but it's kinda cheap; rewriting the tokenizer and parser is needed. Thanks!
Closing this issue, due to #83 being the same