
Kansa on weaker machines

Open Skwerl23 opened this issue 5 years ago • 6 comments

I have had to rewrite large portions of Kansa to get it to run on my weaker machine. Just running against 400 machines required 8 GB of RAM, because it opens PSSessions with all of them at once. I had to rewrite the Get-Target function calls to respect your throttle limit, rewrite the log collection so Log Parser would even work for analysis, and rewrite the Get-Analysis function to handle errors, because running a collection that takes 2+ hours for only 400 machines and then having the analysis fail is wasteful.

Please update Kansa so it is actually usable on weaker machines and scales past a couple hundred machines without failure, and doesn't quit on analysis errors. At the very least, allow an analysis option that runs against a folder after target data has been acquired; right now no option lets me post-analyze. I had to fake the output folder so Kansa would rerun against one machine and then do the analysis.

Skwerl23 avatar Aug 16 '19 22:08 Skwerl23
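[Editor's note] A minimal sketch of the "analyze an existing output folder" idea raised above, with each analysis script wrapped in try/catch so one failure doesn't abort the rest. The folder layout (`$OutputPath`, `.\Analysis\*.ps1`) and the convention of running analysis scripts from inside the output directory are assumptions for illustration, not Kansa's actual interface.

```powershell
# Sketch only: run analysis scripts against already-collected target data.
param(
    [Parameter(Mandatory = $true)]
    [string]$OutputPath,                  # folder with previously collected target data
    [string]$AnalysisPath = ".\Analysis"  # folder containing analysis scripts (assumed layout)
)

$analysisScripts = Get-ChildItem -Path $AnalysisPath -Filter *.ps1 -Recurse

Push-Location $OutputPath
try {
    foreach ($script in $analysisScripts) {
        try {
            Write-Verbose "Running analysis: $($script.Name)"
            & $script.FullName
        }
        catch {
            # Log the failure and keep going, so one bad analysis
            # doesn't waste a multi-hour collection run.
            Write-Warning "Analysis $($script.Name) failed: $($_.Exception.Message)"
        }
    }
}
finally {
    Pop-Location
}
```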

@Skwerl23 Have you seen the Splunk & Graylog output options that I wrote? Using either of them would take the processing burden off of the machine you're using to gather Kansa output and would also allow you to analyze collected data at scale. I've used the Splunk output option to gather logs from a globally dispersed set of machines at scale without issues.

One thing to note, though: I wrote the additional output options such that data is still pulled back to the machine you used to kick off Kansa, so it is possible you'd still encounter the same issues...

dmoore44 avatar Aug 18 '19 19:08 dmoore44
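[Editor's note] For readers unfamiliar with the approach dmoore44 describes, here is a minimal sketch of pushing a collected record to Splunk's HTTP Event Collector so the analysis burden moves off the collection host. The endpoint URL, token, module name, and sourcetype are placeholders; Kansa's actual Splunk and Graylog output options have their own configuration.

```powershell
# Sketch only: forward one collected record to a Splunk HTTP Event Collector.
$hecUri   = "https://splunk.example.com:8088/services/collector/event"  # placeholder
$hecToken = "00000000-0000-0000-0000-000000000000"                       # placeholder token

$record = [PSCustomObject]@{
    ComputerName = $env:COMPUTERNAME
    Module       = "Get-Autorunsc"            # illustrative module name
    Collected    = (Get-Date).ToString("o")
}

$body = @{
    event      = $record
    sourcetype = "kansa:json"                  # placeholder sourcetype
    host       = $record.ComputerName
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri $hecUri -Method Post -Body $body -Headers @{
    Authorization = "Splunk $hecToken"
}
```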

@Skwerl23 Have you submitted PRs for the changes you had to make? Sounds like others could benefit from your work and not have to do the hard work you've already done to solve the issues.

athegist avatar Aug 19 '19 16:08 athegist

I submitted a fix for the error handling in analysis. I will submit some more code; my only concern is that it isn't for everyone, but it addresses the memory burden from PSSessions, which need roughly 20-30 MB of RAM for every PC in your target list. My change splits your target list by your throttle limit, so with a throttle limit of 50 it opens PSSessions to 50 machines at a time. That is a little slower than opening them all at once, but it also doesn't lock your machine up, especially if you're a smaller organization only looking to scan a couple hundred devices from a machine with 4 GB of RAM. Note: I changed the actual code to have both a ComputerLimit and a ThrottleLimit, so as not to confuse what ThrottleLimit does.

Skwerl23 avatar Aug 20 '19 14:08 Skwerl23
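[Editor's note] A minimal sketch of the batching idea described in the comment above: open PSSessions to at most a batch of targets at a time, collect, then tear the sessions down before moving on, so memory stays roughly constant instead of growing with the target list. The variable names, hosts file, and placeholder script block are illustrative, not the code in the proposed patch.

```powershell
# Sketch only: per-batch PSSession lifecycle.
$ThrottleLimit = 50
$targets = Get-Content .\hosts.txt   # placeholder target list

for ($i = 0; $i -lt $targets.Count; $i += $ThrottleLimit) {
    $end = [Math]::Min($i + $ThrottleLimit, $targets.Count) - 1
    $batch = $targets[$i..$end]

    $sessions = New-PSSession -ComputerName $batch -ErrorAction SilentlyContinue

    if ($sessions) {
        # Placeholder for the per-module collection work Kansa would do here.
        Invoke-Command -Session $sessions -ScriptBlock { Get-Process | Select-Object -First 5 }

        # Removing sessions per batch caps memory at roughly
        # (~20-30 MB per open session) x batch size.
        Remove-PSSession -Session $sessions
    }
}
```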

OK, I just submitted code for a ComputerLimit parameter to allow slower machines to handle Kansa. Maybe a much more capable PowerShell person can make it a little cleaner than I did; I know code, but I'm not the best with proper terms and such. There may also be a simpler algorithm than my splitting code, but I did what I could.

Skwerl23 avatar Aug 20 '19 16:08 Skwerl23
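[Editor's note] A minimal sketch of how separate -ComputerLimit and -ThrottleLimit parameters could be exposed, keeping the batch size (how many machines are contacted at once) distinct from PowerShell's own per-command concurrency, along with one compact way to split the target list. Parameter names, defaults, and the placeholder script block are illustrative, not the ones in the proposed patch.

```powershell
# Sketch only: two distinct knobs plus a compact list-splitting idiom.
param(
    # How many targets to contact in each batch.
    [int]$ComputerLimit = 50,

    # Passed through to Invoke-Command as -ThrottleLimit, i.e. how many
    # of those connections are worked on concurrently.
    [int]$ThrottleLimit = 32
)

$targets = Get-Content .\hosts.txt   # placeholder target list

# Split the target list into ComputerLimit-sized groups.
$batches = for ($i = 0; $i -lt $targets.Count; $i += $ComputerLimit) {
    , $targets[$i..([Math]::Min($i + $ComputerLimit, $targets.Count) - 1)]
}

foreach ($batch in $batches) {
    Invoke-Command -ComputerName $batch -ThrottleLimit $ThrottleLimit -ScriptBlock {
        $env:COMPUTERNAME   # placeholder for real collection work
    }
}
```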

Here is the version I proposed: https://github.com/Skwerl23/Kansa/blob/patch-2/kansa.ps1

Skwerl23 avatar Aug 21 '19 14:08 Skwerl23

@dmoore44 I appreciate the great work you've created, but I don't have access to Splunk or Graylog in the manner you suggest :/ and many others don't either.

Skwerl23 avatar Aug 21 '19 14:08 Skwerl23