Karthik Palaniappan

Results: 9 comments by Karthik Palaniappan

Yup -- you can still set the initial cluster size using `--num-workers` and `--num-preemptible-workers`.
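For reference, here's roughly the same thing with the `google-cloud-dataproc` Python client instead of the gcloud flags -- a minimal sketch, assuming `--num-workers` maps to `worker_config.num_instances` and `--num-preemptible-workers` to `secondary_worker_config.num_instances` (project, region, and cluster name are placeholders):

```python
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "example-cluster",
    "config": {
        # Initial primary worker count (--num-workers)
        "worker_config": {"num_instances": 2},
        # Initial preemptible/secondary worker count (--num-preemptible-workers)
        "secondary_worker_config": {"num_instances": 4},
    },
}

operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
operation.result()  # block until the cluster is created
```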

Hey, as an update: autoscaling just launched to Beta today! A few updates since alpha: 1) The minimum cooldown period is now 2 minutes. 2) Monitoring autoscaling and cluster metrics...
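To illustrate the new minimum, here's a rough autoscaling policy sketch using the Python client -- the policy id, instance bounds, and scaling factors are made-up example values, and you should check which API version (v1 vs. beta) your client exposes for autoscaling policies:

```python
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"

client = dataproc_v1.AutoscalingPolicyServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

policy = {
    "id": "example-policy",
    "worker_config": {"min_instances": 2, "max_instances": 20},
    "basic_algorithm": {
        "cooldown_period": {"seconds": 120},  # 2 minutes -- the new minimum
        "yarn_config": {
            "scale_up_factor": 0.5,
            "scale_down_factor": 1.0,
            "graceful_decommission_timeout": {"seconds": 3600},
        },
    },
}

client.create_autoscaling_policy(
    request={"parent": f"projects/{project_id}/regions/{region}", "policy": policy}
)
```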

Dataproc already has a mechanism for this -- the job id. You cannot have Dataproc jobs with duplicate ids. As long as you don't delete jobs after they finish, this...
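A sketch of what I mean -- submit with an explicit, deterministic job id so a re-submission of the same logical job gets rejected as a duplicate rather than run twice. The cluster name, job id, and Spark job details are placeholders, and the exact exception class raised for a duplicate id is an assumption on my part:

```python
from google.api_core import exceptions
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"

job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "reference": {"job_id": "nightly-etl-2024-01-01"},  # deterministic id
    "placement": {"cluster_name": "example-cluster"},
    "spark_job": {
        "main_class": "org.apache.spark.examples.SparkPi",
        "jar_file_uris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
    },
}

try:
    job_client.submit_job(
        request={"project_id": project_id, "region": region, "job": job}
    )
except exceptions.AlreadyExists:
    # A job with this id already exists (running or finished), so don't re-run.
    print("Job already submitted; skipping.")
```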

Ah, fair enough. Another solution to consider is using [restartable jobs](https://cloud.google.com/dataproc/docs/concepts/jobs/restartable-jobs) and letting Dataproc re-run jobs on failure. You can specify a `request_id` ([docs](https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#submitjobrequest)) so that when your pod is...
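Roughly, combining both ideas in the Python client (a sketch -- the ids, cluster name, and job details are placeholders): `scheduling.max_failures_per_hour` makes the job restartable, and `request_id` deduplicates a re-sent SubmitJobRequest instead of creating a second job.

```python
import uuid

from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"

job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "example-cluster"},
    "scheduling": {"max_failures_per_hour": 5},  # restartable job
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/etl.py"},
}

# Persist this id with your workload so a retry re-sends the same value.
request_id = str(uuid.uuid4())

job_client.submit_job(
    request={
        "project_id": project_id,
        "region": region,
        "job": job,
        "request_id": request_id,
    }
)
```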

I ran into this issue too, and I believe you can change those properties in `platforms/hdp/configuration.json` and continue using auto mode.

Does this issue still exist? I'm not able to repro it -- I created multiple 1099-B forms, then clicked edit and hit backspace in different boxes, and it didn't...

Is a buffer size negotiated in the plain mechanism? You only need to call wrap() or unwrap() if qop=auth-int or auth-conf, which it won't be for the plain mechanism. Otherwise...
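Something like this is what I have in mind -- purely illustrative, with `sasl` standing in as a hypothetical client object; substitute whatever your SASL library exposes for the negotiated QOP and the wrap/unwrap calls:

```python
def send_message(sock, sasl, payload: bytes) -> None:
    # Only auth-int (integrity) and auth-conf (confidentiality) add a security
    # layer that requires wrap(); plain negotiates qop=auth, so bytes go out as-is.
    if getattr(sasl, "qop", "auth") in ("auth-int", "auth-conf"):
        payload = sasl.wrap(payload)
    sock.sendall(payload)


def receive_message(sock, sasl, nbytes: int = 4096) -> bytes:
    data = sock.recv(nbytes)
    # Same check on the receive side: unwrap() only when a security layer exists.
    if getattr(sasl, "qop", "auth") in ("auth-int", "auth-conf"):
        data = sasl.unwrap(data)
    return data
```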

Do the maintainers have any code pointers on how this could be implemented?