
`gcp_forensics_gke`

zkck opened this issue on Oct 26 '21 • 2 comments


With the new capabilities of libcloudforensics (LCF) shown below, we can now create a cluster object, retrieve one of its workloads, and list the nodes backing it. This allows for a variation of the gcp_forensics recipe that takes a GKE workload instead of an instance name.

from libcloudforensics.providers.gcp.internal import gke

# GkeCluster takes the project ID, the zone, and the cluster ID.
cluster = gke.GkeCluster('project-id', 'zone-id', 'cluster-id')
# Nodes backing the 'nginx' deployment in the 'default' namespace.
cluster.GetDeployment('nginx', 'default').GetCoveredNodes()
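
Building on this, here is a minimal sketch of how the recipe could go from the covered nodes to the GCE disks to copy. It assumes GetCoveredNodes() returns objects exposing the node's instance name via a name attribute, and uses LCF's compute module; the exact return types may differ.

from libcloudforensics.providers.gcp.internal import compute, gke

cluster = gke.GkeCluster('project-id', 'zone-id', 'cluster-id')
workload = cluster.GetDeployment('nginx', 'default')

project = compute.GoogleCloudCompute('project-id')
for node in workload.GetCoveredNodes():
  # GKE nodes are backed by GCE instances, so their disks can be looked
  # up through the compute API. `node.name` is an assumed attribute.
  instance = project.GetInstance(node.name)
  boot_disk = instance.GetBootDisk()  # a candidate disk to copy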

Details

$ dftimewolf gcp_forensics_gke --help
...
usage: dftimewolf gcp_forensics_gke [-h] [--analysis_project_name ANALYSIS_PROJECT_NAME] [--incident_id INCIDENT_ID] [--workload_name WORKLOAD_NAME] [--workload_namespace WORKLOAD_NAMESPACE] [--create_analysis_vm] [--cpu_cores CPU_CORES]
                                    [--boot_disk_size BOOT_DISK_SIZE] [--boot_disk_type BOOT_DISK_TYPE] [--zone ZONE]
                                    remote_project_name remote_cluster_name remote_cluster_zone

Copies a GKE cluster’s node disks to an analysis project, creates an analysis VM, and attaches the cluster’s node disks. A workload can be specified in addition to the cluster, to only copy disks of the nodes supporting the workload and not of all nodes in the cluster.

positional arguments:
  remote_project_name   Name of the project containing the cluster to copy
  remote_cluster_name   Name of the cluster whose disks to copy
  remote_cluster_zone   Zone of the cluster whose disks to copy

optional arguments:
  -h, --help            show this help message and exit
  --analysis_project_name ANALYSIS_PROJECT_NAME
                        Name of the project where the analysis VM will be created (default: None)
  --incident_id INCIDENT_ID
                        Incident ID to label the VM with. (default: None)
  --workload_name WORKLOAD_NAME
                        Name of the workload whose disks to copy. (default: None)
  --workload_namespace WORKLOAD_NAMESPACE
                        Namespace of the workload whose disks to copy. (default: None)
  --create_analysis_vm  Create an analysis VM. (default: True)
  --cpu_cores CPU_CORES
                        Number of CPU cores of the analysis VM (default: 4)
  --boot_disk_size BOOT_DISK_SIZE
                        The size of the analysis VM boot disk (in GB) (default: 50.0)
  --boot_disk_type BOOT_DISK_TYPE
                        Disk type to use [pd-standard, pd-ssd] (default: pd-standard)
  --zone ZONE           The GCP zone where the Analysis VM and copied disks will be created (default: us-central1-f)
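
For illustration, a run scoped to a single workload could look like the following (the project, cluster, and zone values are placeholders):

$ dftimewolf gcp_forensics_gke my-remote-project my-cluster us-central1-f \
    --workload_name nginx --workload_namespace default \
    --analysis_project_name my-analysis-project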

GoogleCloudCollector refactoring

The behavior shared between gcp_forensics_gke and gcp_forensics, i.e. the creation of an analysis VM and the copying of disks, must be reflected in the code by having both recipes share common modules.

This will require some refactoring of the GoogleCloudCollector module, splitting it into two modules: one that stores the disks to be copied in a container, and another that copies the stored disks and attaches them to an analysis VM. These modules will be named GCEDiskCollector and GCEDiskCopier, respectively (a sketch follows below).
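
For concreteness, a minimal sketch of what the split could look like, following dftimewolf's usual SetUp/Process module pattern. The GCEDisk container, the helper methods, and the SetUp signatures here are illustrative assumptions, not the final API.

from dftimewolf.lib import module
from dftimewolf.lib.containers import containers


class GCEDiskCollector(module.BaseModule):
  """Resolves which disks need copying and stores them as containers."""

  def SetUp(self, remote_project_name, disk_names):  # illustrative signature
    self._project = remote_project_name
    self._disk_names = disk_names

  def Process(self):
    for disk_name in self._disk_names:
      # GCEDisk is an illustrative container holding a disk reference.
      self.state.StoreContainer(containers.GCEDisk(disk_name))


class GCEDiskCopier(module.BaseModule):
  """Copies the stored disks and attaches them to an analysis VM."""

  def SetUp(self, analysis_project_name):  # illustrative signature
    self._analysis_project = analysis_project_name

  def Process(self):
    for disk in self.state.GetContainers(containers.GCEDisk):
      self._CopyDiskToAnalysisProject(disk)  # hypothetical helper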

zkck · Oct 26 '21 09:10

This PR has been stalling for a while. I will pull this PR and refactor it to fit into the code base, as there have been a lot of changes since then and it needs more work.

Hello @zkck, I hope things are going well with you :) If you are planning to do anything with it soon, please let me know.

sa3eed3ed · Jul 15 '22 13:07

Hello there @sa3eed3ed! I haven't been working on this PR; I've been busy with my thesis and other things. I also don't have access to a GCP project to test it on.

zkck · Jul 19 '22 13:07

This PR hasn't been touched in some time. Going to close it out. Please reopen if needed.

ramo-j · Mar 11 '24 03:03