Things you should know about GCP Identity-Aware Proxy
In the last post, Create a secure VPC network in Google Cloud, we created a VPC without public IPs (except for the NAT gateway). Now let’s see what we can do with it.
Google Cloud Platform’s Identity-Aware Proxy (IAP) is probably the most developer-friendly feature of GCP compared with other cloud providers. One of its main features is that it allows users to securely connect to VMs inside a private VPC without a VPN. However, even if you have been using GCP for a while, you might not be using the full potential of IAP.
Prerequisites
Please follow the VPC post to set up the VPC and firewall rules for IAP. From here on, we assume that any VM in the VPC with the tag allow-iap is reachable through Google IAP.
Create the VM
Below are some recommendations for the VM’s settings:
- Use the network tag allow-iap so that IAP traffic to port 22 is allowed.
- Enable OS Login for the VM. Simply set the VM metadata enable-oslogin = "TRUE". We already handled this at the project level in the VPC post.
- Create a new service account with minimum permissions. In our example, no permission is needed since we are just SSHing to the VM, so the SA will have no permissions in IAM.
gcloud iam service-accounts create iap-test-sa \
--description="service account for IAP test" \
--display-name="IAP Test"
- Allow your user group or a specific user to use the SA. We’ll just allow a single user, john@example.com, to use this service account. This is required for the user to log in to the VM, since the VM is associated with the service account.
email=john@example.com
project=my-project-id
sa_name=iap-test-sa
sa=$sa_name@$project.iam.gserviceaccount.com
gcloud iam service-accounts add-iam-policy-binding \
$sa \
--member="user:$email" \
--role="roles/iam.serviceAccountUser"
Use the command below to create the VM instance.
name=iap-test
zone=us-central1-a
subnet=vm-dev
gcloud compute instances create $name \
--image-family=rhel-8 \
--image-project=rhel-cloud \
--zone=$zone \
--subnet $subnet \
--tags allow-iap \
--no-address \
--service-account $sa
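As a quick sanity check (a sketch, using gcloud's built-in format expressions), you can confirm that the instance really has no external IP attached:

```shell
# Empty output means no external IP (NAT address) is attached to the VM.
gcloud compute instances describe $name --zone $zone \
  --format="value(networkInterfaces[0].accessConfigs[0].natIP)"
```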
Use the power of IAP
Now we have the test VM set up. Let’s see what IAP can do.
SSH to VM
First, of course, the very basics: through an IAP tunnel, we can connect to a VM inside a VPC even if it doesn’t have any public IP attached.
project=my-project-id
zone=us-central1-a
gcloud compute ssh --project $project --zone $zone --tunnel-through-iap $name
For this command to work correctly, you might need to run gcloud auth login first. This way, Google IAP confirms the user’s identity before allowing the user to log in to the VM.
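If the SSH command is rejected with a permissions error, the caller also needs permission to open IAP tunnels. A minimal sketch granting the IAP-Secured Tunnel User role to the same user, project-wide (you can scope it to individual instances instead):

```shell
# Allow the user to create IAP TCP tunnels in the project.
# $project and $email are the variables set earlier in this post.
gcloud projects add-iam-policy-binding $project \
  --member="user:$email" \
  --role="roles/iap.tunnelResourceAccessor"
```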
Now, let’s install the nmap-ncat package (which provides the nc command on RHEL 8) so that the VM can be used as an SSH proxy.
sudo yum install nmap-ncat -y
Use the VM as an SSH proxy to another server
Add the lines below to your .ssh/config file. Note that ssh does not expand shell variables in this file, so use the literal project ID, zone, and instance name:
Host example.com
ProxyCommand gcloud compute ssh --project my-project-id --zone us-central1-a --tunnel-through-iap iap-test -- -q nc %h %p
When you ssh to example.com, the connection will now go through the VM. This is very useful when:
- Your VPC is set up to connect to the on-prem network but not to the internet. You can use this VM as a proxy to securely connect to the on-prem network.
- You are testing a web service that runs inside the same VPC as the VM, without a public IP.
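For the second case, a sketch of one way to do it: forward a local port to the internal service through the VM, then hit it with curl. The service address web.internal:80 is a made-up example.

```shell
# Forward local port 8080 to a hypothetical internal web service,
# tunneling through the IAP-protected VM.
gcloud compute ssh --project $project --zone $zone --tunnel-through-iap $name \
  -- -fN -L 8080:web.internal:80
# The internal service now answers on localhost:
curl http://localhost:8080/
```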
SOCKS5 proxy with a browser
Using a SOCKS5 proxy through the VM in GCP, you can use the Chrome browser on your laptop to access any web page that is reachable from the proxy VM, as if your network traffic were coming from the VM; hence, it’s a proxy. The commands below on macOS will open a new Chrome window. Any URL opened from this window will go through the VM’s network connection.
port=1080
gcloud compute ssh --project $project --zone $zone --tunnel-through-iap $name 2>/dev/null -- -D $port -fN
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --proxy-server=socks5://localhost:$port --user-data-dir=/tmp/chrome-temp
Manage private GKE cluster
Assume you have a private GKE cluster that is only accessible from a private subnet. The control plane does not have a public IP and only allows connections from a subnet inside the private VPC. You can then create a VM in this subnet and use it to manage the GKE cluster.
1. Get the Kubernetes config file.
export KUBECONFIG=~/.kube/mycluster
cluster_name=mycluster
gcloud container clusters get-credentials $cluster_name --zone $zone --project $project
2. Now, set up an SSH tunnel from your local computer’s port 6443 to the Kubernetes API server’s port 443. Here $api_server is the private IP address of the API server, which you can read from the kubeconfig file fetched in step 1.
project=my-project-id
zone=us-central1-a
name=iap-test
gcloud compute ssh --project $project --zone $zone $name -- -fNL 6443:$api_server:443
3. Edit the KUBECONFIG file to replace the IP address of the API server with kubernetes:6443, so the server line reads https://kubernetes:6443.
4. Add an entry to /etc/hosts
127.0.0.1 kubernetes
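Step 3 above can be scripted. Below is a minimal sketch of the kubeconfig edit; the file contents and the 10.128.0.42 address are stand-ins for illustration — run the sed against your real kubeconfig:

```shell
# Stand-in kubeconfig for illustration; your real file comes from
# `gcloud container clusters get-credentials`.
KUBECONFIG=/tmp/mycluster-kubeconfig
cat > "$KUBECONFIG" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://10.128.0.42:443
  name: mycluster
EOF
# Point the client at the local tunnel endpoint instead of the private IP.
# A .bak copy of the original file is kept.
sed -i.bak -E 's#server: https://[0-9.]+(:[0-9]+)?#server: https://kubernetes:6443#' "$KUBECONFIG"
grep 'server:' "$KUBECONFIG"
```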
Now you can use kubectl commands as if there were a direct connection from your local computer to the Kubernetes API server. In reality, the traffic goes through the VM and uses the VM’s network access to reach the GKE API server. The hostname kubernetes works here because the API server’s TLS certificate typically includes it as a valid name.