Austin Wise
2023-12-17

Using IAM identity with Google Cloud SQL

I recently set up a Google Cloud SQL PostgreSQL database for some of my personal projects. One of my goals for this new instance was to do all authentication using service account credentials and to make all connections to the database through the Cloud SQL Auth Proxy. This frees me from having to manage passwords for PostgreSQL accounts and from having to manage which IP addresses are allowed to connect to the Cloud SQL instance.

While I work at Google, I don't work on the Cloud SQL team. This post documents how I set up my own services and includes links to authoritative documentation.

Some notes about the environment

We are going to be creating everything in the austinsql Google Cloud project. The database instance name is test-sql. To avoid hard-coding these values in the following shell commands, I define the following variables in my shell:

PROJECT_ID=austinsql
REGION=us-central1
SQL_INSTANCE=test-sql

Create a PostgreSQL database server

This creates a password for the postgres user and creates the database instance. The umask command ensures that only your user can read the password.txt file. The instance will have a public IP address by default but will not allow any direct connections, because there are no authorized networks by default. All connections will have to go through the Cloud SQL Auth Proxy. It would also be possible to disable the public IP address and enable only a private IP. Note that connecting to a Cloud SQL instance that has a private IP requires using Serverless VPC Access, which adds cost and complexity to the setup.

umask 077
openssl rand -hex 12  | tr -d '\n' > password.txt
gcloud sql instances create $SQL_INSTANCE --database-version=POSTGRES_15 \
    --region=$REGION --tier=db-f1-micro --root-password=$(cat password.txt) \
    --connector-enforcement=REQUIRED --project=$PROJECT_ID \
    --database-flags=cloudsql.iam_authentication=on

This will take a few minutes.
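
Once the instance is ready, it is easy to confirm that IAM authentication was enabled. This is just an optional sanity check; the describe command and format projection are standard gcloud, though the exact shape of the flag output may vary between gcloud versions:

gcloud sql instances describe $SQL_INSTANCE --project=$PROJECT_ID \
    --format="yaml(state, settings.databaseFlags)"

The output should show the state as RUNNABLE and list the cloudsql.iam_authentication flag set to on.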

Granting a service account access to a Cloud SQL database

To use a service account to log in to the Cloud SQL instance, you must add it as a user on the instance, either in the Cloud Console or with gcloud. The service account will also need to have the cloudsql.instances.connect permission.

Personally, I created one service account for each of the different services and associated databases I run. Here are the commands to create a service account named mastodon and grant it access to the server.

gcloud iam service-accounts create mastodon --project $PROJECT_ID
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member=serviceAccount:mastodon@${PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/cloudsql.client --condition=None
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member=serviceAccount:mastodon@${PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/cloudsql.instanceUser --condition=None
gcloud sql users create mastodon@${PROJECT_ID}.iam --project $PROJECT_ID -i $SQL_INSTANCE \
    --type CLOUD_IAM_SERVICE_ACCOUNT
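
To verify that the IAM database user was added, you can list the users on the instance (an optional check, not required for anything that follows):

gcloud sql users list --instance=$SQL_INSTANCE --project=$PROJECT_ID

The mastodon account should show up with the type CLOUD_IAM_SERVICE_ACCOUNT.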

Creating a database

This is the one step that needs to be accomplished using the default postgres account. We will create a database and grant ownership of the database to the service account. In this example, a database is created to run Mastodon.

In one terminal window, start the Cloud SQL Auth Proxy:

cloud-sql-proxy $PROJECT_ID:$REGION:$SQL_INSTANCE

In another terminal connect with psql:

PGPASSWORD=$(cat password.txt) psql -h localhost -U postgres

Note that the PostgreSQL documentation does not recommend passing the password through the PGPASSWORD environment variable this way if you are on a shared computer. Consider using a password file in that case.
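
If you want to go the password file route, a minimal sketch looks like this. The ~/.pgpass format is host:port:database:username:password, and psql refuses to use the file unless only your user can read it (which the umask takes care of here):

umask 077
echo "localhost:5432:*:postgres:$(cat password.txt)" >> ~/.pgpass
psql -h localhost -U postgres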

Now you can create a database and grant the mastodon user access to it.

-- Create a group role to own the database, and grant it to both the
-- postgres user (so it can create a database owned by the role) and
-- the service account's IAM user.
CREATE ROLE mastodon;
GRANT mastodon TO postgres;
GRANT mastodon TO "mastodon@austinsql.iam";
-- Create the database owned by the group role.
CREATE DATABASE mastodon WITH OWNER = mastodon TEMPLATE template1;
GRANT ALL PRIVILEGES ON DATABASE mastodon TO mastodon;
-- Connect to the new database and hand over the public schema as well.
\c mastodon
ALTER SCHEMA public OWNER TO mastodon;
GRANT ALL ON SCHEMA public TO mastodon;

Note that when referring to the service account, you need to use the project ID in the domain part of the email address. The project ID can be the same as the project name, but often is not.
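
If you are not sure what your project ID is, gcloud can list your projects along with their names:

gcloud projects list --format="table(projectId, name)"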

Commands to impersonate a service account and connect using psql

To load data into the new database, it can be helpful to impersonate the service account when connecting to the SQL server. This ensures that when you create new tables, the service account owns them. If you use the postgres account to load data, you have to be careful to make the service account the owner of any tables you create. If you accidentally assign the postgres user ownership of the tables, you might encounter problems later on when trying to update the database schema.

First, get a key for the service account. This step only needs to be done the first time. This example command saves the key to a file called mastodon.json.

gcloud iam service-accounts keys create mastodon.json \
    --iam-account=mastodon@${PROJECT_ID}.iam.gserviceaccount.com

Next, start the Cloud SQL Auth Proxy:

cloud-sql-proxy ${PROJECT_ID}:${REGION}:${SQL_INSTANCE} --auto-iam-authn \
    -c ./mastodon.json

Now you can connect to the database using psql in another terminal:

psql -h localhost -U mastodon@${PROJECT_ID}.iam -d mastodon

You should not be prompted for a password.
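
As a quick sanity check that you are connected as the service account rather than as postgres, you can ask PostgreSQL for the current user:

psql -h localhost -U mastodon@${PROJECT_ID}.iam -d mastodon -c 'SELECT current_user;'

This should print mastodon@austinsql.iam.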

Once you are done loading data, it is a good idea to delete the key using gcloud iam service-accounts keys delete.
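
For example, first list the keys to find the ID of the one you created, then delete it (replace KEY_ID with the actual ID from the list output):

gcloud iam service-accounts keys list \
    --iam-account=mastodon@${PROJECT_ID}.iam.gserviceaccount.com
gcloud iam service-accounts keys delete KEY_ID \
    --iam-account=mastodon@${PROJECT_ID}.iam.gserviceaccount.com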

Configuring a Mastodon server to connect through the Cloud SQL Auth Proxy using Docker Compose

I use Docker Compose to run my Mastodon server. The upstream Mastodon docker-compose.yml file runs Postgres as a container. I have customized my docker-compose.yml file to run the Cloud SQL Auth Proxy instead. The relevant snippet of the file is here:

version: '3'
services:
  db:
    restart: always
    image: us.gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8
    networks:
      - external_network
    command:
      - "--structured-logs"
      - "--auto-iam-authn"
      - "--address=0.0.0.0"
      - "--http-address=0.0.0.0"
      - "--health-check"
      - "austinsql:us-central1:austinsql"
    healthcheck:
      test: ["CMD", "/cloud-sql-proxy", "wait", "--http-address=0.0.0.0", "-m", "3s"]
    ports:
      - '127.0.0.1:5432:5432'

A couple of things to point out:

  • We use external_network instead of internal_network. This is because the internal_network does not have access to the internet and thus cannot connect to our Google Cloud SQL instance. Despite the name "external_network" and the --address=0.0.0.0 argument, the ports section ensures that the auth proxy is only listening for connections on localhost.
  • Since there is no curl command available inside the container, we use the cloud-sql-proxy wait command to implement a health check; a quick way to confirm the health check passes is shown below.
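
To check that the health check is actually passing, you can bring up just the proxy service and look at its status. This assumes the Compose v2 CLI (docker compose); the exact status text varies between versions:

docker compose up -d db
docker compose ps db

The status column should eventually show the container as healthy.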

In the .env.production file, the database connection is defined like this. The DB_PASS value is just a placeholder: as far as I can tell, the password is ignored because the Auth Proxy performs IAM authentication, but Mastodon still expects the variable to be set.

DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon
DB_USER=mastodon@austinsql.iam
DB_PASS=test

Configuring a Cloud Run service to connect using the Cloud SQL Auth Proxy

While Cloud Run has built-in support for connecting using the Cloud SQL Auth Proxy, there is no C# connector library that enables IAM authentication. Therefore I deploy the Cloud SQL Auth Proxy as a sidecar container.

In YAML, the service definition looks something like this:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '3'
        run.googleapis.com/execution-environment: gen2
        run.googleapis.com/startup-cpu-boost: 'true'
    spec:
      containerConcurrency: 80
      containers:
      - env:
        - name: ConnectionStrings__Postgres
          value: >-
            host=127.0.0.1;
            database=mastodon;
            username=mastodon@austinsql.iam;
            password=test;
        image: YourImageHere:latest
        name: dkp
        ports:
        - containerPort: 8080
          name: http1
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
        startupProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 2
          timeoutSeconds: 1
      - args:
        - --structured-logs
        - --auto-iam-authn
        - --health-check
        - --http-address=0.0.0.0
        - austinsql:us-central1:test-sql
        image: us.gcr.io/cloud-sql-connectors/cloud-sql-proxy:2
        name: sql
        resources:
          limits:
            cpu: 1000m
            memory: 256Mi
        startupProbe:
          failureThreshold: 5
          httpGet:
            path: /startup
            port: 9090
          periodSeconds: 2
          timeoutSeconds: 1
      serviceAccountName: mastodon@austinsql.iam.gserviceaccount.com
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100

The important parts are:

  • The Cloud SQL Auth Proxy container is configured to expose a health check and Cloud Run is configured to check it.
  • The connection string on the main container uses the IAM name for the user name and connects to localhost.
  • The second generation execution environment is used. While this is not directly related to Cloud SQL, other parts of the software I was running were not compatible with generation one, so I have only tested this configuration on generation two.

My actual yaml file is here.
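
To deploy a service definition like this, you can feed the YAML to gcloud. I am assuming here that the file is saved as service.yaml; the replace command creates the service if it does not exist and updates it otherwise:

gcloud run services replace service.yaml --region=$REGION --project=$PROJECT_ID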

Conclusion

This post mostly exists to help me remember how I set up my own Postgres server. Hopefully it gives you some ideas about different ways you can use Google Cloud SQL.