SQS Queue-Depth Autoscaling
If your application processes messages from Amazon SQS, Porter can automatically scale your worker services based on queue depth. When messages pile up, Porter scales up your workers. When the queue drains, it scales back down.
This guide uses the SQS Exporter Porter addon — a lightweight Prometheus exporter that polls SQS queue depth and feeds it to Porter’s built-in metric-based autoscaler via KEDA.
This guide assumes you’re running on EKS and uses IRSA (IAM Roles for Service Accounts) so the exporter and worker pick up short-lived, auto-rotated credentials from a pod-scoped IAM role — no long-lived keys to store or rotate.
Prerequisites
- An AWS account with an SQS queue
- A worker service on Porter that processes messages from the queue
- Permissions to create IAM roles in your AWS account
Step 1: Create an IAM Role for the Exporter
Porter deploys Helm chart addons with the release name helmchart, so the exporter’s Kubernetes service account will always be named helmchart-sqs-exporter. You’ll reference this in the trust policy below.
1. Find your OIDC provider URL.
In the AWS Console, go to EKS → your cluster → Overview tab → OpenID Connect provider URL. Copy the URL and strip the https:// prefix — this is your OIDC_PROVIDER.
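If you prefer the CLI, `aws eks describe-cluster --name <cluster> --query "cluster.identity.oidc.issuer" --output text` prints the same URL. Either way, stripping the scheme is all that's needed (the issuer value below is a made-up example):

```shell
# Hypothetical issuer URL; yours comes from the console or from
# `aws eks describe-cluster ... --query "cluster.identity.oidc.issuer"`.
issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
# Strip the https:// prefix to get the OIDC_PROVIDER value:
OIDC_PROVIDER="${issuer#https://}"
echo "$OIDC_PROVIDER"
# → oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890
```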
2. Create an IAM role with a custom trust policy.
Go to IAM → Roles → Create role → Custom trust policy and paste the following, substituting your values:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC_PROVIDER}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "{OIDC_PROVIDER}:sub": "system:serviceaccount:{NAMESPACE}:helmchart-sqs-exporter",
        "{OIDC_PROVIDER}:aud": "sts.amazonaws.com"
      }
    }
  }]
}
- ACCOUNT_ID: your AWS account ID
- OIDC_PROVIDER: the value from step 1 (without https://)
- NAMESPACE: the namespace of your Porter deployment target (typically default)
3. Attach a permissions policy to the role. Create an inline policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sqs:GetQueueAttributes",
    "Resource": "arn:aws:sqs:{REGION}:{ACCOUNT_ID}:{QUEUE_NAME}"
  }]
}
Keep the role ARN handy — you’ll use it in the next step.
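The console steps above can also be sketched with the AWS CLI. The snippet below templates both policy documents from shell variables and sanity-checks the JSON; all values and the role name are hypothetical, and the actual `aws iam` calls are left commented since they require IAM permissions:

```shell
# Hypothetical values -- substitute your own.
ACCOUNT_ID=123456789012
OIDC_PROVIDER=oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890
NAMESPACE=default
REGION=us-east-1
QUEUE_NAME=order-processing

# Trust policy scoped to the exporter's service account.
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:helmchart-sqs-exporter",
        "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
      }
    }
  }]
}
EOF

# Permissions policy: read-only access to queue attributes.
cat > exporter-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sqs:GetQueueAttributes",
    "Resource": "arn:aws:sqs:${REGION}:${ACCOUNT_ID}:${QUEUE_NAME}"
  }]
}
EOF

# Sanity-check that both documents parse as JSON.
python3 -m json.tool trust-policy.json > /dev/null && echo "trust-policy OK"
python3 -m json.tool exporter-policy.json > /dev/null && echo "exporter-policy OK"

# Then create the role and attach the policy (requires IAM permissions):
# aws iam create-role --role-name sqs-exporter-irsa \
#   --assume-role-policy-document file://trust-policy.json
# aws iam put-role-policy --role-name sqs-exporter-irsa \
#   --policy-name sqs-exporter-read --policy-document file://exporter-policy.json
```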
Step 2: Deploy the SQS Exporter
- Navigate to your application in the Porter dashboard
- Go to the Add-ons tab
- Click Add New and select Helm Chart
- Configure the Helm chart:
- Repository URL:
oci://ghcr.io/porter-dev
- Chart Name:
sqs-exporter
- Chart Version:
0.1.1
- In the Helm Values section, paste:
sqsQueueUrls: "https://sqs.us-east-1.amazonaws.com/123456789/your-queue-name"
serviceAccount:
  roleArn: "arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}"
aws:
  region: "us-east-1"
pollIntervalSeconds: 10
Replace sqsQueueUrls with your full SQS queue URL. To monitor multiple queues, provide a comma-separated list of URLs.
- Click Deploy
Verify the exporter is running by checking the add-on logs — you should see using ambient AWS credential chain, followed by the exporter polling your queue with no AccessDenied errors.
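You can also confirm the metric shape that autoscaling will query. The exporter exposes queue depth in Prometheus exposition format; the scrape below is a hypothetical sample of its metrics endpoint, using the metric and label names that appear in the autoscaling query later in this guide:

```shell
# Hypothetical scrape of the exporter's metrics endpoint.
cat > metrics.txt <<'EOF'
sqs_messages_visible{queue_name="order-processing"} 42
sqs_messages_visible{queue_name="billing-events"} 3
EOF
# Pull out the current depth of one queue:
awk '/queue_name="order-processing"/ {print $2}' metrics.txt
# → 42
```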
Step 3: Grant the Worker AWS Access
The worker pod needs AWS credentials to consume from SQS. First, create an IAM role with these permissions on your queue:
- sqs:ReceiveMessage
- sqs:DeleteMessage
- sqs:GetQueueAttributes
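Combined into an inline policy, following the same shape as the exporter's policy in Step 1:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "sqs:ReceiveMessage",
      "sqs:DeleteMessage",
      "sqs:GetQueueAttributes"
    ],
    "Resource": "arn:aws:sqs:{REGION}:{ACCOUNT_ID}:{QUEUE_NAME}"
  }]
}
```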
The role’s trust policy must name the worker’s service account so IRSA can assume it:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC_PROVIDER}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "{OIDC_PROVIDER}:sub": "system:serviceaccount:{NAMESPACE}:{APP_NAME}-{SERVICE_NAME}",
        "{OIDC_PROVIDER}:aud": "sts.amazonaws.com"
      }
    }
  }]
}
Porter names the worker’s service account {APP_NAME}-{SERVICE_NAME} (e.g. sqs-poller-poller for app sqs-poller with service poller).
Then attach the role to the worker using one of the following.
Option A: porter.yaml
Declare the awsRole connection under your worker service. If your app doesn’t have a porter.yaml, you can create one at the root of your repo with just the worker service declared — Porter merges it with your dashboard config.
services:
  - name: worker
    # ...
    connections:
      - type: awsRole
        role: my-worker-sqs-access
Option B: Annotate the service account manually
If you’d rather not introduce a porter.yaml, annotate the worker’s service account directly:
kubectl annotate sa {APP_NAME}-{SERVICE_NAME} -n {NAMESPACE} \
  eks.amazonaws.com/role-arn=arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}
Either option results in the service account carrying the eks.amazonaws.com/role-arn annotation, which EKS uses to inject IRSA credentials into the pod.
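Once applied, the worker's service account should look roughly like this (the names and ARN below are hypothetical, reusing the sqs-poller example):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sqs-poller-poller
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-worker-sqs-access
```

You can confirm with kubectl get sa sqs-poller-poller -n default -o yaml.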
Set the worker’s environment variables
Under the worker service’s Environment tab, set:
SQS_QUEUE_URL={QUEUE_URL}
AWS_REGION={AWS_REGION}
Step 4: Configure Metric-Based Autoscaling
- Navigate to your application in the Porter dashboard
- Go to the Services tab and click on your worker service
- Under Autoscaling, select the Metric-based tab
- Fill in the following fields:
- Min replicas:
0 (scale to zero when queue is empty) or 1 (always keep one worker warm)
- Max replicas:
10 (adjust based on your workload)
- Metric name:
sqs_messages_visible
- Query:
avg(sqs_messages_visible{queue_name="your-queue-name"})
- Threshold:
10 (target number of messages per worker instance)
Replace your-queue-name with the name portion of your SQS queue URL (e.g., for https://sqs.us-east-1.amazonaws.com/123456789/order-processing, use order-processing). The queue_name label must match exactly or autoscaling won’t trigger.
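To sanity-check the threshold: with average-value scaling, the autoscaler targets roughly ceil(queue depth ÷ threshold) replicas, clamped between the min and max you set. A rough sketch of that arithmetic, with illustrative numbers:

```shell
queue_depth=42   # current value of sqs_messages_visible
threshold=10     # target messages per worker
min_replicas=1
max_replicas=10
# ceil(queue_depth / threshold), clamped to [min, max]:
desired=$(( (queue_depth + threshold - 1) / threshold ))
[ "$desired" -lt "$min_replicas" ] && desired=$min_replicas
[ "$desired" -gt "$max_replicas" ] && desired=$max_replicas
echo "$desired"
# → 5
```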
Scaling Latency
Scale-up isn’t instant — expect on the order of a minute or two between messages arriving in your queue and new worker pods becoming ready. This latency comes from a combination of SQS polling, metrics scraping, and the stabilization period before the autoscaler commits to a scale-up.
For queue-based workloads where faster scale-up matters, you can shorten the stabilization window via your worker service’s Settings tab → Helm overrides:
keda:
  hpa:
    scaleUp:
      stabilizationWindowSeconds: 0
      policy:
        type: Percent
        value: 100
        periodSeconds: 15
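A 100%-per-15-seconds policy lets the autoscaler at most double the replica count each period. A quick sketch of the resulting ramp, starting from one replica with a max of 10:

```shell
# Simulate the fastest possible ramp under a 100%/15s scale-up policy.
replicas=1
max=10
t=0
while [ "$replicas" -lt "$max" ]; do
  replicas=$(( replicas * 2 ))                 # double each period
  [ "$replicas" -gt "$max" ] && replicas=$max  # cap at max replicas
  t=$(( t + 15 ))
done
echo "replicas=$replicas after ${t}s"
# → replicas=10 after 60s
```

So even with a zero stabilization window, reaching max replicas from a cold start takes a few policy periods.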
Node provisioning is often the remaining bottleneck once the autoscaling pipeline is tuned.