Example deployment: Kubernetes
Plain manifests. No Helm. Copy, apply, connect.
This example deploys pgagroal as a standalone Kubernetes Deployment with a ClusterIP Service. It assumes PostgreSQL is already running in the cluster or reachable from it. For a Helm-based deployment, see the Kubernetes (Helm) page.
Kubernetes cluster
Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgagroal
  namespace: pgagroal
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pgagroal
  template:
    metadata:
      labels:
        app: pgagroal
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: pgagroal
          image: elevarq/pgagroal:1.0.0
          ports:
            - containerPort: 6432
              name: pooler
          env:
            - name: PG_BACKEND_HOST
              value: "postgres.database.svc.cluster.local"
            - name: PG_BACKEND_PORT
              value: "5432"
            - name: MAX_CONNECTIONS
              value: "50"
            - name: PGAGROAL_LOG_LEVEL
              value: "warn"
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          readinessProbe:
            exec:
              command:
                - pgagroal-cli
                - "-c"
                - /etc/pgagroal/pgagroal.conf
                - ping
            initialDelaySeconds: 3
            periodSeconds: 5
            failureThreshold: 2
          livenessProbe:
            exec:
              command:
                - pgagroal-cli
                - "-c"
                - /etc/pgagroal/pgagroal.conf
                - ping
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: "1"
              memory: 256Mi
```
Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: pgagroal
  namespace: pgagroal
spec:
  type: ClusterIP
  selector:
    app: pgagroal
  ports:
    - port: 6432
      targetPort: pooler
      protocol: TCP
      name: pooler
```
Apply and verify
```bash
# Create namespace and apply
kubectl create namespace pgagroal
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Wait for pods to be ready
kubectl -n pgagroal rollout status deployment/pgagroal

# Verify from inside the cluster
kubectl -n pgagroal run test --rm -it --image=postgres:17 -- \
  psql -h pgagroal.pgagroal.svc.cluster.local -p 6432 \
  -U app -d appdb -c 'SELECT 1'
```
Applications connect to pgagroal.pgagroal.svc.cluster.local:6432. Replace the namespace if yours differs.
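If the psql check fails, one quick diagnostic is to confirm the Service is actually selecting the pooler pods; an empty endpoints list usually means a label mismatch or pods that are not yet ready:

```shell
# List the pod IPs behind the Service; no addresses means no ready matching pods
kubectl -n pgagroal get endpoints pgagroal
```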
Health probes
Both probes run pgagroal-cli ping, which checks that the pooler daemon is responsive. The probes do not verify backend connectivity — this is intentional. A healthy pooler with a temporarily unreachable backend should stay running so it can recover when the backend returns.
| Probe | Delay | Interval | Effect on failure |
|---|---|---|---|
| Readiness | 3s | 5s | Stops receiving traffic after 10s |
| Liveness | 5s | 10s | Restarts container after 30s |
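When a pod is flapping, it can help to run the same check the kubelet runs, by hand. This sketch assumes the config path from the probe definitions above and that `pgagroal-cli ping` exits non-zero when the daemon is unresponsive:

```shell
# Run the probe command manually inside a running pod
kubectl -n pgagroal exec deploy/pgagroal -- \
  pgagroal-cli -c /etc/pgagroal/pgagroal.conf ping
```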
Scaling and connection limits
Each replica maintains its own pool. Two replicas with MAX_CONNECTIONS=50 can open up to 100 backend connections in total.
Make sure the total across all replicas does not exceed PostgreSQL's max_connections, leaving room for admin and monitoring connections.
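The budget check above is simple arithmetic; a sketch, with placeholder values for the PostgreSQL limits (substitute your own):

```shell
# Assumption: 200 is the server's max_connections, 10 is admin/monitoring headroom
replicas=3
max_connections_per_replica=50   # MAX_CONNECTIONS env var on each pod
pg_max_connections=200           # max_connections on the PostgreSQL server
reserved=10                      # headroom for admin and monitoring sessions

total=$((replicas * max_connections_per_replica))
budget=$((pg_max_connections - reserved))

if [ "$total" -le "$budget" ]; then
  echo "OK: $total backend connections fit within the budget of $budget"
else
  echo "OVER: $total backend connections exceed the budget of $budget"
fi
```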
```bash
# Scale to 3 replicas (3 x 50 = 150 backend connections)
kubectl -n pgagroal scale deployment/pgagroal --replicas=3
```
Common adjustments
| Change | How |
|---|---|
| Backend address | Change PG_BACKEND_HOST to your PostgreSQL Service or RDS endpoint |
| Pool size | Change MAX_CONNECTIONS and verify total across replicas |
| Credentials | Add PG_USERNAME and PG_PASSWORD from a Secret via envFrom |
| Resource limits | Increase CPU if running >100 connections per replica |
| Expose externally | Change Service type to LoadBalancer — but prefer ClusterIP with application-level access |
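As a sketch of the credentials row: create a Secret and inject its keys as environment variables. The key names PG_USERNAME and PG_PASSWORD come from the table above; the values here are placeholders, and this assumes the image reads credentials from those variables:

```shell
# Create a Secret holding placeholder pooler credentials
kubectl -n pgagroal create secret generic pgagroal-credentials \
  --from-literal=PG_USERNAME=app \
  --from-literal=PG_PASSWORD=changeme

# Inject every key of the Secret as an environment variable on the Deployment
kubectl -n pgagroal set env deployment/pgagroal \
  --from=secret/pgagroal-credentials
```

`kubectl set env --from=secret/...` triggers a rolling restart, so the new credentials take effect without editing the manifest.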
See also: Docker Compose example for local development, or Kubernetes (Helm) for chart-based deployment.