Kubernetes Deployment Guide
This guide covers deploying Team Server on Kubernetes using Helm charts. Choose the method that best fits your infrastructure and database requirements.
Prerequisites
Before starting, ensure you have:
- Kubernetes cluster 1.19+ with RBAC enabled
- Helm 3.8+ installed and configured
- kubectl configured to access your cluster
- PostgreSQL database (external or deployed via Helm chart)
- Team Server configuration file (production.yaml)
- At least 2GB of available memory and 1 CPU core per pod
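A quick way to confirm the version minimums above is a small shell helper that compares version strings with sort -V (a sketch; the version_ge helper name is ours, and the commented live checks assume kubectl and helm are on PATH):

```shell
# version_ge A B -> success when version A >= version B (GNU sort -V compare)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the versions your tools report against the minimums above, e.g.:
version_ge "3.14.0" "3.8"  && echo "Helm version OK"
version_ge "1.29.0" "1.19" && echo "Kubernetes version OK"
# Live check (requires helm installed):
#   version_ge "$(helm version --template '{{.Version}}' | tr -d v)" "3.8"
```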
PostgreSQL Deployment Options
Team Server supports two PostgreSQL deployment scenarios:
- External PostgreSQL - Use an existing PostgreSQL database that you provide
- Helm-Deployed PostgreSQL - Let the Helm chart deploy and manage PostgreSQL for you
Method 1: Quick Start with Helm-Deployed PostgreSQL
This method deploys both Team Server and PostgreSQL using the Helm chart. Ideal for development, testing, or environments where you want the chart to manage the database.
Step 1: Create Namespace
kubectl create namespace endura-team-server
Step 2: Create Configuration File
Create your Team Server configuration file without TLS settings (TLS will be handled by Kubernetes ingress):
# production.yaml - No TLS configuration needed
logger:
  enable: true
  level: info
  format: compact
server:
  binding: 0.0.0.0
  port: 5150
  host: {{ get_env(name="SERVER_URL", default="http://localhost:5150") }}
  middlewares:
    cors:
      enable: true
      allow_origins:
        - "{{ get_env(name='BASE_URL', default='http://localhost:5150') }}"
database:
  uri: {{ get_env(name="DATABASE_URL") }}
  auto_migrate: true
settings:
  jwt:
    sensor:
      secret: "{{ get_env(name='JWT_SENSOR_SECRET') }}"
      expiration: 31557600 # 1 year in seconds
    user:
      secret: "{{ get_env(name='JWT_USER_SECRET') }}"
      expiration: 604800 # 7 days in seconds
  tenant:
    name: {{ get_env(name="TENANT_NAME", default="<your_organization_name>") }}
    base_url: "{{ get_env(name='BASE_URL', default='http://localhost:5150') }}"
  # OIDC Authentication: One of Microsoft, Google, Oracle, or CyberArk
  # oidc:
  #   google:
  #     client_id: "{{ get_env(name='GOOGLE_OIDC_CLIENT_ID') }}"
  #     client_secret: "{{ get_env(name='GOOGLE_OIDC_CLIENT_SECRET') }}"
  #     issuer_url: "https://accounts.google.com"
  #     redirect_url: "{{ get_env(name='BASE_URL') }}/authentication/google/callback"
Step 3: Create ConfigMap
kubectl create configmap endura-team-server-config \
  -n endura-team-server \
  --from-file=production.yaml=production.yaml
Step 4: Create Secrets
# Generate secure random secrets (32+ characters each)
export JWT_SENSOR_SECRET=$(openssl rand -hex 32)
export JWT_USER_SECRET=$(openssl rand -hex 32)
export POSTGRES_PASSWORD=$(openssl rand -hex 16)
kubectl create secret generic endura-team-server-secrets \
  -n endura-team-server \
  --from-literal=JWT_SENSOR_SECRET="$JWT_SENSOR_SECRET" \
  --from-literal=JWT_USER_SECRET="$JWT_USER_SECRET" \
  --from-literal=SERVER_URL="http://localhost:5150" \
  --from-literal=BASE_URL="http://localhost:5150" \
  --from-literal=TENANT_NAME="<your_organization_name>"
Step 5: Deploy with Helm
# Install Team Server
helm install endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  --set postgresql.enabled=true \
  --set postgresql.auth.database="endura" \
  --set postgresql.auth.username="endura" \
  --set postgresql.auth.password="$POSTGRES_PASSWORD" \
  --set postgresql.auth.postgresPassword="$POSTGRES_PASSWORD"
Step 6: Verify Deployment
# Check deployment status
kubectl get deployments -n endura-team-server
# Wait for pods to be ready
kubectl wait --for=condition=Available deployment/endura-team-server -n endura-team-server --timeout=300s
# Check pod status
kubectl get pods -n endura-team-server
# Verify health
kubectl port-forward -n endura-team-server svc/endura-team-server 5150:5150 &
curl http://localhost:5150/_health
Method 2: Production Deployment with External PostgreSQL
This method uses an existing PostgreSQL database that you provide. Recommended for production environments.
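Passwords containing reserved URL characters (@, :, /, and friends) will break the DATABASE_URL assembled in Step 5 unless they are percent-encoded first. A minimal sketch (the urlencode helper is ours, not part of Team Server; the host and credentials are placeholders):

```shell
# Percent-encode everything outside the URL-safe unreserved set
urlencode() {
  local s="$1" out="" c i=0
  while [ "$i" -lt "${#s}" ]; do
    c="${s:$i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out="$out$c" ;;
      *) out="$out$(printf '%%%02X' "'$c")" ;;
    esac
    i=$((i + 1))
  done
  printf '%s' "$out"
}

# A password like p@ss:word must become p%40ss%3Aword inside the URL
DATABASE_URL="postgresql://endura_app:$(urlencode 'p@ss:word')@db.example.com:5432/endura"
echo "$DATABASE_URL"
```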
Step 1: Prepare Database
Ensure your PostgreSQL database is accessible from the Kubernetes cluster and create the required database:
CREATE DATABASE endura;
CREATE USER endura_app WITH ENCRYPTED PASSWORD '<your_secure_password>';
GRANT ALL PRIVILEGES ON DATABASE endura TO endura_app;
Step 2: Create Namespace
kubectl create namespace endura-team-server
Step 3: Create Configuration File
Create the same production.yaml as in Method 1 (without TLS settings).
Step 4: Create ConfigMap
kubectl create configmap endura-team-server-config \
  -n endura-team-server \
  --from-file=production.yaml=production.yaml
Step 5: Create Secrets with Database URL
# Set your database connection details
export POSTGRES_HOST="<your-postgres-host.example.com>"
export POSTGRES_PORT="5432"
export POSTGRES_DB="endura"
export POSTGRES_USER="endura_app"
export POSTGRES_PASSWORD="<your_secure_password>"
# Generate application secrets
export JWT_SENSOR_SECRET=$(openssl rand -hex 32)
export JWT_USER_SECRET=$(openssl rand -hex 32)
kubectl create secret generic endura-team-server-secrets \
  -n endura-team-server \
  --from-literal=DATABASE_URL="postgresql://$POSTGRES_USER:$POSTGRES_PASSWORD@$POSTGRES_HOST:$POSTGRES_PORT/$POSTGRES_DB" \
  --from-literal=JWT_SENSOR_SECRET="$JWT_SENSOR_SECRET" \
  --from-literal=JWT_USER_SECRET="$JWT_USER_SECRET" \
  --from-literal=SERVER_URL="https://your-domain.com" \
  --from-literal=BASE_URL="https://your-domain.com" \
  --from-literal=TENANT_NAME="<your_organization_name>"
Step 6: Deploy with Helm
helm install endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  --set postgresql.enabled=false \
  --set service.type=ClusterIP \
  --set ingress.enabled=true \
  --set ingress.className="nginx" \
  --set ingress.hosts[0].host="your-domain.com" \
  --set ingress.hosts[0].paths[0].path="/" \
  --set ingress.hosts[0].paths[0].pathType="Prefix"
TLS Configuration Scenarios
Scenario 1: Direct TLS Configuration
When you want to use your own TLS certificates directly in the Team Server application:
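Mismatched certificate and key files are a common cause of TLS startup failures. Before loading a pair into the cluster, you can confirm the two belong together by comparing their moduli (a sketch using a throwaway self-signed pair; assumes openssl is on PATH, filenames are examples):

```shell
# Generate a throwaway pair just to demonstrate the check
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
  -days 1 -nodes -subj "/CN=demo" 2>/dev/null

# A certificate and key match when their modulus digests are identical
cert_digest=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_digest=$(openssl rsa  -noout -modulus -in demo.key | openssl md5)

[ "$cert_digest" = "$key_digest" ] && echo "certificate and key match"
rm -f demo.key demo.crt
```

Run the same two digest commands against your real server.pem and server-key.pem before Step 2.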
Step 1: Prepare TLS Certificates
Create your TLS certificate files (or use existing ones):
# Example: Create self-signed certificates for testing
openssl req -x509 \
  -newkey rsa:4096 \
  -keyout server.key \
  -out server.crt \
  -days 365 \
  -nodes \
  -subj "/C=US/ST=State/L=City/O=MyOrg/CN=team-server.your-domain.com"
# Convert to PEM format if needed
cp server.crt server.pem
cp server.key server-key.pem
Step 2: Create Certificate ConfigMap
kubectl create configmap endura-team-server-certs \
  -n endura-team-server \
  --from-file=server.pem=server.pem \
  --from-file=server.key=server-key.pem
Step 3: Update Configuration
Update your production.yaml to include TLS configuration:
# production.yaml - Add TLS configuration
logger:
  enable: true
  level: info
  format: compact
server:
  binding: 0.0.0.0
  port: 5150
  host: {{ get_env(name="SERVER_URL", default="https://team-server.your-domain.com:5150") }}
  middlewares:
    cors:
      enable: true
      allow_origins:
        - "{{ get_env(name='BASE_URL', default='https://team-server.your-domain.com:5150') }}"
database:
  uri: {{ get_env(name="DATABASE_URL") }}
  auto_migrate: true
settings:
  jwt:
    sensor:
      secret: "{{ get_env(name='JWT_SENSOR_SECRET') }}"
      expiration: 31557600
    user:
      secret: "{{ get_env(name='JWT_USER_SECRET') }}"
      expiration: 604800
  tenant:
    name: {{ get_env(name="TENANT_NAME", default="<your_organization_name>") }}
    base_url: "{{ get_env(name='BASE_URL', default='https://team-server.your-domain.com:5150') }}"
  tls:
    certificate: "config/certs/server.pem"
    private_key: "config/certs/server.key"
Step 4: Update ConfigMap
kubectl create configmap endura-team-server-config \
  -n endura-team-server \
  --from-file=production.yaml=production.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
Step 5: Update Secrets for HTTPS
kubectl patch secret endura-team-server-secrets \
  -n endura-team-server \
  --patch='{"data":{"SERVER_URL":"'$(echo -n "https://team-server.your-domain.com:5150" | base64 -w 0)'","BASE_URL":"'$(echo -n "https://team-server.your-domain.com:5150" | base64 -w 0)'"}}'
Step 6: Update Helm Deployment
Create a values file for TLS deployment:
# values-direct-tls.yaml
service:
  type: NodePort
  port: 5150
  nodePort: 30150
ingress:
  enabled: false # TLS handled by application directly
# Additional volume mount for certificates
extraVolumeMounts:
  - name: certs
    mountPath: /app/config/certs
    readOnly: true
extraVolumes:
  - name: certs
    configMap:
      name: endura-team-server-certs
Deploy with direct TLS configuration:
helm upgrade endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  --set image.tag=1.1.0 \
  -f values-direct-tls.yaml \
  --set postgresql.enabled=false
Scenario 2: cert-manager with Let’s Encrypt
When using cert-manager for automatic TLS certificate management:
# values-cert-manager.yaml
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  hosts:
    - host: team-server.your-domain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: team-server-tls
      hosts:
        - team-server.your-domain.com
Deploy with cert-manager configuration:
helm upgrade endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  --set image.tag=1.1.0 \
  -f values-cert-manager.yaml \
  --set postgresql.enabled=false
Update your secrets to use HTTPS URLs:
kubectl patch secret endura-team-server-secrets \
  -n endura-team-server \
  --patch='{"data":{"SERVER_URL":"'$(echo -n "https://team-server.your-domain.com" | base64 -w 0)'","BASE_URL":"'$(echo -n "https://team-server.your-domain.com" | base64 -w 0)'"}}'
Scenario 3: Service Mesh (Istio)
When using Istio service mesh for TLS termination:
# values-istio.yaml
service:
  type: ClusterIP
# Separate Istio Gateway and VirtualService
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: team-server-gateway
  namespace: endura-team-server
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: team-server-tls
      hosts:
        - team-server.your-domain.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: team-server
  namespace: endura-team-server
spec:
  hosts:
    - team-server.your-domain.com
  gateways:
    - team-server-gateway
  http:
    - route:
        - destination:
            host: endura-team-server.endura-team-server.svc.cluster.local
            port:
              number: 5150
Deploy with Istio configuration:
helm install endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  -f values-istio.yaml \
  --set ingress.enabled=false \
  --set postgresql.enabled=false
# Apply Istio resources
kubectl apply -f istio-gateway.yaml
Custom Values File
Create a values.yaml file for your production deployment:
# values-production.yaml
replicaCount: 3
image:
  repository: ghcr.io/endurasecurity/container/endura-team-server
  tag: "1.0.0"
  pullPolicy: IfNotPresent
resources:
  limits:
    cpu: 2000m
    memory: 4Gi
  requests:
    cpu: 1000m
    memory: 2Gi
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
service:
  type: ClusterIP
ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  hosts:
    - host: team-server.your-domain.com
      paths:
        - path: /
          pathType: Prefix
postgresql:
  enabled: false # Using external database
nodeSelector:
  node-type: application
tolerations:
  - key: "application"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - endura-team-server
          topologyKey: kubernetes.io/hostname
Deploy with custom values:
helm install endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  -f values-production.yaml
Initial Setup and Configuration
Access the Application
Port Forward (Development):
kubectl port-forward -n endura-team-server svc/endura-team-server 5150:5150
Open http://localhost:5150
Ingress (Production): Navigate to your configured domain: https://team-server.your-domain.com
NodePort (Testing):
export NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services endura-team-server -n endura-team-server)
echo "http://$NODE_IP:$NODE_PORT"
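Right after a deploy, the service may take a short while to answer. A small retry helper makes checking the health endpoint less flaky (a sketch; the wait_for helper is ours, and the commented curl line shows intended usage):

```shell
# wait_for N CMD... -> retry CMD up to N times, one second apart
wait_for() {
  local attempts="$1"; shift
  local i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to ~60s for the health endpoint via a port-forward
# wait_for 30 curl -fsS http://localhost:5150/_health
```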
Database Initialization
Team Server automatically runs database migrations on startup. Monitor the initialization:
kubectl logs -f deployment/endura-team-server -n endura-team-server | grep -i migration
Set Up Your First Administrator
When users first access Team Server, they are automatically assigned the Viewer role with read-only access. To manage Team Server, you need to promote at least one user to the Administrator role.
User Must Log In First
The user must access Team Server and complete authentication before you can change their role. This creates their user record in the database. If you run these commands before the user has logged in, they will fail because the email address will not be found.
Step 1: Log In to Team Server
Open your browser and navigate to your Team Server URL. Complete the authentication process using your configured OIDC provider. This creates your user record in the database.
Step 2: Get the User ID
Run the following command, replacing the email address with your own:
kubectl exec deployment/endura-team-server -n endura-team-server -- endura task user_get_id email:your-email@example.com
This command outputs a numeric user ID.
Step 3: Assign the Administrator Role
Run the following command, replacing <user_id> with the numeric ID from the previous step:
kubectl exec deployment/endura-team-server -n endura-team-server -- endura task user_set_role id:<user_id> role:administrator
Step 4: Verify the Role Change
Refresh your browser or log out and log back in to Team Server. You should now see an Administration menu item in the main navigation, confirming your Administrator role is active.
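The promotion flow above can be wrapped into one helper script (a sketch; the kubectl and endura invocations mirror Steps 2 and 3, and KUBECTL is overridable so the flow can be exercised without a live cluster):

```shell
KUBECTL="${KUBECTL:-kubectl}"

# promote_admin EMAIL -> look up the user ID, then grant the administrator role
promote_admin() {
  local email="$1" user_id
  user_id=$($KUBECTL exec deployment/endura-team-server -n endura-team-server -- \
    endura task user_get_id "email:$email") || return 1
  $KUBECTL exec deployment/endura-team-server -n endura-team-server -- \
    endura task user_set_role "id:$user_id" role:administrator
}

# Usage: promote_admin your-email@example.com
```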
Updating Team Server
Rolling Update
# Update to specific version
helm upgrade endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  --set image.tag=1.1.0 \
  --reuse-values
# Monitor rollout
kubectl rollout status deployment/endura-team-server -n endura-team-server
Configuration Updates
# Update ConfigMap
kubectl create configmap endura-team-server-config \
  -n endura-team-server \
  --from-file=production.yaml=production.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
# Restart deployment to pick up new config
kubectl rollout restart deployment/endura-team-server -n endura-team-server
Secrets Updates
# Update secrets
kubectl patch secret endura-team-server-secrets \
  -n endura-team-server \
  --patch='{"data":{"NEW_VAR":"'$(echo -n "new-value" | base64 -w 0)'"}}'
# Restart to pick up new secrets
kubectl rollout restart deployment/endura-team-server -n endura-team-server
Backup and Restore
Database Backup (Helm-Deployed PostgreSQL)
# Create backup
kubectl exec -n endura-team-server deployment/endura-team-server-postgresql -- \
  pg_dump -U endura endura > team-server-backup-$(date +%Y%m%d).sql
# Store backup securely
kubectl create configmap team-server-backup-$(date +%Y%m%d) \
  -n endura-team-server \
  --from-file=backup.sql=team-server-backup-$(date +%Y%m%d).sql
Database Restore
# Restore from backup
kubectl exec -i -n endura-team-server deployment/endura-team-server-postgresql -- \
  psql -U endura endura < team-server-backup-20261201.sql
Configuration Backup
# Backup all configurations
kubectl get configmap endura-team-server-config -n endura-team-server -o yaml > config-backup.yaml
kubectl get secret endura-team-server-secrets -n endura-team-server -o yaml > secrets-backup.yaml
Monitoring and Logs
View Logs
# All pods
kubectl logs -f -l app.kubernetes.io/name=endura-team-server -n endura-team-server
# Specific pod
kubectl logs -f deployment/endura-team-server -n endura-team-server
# Previous pod instance
kubectl logs deployment/endura-team-server -n endura-team-server --previous
Monitor Resource Usage
# Pod resource usage
kubectl top pods -n endura-team-server
# Node resource usage
kubectl top nodes
Health Checks
# Application health
kubectl exec -n endura-team-server deployment/endura-team-server -- \
  curl -f http://localhost:5150/_health
# Pod readiness
kubectl get pods -n endura-team-server -l app.kubernetes.io/name=endura-team-server
Prometheus Integration
Team Server exposes metrics for Prometheus scraping. Configure ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: endura-team-server
  namespace: endura-team-server
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: endura-team-server
  endpoints:
    - port: http
      path: /_health
      interval: 30s
Uninstalling Team Server
Helm Uninstall
# Remove Team Server deployment
helm uninstall endura-team-server -n endura-team-server
# Remove ConfigMaps and Secrets
kubectl delete configmap endura-team-server-config -n endura-team-server
kubectl delete secret endura-team-server-secrets -n endura-team-server
# Remove namespace (optional)
kubectl delete namespace endura-team-server
Data Cleanup (WARNING: Destructive)
# Remove PostgreSQL data (if using Helm-deployed PostgreSQL)
kubectl delete pvc -n endura-team-server -l app.kubernetes.io/component=database
# Remove all resources
kubectl delete namespace endura-team-server
Troubleshooting
Common Issues
Pod fails to start:
# Check pod events
kubectl describe pod -l app.kubernetes.io/name=endura-team-server -n endura-team-server
# Check logs for errors
kubectl logs -f deployment/endura-team-server -n endura-team-server
# Verify configuration
kubectl get configmap endura-team-server-config -n endura-team-server -o yaml
Image pull errors:
# Verify Helm chart values
helm get values endura-team-server -n endura-team-server
# Check container image access
kubectl get pods -n endura-team-server
# Test image access
kubectl run test-pull --image=ghcr.io/endurasecurity/container/endura-team-server:1.0.0 --rm -it --restart=Never -n endura-team-server
Database connection issues:
# Test database connectivity
kubectl run -it --rm debug --image=postgres:16-alpine --restart=Never -n endura-team-server -- \
  psql "postgresql://endura:password@endura-team-server-postgresql:5432/endura"
# Check PostgreSQL pod
kubectl logs -f deployment/endura-team-server-postgresql -n endura-team-server
Ingress not working:
# Check ingress status
kubectl get ingress -n endura-team-server
kubectl describe ingress endura-team-server -n endura-team-server
# Verify ingress controller
kubectl get pods -n ingress-nginx
Resource constraints:
# Check resource usage
kubectl top pods -n endura-team-server
kubectl describe nodes
# Adjust resource limits
helm upgrade endura-team-server oci://ghcr.io/endurasecurity/helm/endura-team-server \
  -n endura-team-server \
  --set resources.requests.memory=2Gi \
  --set resources.limits.memory=4Gi \
  --set image.tag=1.1.0 \
  --reuse-values
Getting Help
If you encounter issues:
- Check pod status and events: kubectl describe pod -l app.kubernetes.io/name=endura-team-server -n endura-team-server
- Review application logs: kubectl logs -f deployment/endura-team-server -n endura-team-server
- Verify configuration and secrets are properly mounted and formatted
- Check database connectivity and Helm chart values
- Verify Helm chart configuration and image references
For additional support, refer to the Configuration Guide and Integration Setup.
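Several commands in this guide hand-encode secret values with base64 before patching; a stray trailing newline (for example from echo without -n) is the most common formatting mistake. A quick local round-trip check (assumes GNU coreutils base64, as used elsewhere in this guide):

```shell
# Encode without a trailing newline, then decode and compare
value="https://team-server.your-domain.com"
encoded=$(printf '%s' "$value" | base64 -w 0)
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$value" ] && echo "round-trip OK"
```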
Security Best Practices
Pod Security
The Helm chart includes secure defaults:
- Non-root user execution
- Read-only root filesystem
- Dropped capabilities
- Security context enforcement
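In values terms, those defaults typically correspond to something like the following (a sketch; key names are illustrative, so consult the chart's values.yaml for the exact schema before overriding anything):

```yaml
podSecurityContext:
  runAsNonRoot: true
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
```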
Secrets Management
For production environments, consider:
- External Secrets Operator for integration with AWS Secrets Manager, Azure Key Vault, etc.
- Sealed Secrets for GitOps workflows
- Helm secrets for encrypted values files
Network Policies
Implement network policies to restrict pod communication:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: endura-team-server-netpol
  namespace: endura-team-server
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: endura-team-server
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 5150
  egress:
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/component: database
      ports:
        - protocol: TCP
          port: 5432
    - to: []
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 80
        - protocol: UDP
          port: 53