graph TB
subgraph "AWS Cloud"
AppConfigAPI["AWS AppConfig API<br/>(StartConfigurationSession /<br/>GetLatestConfiguration)"]
end
subgraph "EKS Cluster (lower-eks-cluster-02 / g360-prod-core-01)"
subgraph "Node 1"
subgraph "AppConfig Agent DaemonSet"
Agent1["appconfig-agent<br/>public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x<br/>hostPort: 2772 | HTTP_HOST: 0.0.0.0<br/>ACCESS_TOKEN: ••••<br/>POLL_INTERVAL: 30s<br/>IRSA → appconfig:Start*/GetLatest*"]
end
subgraph "Pod: search-service-net"
Search["search-service-net<br/>(dotnet)"]
end
subgraph "Pod: payment-service-v2"
Payment["payment-service-v2<br/>(python)"]
CeleryW["celery-worker"]
end
subgraph "Pod: sso-service"
SSO["sso-service<br/>(ruby)"]
end
subgraph "Pod: groupsync-v3 (BFF)"
GS3["groupsync-v3<br/>(node/next.js)<br/>SSR fetches config server-side"]
end
DD1["Datadog Agent DaemonSet<br/>(existing pattern - hostPath socket)"]
end
subgraph "Node 2"
Agent2["appconfig-agent<br/>(same DaemonSet)"]
MorePods["... more service pods ..."]
end
subgraph "Node N"
AgentN["appconfig-agent<br/>(same DaemonSet)"]
MorePodsN["..."]
end
subgraph "Ingress Layer"
ALB["ALB<br/>(aws-load-balancer-controller)<br/>TLS 1.3 + WAFv2"]
NGINX["NGINX Plus (nginx-ext1)<br/>6 replicas | NodePort 30000/30001"]
end
end
Browser["Browser<br/>(JS Frontend Apps:<br/>dashboard, account,<br/>search_app, etc.)"]
%% External flow - BFF pattern
Browser -->|"HTTPS"| ALB
ALB -->|"HTTP :80"| NGINX
NGINX -->|"route to service"| GS3
%% BFF internal fetch
GS3 -->|"GET http://$NODE_IP:2772<br/>/applications/{app}/environments/{env}<br/>/configurations/{profile}"| Agent1
%% Internal service calls to agent
Search -->|"GET http://$NODE_IP:2772/..."| Agent1
Payment -->|"GET http://$NODE_IP:2772/..."| Agent1
CeleryW -->|"GET http://$NODE_IP:2772/..."| Agent1
SSO -->|"GET http://$NODE_IP:2772/..."| Agent1
%% Agent polls AWS
Agent1 -->|"polls every 30s<br/>(background)"| AppConfigAPI
Agent2 -->|"polls"| AppConfigAPI
AgentN -->|"polls"| AppConfigAPI
classDef daemonset fill:#f59e0b,stroke:#d97706,color:#000
classDef service fill:#3b82f6,stroke:#2563eb,color:#fff
classDef ingress fill:#8b5cf6,stroke:#7c3aed,color:#fff
classDef aws fill:#ff9900,stroke:#cc7a00,color:#000
classDef browser fill:#10b981,stroke:#059669,color:#fff
class Agent1,Agent2,AgentN,DD1 daemonset
class Search,Payment,CeleryW,SSO,GS3,MorePods,MorePodsN service
class ALB,NGINX ingress
class AppConfigAPI aws
class Browser browser
- Why: 2k+ pods in lower environments — sidecar per pod is untenable
- Pattern: Mirrors existing Datadog DaemonSet (hostPath socket)
- Access: Pods use the Downward API (`status.hostIP`) to reach the agent at `http://${NODE_IP}:2772`
- Browser apps (dashboard, account, search_app, etc.) never talk to the agent directly
- SSR/backend layer (groupsync-v3 or any Node/Next.js service behind NGINX) fetches config server-side
- Config is either embedded in rendered HTML (SSR) or exposed via an API endpoint
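The SSR embedding above can be sketched as follows. The real BFF is Node/Next.js; this is the same pattern in Python, and the function and variable names (`render_page`, `window.__APP_CONFIG__`) are hypothetical:

```python
import json

def render_page(config: dict, body: str = '<div id="app"></div>') -> str:
    """Inline server-fetched config into the SSR HTML so browser apps
    never call the node-local agent directly (hypothetical sketch)."""
    # Serialize the config the BFF fetched from the agent into a bootstrap script.
    bootstrap = f"<script>window.__APP_CONFIG__ = {json.dumps(config)};</script>"
    return f"<html><head>{bootstrap}</head><body>{body}</body></html>"
```

The alternative is a dedicated API endpoint on the BFF that returns the same payload as JSON; either way, the browser only ever talks to the BFF.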
```yaml
env:
  - name: APPCONFIG_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: APPCONFIG_AGENT_TOKEN
    valueFrom:
      secretKeyRef:
        name: appconfig-agent-token
        key: token
```

Services call: `GET http://${APPCONFIG_AGENT_HOST}:2772/applications/{app}/environments/{env}/configurations/{profile}`
with header: `Authorization: Bearer ${APPCONFIG_AGENT_TOKEN}`
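A minimal Python client for this call, as a sketch: the env var names match the pod spec above, the URL path is the AppConfig agent's local HTTP API, and the application/environment/profile values passed in would be service-specific:

```python
import os
import urllib.request

def agent_config_url(app: str, env: str, profile: str,
                     host: str = "", port: int = 2772) -> str:
    # Node-local agent address comes from the Downward API (status.hostIP).
    host = host or os.environ.get("APPCONFIG_AGENT_HOST", "localhost")
    return (f"http://{host}:{port}/applications/{app}"
            f"/environments/{env}/configurations/{profile}")

def fetch_config(app: str, env: str, profile: str) -> bytes:
    # Bearer token is required because the agent binds 0.0.0.0 on the node.
    req = urllib.request.Request(
        agent_config_url(app, env, profile),
        headers={"Authorization": f"Bearer {os.environ['APPCONFIG_AGENT_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read()
```

Because the agent caches and polls in the background, this GET is a fast local read; services can call it on each request or on a short timer rather than caching aggressively themselves.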
- Current: `appconfig-bootstrap-worker-1.0` init container pulls config at pod startup
- New: services fetch config on-demand via HTTP and get live updates without restarts
- Migration: remove the init container and the `appconfig-data` emptyDir from Helm templates
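A sketch of what gets deleted from the pod template; the container and volume names come from the notes above, while the mount path and surrounding structure are assumptions about the current charts:

```yaml
# Before (to be removed): init container materializes config into an emptyDir
initContainers:
  - name: appconfig-bootstrap-worker   # remove
    volumeMounts:
      - name: appconfig-data           # remove
        mountPath: /config             # assumed path
volumes:
  - name: appconfig-data               # remove
    emptyDir: {}
# After: no init container or volume; services call the node-local agent over HTTP
```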
- `ACCESS_TOKEN` required since the agent binds `0.0.0.0` on the node
- Token distributed to pods via a Kubernetes Secret
- IRSA for agent → AWS AppConfig API auth (`appconfig:StartConfigurationSession`, `appconfig:GetLatestConfiguration`)
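The IRSA policy for the agent's role might look like the following sketch; the two actions are from the notes above, but `Resource: "*"` is a placeholder and should be scoped to the real application/environment/profile ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "appconfig:StartConfigurationSession",
        "appconfig:GetLatestConfiguration"
      ],
      "Resource": "*"
    }
  ]
}
```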
ARD
The main difference is that the web apps would open a WebSocket connection to the BFF proxy, rather than receiving config only at render time.