Conversation
/test
❗ By default, the pull request is configured to backport to all release branches.
@armru, here's the link to the E2E on CNPG workflow run: https://github.com/cloudnative-pg/cloudnative-pg/actions/runs/22718198409

@mnencia, this should probably be tested with a pg_hba config like the one here: https://github.com/cloudnative-pg/postgres-keycloak-oauth-validator?tab=readme-ov-file#example-cloudnativepg-configuration
There is a problem with the markdown code for

I created the following manifests:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: angus
spec:
  instances: 3
  podSelectorRefs:
    - name: marshall-pooler
      selector:
        matchLabels:
          cnpg.io/podRole: pooler
          cnpg.io/cluster: angus
          cnpg.io/poolerName: marshall
  postgresql:
    pg_hba:
      - "hostssl all all ${podselector:marshall-pooler} scram-sha-256"
      - "host all all all reject"
  storage:
    size: 1Gi
---
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: marshall
spec:
  cluster:
    name: angus
  instances: 3
  type: rw
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
      default_pool_size: "10"
```

The cluster defines the `${podselector:marshall-pooler}` reference in its `pg_hba` rules. I then create these two simple jobs with `pgbench`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: marshall-pgbench
spec:
  template:
    spec:
      containers:
        - args:
            - -i
          command:
            - pgbench
          env:
            - name: PGHOST
              value: marshall
            - name: PGDATABASE
              value: app
            - name: PGPORT
              value: "5432"
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  key: username
                  name: angus-app
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: angus-app
          image: ghcr.io/cloudnative-pg/postgresql:18.1-system-trixie
          name: pgbench
      restartPolicy: Never
---
apiVersion: batch/v1
kind: Job
metadata:
  name: angus-pgbench
spec:
  template:
    spec:
      containers:
        - args:
            - -i
          command:
            - pgbench
          env:
            - name: PGHOST
              value: angus-rw
            - name: PGDATABASE
              value: app
            - name: PGPORT
              value: "5432"
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  key: username
                  name: angus-app
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: angus-app
          image: ghcr.io/cloudnative-pg/postgresql:18.1-system-trixie
          name: pgbench
      restartPolicy: Never
```

This is the result:

```console
$ kubectl get pods
NAME                        READY   STATUS      RESTARTS   AGE
angus-1                     1/1     Running     0          17m
angus-2                     1/1     Running     0          17m
angus-3                     1/1     Running     0          14m
angus-pgbench-7q56c         0/1     Error       0          7s
csi-hostpathplugin-0        8/8     Running     0          6h21m
marshall-6796d674ff-czmkj   1/1     Running     0          17m
marshall-6796d674ff-gzmf9   1/1     Running     0          17m
marshall-6796d674ff-wmlwk   1/1     Running     0          17m
marshall-pgbench-blm7c      0/1     Completed   0          7s
$ kubectl logs jobs/angus-pgbench
Found 2 pods, using pod/angus-pgbench-7q56c
pgbench: error: connection to server at "angus-rw" (10.96.155.110), port 5432 failed: FATAL: pg_hba.conf rejects connection for host "10.244.1.20", user "app", database "app", SSL encryption
connection to server at "angus-rw" (10.96.155.110), port 5432 failed: FATAL: pg_hba.conf rejects connection for host "10.244.1.20", user "app", database "app", no encryption
pgbench: error: could not create connection for initialization
$ kubectl logs jobs/marshall-pgbench
dropping old tables...
creating tables...
generating data (client-side)...
100000 of 100000 tuples (100%) of pgbench_accounts done (elapsed 0.01 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 0.10 s (drop tables 0.03 s, create tables 0.00 s, client-side generate 0.03 s, vacuum 0.02 s, primary keys 0.01 s).
```

The job going through the pooler succeeds, while the direct connection is rejected by `pg_hba`.
Moreover, the cluster status reports the resolved pod IPs:

```yaml
podSelectorRefs:
  - ips:
      - 10.244.1.16
      - 10.244.2.16
      - 10.244.3.24
    name: pooler
```

If I delete a pgbouncer pod, a new one is recreated with a different IP and the status is immediately updated:

```yaml
podSelectorRefs:
  - ips:
      - 10.244.1.16
      - 10.244.2.20
      - 10.244.3.24
    name: pooler
```
podSelectorRefs for dynamic pg_hba address resolution
```console
postgres=# SELECT database, user_name, address, netmask
           FROM pg_catalog.pg_hba_file_rules
           WHERE auth_method = 'scram-sha-256' AND type = 'hostssl';
 database | user_name |   address   |     netmask
----------+-----------+-------------+-----------------
 {all}    | {all}     | 10.244.1.16 | 255.255.255.255
 {all}    | {all}     | 10.244.2.20 | 255.255.255.255
 {all}    | {all}     | 10.244.3.24 | 255.255.255.255
(3 rows)
```
Although I have not tested the validator, I have introduced the row in the `pg_hba` configuration.
The dual-stack issue should be addressed globally, as it is not tied to this specific feature (as far as I understand). There is an ongoing discussion about this (#7116), but my position remains the same: all things considered, I believe there are more critical areas for the project to focus on at this stage, particularly technical debt.

I believe the feature, as it stands, provides significant value and should be considered a major feature for version 1.29. However, it would be great to get your review if you have time.
Force-pushed b091b17 to 1589982
/test
@mnencia, here's the link to the E2E on CNPG workflow run: https://github.com/cloudnative-pg/cloudnative-pg/actions/runs/22861590448
Enable dynamic pod IP resolution in pg_hba rules via named label
selectors. Users can define podSelectorRefs with label selectors
that match application pods, then reference them in pg_hba rules
using ${podselector:<name>} syntax. The operator resolves matching
pod IPs and the instance manager expands ${podselector:<name>}
tokens into one pg_hba line per IP with /32 (IPv4) or /128 (IPv6)
masks.
Signed-off-by: Armando Ruocco <[email protected]>
Co-authored-by: Leonardo Cecchi <[email protected]>
Signed-off-by: Gabriele Bartolini <[email protected]>
Signed-off-by: Marco Nenciarini <[email protected]>
Add list-type markers to status field for consistent SSA behavior, detect duplicate selector names in the webhook, fix documentation inaccuracies, and clarify code comments.
Signed-off-by: Marco Nenciarini <[email protected]>
Add tests for dual-stack pods, multiple selectors, pending pods without IPs, label and deletion timestamp predicate transitions, mixed IPv4/IPv6 HBA expansion, duplicate names, invalid label selectors, and multiple podselector references per line.
Assisted-by: Claude Opus 4.6
Signed-off-by: Marco Nenciarini <[email protected]>
Replace `ensureCIDR` with `hostCIDR` using `net.IPNet` for proper CIDR formatting and IPv6 address normalization.
Signed-off-by: Marco Nenciarini <[email protected]>
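For reference, a `net.IPNet`-based host-CIDR helper along the lines this commit describes might look like the sketch below. This is illustrative only, not the project's actual implementation; using `net.IPNet` gets both the per-family mask length and canonical IPv6 formatting (shorthand forms are normalized) from the standard library.

```go
package main

import (
	"fmt"
	"net"
)

// hostCIDR formats an IP string as a host-only network in CIDR notation:
// /32 for IPv4 and /128 for IPv6. net.IPNet.String() handles canonical
// rendering, so non-normalized IPv6 input comes out in compressed form.
// Returns "" for unparseable input.
func hostCIDR(ipStr string) string {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return ""
	}
	bits := 128
	if v4 := ip.To4(); v4 != nil {
		ip = v4
		bits = 32
	}
	n := net.IPNet{IP: ip, Mask: net.CIDRMask(bits, bits)}
	return n.String()
}

func main() {
	fmt.Println(hostCIDR("10.244.1.16"))     // 10.244.1.16/32
	fmt.Println(hostCIDR("fd00:0:0::0024")) // fd00::24/128 (normalized)
}
```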
/test
@mnencia, here's the link to the E2E on CNPG workflow run: https://github.com/cloudnative-pg/cloudnative-pg/actions/runs/22873766938
Clarify actor attribution between operator and instance manager, remove duplicate zero-matches bullet already covered by the warning admonition, tone down "real-time" wording, drop unrelated max_worker_processes from the sample, and remove dead `slices.Sort` call from the e2e test.
Signed-off-by: Marco Nenciarini <[email protected]>
Introduces a declarative way to manage `pg_hba` rules by resolving pod label selectors into IP addresses, eliminating the need for static CIDR ranges or manual updates when client pods restart.

Users define named label selectors in `.spec.podSelectorRefs` and reference them with `${podselector:NAME}` in `pg_hba` address fields. The operator resolves matching pod IPs and the instance manager expands each reference into individual `/32` or `/128` entries at render time, with automatic reload on pod lifecycle changes.

Closes #10087