[bitnami/postgresql-repmgr] Cannot specify a custom pg_hba.conf #1048

Closed
Scharfenberg opened this issue May 25, 2020 · 19 comments · May be fixed by #73570

Comments

@Scharfenberg

Scharfenberg commented May 25, 2020

Description

I'm trying to specify a custom pg_hba.conf file for my postgresql-repmgr cluster, which consists of two nodes in a streaming replication setup.
Unfortunately I cannot use the bind mount approach from the README as I'm using docker swarm.
Therefore I've tried other approaches -- all without success.

  1. Mounting the pg_hba.conf file to /bitnami/repmgr/conf/pg_hba.conf using the docker swarm config mechanism.
  2. Mounting /bitnami/repmgr/conf as a volume with the nfs driver.
  3. As a third approach I've also tried to use docker-compose just as shown in the README, again without success.

Approach 1
The excerpt from my compose file:

[...]
services:
  pg-0:
    image: bitnami/postgresql-repmgr:12.3.0
[...]
    configs:
    - source: pg_hba.conf
      target: /bitnami/repmgr/conf/pg_hba.conf
      uid: "1001"
      gid: "0"
      mode: 0774
[...]
configs:
  pg_hba.conf:
    file: pg_hba.conf
[...]

The resulting error message (from my log server):

"2020-05-25T10:58:02.049Z","t460s-dockerswarm-2","[38;5;6mrepmgr �[38;5;5m10:58:02.04 �[0m�[38;5;2mINFO �[0m ==> Preparing PostgreSQL configuration..."
"2020-05-25T10:58:02.155Z","t460s-dockerswarm-2","[38;5;6mpostgresql �[38;5;5m10:58:02.15 �[0m�[38;5;2mINFO �[0m ==> Stopping PostgreSQL..."
"2020-05-25T10:58:02.152Z","t460s-dockerswarm-2","cp: cannot create regular file '/bitnami/postgresql/conf/pg_hba.conf': Permission denied"
"2020-05-25T10:58:02.034Z","t460s-dockerswarm-2","[38;5;6mrepmgr �[38;5;5m10:58:02.03 �[0m�[38;5;2mINFO �[0m ==> There are no nodes with primary role. Assuming the primary role..."
"2020-05-25T10:58:02.056Z","t460s-dockerswarm-2","[38;5;6mpostgresql �[38;5;5m10:58:02.05 �[0m�[38;5;2mINFO �[0m ==> postgresql.conf file not detected. Generating it..."

Notes:

  • This approach actually works if I first deploy my service(s) without a custom pg_hba.conf, then update my compose file with the pg_hba.conf file and redeploy. Of course this procedure is very inconvenient -- at least in dev/test scenarios. Maybe it could be a way to go in production, as services are deployed once and only redeployed afterwards.
  • When adding user: root to my services to circumvent issues due to the non-root container, I encounter the same password authentication issue as in approach 2.

Approach 2
The excerpt from my compose file:

[...]
services:
  pg-0:
    image: bitnami/postgresql-repmgr:12.3.0
[...]
    volumes:
    - pg-primary-vol:/bitnami/postgresql
    - pg-config-vol:/bitnami/repmgr/conf/
[...]
volumes:
  pg-primary-vol:
  pg-config-vol:
    driver: local
    driver_opts:
      type: "nfs"
      o: "nfsvers=4,addr=192.168.137.110,rw"
      device: ":/mnt/storage1/postgresql/conf"
[...]

The resulting error message (from docker service logs):

postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | [2020-05-25 11:33:45] [NOTICE] repmgrd (repmgrd 5.1.0) starting up
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | [2020-05-25 11:33:45] [INFO] connecting to database "user=repmgr password=repmgr host=pg-0 dbname=repmgr port=5432 connect_timeout=5"
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | [2020-05-25 11:33:45] [DEBUG] connecting to: "user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr"
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | [2020-05-25 11:33:45] [ERROR] connection to database failed
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | [2020-05-25 11:33:45] [DETAIL]
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | FATAL:  password authentication failed for user "repmgr"
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    |
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    | [2020-05-25 11:33:45] [DETAIL] attempted to connect using:
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2    |   user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr

Notes:

  • For testing purposes I've copied the pg_hba.conf from a running version of the container (i.e. one without a custom pg_hba.conf) to my NFS conf directory.
  • Mounting an empty conf directory does not result in this error.
  • Adding the files postgresql.conf and repmgr.conf (from a running version of the container) to the conf dir does not help.
  • From a practical point of view I consider this approach very inconvenient, as it implies copying config files from my project directory to some place in the filesystem that I have to mount first.

Approach 3
I've taken the example docker-compose.yml file and added this bind mount to both postgres services:

    volumes:
      - ./conf:/bitnami/repmgr/conf/

Of course I've also set the correct permissions on the host folder and its content.
As long as the folder is empty everything works fine. As soon as I add the pg_hba.conf, the start of the primary container fails with exit code 2 on the first run:

[...]
pg-0_1  | postgresql 20:17:35.41 INFO  ==> Initializing PostgreSQL database...
pg-0_1  | postgresql 20:17:35.41 INFO  ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
pg-0_1  | postgresql 20:17:35.42 INFO  ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
pg-0_1  | postgresql 20:17:36.89 INFO  ==> Starting PostgreSQL in background...
pg-0_1  | postgresql 20:17:37.03 INFO  ==> Changing password of postgres
pg-0_1  | postgresql 20:17:37.05 INFO  ==> Stopping PostgreSQL...
postgres_pg-0_1 exited with code 2

On the second and each subsequent run (without deleting my volumes) I get this already known error:

[...]
pg-0_1  | postgresql 20:17:55.19 INFO  ==> Deploying PostgreSQL with persisted data...
pg-1_1  | repmgr 20:17:55.20 INFO  ==> Preparing repmgr configuration...
pg-1_1  | repmgr 20:17:55.21 INFO  ==> Initializing Repmgr...
pg-1_1  | repmgr 20:17:55.22 INFO  ==> Waiting for primary node...
pg-0_1  | postgresql 20:17:55.22 INFO  ==> Stopping PostgreSQL...
pg-0_1  | postgresql-repmgr 20:17:55.23 INFO  ==> ** PostgreSQL with Replication Manager setup finished! **
pg-0_1  |
pg-0_1  | postgresql 20:17:55.30 INFO  ==> Starting PostgreSQL in background...
pg-0_1  | postgresql-repmgr 20:17:55.43 INFO  ==> ** Starting repmgrd **
pg-0_1  | [2020-05-25 20:17:55] [NOTICE] repmgrd (repmgrd 5.1.0) starting up
pg-0_1  | [2020-05-25 20:17:55] [INFO] connecting to database "user=repmgr password=repmgr host=pg-0 dbname=repmgr port=5432 connect_timeout=5"
pg-0_1  | [2020-05-25 20:17:55] [DEBUG] connecting to: "user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr"
pg-0_1  | [2020-05-25 20:17:55] [ERROR] connection to database failed
pg-0_1  | [2020-05-25 20:17:55] [DETAIL]
pg-0_1  | FATAL:  password authentication failed for user "repmgr"
pg-0_1  |
pg-0_1  | [2020-05-25 20:17:55] [DETAIL] attempted to connect using:
pg-0_1  |   user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr
postgres_pg-0_1 exited with code 6
pg-1_1  | postgresql 20:19:56.06 INFO  ==> Stopping PostgreSQL...
postgres_pg-1_1 exited with code 1

General steps to reproduce the issues in approach 1 and 2
When modifying and testing my docker-compose.yml file I use these steps to have a clean setup each time:

  1. docker stack rm postgres
  2. docker volume prune on all involved docker nodes
  3. update docker-compose.yml (e.g. add the volume in approach 2)
  4. docker stack deploy --compose-file docker-compose.yml postgres

The full docker-compose.yml file used in approach 1 and 2
(currently approach 2 is active; the approach 1 elements are commented out):

---
version: "3.8"

services:
  pg-0:
    image: bitnami/postgresql-repmgr:12.3.0
    environment:
    - POSTGRESQL_PASSWORD_FILE=/run/secrets/postgres_password
    - REPMGR_PARTNER_NODES=pg-0,pg-1
    - REPMGR_NODE_NAME=pg-0
    - REPMGR_NODE_NETWORK_NAME=pg-0
    - REPMGR_PRIMARY_HOST=pg-0
    - REPMGR_PASSWORD_FILE=/run/secrets/repmgr_password
    - REPMGR_LOG_LEVEL=DEBUG
    volumes:
    - pg-primary-vol:/bitnami/postgresql
    - pg-config-vol:/bitnami/repmgr/conf/
    - type: tmpfs
      target: /dev/shm
      tmpfs:
        size: 256000000
    ports:
    - "5432:5432"
    networks:
    - application-net
    deploy:
      placement:
        constraints:
        - node.labels.type == primary
        - node.role == worker
          #endpoint_mode: dnsrr
    configs:
    - source: additional-postgresql.conf
      target: /bitnami/postgresql/conf/conf.d/additional-postgresql.conf
      #- source: pg_hba.conf
      #  target: /bitnami/repmgr/conf/pg_hba.conf
      #  uid: "1001"
      #  gid: "0"
      #  mode: 0774
    secrets:
    - postgres_password
    - repmgr_password
      #logging:
      #  driver: gelf
      #  options:
      #    gelf-address: 'tcp://192.168.137.101:12201'

  pg-1:
    image: bitnami/postgresql-repmgr:12.3.0
    environment:
    - POSTGRESQL_PASSWORD_FILE=/run/secrets/postgres_password
    - REPMGR_PARTNER_NODES=pg-0,pg-1
    - REPMGR_NODE_NAME=pg-1
    - REPMGR_NODE_NETWORK_NAME=pg-1
    - REPMGR_PRIMARY_HOST=pg-0
    - REPMGR_PASSWORD_FILE=/run/secrets/repmgr_password
    volumes:
    - pg-replica-vol:/bitnami/postgresql
    - pg-config-vol:/bitnami/repmgr/conf/
    - type: tmpfs
      target: /dev/shm
      tmpfs:
        size: 256000000
    ports:
    - "5433:5432"
    networks:
    - application-net
    deploy:
      placement:
        constraints:
        - node.labels.type != primary
        - node.role == worker
          #endpoint_mode: dnsrr
    configs:
    - source: additional-postgresql.conf
      target: /bitnami/postgresql/conf/conf.d/additional-postgresql.conf
      #- source: pg_hba.conf
      #  target: /bitnami/repmgr/conf/pg_hba.conf
      #  uid: "1001"
      #  gid: "0"
      #  mode: 0774
    secrets:
    - postgres_password
    - repmgr_password
      #logging:
      #  driver: gelf
      #  options:
      #    gelf-address: 'tcp://192.168.137.101:12201'

networks:
  application-net:
    driver: overlay
    driver_opts:
      encrypted: "true"

volumes:
  pg-primary-vol:
  pg-replica-vol:
  pg-config-vol:
    driver: local
    driver_opts:
      type: "nfs"
      o: "nfsvers=4,addr=192.168.137.110,rw"
      device: ":/mnt/storage1/postgresql/conf"

configs:
  additional-postgresql.conf:
    file: additional-postgresql.conf
    name: additional-postgresql.conf-${ADDITIONAL_POSTGRES_CONF}
  pg_hba.conf:
    file: pg_hba.conf
    name: pg_hba.conf-${PG_HBA_CONF}

secrets:
  postgres_password:
    external: true
  repmgr_password:
    external: true

Output of docker version

Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b7f0
 Built:             Wed Mar 11 01:25:56 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b7f0
  Built:            Wed Mar 11 01:24:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker info

Client:
 Debug Mode: false

Server:
 Containers: 3
  Running: 1
  Paused: 0
  Stopped: 2
 Images: 8
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: t1hnu0591w01txen8chuue8y0
  Is Manager: true
  ClusterID: lv4nsc1znjt3nvuam6sr4jgt7
  Managers: 1
  Nodes: 3
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.137.101
  Manager Addresses:
   192.168.137.101:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-9-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 1.886GiB
 Name: t460s-dockerswarm-1
 ID: HZD6:A6CE:KFV6:7YME:DOSW:QUPB:NVVH:CZBC:3KWJ:BMHO:EY6S:JMHF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
@Scharfenberg Scharfenberg changed the title Cannot specify a custom pg_hba.conf on docker swarm Cannot specify a custom pg_hba.conf May 25, 2020
@Scharfenberg
Author

Scharfenberg commented May 25, 2020

Some code digging improved my understanding...

The password authentication error is a follow-up error that appears after a first docker run has failed. This is what I noticed with Approach 3, and the same is true for Approach 1 (it's harder to observe with docker swarm, as the container is restarted over and over again, so it's easy to miss the first underlying error).
Explanation: on the first run postgresql already creates some files in its data directory, but the container setup is interrupted before the repmgr database user is created. On the second run the creation of the database user is skipped, as the presence of files in the postgresql folder is interpreted as "setup complete".

The underlying error is the same in all approaches, but it's much easier to see with docker-compose, as the failing containers are not restarted over and over again:

pg-1_1  | repmgr 21:15:41.04 INFO  ==> Initializing Repmgr...
pg-1_1  | repmgr 21:15:41.05 INFO  ==> Waiting for primary node...
pg-1_1  | repmgr 21:15:41.05 DEBUG ==> Wait for schema repmgr.repmgr on 'pg-0:5432', will try 6 times with 10 delay seconds (TIMEOUT=60)
pg-1_1  | repmgr 21:15:41.08 DEBUG ==> Host 'pg-0:5432' is not accessible

I have no idea what the reason for this issue could be. It could be that the custom pg_hba.conf blocks the database access, but as mentioned before it's a copy of the default pg_hba.conf as created by the container when I do not specify my own file. It allows access from all IPs to all databases with all users, using md5.
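
The generated default is roughly of this shape (an illustrative sketch based on the description above, not an exact copy of the container's file):

# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    all             all             0.0.0.0/0               md5
host    all             all             ::/0                    md5
local   all             all                                     md5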

@Scharfenberg
Author

Changing all pg_hba.conf entries to 'trust' fixes the database access issue. It then works with approach 3, but with approach 1 I encounter a new issue.
Obviously the container setup script does some magic to the pg_hba.conf file and changes it at runtime. So it is probably not possible to specify a custom pg_hba.conf file that restricts access as far as possible.

@javsalgar
Contributor

Hi,

We've had some discussions about this. Currently, if you provide a pg_hba.conf, it will use it during the container initialization. However, for initializing the slaves, it needs some special pg_hba.conf permissions. When you add a custom pg_hba.conf, there could be some incompatibilities during that initialization time. Would it make sense to use the provided pg_hba.conf AFTER the initialization? One thing that you could try is to perform a first initialization of the cluster without providing a custom pg_hba.conf, and then, after having it initialized, restart all of the nodes but this time providing your configuration. Could that work?
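
A sketch of that two-step flow using the swarm commands already shown in this issue (illustrative; it assumes the pg_hba.conf config entry is initially commented out in docker-compose.yml, as in the compose file above):

# Step 1: deploy without the custom pg_hba.conf config and let the cluster initialize
docker stack deploy --compose-file docker-compose.yml postgres

# Step 2: uncomment the pg_hba.conf config entry in docker-compose.yml, then redeploy;
# swarm updates the running services in place
docker stack deploy --compose-file docker-compose.yml postgres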

@Scharfenberg
Author

Scharfenberg commented May 26, 2020

Before I answer your questions, @javsalgar, some more observations:

  • The startup scripts need exactly one 'trust' entry in the pg_hba.conf to work. The most restricted version I can come up with is:
    host postgres postgres samenet trust
    The scripts will then replace 'trust' with 'md5' (as they will do for any other 'trust' entry).
    This has two consequences:
  1. you have to allow remote access for the user 'postgres' -- normally I allow only local access for 'postgres' and create my own admin role instead.
  2. it is not possible to have 'trust' access at all.
  • The first issue I mentioned in approach 1 ("cp: cannot create regular file '/bitnami/postgresql/conf/pg_hba.conf': Permission denied") resulted from missing access rights in the config definition. I added them later -- as you can see in my compose file -- but they were missing during my first attempts, which produced this error.
  • The container does not work with user: root. That would fix the permission-denied issue in case I forget to specify the correct permissions for the config definition, but it causes the deployment to fail later on. There are no errors in the log; the containers just keep restarting over and over again.

@Scharfenberg
Author

Scharfenberg commented May 26, 2020

Coming back to your questions, @javsalgar:
I would greatly appreciate it if the startup scripts created their own pg_hba.conf during initialization and used the custom file only after initialization. That seems logical to me.
Having a two-step approach -- providing the custom pg_hba.conf only in a second configuration step -- is very inconvenient for me. Now that I know the requirement, I prefer to add this line from my last comment:
host postgres postgres samenet trust
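
A minimal sketch of such a pg_hba.conf (illustrative; only the first entry is the one required by the startup scripts as described above, the remaining lines are an assumed restrictive layout):

# TYPE  DATABASE        USER            ADDRESS        METHOD
host    postgres        postgres        samenet        trust    # rewritten to 'md5' by the startup scripts
host    replication     repmgr          samenet        md5
host    repmgr          repmgr          samenet        md5
host    all             all             samenet        md5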

@javsalgar
Contributor

Hi,

Great, thank you very much for your input. I would like to bring this up with the rest of the team so that the custom configuration files are used only after running the initialization. I presume this will require some discussion and analysis, so it will take a while before it is implemented. However, as soon as there is more news, I will update this ticket.

@chriskearns

I have had the same issues as @Scharfenberg when trying to add a custom pg_hba.conf file. I have used the two-step workaround that @javsalgar suggested, and it works for me, but only after I made some logic changes to the code.

In rootfs/opt/bitnami/scripts/libpostgresql.sh, I restructured line 556

is_boolean_yes "$POSTGRESQL_ALLOW_REMOTE_CONNECTIONS" && is_boolean_yes "$create_pghba_file" && postgresql_create_pghba && postgresql_allow_local_connection
to this:
if is_boolean_yes "$POSTGRESQL_ALLOW_REMOTE_CONNECTIONS"; then
  if is_boolean_yes "$create_pghba_file"; then
    postgresql_create_pghba && postgresql_allow_local_connection
  fi
fi
The reason is that the third and fourth clauses in the original code always get executed, as explained in https://stackoverflow.com/questions/3184164/what-is-the-bash-test-command-evaluation-order

@javsalgar
Contributor

Hi,

I checked the Stack Overflow case and I don't see the part where it says that the original && command evaluates all expressions. Actually, the answer says that && properly short-circuits. Could you clarify and show us an example where you see all the elements being executed?

@chriskearns

In the first answer in the stack overflow case, you see:
"It turns out that it's not short-circuited in that case...You can see in both those cases that it continues to interpret the expressions regardless of the state of the first expression."

I know that before I made the modification, the function postgresql_create_pghba was called in all cases, completely messing up my custom file in the container. The file on my mounted volume was OK, but the file passed as a parameter to the postgresql process was incorrect.

@javsalgar
Contributor

Hi,

This is strange, I have not been able to reproduce it. What is more, further down the question says: "So. bottom line: it appears that all sub-expressions are evaluated when using -a and -o with test or [." We are not using -a or -o but && and ||. Do you have an example of values that reproduces the issue?
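
For reference, a minimal shell check of the short-circuit behaviour in question (illustrative only, not taken from the Bitnami scripts):

#!/usr/bin/env bash
# A chained && stops at the first command that returns non-zero.
check_remote() { echo "check_remote ran"; return 1; }  # simulate the first condition failing
check_pghba()  { echo "check_pghba ran";  return 0; }
create_pghba() { echo "create_pghba ran"; }

check_remote && check_pghba && create_pghba
# Prints only "check_remote ran"; the remaining commands are never executed.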

@chriskearns

I have tried to reproduce my results with only the change listed above, and it had no effect. I realize that I had made more changes than that one, so I think you're correct.

I think the most significant thing for getting a custom pg_hba.conf to work with 'trust' authentication is to set the variable REPMGR_PGHBA_TRUST_ALL. This stops the postgresql_restrict_pghba function from rewriting 'trust' to 'md5' in the running pg_hba.conf.
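
For example, in the compose format used earlier in this issue (a sketch; only the relevant lines are shown):

services:
  pg-0:
    image: bitnami/postgresql-repmgr:12.3.0
    environment:
    - REPMGR_PGHBA_TRUST_ALL=yes  # skips postgresql_restrict_pghba, so 'trust' entries are kept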

BTW, I think the conditions for executing the call to postgresql_restrict_pghba in librepmgr.sh are too simple. They ignore the fact that it could be a custom pg_hba.conf. For example, the call could be:
is_boolean_yes "$REPMGR_PGHBA_TRUST_ALL" || is_boolean_yes "$create_pghba_file" || postgresql_restrict_pghba
if create_pg_hba_file were visible in that library.

@javsalgar
Contributor

Hi,

I think we should re-evaluate how we deal with custom files. I'm convinced that they should not be considered at initialization time but only at runtime. However, that would be a major change and will require discussion.

@Octiee

Octiee commented Aug 31, 2020

Hi, I'm not sure if this is already being worked on, but similar to the reports above, I'm having an issue where I can't specify a single entry to be trusted, such as in the example below.

# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    all             all             127.0.0.1/32            trust
local   all             all                                     md5

As described above, all entries are changed to md5 (or to trust if REPMGR_PGHBA_TRUST_ALL=yes is used).

@javsalgar
Contributor

Hi,

I'm afraid it is still in our backlog. As soon as we have more news, we will update the ticket. Stay tuned!

@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Apr 16, 2021
@github-actions

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@javsalgar javsalgar added the on-hold Issues or Pull Requests with this label will never be considered stale label Apr 21, 2021
@javsalgar javsalgar reopened this Apr 21, 2021
@github-actions github-actions bot removed the stale 15 days without activity label Apr 22, 2021
@fmulero
Collaborator

fmulero commented Jul 28, 2022

We are going to transfer this issue to bitnami/containers

In order to unify the approaches followed in Bitnami containers and Bitnami charts, we are moving some issues from bitnami/bitnami-docker-<container> repositories to bitnami/containers.

Please follow bitnami/containers to stay updated about the latest Bitnami images.

More information here: https://blog.bitnami.com/2022/07/new-source-of-truth-bitnami-containers.html

@fmulero fmulero transferred this issue from another repository Jul 28, 2022
@bitnami-bot bitnami-bot added the triage Triage is needed label Jul 28, 2022
@fmulero fmulero changed the title Cannot specify a custom pg_hba.conf [bitnami/postgresql-repmgr] Cannot specify a custom pg_hba.conf Jul 28, 2022
@bitnami-bot bitnami-bot removed the triage Triage is needed label Jul 28, 2022
@carrodher
Member

Unfortunately, this issue was created over two years ago and, although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. This is not for a technical reason but rather a matter of capacity, since we're a small team.

That being said, contributions via PRs are more than welcome in both repositories (containers and charts), in case you would like to contribute.

During this time there have been several releases of this asset, and it's possible the issue has been resolved as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.

@carrodher carrodher closed this as not planned Won't fix, can't repro, duplicate, stale Oct 20, 2022
@github-actions github-actions bot added solved and removed on-hold Issues or Pull Requests with this label will never be considered stale labels Oct 20, 2022
@yukha-dw
Contributor

How about this change? 3cc1fc8

What it does is delay the custom pg_hba.conf injection until postgresql_initialize has been executed, at the same point where REPMGR_PGHBA_TRUST_ALL=no triggers the replacement of trust with md5 here:

if ! repmgr_is_file_external "pg_hba.conf"; then
    is_boolean_yes "$REPMGR_PGHBA_TRUST_ALL" || postgresql_restrict_pghba
fi

The pg_hba.conf below should work:

hostnossl    all            all         all             reject
hostssl      repmgr         repmgr      all             scram-sha-256
hostssl      replication    repmgr      all             scram-sha-256
hostssl      all            repmgr      all             scram-sha-256
hostssl      all            all         all             scram-sha-256
