Tuesday, 19 July 2022

POSTGRESQL DATABASE MAINTENANCE

Routine maintenance covers daily database backups, regular database "clean up" (VACUUM), and periodic log file management.
check_postgres can be used to monitor database health and report abnormal conditions.

1. Daily cleaning

A PostgreSQL database requires periodic cleaning and maintenance. For many installations, it is sufficient to let the autovacuum daemon perform this cleanup.

1.1. Basic knowledge of cleaning

The PostgreSQL VACUUM command must periodically process each table for several reasons:
1. Recover or reuse the disk space occupied by updated or deleted rows.
2. Update the data statistics used by the PostgreSQL query planner.
3. Update the visibility map, which speeds up index-only scans.
4. Protect old data from loss due to transaction ID wraparound or multixact ID wraparound.
There are two variants of VACUUM: standard VACUUM and VACUUM FULL.
VACUUM FULL can reclaim more disk space but runs much slower. It requires an exclusive lock on the table it works on, so it cannot run in parallel with other uses of that table.
The standard form of VACUUM can run in parallel with production database operations (SELECT, INSERT, UPDATE, and DELETE commands continue to work normally), but commands that need an
exclusive lock, such as ALTER TABLE, cannot run while the table is being vacuumed.
VACUUM generates a significant amount of I/O traffic, which can degrade the performance of other active sessions.
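In SQL, the two variants (and the combined analyze form) look like this; the table name here is just an example:

```sql
VACUUM article;                     -- standard form: runs alongside normal reads and writes
VACUUM (VERBOSE, ANALYZE) article;  -- standard vacuum plus a planner-statistics refresh
VACUUM FULL article;                -- rewrites the table; takes an exclusive lock
```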
-bash-4.2$ vacuumdb --help
vacuumdb cleans and analyzes a PostgreSQL database.

Usage:
vacuumdb [OPTION]… [DBNAME]

Options:
-a, --all                       vacuum all databases
-d, --dbname=DBNAME             database to vacuum
-e, --echo                      show the commands being sent to the server
-f, --full                      do full vacuuming
-F, --freeze                    freeze row transaction information
-j, --jobs=NUM                  use this many concurrent connections to vacuum
-q, --quiet                     don't write any messages
-t, --table='TABLE[(COLUMNS)]'  vacuum specific table(s) only
-v, --verbose                   write a lot of output
-V, --version                   output version information, then exit
-z, --analyze                   update optimizer statistics
-Z, --analyze-only              only update optimizer statistics; no vacuum
    --analyze-in-stages         only update optimizer statistics, in multiple
                                stages for faster results; no vacuum
-?, --help                      show this help, then exit

Connection options:
-h, --host=HOSTNAME        database server host or socket directory
-p, --port=PORT            database server port
-U, --username=USERNAME    user name to connect as
-w, --no-password          never prompt for password
-W, --password             force password prompt
    --maintenance-db=DBNAME  alternate maintenance database

Read the description of the SQL command VACUUM for details.

Report bugs to <pgsql-bugs@postgresql.org>.
-bash-4.2$ vacuumdb -d hq -f -z -v >> /tmp/vacuumdb.log
INFO: vacuuming “pg_catalog.pg_statistic”
INFO: “pg_statistic”: found 127 removable, 402 nonremovable row versions in 26 pages

1.2. Recover disk space

In PostgreSQL, an UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to support multi-version concurrency control (MVCC):
as long as the old version may still be visible to other transactions, it cannot be deleted. Eventually, however, no transaction retains any interest in an outdated or deleted row version.
The space it occupies must then be reclaimed for new rows, to avoid unbounded growth in disk space requirements. This is done by running VACUUM.
The standard form of VACUUM removes dead row versions in tables and indexes and marks the space as reusable in the future. However, it returns the space to the operating system only in the
exceptional case where one or more pages at the end of the table become entirely free and an exclusive table lock can be easily obtained.
In contrast, VACUUM FULL actively compacts the table by writing a completely new version of the table file with no dead space. This minimizes the size of the table, but it takes longer.
It also requires extra disk space for the new copy of the table until the operation finishes.
The general goal of routine cleanup is to do more standard VACUUM to avoid the need for VACUUM FULL.
Tip: When a table contains many dead row versions as a result of heavy update or delete activity, plain VACUUM may not be satisfactory.
If you have such a table and need to reclaim the excess disk space it occupies, use VACUUM FULL, or CLUSTER, or one of the table-rewriting variants of ALTER TABLE.
Tip: If the entire content of a table is deleted periodically, consider using TRUNCATE instead of DELETE followed by VACUUM.
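The second tip can be sketched like this, with a hypothetical staging table:

```sql
-- DELETE leaves dead row versions behind and needs a VACUUM afterwards:
DELETE FROM staging_events;
VACUUM staging_events;

-- TRUNCATE removes the entire contents at once and returns the disk
-- space to the operating system, no VACUUM required:
TRUNCATE staging_events;
```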

1.3. Update planner statistics

The PostgreSQL query planner relies on statistical information about table contents to generate good plans for queries.
These statistics are collected by the ANALYZE command, which can be called as an optional step of VACUUM in addition to being called directly.
It is important to have moderately accurate statistics, otherwise poor planning may degrade database performance.
If the autovacuum daemon is enabled, it automatically issues ANALYZE commands whenever the content of a table has changed sufficiently.
Tip: The autovacuum daemon does not issue ANALYZE commands for foreign tables, because it cannot determine an appropriate frequency for them.
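ANALYZE can also be run by hand, and the statistics views show when each table was last analyzed, whether manually or by autovacuum (the table name is illustrative):

```sql
ANALYZE article;  -- refresh planner statistics for one table

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;
```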

1.4. Update visibility map

The vacuum mechanism maintains a visibility map for each table, which tracks which pages contain only tuples that are known to be visible to all active transactions.
This serves two purposes.
First, vacuum itself can skip such pages on the next run, since nothing in them needs to be cleaned up.
Second, it allows PostgreSQL to answer some queries using only the index, without reference to the underlying table.
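How much of a table is covered by the visibility map can be estimated from pg_class; relallvisible is only updated by VACUUM, so treat this as an approximation:

```sql
SELECT relname,
       relpages,
       relallvisible,
       round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;
```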

1.5. Prevent transaction ID wraparound failures

PostgreSQL’s MVCC transaction semantics rely on being able to compare transaction ID (XID) numbers.
If the XID of the transaction that inserted a row version is greater than the current transaction's XID, that row version "belongs to the future" and should not be visible to the current transaction.
It is necessary to clean up every table in every database at least every 2 billion transactions.
The reason periodic cleanup solves the problem is that VACUUM marks rows as frozen, indicating that they were inserted by a transaction committed sufficiently far in the past
that, from an MVCC perspective, the inserting transaction is certain to be visible to all current and future transactions.
PostgreSQL reserves a special XID, FrozenTransactionId, which does not follow the comparison rules of ordinary XIDs and is always considered older than every ordinary XID.
Ordinary XIDs are compared using modulo-2^32 arithmetic.
VACUUM usually skips pages that do not contain any dead row versions, but does not skip pages that contain row versions with old XID values.
To ensure that all old row versions have been frozen, a scan of the entire table is needed. vacuum_freeze_table_age controls when VACUUM does this: a whole-table scan is forced if the table
has not been fully scanned for vacuum_freeze_table_age minus vacuum_freeze_min_age transactions. Setting this parameter to 0
forces VACUUM to always scan all pages, effectively ignoring the visibility map.
The datfrozenxid column of a database's pg_database row is a lower bound on the unfrozen XIDs appearing in that database; it is simply the minimum of the per-table relfrozenxid values within the database.

SELECT c.oid::regclass as table_name,
       greatest(age(c.relfrozenxid), age(t.relfrozenxid)) as age
FROM pg_class c
LEFT JOIN pg_class t ON c.reltoastrelid = t.oid
WHERE c.relkind IN ('r', 'm');

SELECT datname, age(datfrozenxid) FROM pg_database;

The age column measures the number of transactions from the cutoff XID to the current transaction's XID.
VACUUM usually only scans pages that have been modified since the last cleanup, but relfrozenxid can only be advanced when the full table is scanned.
When relfrozenxid is older than vacuum_freeze_table_age transactions, when the FREEZE option of VACUUM is used, or when all pages happen to require cleanup to remove the dead row version, the full table will be scanned.
If for some reason autovacuum fails to clear old XIDs from a table, the system will begin emitting warnings like this when the database's oldest XIDs come within 10 million transactions of the wraparound point:
WARNING: database “mydb” must be vacuumed within 177009986 transactions
HINT: To avoid a database shutdown, execute a database-wide VACUUM in “mydb”.
If these warnings are ignored, the system will shut down and refuse to start any new transactions once fewer than 1 million transactions remain before the wraparound point:
ERROR: database is not accepting commands to avoid wraparound data loss in database mydb
HINT: Stop the postmaster and vacuum that database in single-user mode.
Once the system enters this safe shutdown mode, it will not execute commands. The only way out is to stop the server and start it in single-user mode to run VACUUM.
1.5.1. Multixacts and wraparound
Multixact IDs are used to support row locking by multiple transactions.
Whenever more than one transaction concurrently locks a row, the lock information is encoded using a "multiple transaction ID" (multixact ID for short).
As a safety device, a full table cleanup scan will occur for any table whose multi-transaction age exceeds autovacuum_multixact_freeze_max_age.
If the storage space used for multixact members exceeds 50% of the total addressable space, whole-table vacuum scans are also performed progressively on all tables, starting with those that have the oldest multixact age.
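Per-table multixact age can be monitored with mxid_age(), analogous to the XID age queries shown earlier:

```sql
SELECT oid::regclass AS table_name,
       mxid_age(relminmxid) AS multixact_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY multixact_age DESC
LIMIT 10;
```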

1.6. Automatically clean up background processes

PostgreSQL has an optional but highly recommended feature, autovacuum, whose purpose is to automate the execution of VACUUM and ANALYZE commands.
When enabled, autovacuum checks for tables that have had a large number of inserted, updated, or deleted tuples. These checks use the statistics collection facility,
so autovacuum cannot be used unless track_counts is set to true. In the default configuration, autovacuum is enabled and the related configuration parameters are set appropriately.
#autovacuum = on              # Enable autovacuum subprocess? 'on' by default
                              # requires track_counts to also be on.
#track_counts = on
The "autovacuum daemon" actually consists of multiple processes. A persistent background process called the autovacuum launcher is in charge of starting autovacuum worker processes for all databases.
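When the global thresholds are too lax for a frequently updated table, autovacuum can also be tuned per table through storage parameters (the table name is illustrative):

```sql
-- Vacuum/analyze this table after ~1% of it changes,
-- instead of the 20% / 10% global defaults
ALTER TABLE article SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.01
);
```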

2. Routine reindexing

In some situations it is worthwhile to rebuild indexes periodically with the REINDEX command or a series of individual rebuilding steps.
B-tree index pages that have become completely empty are reclaimed for reuse. However, there is still a possibility of inefficient space use: if all but a few index keys on a page have been deleted, the page remains allocated.
Therefore, a usage pattern in which most, but not all, keys in each range are eventually deleted will show poor use of space. For
such usage patterns, periodic reindexing is recommended.
The potential for bloat in non-B-tree indexes has not been well quantified. It is a good idea to periodically monitor the physical size of any non-B-tree index you use.
For B-tree indexes, a freshly built index is slightly faster to access than one that has been updated many times, because in a freshly built index, logically adjacent pages are usually also physically adjacent.
It might be worthwhile to reindex periodically just to improve access speed.
REINDEX can be used safely and easily in all situations. However, since the command requires an exclusive table lock, a better approach is often to perform the index rebuild with a sequence of creation and replacement steps.
Index types that support CREATE INDEX with the CONCURRENTLY option can be rebuilt this way.
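The creation-and-replacement sequence looks roughly like this for a hypothetical B-tree index (on PostgreSQL 12 and later, REINDEX CONCURRENTLY performs the same dance in a single command):

```sql
-- Build a fresh replacement index without blocking writes
CREATE INDEX CONCURRENTLY article_name_idx_new ON article (article_name);

-- Swap it in and drop the bloated original
DROP INDEX CONCURRENTLY article_name_idx;
ALTER INDEX article_name_idx_new RENAME TO article_name_idx;
```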

3. Log file maintenance

It is a good idea to save the database server's log output somewhere, rather than just discarding it via /dev/null. Log output is invaluable when diagnosing problems.
If you simply direct the stderr of postgres into a file, you will have log output, but the only way to truncate the log file is to stop and restart the server. A
better approach is to send the server's stderr output to some kind of log rotation program.

Setting logging_collector = on in postgresql.conf enables the built-in log collector, which captures stderr output and redirects it into rotated log files.
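A minimal postgresql.conf fragment that enables the built-in log collector with daily rotation might look like this (values are illustrative):

```
logging_collector = on
log_directory = 'log'                  # relative to the data directory
log_filename = 'postgresql-%a.log'     # one file per weekday
log_rotation_age = 1d
log_rotation_size = 0                  # rotate on age only
log_truncate_on_rotation = on          # reuse last week's files
```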


Friday, 15 July 2022

What is pg_cron? (Run periodic jobs in PostgreSQL)

pg_cron is a simple cron-based job scheduler for PostgreSQL (10 or higher) that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database.

You can also install pg_cron by building it from source:
git clone https://github.com/citusdata/pg_cron.git
cd pg_cron
# Ensure pg_config is in your path, e.g.
-bash-4.2$ export LD_LIBRARY_PATH=/u01/app/postgres/13.3/lib:$LD_LIBRARY_PATH
-bash-4.2$ export PATH=/u01/app/postgres/13.3/bin:$PATH
-bash-4.2$ cd pg_cron
-bash-4.2$ make && sudo PATH=$PATH make install
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unused-parameter -Wno-implicit-fallthrough -Iinclude -I/u01/app/postgres/13.3/include -I. -I./ -I/u01/app/postgres/13.3/include/server -I/u01/app/postgres/13.3/include/internal  -D_GNU_SOURCE   -c -o src/pg_cron.o src/pg_cron.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unused-parameter -Wno-implicit-fallthrough -Iinclude -I/u01/app/postgres/13.3/include -I. -I./ -I/u01/app/postgres/13.3/include/server -I/u01/app/postgres/13.3/include/internal  -D_GNU_SOURCE   -c -o src/job_metadata.o src/job_metadata.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unused-parameter -Wno-implicit-fallthrough -Iinclude -I/u01/app/postgres/13.3/include -I. -I./ -I/u01/app/postgres/13.3/include/server -I/u01/app/postgres/13.3/include/internal  -D_GNU_SOURCE   -c -o src/misc.o src/misc.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unused-parameter -Wno-implicit-fallthrough -Iinclude -I/u01/app/postgres/13.3/include -I. -I./ -I/u01/app/postgres/13.3/include/server -I/u01/app/postgres/13.3/include/internal  -D_GNU_SOURCE   -c -o src/task_states.o src/task_states.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unused-parameter -Wno-implicit-fallthrough -Iinclude -I/u01/app/postgres/13.3/include -I. -I./ -I/u01/app/postgres/13.3/include/server -I/u01/app/postgres/13.3/include/internal  -D_GNU_SOURCE   -c -o src/entry.o src/entry.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -shared -o pg_cron.so src/pg_cron.o src/job_metadata.o src/misc.o src/task_states.o src/entry.o -L/u01/app/postgres/13.3/lib    -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/13.3/lib',--enable-new-dtags  -L/u01/app/postgres/13.3/lib -lpq
cat pg_cron.sql > pg_cron--1.0.sql
/usr/bin/mkdir -p '/u01/app/postgres/13.3/lib'
/usr/bin/mkdir -p '/u01/app/postgres/13.3/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/13.3/share/extension'
/usr/bin/install -c -m 755  pg_cron.so '/u01/app/postgres/13.3/lib/pg_cron.so'
/usr/bin/install -c -m 644 .//pg_cron.control '/u01/app/postgres/13.3/share/extension/'
/usr/bin/install -c -m 644 .//pg_cron--1.3--1.4.sql .//pg_cron--1.2--1.3.sql .//pg_cron--1.0--1.1.sql .//pg_cron--1.4--1.4-1.sql .//pg_cron--1.1--1.2.sql pg_cron--1.0.sql '/u01/app/postgres/13.3/share/extension/'
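One step the build output above does not show: pg_cron runs as a background worker, so it must be preloaded in postgresql.conf before the server restart, for example:

```
shared_preload_libraries = 'pg_cron'

# pg_cron schedules jobs from a single database; 'postgres' is the default
cron.database_name = 'postgres'
```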

-bash-4.2$ ./pg_ctl restart -D /u01/psql/DATA
pg_ctl: PID file "/u01/psql/DATA/postmaster.pid" does not exist
Is server running?
trying to start server anyway
waiting for server to start....2022-07-15 16:53:02.272 IST [6254] LOG:  redirecting log output to logging collector process
2022-07-15 16:53:02.272 IST [6254] HINT:  Future log output will appear in directory "/u01/psql/DATA/log".
 done
server started
-bash-4.2$
-bash-4.2$ psql -h localhost -p 5433
psql (13.3)
Type "help" for help.

postgres=# CREATE EXTENSION pg_cron;
CREATE EXTENSION
postgres=# GRANT USAGE ON SCHEMA cron TO postgres;
GRANT
postgres=#
 
 
 
-- How to check job lists.
postgres=# SELECT * FROM cron.job;
 jobid | schedule | command | nodename | nodeport | database | username | active | jobname
-------+----------+---------+----------+----------+----------+----------+--------+---------
(0 rows)
 
-- To schedule a VACUUM ANALYZE every day at 11:00 AM
postgres=# SELECT cron.schedule('0 11 * * *', 'VACUUM ANALYZE');
 schedule
----------
        1
(1 row)
 
postgres=# SELECT * FROM cron.job;
 jobid |  schedule  |    command     | nodename  | nodeport | database | username | active | jobname
-------+------------+----------------+-----------+----------+----------+----------+--------+---------
     1 | 0 11 * * * | VACUUM ANALYZE | localhost |     5432 | postgres | postgres | t      |
(1 row)
 
postgres=#
 
postgres=#
postgres=#
postgres=# table cron.job;
 
 jobid |  schedule  |    command     | nodename  | nodeport | database | username | active | jobname
-------+------------+----------------+-----------+----------+----------+----------+--------+---------
     1 | 0 11 * * * | VACUUM ANALYZE | localhost |     5432 | postgres | postgres | t      |
(1 row)
 
 
Let's do some more tests. We will now create a table with some test records and try to delete them using pg_cron.
 
postgres=#
postgres=# CREATE TABLE article (
postgres(#     article_id bigserial primary key,
postgres(#     article_name varchar(20) NOT NULL,
postgres(#     article_desc text NOT NULL,
postgres(#     date_added timestamp default NULL
postgres(# );
CREATE TABLE
postgres=#
 
 
postgres=# insert into article (article_id, article_name, article_desc, date_added) values (1010,'vinyl','Beatles 1980 Vinyl',current_timestamp);
postgres=# insert into article (article_id, article_name, article_desc, date_added) values (1011,'Tape','Deftones 2015 Vinyl',current_timestamp);
postgres=# insert into article (article_id, article_name, article_desc, date_added) values (1019,'Tape','Deftones 2015 Vinyl',current_timestamp);
 
 
 
postgres=#
postgres=# select * from article;
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
       1010 | vinyl        | Beatles 1980 Vinyl  | 2021-07-24 12:03:30.514886
       1019 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:03:40.870081
       1011 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:03:52.046054
(3 rows)
 
postgres=#
 
 
postgres=# \! date
Sat Jul 24 12:20:40 EDT 2021
postgres=#
 
 
 
-- To delete entries that are older than 1 minute
postgres=# SELECT cron.schedule('23 12 * * *', $$DELETE FROM article WHERE date_added<now()- interval '1 Mins'$$);
 schedule
----------
       2
(1 row)
 
postgres=#
postgres=# SELECT * FROM cron.job;
 jobid |  schedule   |                            command                            | nodename  | nodeport | database | username | active | jobname
-------+-------------+---------------------------------------------------------------+-----------+----------+----------+----------+--------+---------
     1 | */5 * * * * | VACUUM                                                        | localhost |     5432 | postgres | postgres | t      |
     2 | 23 12 * * * | DELETE FROM article WHERE date_added<now()- interval '1 Mins' | localhost |     5432 | postgres | postgres | t      |
(2 rows)
 
 
-- This is the table before pg_cron executes the schedule.
postgres=#
postgres=#
postgres=# select * from article;
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
       1010 | vinyl        | Beatles 1980 Vinyl  | 2021-07-24 12:16:00.507391
       1011 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:05.762869
       1019 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:09.960464
(3 rows)
 
 
-- Watch the table records to see whether pg_cron deletes them at the specified time.
postgres=#
postgres=# \watch 5
                  Sat 24 Jul 2021 12:22:43 PM EDT (every 5s)
 
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
       1010 | vinyl        | Beatles 1980 Vinyl  | 2021-07-24 12:16:00.507391
       1011 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:05.762869
       1019 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:09.960464
(3 rows)
 
                  Sat 24 Jul 2021 12:22:48 PM EDT (every 5s)
 
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
       1010 | vinyl        | Beatles 1980 Vinyl  | 2021-07-24 12:16:00.507391
       1011 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:05.762869
       1019 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:09.960464
(3 rows)
 
                  Sat 24 Jul 2021 12:22:53 PM EDT (every 5s)
 
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
       1010 | vinyl        | Beatles 1980 Vinyl  | 2021-07-24 12:16:00.507391
       1011 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:05.762869
       1019 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:09.960464
(3 rows)
 
                  Sat 24 Jul 2021 12:22:58 PM EDT (every 5s)
 
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
       1010 | vinyl        | Beatles 1980 Vinyl  | 2021-07-24 12:16:00.507391
       1011 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:05.762869
       1019 | Tape         | Deftones 2015 Vinyl | 2021-07-24 12:16:09.960464
(3 rows)
 
                  Sat 24 Jul 2021 12:23:03 PM EDT (every 5s)
 
 article_id | article_name |    article_desc     |         date_added
------------+--------------+---------------------+----------------------------
(0 rows)

Yes, it did, quite smoothly, at the specified time! Let's have some more fun.


postgres=#
postgres=# CREATE OR REPLACE PROCEDURE display_message (INOUT msg TEXT)
postgres-# AS $$ BEGIN
postgres$# RAISE NOTICE 'Procedure Parameter: %', msg ;
postgres$# END ;
postgres$# $$
postgres-# LANGUAGE plpgsql ;
CREATE PROCEDURE
postgres=#
postgres=# call display_message('This is my test case');
NOTICE:  Procedure Parameter: This is my test case
         msg
----------------------
 This is my test case
(1 row)
 
 
 
postgres=# SELECT cron.schedule('display_message', '* * * * *', 'CALL display_message()');
 schedule
----------
       14
(1 row)
 
postgres=# SELECT * FROM cron.job;
 jobid |  schedule   |                            command                            | nodename  | nodeport | database | username | active |     jobname
-------+-------------+---------------------------------------------------------------+-----------+----------+----------+----------+--------+-----------------
     9 | */5 * * * * | VACUUM                                                        | localhost |     5432 | postgres | postgres | t      |
    12 | 20 12 * * * | DELETE FROM article WHERE date_added<now()- interval '1 Mins' | localhost |     5432 | postgres | postgres | t      |
    13 | 0 3 * * *   | VACUUM article                                                | localhost |     5432 | postgres | postgres | t      |
    14 | * * * * *   | CALL display_message()                                        | localhost |     5432 | postgres | postgres | t      | display_message
(4 rows)
Now, how to delete a single schedule, or several at once.


postgres=#  SELECT cron.unschedule(2);
 unschedule
------------
 t
(1 row)
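Recent pg_cron releases (1.3+ for named jobs, 1.4+ for run history) also allow unscheduling by job name and inspecting past runs; shown here as a sketch:

```sql
-- Remove the named job created earlier
SELECT cron.unschedule('display_message');

-- Inspect recent executions
SELECT jobid, status, return_message, start_time
FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 5;
```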
 
 
postgres=#
postgres=# SELECT cron.unschedule(jobid) FROM cron.job;
 unschedule
------------
 t
 t
(2 rows)
 
postgres=#
postgres=# SELECT * FROM cron.job;
 jobid | schedule | command | nodename | nodeport | database | username | active | jobname
-------+----------+---------+----------+----------+----------+----------+--------+---------
(0 rows)
 
postgres=#
