Friday, 13 August 2021

Installing pgbadger on Linux for PostgreSQL log analysis

As DBAs, we often encounter scenarios where we need to track PostgreSQL database performance by analyzing workloads and identifying bottlenecks, if any. This is where pgbadger comes in – an open source log analyzer for PostgreSQL. It parses the PostgreSQL log files and gives you a nice overview of the PostgreSQL cluster, including various query metrics such as the number of query executions, their types and duration, session details, lock statistics, warnings, errors and other event patterns. All of these details are presented in a nice HTML report that you can open in your web browser.

Installation:

Prerequisite:

pgbadger is written in pure Perl and uses a JavaScript library (flotr2) to draw graphs. Hence, you need to ensure that a modern Perl distribution is available on your system. The charts are rendered by your web browser, so nothing additional is required on that side.

Use the command below to install Perl if it is not already installed on your system.

$ yum install -y perl perl-devel

The method described below can be followed to install pgbadger from the source tarball.

Method to follow:

The first step is to download the latest pgbadger installation package. Official releases are published on the pgBadger GitHub Releases page. At the time of writing this blog post, the latest available version is 11.4.

Let’s run the below command on the Linux command line to download pgbadger version 11.4.

Step 1: $ wget https://github.com/darold/pgbadger/archive/v11.4.tar.gz


Once the download is complete, extract the tarball.


Step 2: $ tar xzvf v11.4.tar.gz

[user@server pgbadger_demo]$ tar xzvf v11.4.tar.gz

pgbadger-11.4/

pgbadger-11.4/.editorconfig

pgbadger-11.4/.gitignore

pgbadger-11.4/CONTRIBUTING.md

pgbadger-11.4/ChangeLog

pgbadger-11.4/HACKING.md

pgbadger-11.4/LICENSE

pgbadger-11.4/MANIFEST

pgbadger-11.4/META.yml

pgbadger-11.4/Makefile.PL

pgbadger-11.4/README

pgbadger-11.4/README.md

pgbadger-11.4/doc/

pgbadger-11.4/doc/pgBadger.pod

pgbadger-11.4/pgbadger

pgbadger-11.4/resources/

pgbadger-11.4/resources/.gitignore

pgbadger-11.4/resources/LICENSE

pgbadger-11.4/resources/README

pgbadger-11.4/resources/bean.js

pgbadger-11.4/resources/bootstrap.css

pgbadger-11.4/resources/bootstrap.js

pgbadger-11.4/resources/font/

pgbadger-11.4/resources/font/FontAwesome.otf

pgbadger-11.4/resources/font/fontawesome-webfont.eot

pgbadger-11.4/resources/fontawesome.css

pgbadger-11.4/resources/jqplot.barRenderer.js

pgbadger-11.4/resources/jqplot.canvasAxisTickRenderer.js

pgbadger-11.4/resources/jqplot.canvasTextRenderer.js

pgbadger-11.4/resources/jqplot.categoryAxisRenderer.js

pgbadger-11.4/resources/jqplot.cursor.js

pgbadger-11.4/resources/jqplot.dateAxisRenderer.js

pgbadger-11.4/resources/jqplot.highlighter.js

pgbadger-11.4/resources/jqplot.pieRenderer.js

pgbadger-11.4/resources/jqplot.pointLabels.js

pgbadger-11.4/resources/jquery.jqplot.css

pgbadger-11.4/resources/jquery.jqplot.js

pgbadger-11.4/resources/jquery.js

pgbadger-11.4/resources/patch-jquery.jqplot.js

pgbadger-11.4/resources/pgbadger.css

pgbadger-11.4/resources/pgbadger.js

pgbadger-11.4/resources/pgbadger_slide.js

pgbadger-11.4/resources/underscore.js

pgbadger-11.4/t/

pgbadger-11.4/t/01_lint.t

pgbadger-11.4/t/02_basics.t

pgbadger-11.4/t/03_consistency.t

pgbadger-11.4/t/exp/

pgbadger-11.4/t/exp/stmt_type.out

pgbadger-11.4/t/fixtures/

pgbadger-11.4/t/fixtures/cloudsql.log.gz

pgbadger-11.4/t/fixtures/light.postgres.log.bz2

pgbadger-11.4/t/fixtures/logplex.gz

pgbadger-11.4/t/fixtures/pg-syslog.1.bz2

pgbadger-11.4/t/fixtures/pgbouncer.log.gz

pgbadger-11.4/t/fixtures/rds.log.bz2

pgbadger-11.4/t/fixtures/stmt_type.log

pgbadger-11.4/tools/

pgbadger-11.4/tools/README.pgbadger_tools

pgbadger-11.4/tools/README.updt_embedded_rsc

pgbadger-11.4/tools/pgbadger_tools

pgbadger-11.4/tools/updt_embedded_rsc.pl

[user@server pgbadger_demo]$

Now go to the extracted directory, generate the Makefile, and build and install pgbadger.


$ cd pgbadger-11.4

$ perl Makefile.PL


$ make && sudo make install

[user@server pgbadger-11.4]$ make && sudo make install

which: no pod2markdown in (/usr/local/rvm/gems/ruby-2.4.1/bin:/usr/local/rvm/gems/ruby-2.4.1@global/bin:/usr/local/rvm/rubies/ruby-2.4.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/rvm/bin:/home/cloud_user/.local/bin:/home/user/bin)

cp pgbadger blib/script/pgbadger

/usr/bin/perl -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/pgbadger

echo "=head1 SYNOPSIS" > doc/synopsis.pod

./pgbadger --help >> doc/synopsis.pod

echo "=head1 DESCRIPTION" >> doc/synopsis.pod

sed -i.bak 's/ +$//g' doc/synopsis.pod

rm doc/synopsis.pod.bak

sed -i.bak '/^=head1 SYNOPSIS/,/^=head1 DESCRIPTION/d' doc/pgBadger.pod

sed -i.bak '4r doc/synopsis.pod' doc/pgBadger.pod

rm doc/pgBadger.pod.bak

Manifying blib/man1/pgbadger.1p

rm doc/synopsis.pod

[sudo] password for user: 

which: no pod2markdown in (/sbin:/bin:/usr/sbin:/usr/bin)

echo "=head1 SYNOPSIS" > doc/synopsis.pod

./pgbadger --help >> doc/synopsis.pod

echo "=head1 DESCRIPTION" >> doc/synopsis.pod

sed -i.bak 's/ +$//g' doc/synopsis.pod

rm doc/synopsis.pod.bak

sed -i.bak '/^=head1 SYNOPSIS/,/^=head1 DESCRIPTION/d' doc/pgBadger.pod

sed -i.bak '4r doc/synopsis.pod' doc/pgBadger.pod

rm doc/pgBadger.pod.bak

Manifying blib/man1/pgbadger.1p

Installing /usr/local/share/man/man1/pgbadger.1p

Installing /usr/local/bin/pgbadger

Appending installation info to /usr/lib64/perl5/perllocal.pod

rm doc/synopsis.pod

[user@server pgbadger-11.4]$

[user@server pgbadger-11.4]$ pgbadger -V

pgBadger version 11.4

[user@server pgbadger-11.4]$


PostgreSQL Configuration:


Since pgbadger reads the PostgreSQL logs and picks up information from them, it is essential to make some changes in the postgresql.conf file so that the necessary information is available in the PostgreSQL log file for pgbadger to read.


The first and foremost thing is to enable query logging. Change the log_min_duration_statement parameter value accordingly in the postgresql.conf file. The value ‘0’ means that every statement will be logged, which may not be ideal for a very busy server. You can change the value based on your requirement. Suppose you want to log queries running for more than 2 seconds; then set the parameter value (in milliseconds) to ‘2000’.
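For example, to log only statements running for more than 2 seconds:

log_min_duration_statement = 2000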


 ** Do not enable log_min_duration_statement, log_duration and log_statement all together; this will result in wrong counter values and an excessive log size.


** Do not enable log_statement, as its log format will not be parsed by pgBadger. If you have log_statement set to 'all', nothing will be logged through the log_min_duration_statement directive and pgbadger will not be able to parse the log.


** If you don’t want all details about queries, set log_min_duration_statement to -1 to disable it and enable log_duration in your postgresql.conf file. This will only report the duration and number of queries.
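In that case the relevant settings would look like this:

log_min_duration_statement = -1

log_duration = on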


Using log_min_duration_statement is always the recommended approach.


log_min_duration_statement = 0

The next important parameter to look at is log_line_prefix, whose value should be chosen based on the value of another parameter, log_destination. The log_destination value tells PostgreSQL in which format (and to which destination) to generate its logs. By default, PostgreSQL logs to standard error (stderr). In this format, each log message is prefixed with the information specified by the log_line_prefix parameter.


With the ‘stderr’ log format, log_line_prefix must contain at least a time escape sequence (%t, %m or %n) and a process-related escape sequence (%p or %c), as shown below.


log_line_prefix = '%t [%p]: '

It also lets you add the user, database name, application name and client IP address, as shown below.


log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '

Another thing to remember is that your log messages should be in English, with or without locale support. Either of the below settings ensures that:


 lc_messages='en_US.UTF-8'

 lc_messages='C'

The usual postgresql.conf configuration for pgBadger looks like this:


log_min_duration_statement = 0

log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '

log_checkpoints = on

log_connections = on

log_disconnections = on

log_lock_waits = on

log_temp_files = 0

log_autovacuum_min_duration = 0

log_error_verbosity = default

lc_messages='en_US.UTF-8'

lc_messages='C'

The above changes require a PostgreSQL restart.


systemctl restart postgresql
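Alternatively, the logging parameters above can usually be picked up with a configuration reload instead of a full restart; a minimal sketch, assuming you can connect as a superuser:

$ psql -U postgres -c "SELECT pg_reload_conf();"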

Generating pgbadger report:


Now that everything is in place, let’s proceed with generating a pgbadger report.


The simplest way to analyze a particular log file and generate the pgbadger report is to pass the log file path to pgbadger; it will then generate the report in your present working directory.


$ pgbadger <log file>

For Example:


[user@server ~]$ sudo pgbadger /var/lib/pgsql/12/data/log/postgresql-Fri.log

[========================>] Parsed 1521 bytes of 1521 (100.00%), queries: 0, events: 1

LOG: Ok, generating html report...

[user@server ~]$
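The report file (typically out.html) can then be copied to your workstation and opened in a browser. A few commonly used options can also be passed, for example -o to set a custom output file name and -j to parse the log with multiple parallel jobs (run pgbadger --help for the full list). A minimal sketch, with an assumed output path:

$ sudo pgbadger -j 4 -o /tmp/pgbadger_report.html /var/lib/pgsql/12/data/log/postgresql-Fri.log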



Tuesday, 10 August 2021

Uses of initdb in PostgreSQL

initdb creates a new PostgreSQL cluster.

  • In Postgres, a cluster is a collection of databases that is managed by a single server instance.
  • initdb creates the data directories in which the database data will live, generates the shared catalog tables and creates the default databases.

* postgres — the default database used for connections

* template0 — user connections are not allowed; it is kept unmodified to maintain database consistency

* template1 — when we create a new database, it is made as an exact copy of template1

  • If the user doesn’t have permission to create the data directory, create an empty directory as root and give ownership of it to the PostgreSQL user.
  • initdb initializes the database cluster with a default locale and character set encoding. The character set encoding, collation order (LC_COLLATE) and character set classes (LC_CTYPE, e.g. upper, lower, digit) can be set separately for a database when it is created. initdb determines those settings for the template1 database, which will serve as the default for all other databases.
  • Before initdb is run, the data directory is empty:
[postgres@prime]$ cd /u01/pgdatabase/data
[postgres@prime data]$ ls -lrth
total 0
  • Now we are going to execute initdb -D “path”, where -D specifies the data directory location:
[postgres@prime data]$ /u01/postgresql-10.0/bin/initdb -D /u01/pgdatabase/data
  • Once initdb has initialised the cluster, verify the data directory:
[postgres@prime data]$ ls -lrth
total 124K
-rw------- 1 postgres postgres   88 Oct 30  2019 postgresql.auto.conf
-rw------- 1 postgres postgres    3 Oct 30  2019 PG_VERSION
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_twophase
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_tblspc
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_snapshots
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_serial
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_replslot
drwx------ 4 postgres postgres 4.0K Oct 30  2019 pg_multixact
-rw------- 1 postgres postgres 1.6K Oct 30  2019 pg_ident.conf
-rw------- 1 postgres postgres 4.5K Oct 30  2019 pg_hba.conf
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_dynshmem
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_commit_ts
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_xact
drwx------ 2 postgres postgres 4.0K Oct 30  2019 pg_subtrans
-rw------- 1 postgres postgres  23K Feb 14 22:17 postgresql.conf
drwx------ 11 postgres postgres 4.0K Feb 14 22:43 base
drwx------ 2 postgres postgres 4.0K Feb 14 22:44 pg_stat
-rw------- 1 postgres postgres  769 Feb 14 22:46 logfile
drwx------ 3 postgres postgres 4.0K Mar  2 11:40 pg_wal
-rw------- 1 postgres postgres   62 Mar  7 13:38 postmaster.opts
drwx------ 2 postgres postgres 4.0K Mar  7 13:38 pg_notify
-rw------- 1 postgres postgres   78 Mar  7 13:38 postmaster.pid
drwx------ 2 postgres postgres 4.0K Mar  7 13:46 global
drwx------ 4 postgres postgres 4.0K Mar  7 15:43 pg_logical
drwx------ 2 postgres postgres 4.0K Mar  8 11:33 pg_stat_tmp

Here the important files and directories are created automatically, i.e. postgresql.conf, pg_hba.conf, base, pg_tblspc, etc.

Summary:

initdb is used to create the data directories as well as the default databases. By using initdb we can create multiple clusters on the same machine, each with a different data directory path.
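As a quick illustration of that (the paths and port below are only example values), a second cluster can be initialised and started on a different port:

$ /u01/postgresql-10.0/bin/initdb -D /u01/pgdatabase/data2

$ /u01/postgresql-10.0/bin/pg_ctl -D /u01/pgdatabase/data2 -o "-p 5433" -l /u01/pgdatabase/data2/logfile start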

What is a Postmaster PID file in PostgreSQL?

 

  • As per the PostgreSQL architecture, the postmaster is the first background process that starts when the cluster is started.
  • It is the parent process of all other background processes.
  • In case any of the other background processes fails, the postmaster automatically restarts the dead process.
  • It also works as a listener. When a client request reaches the server, it first lands on the postmaster; once the initial checks pass, the postmaster spawns a new backend process (the “pg process”) to serve that connection.
  • The postmaster acts as a listener as well as the parent process, so it is also called the “supervisor”.
$ ps -ef | grep postmaster
postgres   393 32729  0 22:28 pts/3    00:00:00 grep postmaster

Purpose of Postmaster PID file:

  • The postmaster.pid file resides in the data directory when the cluster is up and running.
  • The purpose of this PID file is to capture:

1. the current postmaster process ID (PID),

2. cluster data directory path, postmaster start timestamp,

3. port number,

4. Unix domain socket directory path,

5. listen_address (IP address)

6. shared memory segment id.

  • The details captured in the PID file act as a lock while the cluster is up and running. This prevents another cluster from being launched or run against the same port number or data directory.
$ cd $PGDATA
$ ls
base     pg_commit_ts  pg_ident.conf  pg_notify    pg_snapshots  pg_subtrans  PG_VERSION  postgresql.auto.conf  postmaster.pid
global   pg_dynshmem   pg_logical     pg_replslot  pg_stat       pg_tblspc    pg_wal      postgresql.conf
logfile  pg_hba.conf   pg_multixact   pg_serial    pg_stat_tmp   pg_twophase  pg_xact     postmaster.opts

To view the postmaster PID file:

$ cat postmaster.pid
4693
/u01/pgdatabase/data
1615688485
5432
/tmp
*
  5432001   3604491
ready

  • Now I have just stopped my cluster services using pg_ctl:

$ pg_ctl stop -mf
waiting for the server to shut down.... done
server stopped

After the cluster is stopped, if we verify the data directory, the postmaster.pid file has automatically disappeared.
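A quick way to confirm this, assuming PGDATA points to the cluster's data directory:

$ ls -l $PGDATA/postmaster.pid    # the file is no longer present after a clean shutdown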

Summary:

  • The postmaster.pid file is generated automatically when the cluster services are up and running.

  • It is a lock file that records the current cluster details, and it prevents another cluster from being run on the same port or data directory.

Data directory Layout in Postgres

One instance of Postgres is known as a cluster. On disk, a cluster is stored in a single directory known as the data directory.


By default the data directory is located at /var/lib/pgsql/11/data/.


1. global


- It contains cluster-wide database objects: shared system catalogue tables, user/role information and related metadata files. One notable file here is pg_filenode.map, which maps internal relations (mainly system catalogue tables) to their on-disk file nodes.

- pg_filenode.map is an internal file that is used by the server when the server is started.




PostgreSQL Data directory Layout

2. base


- It is the main directory and contains the databases; inside it we can see one subdirectory per database.


3. pg_tblspc


- This directory contains symbolic links to the tablespace locations. If we have created tablespaces in our instance, a link to the physical location of each tablespace is created inside the pg_tblspc directory, as sketched below.


- We can see the links only if we have created tablespaces; if we only use the default tablespace instead of creating new user tablespaces, there are no links inside the directory.
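A minimal sketch of that behaviour (the tablespace name and location are assumptions for this example):

postgres=# CREATE TABLESPACE ts_demo LOCATION '/u01/pg_tablespaces/ts_demo';

$ ls -l $PGDATA/pg_tblspc    # a symbolic link named after the tablespace OID now points to /u01/pg_tablespaces/ts_demo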


4. pg_wal


- It contains the transaction logs (WAL segments).

- These files are binary files.


5. pg_log


- It contains the start-up logs.


6. logs


- It contains the error logs; all the error logs are located under this directory.

- These are human-readable files in text format.


7. Status directories


- These are the various directories whose names start with pg_*; they hold a lot of status data, transaction information and temporary statistics.

- This information is needed only by the server itself, and the contents of these directories are temporary.


8. Configuration files


1. postgresql.conf — the main parameter file inside the data directory

2. pg_hba.conf — the host-based access control (host-based authentication) file

3. pg_ident.conf — used for OS (ident) authentication user-name mapping

4. postgresql.auto.conf — holds parameters set via ALTER SYSTEM; see the sketch after this list

5. postmaster.pid — contains the postmaster details and is only available when the cluster is up and running

6. postmaster.opts — records the options used to start the server
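As a small illustration of how postgresql.auto.conf is populated (the parameter and value here are arbitrary examples):

postgres=# ALTER SYSTEM SET work_mem = '64MB';

postgres=# SELECT pg_reload_conf();   -- the setting is written to postgresql.auto.conf and applied on reload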

PostgreSQL Backup Script

 Please use the below script to back up all the databases using pg_dump if pg_dumpall is not an option. Please exclude any database which you do not want to back up, for example rdsadmin in the case of RDS, and template1 and template0, which are not user-related databases (the script below already skips these two).

#!/bin/bash
## Backup Script of Postgres
## Script will take a backup of each individual database plus the global objects. It is suitable if the DB size is small and a backup of WAL files is not needed
## Author - srinivas123raodba@gmail.com

set -x

DATE=`(date +"%Y-%m-%d")`
export PGPASSWORD=postgres
SIMPLE_DATE=`(date +"%Y%m%d")`
BACKUP_DIRECTORY=/opt/postgresql/postgres_backup/$SIMPLE_DATE
LOG_FILE_DIRECTORY=/opt/postgresql/postgres_backup/backup_log_file
BACKUP_USER=postgres
PG_DUMP=/opt/postgresql/pg96/bin/pg_dump
PORT=5432
PSQL=/opt/postgresql/pg96/bin/psql
DUMPALL=/opt/postgresql/pg96/bin/pg_dumpall

if [ ! -d $LOG_FILE_DIRECTORY ]
then
        /bin/mkdir  -p $LOG_FILE_DIRECTORY
else
        echo "Log File Directory exist"

fi

echo "Starting Backup of all databases"

if [ ! -d $BACKUP_DIRECTORY ]
then
 /bin/mkdir  -p $BACKUP_DIRECTORY
else
 echo "Backup Directory exist"
 
fi

LIST_DB=`$PSQL -t -d postgres -c "select datname from pg_database where datname not like 'template0' and datname not like 'template1';"`

array1=($LIST_DB)

for database in "${array1[@]}"

do
 BACKUP_DATABASE="$PG_DUMP -Fc -U$BACKUP_USER -p $PORT -d $database -f $BACKUP_DIRECTORY/backup_$database.sql"
    echo "Taking backup of DATABASE $database"
 $BACKUP_DATABASE
 
 if [ $? -eq 0 ] 
 then
            echo "Backup of $database completed successfully" 
 else
            echo "Backup of $database Failed" 
 fi

done

echo "BACKUP_GLOBAL_ROLES"

$DUMPALL -U$BACKUP_USER -p $PORT -g -f $BACKUP_DIRECTORY/backup_roles.sql

echo "Backup Completed sucessfully for all roles and databases"

echo " --------------------------"

echo "Cleaning up backup directory more than 7 days"

/bin/find /opt/postgresql/postgres_backup/* -type d -ctime +7 -exec rm -rf {} \;
/bin/find $LOG_FILE_DIRECTORY/* -mtime +7 -exec rm {} \;
#/bin/find /log/archivelog/* -mtime +2 -exec rm {} \;

echo "Cleanup job completed"

Monday, 9 August 2021

CLUSTER: IMPROVING POSTGRESQL PERFORMANCE

 When designing a database application, the layout of data on disk is often neglected. However, the way data is stored by PostgreSQL can have a major performance impact. Therefore it makes sense to take a look at what can be done to improve speed and throughput. In this post you will learn one of the most important tricks.

PostgreSQL: To sort or not to sort

To demonstrate the importance of the on-disk layout I have created a simple test set:

test=# CREATE TABLE t_test AS SELECT *
        FROM generate_series(1, 10000000);
SELECT 10000000
test=# CREATE TABLE t_random AS SELECT *
        FROM t_test
        ORDER BY random();
SELECT 10000000

Note that both data sets are absolutely identical. I have loaded 10 million rows into a simple table. However, in the first case the data has been sorted, then inserted: generate_series returns data in ascending order, and because the table is new, the data will be written to disk in that order.
In the second case I decided to shuffle the data before insertion. We are still talking about the same data set; however, it is not stored in the same order:

test=# \d+
                    List of relations
 Schema |   Name   | Type  | Owner |  Size  | Description
--------+----------+-------+-------+--------+-------------
 public | t_random | table | hs    | 346 MB |
 public | t_test   | table | hs    | 346 MB |
(2 rows)

In both cases the size on disk is the same. There are no changes in terms of space consumption which can be an important factor as well.

Creating an index in PostgreSQL

Let us create an index on both tables:

test=# \timing
Timing is on.
test=# CREATE INDEX idx_test ON t_test (generate_series);
CREATE INDEX
Time: 3699.416 ms (00:03.699)
test=# CREATE INDEX idx_random ON t_random (generate_series);
CREATE INDEX
Time: 5084.823 ms (00:05.085)

Even creating the index is already faster on sorted data for various reasons. However, creating initial indexes does not happen too often, so you should not worry too much.

In the next step we can already create optimizer statistics and make sure that all hint bits are set to ensure a fair performance comparison:

test=# VACUUM ANALYZE;
VACUUM

Reading blocks of database

Now that all the test data sets are in place we can run a simple test: Let us fetch 49000 rows from the sorted data set first:

test=# explain (analyze, buffers) SELECT *
    FROM    t_test
    WHERE   generate_series BETWEEN 1000 AND 50000;
                                                           QUERY PLAN                                                           
---------------------------------------------------------------------------------------------------------------------------------
 Index Only Scan using idx_test on t_test  (cost=0.43..1362.62 rows=43909 width=4) (actual time=0.017..7.810 rows=49001 loops=1)
   Index Cond: ((generate_series >= 1000) AND (generate_series <= 50000))
   Heap Fetches: 0
   Buffers: shared hit=138
 Planning Time: 0.149 ms
 Execution Time: 11.785 ms
(6 rows)

Not bad. We need 11.785 milliseconds to read the data. What is most important to consider here is that the number of 8k blocks needed is 138, which is not much. “shared hit” means that all the data has come from memory.

Let me run the same test again:

test=# explain (analyze, buffers) SELECT *
    FROM    t_random
    WHERE   generate_series BETWEEN 1000 AND 50000;
                                                             QUERY PLAN                                                             
-------------------------------------------------------------------------------------------------------------------------------------
 Index Only Scan using idx_random on t_random  (cost=0.43..1665.17 rows=53637 width=4) (actual time=0.013..9.892 rows=49001 loops=1)
   Index Cond: ((generate_series >= 1000) AND (generate_series <= 50000))
   Heap Fetches: 0
   Buffers: shared hit=18799
 Planning Time: 0.102 ms
 Execution Time: 13.386 ms
(6 rows)

In this case the query took a bit longer: 13.4 ms. However, let us talk about the most important number here: The number of blocks needed to return this result. 18799 blocks. Wow. That is roughly 150 times more.

One could argue that the query is not really that much slower. This is true. However, in my example all data is coming from memory. Let us assume for a moment that data has to be read from disk because for some reason we get no cache hits. The situation would change dramatically. Let us assume that reading one block from disk takes 0.1 ms:

138 * 0.1 + 11.7 = 25.5 ms
vs.
18799 * 0.1 + 13.4 = 1893.3 ms

That is a major difference. This is why the number of blocks does make a difference – even if it might not appear to be the case at first glance. The lower your cache hit rates are, the bigger the problem will become.

There is one more aspect to consider in this example: note that if you only want to read a handful of rows, the on-disk layout does not make much of a difference. However, if the subset of data contains thousands of rows, the way the data is ordered on disk does have an impact on performance.
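One way to check how well a table's physical order matches an indexed column is the correlation statistic in pg_stats: a value close to 1 or -1 means the rows are stored almost perfectly in order. A minimal sketch against the test tables used here:

test=# SELECT tablename, attname, correlation
       FROM pg_stats
       WHERE tablename IN ('t_test', 't_random');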

CLUSTER: PostgreSQL comes to the rescue

The CLUSTER command was introduced many years ago to address exactly the issues I have just outlined. It allows you to organize data according to an index. Here is the syntax:

test=# \h CLUSTER
Command:     CLUSTER
Description: cluster a table according to an index
Syntax:
CLUSTER [VERBOSE] table_name [ USING index_name ]
CLUSTER [VERBOSE]

URL: https://www.postgresql.org/docs/12/sql-cluster.html

Utilizing the CLUSTER command is easy. The following code snippet shows how you can do that:

test=# CLUSTER t_random USING idx_random;
CLUSTER

To see what happens I have executed the same query as before again. However, there is something important to be seen:

test=# explain (analyze, buffers)
    SELECT *    FROM t_random
    WHERE   generate_series BETWEEN 1000 AND 50000;
                                                          QUERY PLAN                                                         
------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on t_random  (cost=1142.21..48491.32 rows=53637 width=4) (actual time=3.328..9.564 rows=49001 loops=1)
   Recheck Cond: ((generate_series >= 1000) AND (generate_series <= 50000))
   Heap Blocks: exact=218
   Buffers: shared hit=2 read=353
   ->  Bitmap Index Scan on idx_random  (cost=0.00..1128.80 rows=53637 width=0) (actual time=3.284..3.284 rows=49001 loops=1)
         Index Cond: ((generate_series >= 1000) AND (generate_series <= 50000))
         Buffers: shared hit=2 read=135
 Planning Time: 1.024 ms
 Execution Time: 13.077 ms
(9 rows)

PostgreSQL has changed the execution plan. This happens due to stale statistics. Therefore it is important to run ANALYZE to make sure that the optimizer has up-to-date information:

test=# ANALYZE;
ANALYZE

Once the new optimizer statistics are in place, the execution plan will be as expected again:

test=# explain (analyze, buffers) SELECT *
    FROM    t_random
    WHERE   generate_series BETWEEN 1000 AND 50000;
                                                              QUERY PLAN                                                             
--------------------------------------------------------------------------------------------------------------------------------------
 Index Only Scan using idx_random on t_random  (cost=0.43..1807.12 rows=50884 width=4) (actual time=0.012..11.737 rows=49001 loops=1)
   Index Cond: ((generate_series >= 1000) AND (generate_series <= 50000))
   Heap Fetches: 49001
   Buffers: shared hit=355
 Planning Time: 0.220 ms
 Execution Time: 15.267 ms
(6 rows)

Maintaining order

If you have decided to cluster a table, it does NOT mean that the order on disk is maintained forever. If you run UPDATEs etc. frequently, the table might gradually lose its order again. Therefore, CLUSTER is especially useful if your data is rather static. It can also make sense to order data as you import it to ensure physical order, as sketched below.
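A minimal sketch of both approaches (the table and index names follow the example above; whether this is worthwhile depends on your workload):

-- re-cluster periodically, e.g. after bulk changes, and refresh statistics
test=# CLUSTER t_random USING idx_random;
test=# ANALYZE t_random;

-- or load the data already sorted so that the physical order matches the index
test=# CREATE TABLE t_sorted AS SELECT * FROM t_random ORDER BY generate_series;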

