Export specific rows from PostgreSQL as INSERT statements

PostgreSQL doesn’t offer an option to dump specific rows: pg_dump only exports an entire table (or the whole database). And sometimes you need to migrate only a specific set of rows. Fortunately, there is a way to work around it.

First, you need to create a real table to store the rows you want:

CREATE TABLE tmp_table AS
SELECT * FROM ...
WHERE 1=1
AND ...
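
For example, assuming a hypothetical customers table (the table and column names here are illustrative only), the statement could look like:

CREATE TABLE tmp_table AS
SELECT * FROM customers
WHERE 1=1
AND country = 'BR'
AND signup_date >= '2017-01-01';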

Then it can be exported using pg_dump:

pg_dump -U <user> -d <database> -t tmp_table --data-only --column-inserts > file.sql

This will generate a file called file.sql containing the INSERT statements.
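
With --column-inserts, every row becomes a standalone INSERT that spells out the column names, so the dump can be replayed anywhere. For a hypothetical two-column table it would look roughly like:

INSERT INTO tmp_table (id, name) VALUES (1, 'foo');
INSERT INTO tmp_table (id, name) VALUES (2, 'bar');

On the target database, the file can then be applied with psql -U <user> -d <database> -f file.sql.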

Don’t forget to drop tmp_table afterwards:

DROP TABLE tmp_table;

🙂

Vyatta + OpenVPN: Set a static IP address for a client

After enabling OpenVPN on my Brocade Vyatta, I identified a requirement to set static IP addresses for some clients. This is quite simple to achieve regardless of the OS/device; however, this article will cover only the steps for the Brocade Vyatta 5400.

Firstly, let me share the versions of the products being used:

– Vyatta version: VSE6.7R10 – Brocade vRouter 5415 6.7 R
– OpenVPN 2.3.4

So, let’s get started. Initially, we need to find out where Vyatta stores its OpenVPN configuration. To accomplish this task, the ps command may be helpful:

ps -ef | grep -i openvpn;

You should get something like:

/usr/sbin/openvpn --daemon openvpn-vtun0 --verb 3 --writepid /var/run/openvpn-vtun0.pid --status /opt/vyatta/etc/openvpn/status/vtun0.status 30 --dev-type tun --dev vtun0 --script-security 2 --up /opt/vyatta/sbin/vyatta-ovpn-up.pl --mode server --tls-server --topology subnet --keepalive 10 60 --lport 1194 --proto tcp-server --cipher aes-256-cbc --ca /config/auth/ca.crt --cert /config/auth/my-server.crt --key /config/auth/my-server.key --dh /config/auth/dh2048.pem --management /tmp/openvpn-mgmt-vtun0 unix --push dhcp-option DNS XXX.XXX.XXX.XXX --push route XXX.XXX.XXX.XXX XXX.XXX.XXX.XXX --server XXX.XXX.XXX.XXX XXX.XXX.XXX.XXX --client-config-dir /opt/vyatta/etc/openvpn/ccd/vtun0

At the end of this long string, you can find the value of the --client-config-dir parameter: /opt/vyatta/etc/openvpn/ccd/vtun0 in this case. This is our target. Change to this directory:

cd /opt/vyatta/etc/openvpn/ccd/vtun0;

This is the directory where you should place your rules. Basically, for each client that needs a specific IP address, you have to create a file. The filename must match the CN (Common Name) of the client’s X.509 certificate.

— If you don’t recall the CN, I suggest decoding the client certificate. The following command may be useful to extract it from your certificate file:

openssl x509 -in YOUR_CERT.crt -noout -subject | sed -e 's/^subject.*CN=\([a-zA-Z0-9\.\-\*]*\).*$/\1/'
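
For instance, assuming a hypothetical certificate client1.crt whose subject is /C=BR/O=Example/CN=client1, the pipeline above would print only the CN:

openssl x509 -in client1.crt -noout -subject | sed -e 's/^subject.*CN=\([a-zA-Z0-9\.\-\*]*\).*$/\1/'
client1

That client1 is exactly the filename you must create in the ccd directory.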

Create the file using your preferred method (don’t forget to match the Common Name) and append the following content to it. Take a look at this example:

touch commonNameFile;
echo 'ifconfig-push 10.134.247.10 255.255.255.192' > commonNameFile;

This is the point where you should take over and fill the file out with an IP address and netmask of your choice. Make sure to avoid addresses already being assigned automatically by OpenVPN.
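
To see which addresses OpenVPN has already handed out dynamically, you can inspect the status file referenced in the ps output above:

cat /opt/vyatta/etc/openvpn/status/vtun0.status;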

Essentially, restarting the service is not required, but it is not a bad idea if the change doesn’t take effect.
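
If you do want to apply the change right away, one option (a sketch: OpenVPN performs a restart when it receives SIGHUP, and the PID file path comes from the ps output above) is:

sudo kill -HUP $(cat /var/run/openvpn-vtun0.pid);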

That should do it.

DETAIL: Could not open file “pg_clog/xxxx”: No such file or directory.

On a beautiful day, my PostgreSQL DBMS started to issue this error message as a result of some SQL statements. When I searched for it, I found out it was related to a possible hardware crash, very likely a power failure that interrupted some running transaction.

Basically, the problem happens because PostgreSQL is looking for a non-existent transaction status file within the pg_clog directory. This transaction status must be consulted before retrieving (or altering) data; that’s why PostgreSQL interrupts everything and prints this message. It needs to be solved before other transactions touching that data can run.

So, to finally solve it, you need to access the server where the DBMS is installed, switch to the user that owns the database (usually postgres), and find out where PostgreSQL’s data directory is located. In my case:

sudo su - postgres;
cd /usr/local/pgsql/data/pg_clog;

Once you are in that directory with the proper credentials, recreate the non-existent file with the following command:

dd if=/dev/zero of=XXXX bs=256k count=1

— Make sure to replace XXXX with the filename shown in the logs.
— Run ls -ltr to make sure the file was created with the proper permissions. It is important.
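
— Since dd creates the file according to your current umask, a chmod may be needed so the mode matches the other segment files in pg_clog (typically 600):

chmod 600 XXXX;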

This should be enough for now, but keep in mind other transactions may also have been affected by the corruption. So, my recommendation at this point is to run a full backup dump of this database. It will force PostgreSQL to read all the tuples. If any other set of data depends on a missing transaction file, the backup logs will tell you. Apply the same procedure to recreate the pg_clog file if required.
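
A minimal sketch of that verification dump, assuming a hypothetical database named my_database (the destination doesn’t matter much; the point is forcing a full read of every tuple):

pg_dump -U postgres my_database > /tmp/my_database_full.sql;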

Besides creating the pg_clog file, an additional step may be necessary to complete the fix. Basically, you need to identify which table is related to the missing transaction. The backup process is your friend (again) in this step: it will print the table name when it finds the transaction issue. So, for this particular table, run the following statements:

vacuum full table_name;

— This command will completely rewrite your table. That’s why it holds any query against the table while it runs. Make sure you have a database maintenance window to run it.

reindex table table_name;

— Reindex is required because vacuum full messes with your indexes.

So, at the end of the process, start another full backup to make sure PostgreSQL is able to read all your tuples.

This should fix it.

Hello world!

hello world!

This couldn’t start differently. For us IT professionals, “Hello World” is always the first step of something you will put effort into.

Essentially, this is a kind of personal repository to save useful stuff and thoughts. You may find something interesting here, and I totally encourage you to participate by leaving comments on my posts.

Mostly, I will write about operating systems, programming, networking, and security.

Let’s see; I hope to reach the end of next year, 2018, still active and with a bunch of nice content.