[MySQL] Logging Deadlock errors

Since MySQL 5.6.2 it is possible to log deadlock errors without using an additional tool like pt-deadlock-logger.

To activate this feature, add this to your my.cnf:

# log deadlocks in the error log (default: /var/lib/mysql/$(hostname -s).err)
innodb_print_all_deadlocks = 1

To enable it without restarting:

# mysql -e "set global innodb_print_all_deadlocks=1;"
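To double-check that the flag is active (a quick verification, using the standard client):

# mysql -e "show global variables like 'innodb_print_all_deadlocks';"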


[MySQL] Upgrade 5.1->5.6

In theory mysql_upgrade should do the trick, but sometimes it fails some steps ‘silently’. Then, when doing a backup, you will get this:

Cannot load from mysql.proc. The table is probably corrupted

Gasp. Don’t panic, this is a very simple error to fix. Just change the type of the ‘comment’ column:

alter table mysql.proc change comment comment text;
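Once the column is fixed, it may be worth re-running the upgrade script and forcing it to recheck everything (a suggestion, not strictly part of the fix):

mysql_upgrade --force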

[MySQL] The query cache blues

The MySQL query cache is one of MySQL’s prominent features and a vital part of query optimization. It caches a SELECT query together with its result set, so identical SELECTs can be served straight from memory.

So in theory it rocks. Right?

Yep, but not quite. First there is the obvious problem of changing queries/data. If you frequently update a table, you invalidate its entries in the query cache. If you change any parameter in the query, you simply bypass the cache. In both cases you’re probably not going to get any sort of good usage from the MySQL query cache.

Then there is also the problem of how the query cache implementation behaves on modern multi-core CPUs. Simply stated, mysqld locks the query cache both when checking whether a result is in the cache and when writing a result set into it. When writing, locking can occur several times, because cache memory has to be assigned block by block. In a highly concurrent environment that means a lot of mutex contention, which may become a performance bottleneck.

For these reasons, the query cache is disabled by default since MySQL 5.6.
If you really want/need to re-enable it, set query_cache_type to 1 and give query_cache_size a non-zero value.
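For example, a minimal my.cnf snippet to turn it back on (the values are illustrative, tune them to your workload):

[mysqld]
query_cache_type  = 1     # 0 = off, 1 = on, 2 = on demand (SELECT SQL_CACHE only)
query_cache_size  = 64M   # memory reserved for the cache
query_cache_limit = 1M    # results bigger than this are never cached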


[MySQL] Replication over SSL

Prerequisites

  • MySQL 5.1 or later
  • OpenSSL 0.9.8e or later
  • an already running MySQL replication

Check SSL support

mysql> show variables like '%ssl%';
+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| have_openssl  | DISABLED |
| have_ssl      | DISABLED |
| ssl_ca        |          |
| ssl_capath    |          |
| ssl_cert      |          |
| ssl_cipher    |          |
| ssl_key       |          |
+---------------+----------+
7 rows in set (0.01 sec)

DISABLED means mysqld was compiled with SSL support but it’s just not enabled. If you see NO instead, then your build has no SSL support at all.

Generate certificates

Create a /etc/mysql/certs directory on both the master and slave nodes.

Generate a key and a CA certificate:

openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem

Generate a server key and certificate:

openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

And finally a client key and certificate:

openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

Copy ca.pem, client-cert.pem and client-key.pem to the slave node.
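For example (assuming SSH access and that /etc/mysql/certs already exists on the slave):

scp ca.pem client-cert.pem client-key.pem slave:/etc/mysql/certs/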

Mysqld configuration

On the master node, add to the [mysqld] section of /etc/mysql/my.cnf:

ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem

Restart mysqld.

On the slave node, add the following lines:

ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/client-cert.pem
ssl-key=/etc/mysql/certs/client-key.pem

Restart mysqld.

Last step: force the replication user to connect exclusively over SSL. On the master:

GRANT USAGE ON *.* TO 'slave_user'@'%' REQUIRE SSL;
FLUSH PRIVILEGES;
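The GRANT above only enforces SSL on the master side; the slave must also be told to present the certificates when connecting. A sketch using CHANGE MASTER TO, with the paths from the configuration above:

STOP SLAVE;
CHANGE MASTER TO
  MASTER_SSL=1,
  MASTER_SSL_CA='/etc/mysql/certs/ca.pem',
  MASTER_SSL_CERT='/etc/mysql/certs/client-cert.pem',
  MASTER_SSL_KEY='/etc/mysql/certs/client-key.pem';
START SLAVE;

You can then check with SHOW SLAVE STATUS\G that Master_SSL_Allowed says Yes.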

[Postgres] Pg8.3 – Warm stand-by replication

Unlike MySQL, Postgres 8.3 doesn’t have a native replication process. Nevertheless it’s possible to build a warm stand-by replica using asynchronous log shipping.

Prerequisites

  • SSH access between your master and slave nodes
  • rsync
  • pg_standby

Enable archive log and log shipping

First create a directory for the shipped logs on the slave node:

mkdir /var/lib/postgresql/alog/

Then enable WAL archiving and log shipping in the master configuration file:

master:~# vi /etc/postgresql/8.3/main/postgresql.conf
archive_mode = on
archive_command = 'rsync -a %p slave:/var/lib/postgresql/alog/%f'
archive_timeout = 60
master:~# /etc/init.d/postgresql-8.3 restart

The archive_command is executed each time archive_timeout is reached, or whenever the current log file grows past 16 MB. After setting these parameters and restarting the postgresql service, watch the archive directory on the slave: you should start seeing WAL files arriving.
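A quick way to confirm that shipping works, on the slave:

slave:~$ ls -lt /var/lib/postgresql/alog/ | head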

Enable replication

Now you need to configure your stand-by server for “continuous recovery mode”, using the archived logs as data source. The ‘pg_standby’ tool exists for exactly that.

slave:~$ vi /var/lib/postgresql/8.3/main/recovery.conf
restore_command = '/usr/lib/postgresql/8.3/bin/pg_standby -d /var/lib/postgresql/alog %f %p %r >> /var/log/postgresql/pg_standby.log 2>&1'
slave:~$ /etc/init.d/postgresql-8.3 start

Ta-dah, your warm stand-by replica should be ready. You can check /var/log/postgresql/pg_standby.log for more info. Note that you can’t connect to the slave Postgres while it is in recovery; this behaviour is perfectly normal.
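Bonus: when the day comes to fail over, pg_standby supports a trigger file. Add the -t option to the restore_command (the trigger path below is only an example) and create that file to end recovery and open the slave for connections:

restore_command = '/usr/lib/postgresql/8.3/bin/pg_standby -t /tmp/pgsql.trigger -d /var/lib/postgresql/alog %f %p %r >> /var/log/postgresql/pg_standby.log 2>&1'

slave:~$ touch /tmp/pgsql.trigger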


[MySQL] Limit connection per user

Is a single website wreaking havoc on your MySQL server by saturating all its sockets? If so, you need to put some limits in place.

The max_user_connections variable

You are likely familiar with the max_connections setting, but there is also the lesser-known max_user_connections. Its name is self-explanatory: it caps how many simultaneous connections a single account may hold.
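For example, to cap every account at 50 simultaneous connections (a purely illustrative value):

mysql> set global max_user_connections = 50;

Add the same setting to my.cnf to make it permanent.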

Limit for a specific user

If you need to be more granular, it’s also possible to limit connections for a specific user, like this:

GRANT ALL ON DBNAME.* TO 'XXX'@'YYY' WITH MAX_USER_CONNECTIONS 100;

You can use other limit options like MAX_QUERIES_PER_HOUR, MAX_UPDATES_PER_HOUR or MAX_CONNECTIONS_PER_HOUR, depending on your needs.
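For instance, several limits can be combined in a single statement (the numbers are only examples):

GRANT ALL ON DBNAME.* TO 'XXX'@'YYY'
    WITH MAX_QUERIES_PER_HOUR 1000
         MAX_UPDATES_PER_HOUR 100
         MAX_CONNECTIONS_PER_HOUR 200;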

[MySQL] Dump only triggers and stored procedures

By default mysqldump backs up triggers but not stored procedures; to include the latter you must add the --routines option.

But how do you dump only the triggers and stored procedures?

Like this:

mysqldump --routines --no-create-info --no-data --no-create-db --skip-opt <database> > <dump>.sql

[MySQL] Dump with one INSERT statement for each data row

By default mysqldump makes a dump with only one INSERT statement per table. This is efficient, but annoying as hell when you need to debug a failed import.

In that case, I recommend redumping the problematic database with the option --extended-insert=FALSE to get a dump with one INSERT statement for each data row.
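For example:

mysqldump --extended-insert=FALSE <database> > <dump>.sql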

[Postgres] Increase shmmax

PostgreSQL is great, but the way it works is sometimes… a little unsettling. For example, Postgres likes to spawn lots of little processes that communicate over IPC, which means using chunks of shared memory. Sometimes you can hit the shmmax limit.

To obtain the current value:

cat /proc/sys/kernel/shmmax

You can override the default value in the /etc/sysctl.conf file.
Add a line like this, for example:

kernel.shmmax = 43554432

And don’t forget to run sysctl -p and restart Postgres for the new value to take effect.
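To see which shared memory segments are actually allocated (and how big the Postgres one is), ipcs can help:

ipcs -m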