My fellow MySQLers whom I have chatted with recently on IRC know how much the AUTO-INC table lock in MySQL has been bugging me. Whenever my server gets an increased number of concurrent INSERTs, I start experiencing thrashing.
I spent all of last night trying to track down the bottleneck in my system, which turned out to be none other than the disk: the server is disk bound.
Everyone knows that InnoDB is excellent for concurrency, thanks to its row-level lock granularity. What comes as a surprise to some is that if you use an AUTO_INCREMENT column, InnoDB needs to acquire a special, table-level AUTO-INC lock, and it holds that lock until the end of the inserting statement, so concurrent INSERTs into the same table end up serializing on it.
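To make that concrete, here is a minimal sketch with a hypothetical table (test.log_entries); any concurrent INSERTs into it will serialize on the AUTO-INC lock even though the rows themselves never conflict:
mysql -e "CREATE TABLE test.log_entries (
              id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
              msg VARCHAR(255)
          ) ENGINE=InnoDB"
# Run these from two separate connections: each INSERT takes the table-level
# AUTO-INC lock on test.log_entries for the duration of the statement.
mysql -e "INSERT INTO test.log_entries (msg) VALUES ('from connection 1')"
mysql -e "INSERT INTO test.log_entries (msg) VALUES ('from connection 2')"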
Everything was fine until I started finding many threads in the InnoDB queue waiting to acquire this lock.
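A quick way to confirm what the threads are waiting on (a sketch; the exact wording of the status output varies between MySQL versions) is to grep the InnoDB status and check the process list:
# Table-level lock waits show up in the TRANSACTIONS section, and the
# "queries inside InnoDB / queries in queue" counters hint at the backlog.
mysql -e "SHOW ENGINE INNODB STATUS\G" | egrep -i 'auto-inc|waiting|queue'
# The stuck INSERT threads are also easy to spot here.
mysql -e "SHOW FULL PROCESSLIST"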
iostat -xnz 5 reported the disks busy at 100% most of the time.
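For reference, the flags on that iostat invocation (Solaris):
# -x   extended per-device statistics (including %b, the busy percentage)
# -n   descriptive device names
# -z   skip devices that show no activity
# 5    report every five seconds
iostat -xnz 5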
iosnoop reported that most of the fsflush activity was caused by MyISAM (MYD and MYI) files. To check for write activity related to InnoDB files, I used the following command:
[root@db] time /usr/bin/perl iosnoop | grep fsflush | grep ib
# UID PID D BLOCK SIZE COMM PATHNAME
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 130290 7168 fsflush /var/tmp/ibXcaq7T
dtrace: 11 dynamic variable drops with non-empty dirty list
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 131504 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129440 49152 fsflush /var/tmp/ibXcaq7T
0 3 W 1736736 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 1739344 32768 fsflush /var/tmp/ibXcaq7T
0 3 W 128928 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 1739440 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 131504 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 1736736 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 1739344 32768 fsflush /var/tmp/ibXcaq7T
0 3 W 129440 49152 fsflush /var/tmp/ibXcaq7T
dtrace: 5 dynamic variable drops with non-empty dirty list
dtrace: 5 dynamic variable drops with non-empty dirty list
dtrace: 2 dynamic variable drops with non-empty dirty list
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 1736736 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 131504 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 129440 49152 fsflush /var/tmp/ibXcaq7T
For the same duration, much heavier fsflush activity was reported for MyISAM tables.
Since my situation is disk bound, I am going to convert those tables to InnoDB to help take some of the pressure off the system.
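A rough sketch of how the conversion could be scripted (assumes MySQL 5.0+ so information_schema is available, and a hypothetical schema name mydb; each ALTER rebuilds the table, so it is best run during a quiet window):
# List the MyISAM tables in the schema and convert them one by one.
mysql -N -e "SELECT table_name FROM information_schema.tables
             WHERE table_schema = 'mydb' AND engine = 'MyISAM'" |
while read t; do
    echo "Converting $t ..."
    mysql -e "ALTER TABLE mydb.\`$t\` ENGINE=InnoDB"
done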
Some tools for finding out disk utilization by process:
- psio: "This program could assist finding a process that is causing the most I/O. The I/O reported is the total of all types: disk I/O, network I/O, stream I/O, etc."
- prusage:
- iosnoop: traces disk I/O events live and shows the responsible UID, PID, process name, and file (part of the DTraceToolkit).
- iotop: a top-like summary of disk I/O by process, also from the DTraceToolkit.
- lsof: lists the files a given process has open (see the example after this list).
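For example, lsof makes it easy to map the busy files that iosnoop reports back to the database server process (assuming a single mysqld instance):
# pgrep -x matches the process name exactly; lsof -p lists that PID's open files.
lsof -p "$(pgrep -x mysqld)"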
If you know of any other tools that can assist with monitoring disk utilization by process/LWPs on other systems, add them by leaving a comment.