Last night I spent hours tracking down the bottleneck in my system, which turned out to be none other than the disk.
Everyone knows that InnoDB is excellent at concurrency, thanks to its row-level lock granularity. What comes as a surprise to some is that if you use an AUTO_INCREMENT column, InnoDB needs to acquire a special, table-level AUTO-INC lock.
Everything was fine until I started finding many threads in the InnoDB queue waiting to acquire this lock.
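A quick way to confirm it (a rough sketch, assuming MySQL 5.0 or later; older servers use SHOW INNODB STATUS instead): transactions blocked on the table-level lock show up in the TRANSACTIONS section of the InnoDB status output with something like "lock mode AUTO-INC waiting".
[root@db] mysql -e 'SHOW ENGINE INNODB STATUS\G' | grep -i 'auto-inc'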
iostat -xnz 5 reported the disk busy at 100% most of the time.
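For reference, the invocation and the column I watch (standard Solaris iostat, nothing specific to this box):
# -x extended stats, -n descriptive device names, -z skip idle devices, report every 5 seconds;
# a sustained %b (percent busy) near 100 on the volume holding the data files is the tell
[root@db] iostat -xnz 5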
iosnoop reported that most of the fsflush activity was being caused by MyISAM (MYD and MYI) files. To check for write activity related to InnoDB files, I used the following command:
[root@db] time /usr/bin/perl iosnoop | grep fsflush | grep ib
# UID PID D BLOCK SIZE COMM PATHNAME
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 130290 7168 fsflush /var/tmp/ibXcaq7T
dtrace: 11 dynamic variable drops with non-empty dirty list
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 131504 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129440 49152 fsflush /var/tmp/ibXcaq7T
0 3 W 1736736 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 1739344 32768 fsflush /var/tmp/ibXcaq7T
0 3 W 128928 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 1739440 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 131504 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 1736736 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 1739344 32768 fsflush /var/tmp/ibXcaq7T
0 3 W 129440 49152 fsflush /var/tmp/ibXcaq7T
dtrace: 5 dynamic variable drops with non-empty dirty list
dtrace: 5 dynamic variable drops with non-empty dirty list
dtrace: 2 dynamic variable drops with non-empty dirty list
0 3 W 129744 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 1736736 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 131504 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130208 8192 fsflush /var/tmp/ibXcaq7T
0 3 W 130240 16384 fsflush /var/tmp/ibXcaq7T
0 3 W 129440 49152 fsflush /var/tmp/ibXcaq7T
For the same duration, much heavier fsflush activity was reported for the MyISAM tables. Since my system is disk bound, I am going to convert those MyISAM tables to InnoDB to relieve some of the pressure.
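The conversion itself is a one-liner per table; as a sketch (the schema and table names below are made up, and remember that ALTER TABLE rebuilds the table, so it blocks writes and can take a while on a big table):
[root@db] mysql -e "ALTER TABLE mydb.clicks ENGINE=InnoDB"
[root@db] mysql -e "ALTER TABLE mydb.sessions ENGINE=InnoDB"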
Some tools for finding out disk utilization by process:
- psio: "This program could assist finding a process that is causing the most I/O. The I/O reported is the total of all types: disk I/O, network I/O, stream I/O, etc."
- prusage:
- iosnoop: a DTrace-based script from Brendan Gregg's DTraceToolkit that prints individual I/O events with the responsible PID, size, and pathname (used above).
- iotop: also from the DTraceToolkit; summarizes disk I/O by process, top-style.
- lsof: lists the files each process has open; handy for mapping a pathname like the ib* files above back to mysqld (see the example after this list).
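For example, a quick sketch of tying a pathname back to a process (the temp file name is just the one from the iosnoop output above; yours will differ):
# which process has this InnoDB temp file open?
[root@db] lsof /var/tmp/ibXcaq7T
# or list everything mysqld has open under /var/tmp
[root@db] lsof -p `pgrep -x mysqld` | grep /var/tmp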
Other resources:
- A Comparison of Memory Allocators in Multiprocessors: on Solaris, switching from malloc to mtmalloc can increase performance for threaded programs (see the sketch after this list).
- WSJ has a story on the impact of immigrant innovators and entrepreneurs on the US economy.
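As a sketch of what that allocator switch looks like in practice (the library path is the stock 32-bit Solaris location, 64-bit builds live under /usr/lib/64, and the mysqld_safe path is just an example): either link with -lmtmalloc at build time, or preload the library at startup:
[root@db] LD_PRELOAD=/usr/lib/libmtmalloc.so.1 ./bin/mysqld_safe &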