Monday, July 31, 2006

Relay Log File Size and Limit

For any DBA, broken replication is a dreaded moment. Replication can break for many reasons, for example when the slave SQL thread hits a 'Duplicate entry' error. At that point you can either skip the offending event using the slave skip counter (recommended) or ignore such errors altogether (slave-skip-errors=1062), provided you are sure that ignoring them won't cause a nasty effect somewhere.
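
For reference, skipping a single offending event on the slave typically looks like this (the value 1 skips one event; adjust as needed):

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

The slave-skip-errors=1062 route, by contrast, goes in my.cnf under [mysqld] and silently skips every duplicate-key error, which is a much blunter instrument.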

Depending on how quickly you troubleshoot the situation, when replication resumes the slave may have fallen significantly behind and the relay log files may start consuming a huge amount of disk space. If you have ample free disk space, that is not an issue; however, if you are tight on disk space, the relay-log-space-limit option can really come in handy.

The relay-log-space-limit option limits the total disk space used by the relay log files. You can specify the value in bytes, megabytes (10000M) or gigabytes (10G), either in the my.cnf file or as a startup option.

To specify it in the my.cnf file, back up your current my.cnf file (always recommended), stop the slave, stop the MySQL server and add the following option:

# relay log restrictions
relay-log-space-limit=15G


Then save and quit the file and start MySQL. Unless you have configured it otherwise (e.g. with skip-slave-start), MySQL will automatically start the slave threads.
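
To confirm the value the running server actually picked up, you can check the corresponding system variable (the variable name uses underscores):

SHOW VARIABLES LIKE 'relay_log_space_limit';

The Relay_Log_Space field in SHOW SLAVE STATUS shows how much space the relay logs are currently using.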

Now assuming the relay log files were using more than 15G space, issuing a

SHOW SLAVE STATUS \G

would show the following message:

Slave_IO_State: Waiting for the slave SQL thread to free enough relay log space


What that message indicates is that the I/O thread is waiting for the SQL thread to read the existing relay logs and free some space before more events are downloaded to the slave.

If you do not want to restrict the disk space used by the relay log files, specify 0 (the default, meaning no limit) as the value for relay-log-space-limit.

For more information on the states of slave I/O threads, see the MySQL manual.

Tuesday, July 25, 2006

Compiling Super Smack on Solaris 10

I have been trying to compile super-smack, a benchmarking tool for MySQL, on a Solaris 10 box, but the configure step fails with the following messages:

###################
[root@db:/home/me/benchmarking/super-smack-1.3] ./configure --with-mysql=/usr/local/mysql/bin/mysql
checking for a BSD-compatible install... ./install-sh -c
checking whether build environment is sane... yes
checking whether make sets $(MAKE)... no
checking for working aclocal... missing
checking for working autoconf... missing
checking for working automake... missing
checking for working autoheader... missing
checking for working makeinfo... missing
checking for sh... /bin/bash
checking for gcc... no
checking for cc... cc
checking for C compiler default output file name... configure: error: C compiler cannot create executables
See `config.log' for more details.
################

Now the main question is: why is it complaining that the C compiler cannot create executables, and what can be done about it?

Following are excerpts from the config.log:


# #########################
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by configure, which was
generated by GNU Autoconf 2.59. Invocation command line was

$ ./configure --with-mysql=/usr/local/mysql/bin/mysql

## --------- ##
## Platform. ##
## --------- ##

hostname = db.host
uname -m = sun4u
uname -r = 5.10
uname -s = SunOS
uname -v = Generic_118833-03

/usr/bin/uname -p = sparc
/bin/uname -X = System = SunOS
Node = db.host
Release = 5.10
KernelID = Generic_118833-03
Machine = sun4u
BusType =
Serial =
Users =
OEM# = 0
Origin# = 1
NumCPU = 2

/bin/arch = sun4
/usr/bin/arch -k = sun4u
/usr/convex/getsysinfo = unknown
hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown

PATH: /usr/sbin
PATH: /usr/bin
PATH: /usr/local/bin
PATH: /opt/csw/bin
PATH: /sbin
PATH: /bin
PATH: /usr/sbin
PATH: /usr/bin
PATH: /usr/local/sbin
PATH: /usr/bin
PATH: /usr/local/mysql/bin
PATH: /usr/ucb
PATH: /etc
PATH: .
PATH: /usr/local/bin
PATH: /opt/csw/bin
PATH: /sbin
PATH: /bin
PATH: /usr/sbin
PATH: /usr/bin
PATH: /usr/local/sbin
PATH: /usr/bin
PATH: /usr/local/mysql/bin
PATH: /usr/ucb
PATH: /etc
PATH: .


## ----------- ##
## Core tests. ##
## ----------- ##

configure:1359: checking for a BSD-compatible install
configure:1414: result: ./install-sh -c
configure:1425: checking whether build environment is sane
configure:1468: result: yes
configure:1483: checking whether make sets $(MAKE)
configure:1507: result: no
configure:1535: checking for working aclocal
configure:1546: result: missing
configure:1550: checking for working autoconf
configure:1561: result: missing
configure:1565: checking for working automake
configure:1576: result: missing
configure:1580: checking for working autoheader
configure:1591: result: missing
configure:1595: checking for working makeinfo
configure:1606: result: missing
configure:1619: checking for sh
configure:1645: result: /bin/bash
configure:1698: checking for gcc
configure:1727: result: no
configure:1778: checking for cc
configure:1794: found /usr/ucb/cc
configure:1804: result: cc
configure:1968: checking for C compiler version
configure:1971: cc --version </dev/null >&5
/usr/ucb/cc: language optional software package not installed
configure:1974: $? = 1
configure:1976: cc -v </dev/null >&5
/usr/ucb/cc: language optional software package not installed
configure:1979: $? = 1
configure:1981: cc -V </dev/null >&5
/usr/ucb/cc: language optional software package not installed
configure:1984: $? = 1
configure:2007: checking for C compiler default output file name
configure:2010: cc conftest.c >&5
/usr/ucb/cc: language optional software package not installed
configure:2013: $? = 1
configure: failed program was:
| /* confdefs.h. */
|
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE "super-smack"
| #define VERSION "1.1"
| #define VERSION "1.1"
| /* end confdefs.h. */
|
| int
| main ()
| {
|
| ;
| return 0;
| }
configure:2052: error: C compiler cannot create executables
See `config.log' for more details.

## ---------------- ##
## Cache variables. ##
## ---------------- ##

ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_CXXFLAGS_set=
ac_cv_env_CXXFLAGS_value=
ac_cv_env_CXX_set=
ac_cv_env_CXX_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_prog_SHELL=/bin/bash
ac_cv_prog_ac_ct_CC=cc
ac_cv_prog_make_make_set=no

## ----------------- ##
## Output variables. ##
## ----------------- ##

ACLOCAL='/home/me/benchmarking/super-smack-1.3/missing aclocal'
AUTOCONF='/home/me/benchmarking/super-smack-1.3/missing autoconf'
AUTOHEADER='/home/me/benchmarking/super-smack-1.3/missing autoheader'
AUTOMAKE='/home/me/benchmarking/super-smack-1.3/missing automake'
CC='cc'
CFLAGS=''
CPP=''
CPPFLAGS=''
CXX=''
CXXFLAGS=''
DATADIR=''
DEFS=''
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP=''
EXEEXT=''
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
LDFLAGS=''
LEX=''
LEXLIB=''
LEX_OUTPUT_ROOT=''
LIBOBJS=''
LIBS=''
LN_S=''
LTLIBOBJS=''
MAKEINFO='/home/me/benchmarking/super-smack-1.3/missing makeinfo'
MYSQL_INCLUDE=''
MYSQL_LIB=''
OBJEXT=''
ORACLE_INCLUDE=''
ORACLE_LIB=''
PACKAGE='super-smack'
PACKAGE_BUGREPORT=''
PACKAGE_NAME=''
PACKAGE_STRING=''
PACKAGE_TARNAME=''
PACKAGE_VERSION=''
PATH_SEPARATOR=':'
PGSQL_INCLUDE=''
PGSQL_LIB=''
SET_MAKE='MAKE=make'
SHELL='/bin/bash'
SMACKS_DIR=''
VERSION='1.1'
YACC=''
ac_ct_CC='cc'
ac_ct_CXX=''
bindir='${exec_prefix}/bin'
build_alias=''
datadir='${prefix}/share'
exec_prefix='NONE'
host_alias=''
includedir='${prefix}/include'
infodir='${prefix}/info'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
localstatedir='${prefix}/var'
mandir='${prefix}/man'
oldincludedir='/usr/include'

prefix='NONE'
program_transform_name='s,x,x,'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/etc'
target_alias=''

## ----------- ##
## confdefs.h. ##
## ----------- ##

#define PACKAGE "super-smack"
#define PACKAGE_BUGREPORT ""
#define PACKAGE_NAME ""
#define PACKAGE_STRING ""
#define PACKAGE_TARNAME ""
#define PACKAGE_VERSION ""
#define VERSION "1.1"
#define VERSION "1.1"

configure: exit 77


#################################
Now I need to investigate why this is happening.
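
The config.log gives a strong hint: configure picked up /usr/ucb/cc, which on Solaris is only a stub that prints "language optional software package not installed" unless the unbundled Sun compiler is present. A likely fix, assuming gcc is (or can be) installed on the box, is to point configure at a real compiler, for example:

CC=gcc CXX=g++ ./configure --with-mysql=/usr/local/mysql/bin/mysql

or to put the directory holding a working compiler ahead of /usr/ucb in PATH before re-running configure.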
Frank

MyISAM to InnoDB: Lost Rows 2

Thank you for the replies to my previous post.

However, let me rephrase the question:

I have a table of type MyISAM that is reporting 47 million rows when I do a SELECT COUNT(*). When I convert this table to InnoDB, running a SELECT COUNT(*) returns only 19 million rows. The conversion confirms 19 million rows were inserted and reports no warnings or duplicates.

I have done the conversion to InnoDB in the following ways:
1. by dumping all the data in a text file and loading it.
2. by using ALTER TABLE

Why is the record count so low after conversion to InnoDB?
Who should I believe: InnoDB or MyISAM?
Any ideas as to what can be done to avoid loss of this many rows?

I will be posting output from my latest conversion attempt in some time.

----------------------------
PS: Thank you for all the replies. It turned out that the MyISAM index was corrupt; after running OPTIMIZE TABLE and then converting to InnoDB, everything went fine, i.e. all the records made it into InnoDB.
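
For anyone hitting something similar, a minimal sequence for checking and repairing the MyISAM table before converting (hypothetical table name) would be:

CHECK TABLE my_table;                    -- reports whether the table or its indexes are corrupt
OPTIMIZE TABLE my_table;                 -- rebuilds the table and indexes (REPAIR TABLE also works for MyISAM)
ALTER TABLE my_table ENGINE = InnoDB;    -- convert once the table checks out clean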

Monday, July 24, 2006

Where did the records go?

I experienced a strange thing with one of the tables I recently converted from MyISAM to InnoDB. While the table was still MyISAM, MySQL was reporting 47219404 rows in the output of

SHOW TABLE STATUS LIKE 'book_v2' \G


After converting the table to InnoDB and loading data using

INSERT INTO ... SELECT * FROM ...

I noticed that

SHOW TABLE STATUS LIKE 'book_v2_innodb' \G

reports 19265701 records and for the same table running

SELECT count(identifier) FROM db.book_v2_innodb;

returns

+-------------------+
| count(identifier) |
+-------------------+
| 19996851 |
+-------------------+
1 row in set (6 min 31.14 sec)


Now where did the remaining records go, and why is MySQL reporting a different value for count(identifier) than the number of rows shown in the "SHOW TABLE STATUS" output?

One thing I would like to note is that the records were inserted after setting UNIQUE_CHECKS and AUTOCOMMIT to 0.

Update:
Another weird thing is (notice the number of rows):

mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: guestbook_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 19265701
Avg_row_length: 305
Data_length: 5895094272
Max_data_length: NULL
Index_length: 1141899264
Data_free: 0
Auto_increment: 53756907
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options: max_rows=1000000000 avg_row_length=256
Comment: InnoDB free: 380928 kB
1 row in set (2.60 sec)


mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: guestbook_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 19445335
Avg_row_length: 303
Data_length: 5895094272
Max_data_length: NULL
Index_length: 1141899264
Data_free: 0
Auto_increment: 53756907
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options: max_rows=1000000000 avg_row_length=256
Comment: InnoDB free: 380928 kB
1 row in set (1.02 sec)


mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: guestbook_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 18951343
Avg_row_length: 311
Data_length: 5895094272
Max_data_length: NULL
Index_length: 1141899264
Data_free: 0
Auto_increment: 53756907
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options: max_rows=1000000000 avg_row_length=256
Comment: InnoDB free: 380928 kB
1 row in set (0.79 sec)


mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: guestbook_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 18996251
Avg_row_length: 310
Data_length: 5895094272
Max_data_length: NULL
Index_length: 1141899264
Data_free: 0
Auto_increment: 53756907
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options: max_rows=1000000000 avg_row_length=256
Comment: InnoDB free: 380928 kB
1 row in set (1.65 sec)


Weird, huh?

Fluctuating number of rows during data insertion

Today I noticed something interesting. As I was migrating a table from MyISAM to InnoDB, I saw the number of rows in SHOW TABLE STATUS fluctuate up and down as time passed, suggesting that it wasn't reporting the exact number of rows but rather an estimate. That fits how InnoDB works: it does not maintain an exact row count, so the Rows value is estimated and can differ from one run to the next.

mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: book_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 7505110
Avg_row_length: 272
Data_length: 2043674624
Max_data_length: NULL
Index_length: 391118848
Data_free: 0
Auto_increment: 7980185
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 6144 kB
1 row in set (0.01 sec)

mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: book_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 7138176
Avg_row_length: 287
Data_length: 2055208960
Max_data_length: NULL
Index_length: 392167424
Data_free: 0
Auto_increment: 8018719
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 9216 kB
1 row in set (0.00 sec)

mysql> show table status like 'guestbook_v2' \G
*************************** 1. row ***************************
Name: book_v2
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 7434343
Avg_row_length: 277
Data_length: 2059403264
Max_data_length: NULL
Index_length: 393216000
Data_free: 0
Auto_increment: 8035298
Create_time: 2006-07-24 20:40:23
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 11264 kB
1 row in set (0.05 sec)

Sunday, July 16, 2006

Copying MySQL Tables

I really find this one-liner very handy when copying MySQL tables between different hosts.
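
A common pattern for this kind of one-liner (hypothetical host and database names; a sketch of the general idea rather than the exact command) is to pipe mysqldump straight into the destination server:

mysqldump --opt -h source_host db_name table_name | mysql -h dest_host -C db_name

The -C option enables compression on the client connection, which helps over slow links.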

MySQL Failed to open log file (InnoDB)

One needs to be very careful before deleting files, as once deleted they are gone for good. A friend of mine called today because MySQL was failing to start on his server. Earlier, while "cleaning" the server, he had deleted the binary log files without fully understanding their role. When I logged in to his server, I noticed the following errors:

060716 17:21:53  mysqld started
060716 17:21:53 InnoDB: Started; log sequence number 0 1736000119
/usr/sbin/mysqld: File './mysql-bin.000002' not found (Errcode: 2)
060716 17:21:53 [ERROR] Failed to open log (file './mysql-bin.000002', errno 2)
060716 17:21:53 [ERROR] Could not open log file
060716 17:21:53 [ERROR] Can't init tc log
060716 17:21:53 [ERROR] Aborting

060716 17:21:53 InnoDB: Starting shutdown...
060716 17:21:55 InnoDB: Shutdown completed; log sequence number 0 1736000119
060716 17:21:55 [Note] /usr/sbin/mysqld: Shutdown complete

060716 17:21:55 mysqld ended


I deleted the entries from mysql-bin.index and MySQL started without a problem.
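
For context, mysql-bin.index is a plain text file in the data directory that lists the binary log files mysqld expects to find, one per line. Removing the lines that point to the deleted logs lets the server start again. Assuming the default Linux datadir (adjust to yours):

cd /var/lib/mysql                        # assumed datadir
cp mysql-bin.index mysql-bin.index.bak   # keep a copy first
vi mysql-bin.index                       # delete the lines referring to the removed binary logs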

Friday, July 14, 2006

MyISAM to InnoDB

One thing that sucks really bad with MyISAM is table-level locking, which can cause some serious issues when INSERT/UPDATE/DELETE load is very high. If you are experiencing locking-related problems in your application, try converting your tables to InnoDB.

I was asked today how one can convert a MyISAM table to InnoDB. I will blog about it in detail later, but for now here's one way:

ALTER TABLE table_name ENGINE = InnoDB;

and here's another way:

- dump the table to a SQL file
- change table type from MyISAM to InnoDB in the dumped file
- disable keys (to speed up the process)
- SET AUTOCOMMIT to 0 (for speeding up the process)
- load the table
- SET AUTOCOMMIT to 1
- enable keys (the index is rebuilt at this point)

One is preferred over the other based on various factors, such as the time window in which the conversion must be done.
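
A minimal sketch of the second approach, with hypothetical table and file names (note that DISABLE KEYS only benefits MyISAM; for an InnoDB target, turning off unique_checks plays a similar role):

-- inside the mysql client, after editing the dump so CREATE TABLE says ENGINE=InnoDB
SET autocommit = 0;
SET unique_checks = 0;
source /path/to/table_dump.sql
COMMIT;
SET unique_checks = 1;
SET autocommit = 1;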



Thursday, July 13, 2006

Indexes, Low Index Selectivity and Index Performance

The other day I was talking to Jay about low index selectivity. One of the columns in my users table had extremely low index selectivity (less than 0.0000005). However, when running certain queries involving COUNT and utilizing that column, it seemed the index was being used, because when I ran the same queries with IGNORE INDEX, the execution time was significantly higher.

There have been earlier posts on PlanetMySQL noting that in cases of extremely low index selectivity, MySQL's cost-based optimizer will never use the index.

Jay pointed out that using COUNT(column_name) in a query is a special case where, even with extremely low selectivity, having an index speeds up the query. For other queries not involving COUNT, the index will not be used if the selectivity is very low.
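
To give a rough idea of the comparison (hypothetical table, column and index names), the queries were along these lines; because a secondary index is much smaller than the table itself, scanning it to answer a COUNT is cheaper than a full table scan even when the index is useless for filtering:

SELECT COUNT(status) FROM users;                            -- can be answered by scanning the small index on status
SELECT COUNT(status) FROM users IGNORE INDEX (idx_status);  -- ignores that index, forcing a scan of the table data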

I will be posting the queries and further analysis in a later post.


Monday, July 10, 2006

err-log or log-error

Today I was investigating why one of the MySQL servers wasn't logging anything to the error log. By default the error log is written in the data directory with a .err extension.

The "err-log" configuration option only works with safe_mysqld. If you try to specify this option in the configuration file under [mysqld] then the following error is generated.

[ERROR] /path/to/mysqld: unknown variable 'err-log=/path/to/error_log'


When this option is used, the MySQL server doesn't report the error log file path in the "log_error" variable in the output of "SHOW VARIABLES".

To specify the error log file so that it shows up in "SHOW VARIABLES", we need to use the "log-error" option.
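
A minimal example of the working form under [mysqld] (the path is an assumption; adjust as needed):

[mysqld]
log-error = /var/log/mysql/mysqld.err

After a restart, SHOW VARIABLES LIKE 'log_error'; should report that path.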

Wednesday, July 05, 2006

Compiling MySQL Administrator on Solaris 10

I have been trying to compile MySQL Administrator on a Solaris 10 box with no success. I followed Ronald's notes (link later) on the compile issues related to mysql-gui-common but continued having problems.

So eventually I filed a bug report, and yesterday it was verified. Hopefully it will be resolved soon.

Frank