Thursday, February 02, 2006

InnoDB or MyISAM? What's your preference?

I have been reading a lot about InnoDB and MyISAM recently. MyISAM offers speed whereas InnoDB offers reliability: MyISAM is fast but has no transaction support, while InnoDB comes with a few, but very important, limitations of its own.

I hate to ask it like this, but what do you prefer for a high-volume application with heavy inserts/updates/selects? In your regular job (and projects), do you go for speed or for reliability? Or do you try to find a balance between the two by having tables of both kinds?

I am just interested in hearing your thoughts (and igniting a discussion) as to which one is better in the long run.

So far I have personally used MyISAM and handled everything else (transactions, integrity) in my application.



Markus Popp said...

If the data is modified a lot, InnoDB is said to be faster because it uses row-level locking instead of the table-level locking that MyISAM uses. If the workload is mainly SELECT statements, however, a MyISAM table might be faster.

It always comes down to the needs of a specific table - so I would choose the storage engine that best fits the requirements of each table. If you need foreign key constraints or transactions, you can only use InnoDB, whereas if you need fulltext indexes, you can only use MyISAM tables at the moment.
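For concreteness, a minimal sketch of per-table engine choice as it looks in MySQL 5.0; the table and column names here are made up for illustration:

```sql
-- Orders need transactions and foreign keys -> InnoDB.
CREATE TABLE orders (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT UNSIGNED NOT NULL,
    total       DECIMAL(10,2) NOT NULL
) ENGINE=InnoDB;

-- Articles need full-text search -> MyISAM, currently the only
-- engine with FULLTEXT indexes.
CREATE TABLE articles (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255),
    body  TEXT,
    FULLTEXT (title, body)
) ENGINE=MyISAM;
```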

With replication it's even possible to take advantage of both storage engines on one table. For example, the master could store a table as InnoDB which makes it fast for INSERTs, UPDATEs and DELETEs while the slave(s) could store the same table as MyISAM and offer the best performance for SELECTs.
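One way to sketch that mixed-engine replication setup (table name hypothetical; the slave simply stores its copy of the table under a different engine, and replicated statements keep applying to it):

```sql
-- On the master: safe, concurrent writes.
CREATE TABLE messages (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    sender INT UNSIGNED NOT NULL,
    body   TEXT
) ENGINE=InnoDB;

-- On the slave, once the table exists there: fast reads.
ALTER TABLE messages ENGINE=MyISAM;
```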

Michael Metz said...

If you have a lot of concurrent inserts/updates I would recommend InnoDB, since it supports row-level locking.
I'm managing a community with a peak of more than 5,000 qps (on one DB server) and it runs smoothly on InnoDB, while MyISAM was unable to handle the high percentage of inserts/updates (about 50%).

Frankly Speaking! said...

Thanks Markus for a very informative comment.

You are the man!


Frankly Speaking! said...

Thanks Michael,

You are right, for concurrent inserts and updates, one should go for data integrity rather than speed.

Markus, I really appreciate your tip on getting the benefit of both worlds using replication.


Markus Popp said...

Unfortunately there's a little uncertainty about the long-term future of InnoDB inside MySQL since Oracle bought Innobase and with it the InnoDB storage engine.

However, I don't think there will be problems in the short or medium term, and before there *could* be any, MySQL should have enough time to create its own storage engine(s) supporting everything that InnoDB can do (the pluggable architecture should help a lot here). In any case, I think MySQL should try to keep its independence by itself providing everything that a high-level RDBMS needs.

Sheeri said...

What Markus said!

We have lots of different data and usage -- sessions and users tables that get updated frequently, and static data like postal codes.

We currently have a mix -- there's even one set of data (personal ads) that we keep in 2 tables: one table is a copy of the other, and the difference is that one is InnoDB for fast inserting and the other is MyISAM for fulltext searching.

InnoDB for most of the tables, including the messaging (short e-mails back and forth), because we need the row-level locking. That's important when you have as many updates per second as we do.

MyISAM (packed, preferably) for static data -- the UK postal code info is over 120 MB unpacked, and it never changes (and if it does, we just replace the table).
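Packing a static MyISAM table is done offline with the myisampack utility; a rough sketch (paths and table name are hypothetical, and the table must not be in use while you do this):

```shell
# Compress the table's data file (this makes it read-only).
myisampack /var/lib/mysql/geo/uk_postcodes.MYI
# Rebuild the indexes to match the packed rows.
myisamchk -rq /var/lib/mysql/geo/uk_postcodes.MYI
# Make the server reopen the table.
mysql -e "FLUSH TABLES"
```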

So, yeah, what Markus said.

Frankly Speaking! said...

Thanks Sheeri,

As you and Markus pointed out, a replication setup with different engines on master and slave gives the best of both worlds.

Seeing that InnoDB is now in Oracle's hands, I think MySQL should immediately start working towards independence from InnoDB.

Now I will have to look into the pluggable architecture :)


Anonymous said...

I'd go with InnoDB until it's been proven unsuitable. The first reason is reliability. Get a crash with MyISAM and you face a table repair process that is unreliable and slow - its run time grows with table size. Crash with InnoDB and you instead get a fast, reliable log apply/rollback that takes roughly fixed time. As the data set gets bigger this matters more and more - as it does if you want to sleep instead of being woken up in the middle of the night to fix a crashed table.

For reliability and performance, we use InnoDB for almost everything at Wikipedia - we just can't afford the downtime implied by MyISAM and CHECK TABLE on 400 GB of data when we get a crash.

The speed advantage of MyISAM is overstated IMO. LiveJournal with a mostly write environment saw a big increase in speed when it switched from MyISAM to InnoDB and they are very happy with InnoDB.

At Wikipedia we achieved some major performance gains by exploiting the way InnoDB clusters records by their primary key. That took some quite common queries from 50 seeks per result page down to 1 or 2 seeks per page. Worse, the code at that time allowed people to page back 50,000 rows using LIMIT, causing 50,000 seeks and a DoS vulnerability. The change also shrank the working set considerably, so results were more often in cache. The clustering advantage is so great that, like the write caching, it could be a dealbreaker for anything MySQL might do about alternatives to InnoDB.
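The seek difference can be illustrated with two pagination styles (hypothetical table; the principle is what matters):

```sql
-- Offset pagination: the server must walk past 50,000 entries
-- before returning anything, so the cost grows with the offset.
SELECT id, title
FROM   revisions
ORDER BY id
LIMIT  50000, 50;

-- Seek pagination: remember the last primary key shown and
-- continue from it. With InnoDB's primary-key clustering the
-- rows come from one contiguous range -- a seek or two per page.
SELECT id, title
FROM   revisions
WHERE  id > 1234567
ORDER BY id
LIMIT  50;
```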

With InnoDB, if you don't need repeatable reads, you might also switch to a less strict transaction isolation level for a particular query, since that can reduce the locking work InnoDB needs to do. That is, assuming you really do need only MyISAM's lack of guarantees.
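A sketch of that per-session isolation switch (table name hypothetical):

```sql
-- Allow dirty reads for one rough statistics query; this cuts
-- down InnoDB's locking work, at the price of possibly seeing
-- uncommitted rows -- roughly the guarantees MyISAM gives you.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM page_views WHERE view_day = CURDATE();
-- Restore the default afterwards.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```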

On the other hand, a crazy bit of code did once use SELECT ... FOR UPDATE to scan every row of a table. Switching that table to MyISAM was a quick hack until it could be fixed. The table locking of MyISAM was way faster than watching InnoDB lock every row individually.

We used to use MyISAM for fulltext, duplicating the data in the InnoDB master table. Once the query rate grew sufficiently high and the data size grew past a gigabyte or so it became completely unacceptable on performance grounds, taking more than half of our database server capacity and still not working well. We abandoned it and switched to Lucene. By that point we were in the top thousand sites on the net, so it had survived pretty well.

I'm also a MySQL Support Engineer, but these views are from my Wikipedia role, not the MySQL one.

James Day

Anonymous said...

Newbies use MyISAM for their website because "it's faster".

When you're on your test server and the only user online, sure...

And then people come and sign up, and it starts locking up all over the place.

Anonymous said...

Given my experience with BOTH innodb and myisam, I'd go with Oracle. Best storage engine under the hood. Cheers!

Reza said...

Hi Frank,
You have raised an interesting discussion.
I develop large-scale applications and use a mix of InnoDB and MyISAM.
My applications read everything from the database, even the copyright notices. I use MyISAM for that kind of data, as the SELECT queries are much faster. InnoDB works great when I use it for data such as employee information and so on.
The setup Markus mentioned - InnoDB on the master and MyISAM on the slave - is great.
For example, replicate two servers and call both primary DB servers.
Redirect all requests through a server acting as a data handler and, depending on current usage and traffic, simply route each request to one of them. You could also use priorities in the redirection algorithm.


Richard said...

I have a joining table (with some additional data) that has two foreign keys.

I've recently changed it from using a surrogate key and indexes on each of the foreign key fields, to using a compound primary key on the foreign key fields and deleting the surrogate key.

My concern, having read the comments above, is that my users really only see the results of SELECT queries. Almost all the inserts and updates happen behind the scenes via crontab etc.

There are over 3 million rows now and I'm struggling to convert the table from MyISAM to InnoDB; I'd like to see if it runs better. Since getting rid of the surrogate key and indexes, the front end of the site seems a little slower, which is obviously not ideal. :(
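For what it's worth, the conversion itself is a single statement; it rebuilds the whole table, so on millions of rows it can take a while and locks the table meanwhile (table name hypothetical):

```sql
ALTER TABLE article_links ENGINE=InnoDB;
-- Confirm the engine actually changed (ALTER TABLE can silently
-- keep the old engine if InnoDB is disabled on the server).
SHOW TABLE STATUS LIKE 'article_links';
```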

Anonymous said...

I have a DB of about 50 GB.
I use a mix of MyISAM and InnoDB storage.
We have something like 500k updates, 500k selects and 100k inserts/deletes/replaces per hour.
There are many full scans because of the updates, and I must say InnoDB doesn't really work for us: for the updates, MyISAM takes 6 minutes while InnoDB needs a couple of hours.
Maybe my InnoDB installation is just not tuned properly.

Jayant Kumar said...

Hi, I have been trying to use InnoDB. I converted a huge DB of around 14 GB from MyISAM to InnoDB and put the required indexes in place. But the machine (6 GB RAM & 2x2 CPUs) is unable to handle the incoming requests. I have a high QPS.

MyISAM used to work fine.

I have set innodb_buffer_pool_size to 2 GB and innodb_thread_concurrency to 8.

Please let me know if there is anything else I need to do to increase performance.
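A my.cnf sketch along those lines; the numbers are illustrative for a dedicated 6 GB box, not recommendations:

```ini
[mysqld]
# Give InnoDB most of the RAM on a dedicated DB server.
innodb_buffer_pool_size        = 4G
# Bigger logs mean fewer checkpoint stalls under heavy writes.
innodb_log_file_size           = 256M
# 2 = flush the log to disk about once a second instead of at
# every commit; trades up to ~1s of durability for speed.
innodb_flush_log_at_trx_commit = 2
```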

France Hsu said...

I want to store logs in the DB.
Usually I use INSERTs and SELECTs, and only DELETEs for expired logs. I need to retain the most recent 6 months of logs.
In this case, which storage engine is suitable for me?

Roebie said...

Having worked with MyISAM, InnoDB and Oracle, I would advise PostgreSQL. I've got some tables with >500,000,000 records and >90 fields each. PostgreSQL easily beats Oracle on speed (both SELECT and INSERT/UPDATE) and we have never had any reliability issue since we started with PostgreSQL 3 years ago.
Oh, by the way, even when joining a PostgreSQL table with an Oracle table in our PostgreSQL database, performance is great. Just try that the other way round...

Emerson said...

I used MyISAM instead of InnoDB for a table with more than 2 billion rows. How can we maximize the row capacity of an InnoDB table?

Ketan Thaker said...

Thank you all!

I am using both the MyISAM and InnoDB engines in my application: MyISAM for the product master table, because I need a full-text index on it, and InnoDB for the order master table, which is used by a desktop application running on a local server to download order data from the online server into the local MySQL server.

Since I am dealing with a database of about 2 GB in size, I haven't felt any speed issues yet, thanks to proper indexing on the fields.

PS: please advise if I should change anything further.

Anonymous said...


This blog is very interesting. I have an application running on InnoDB tables. I need transactions only on a few tables, which contain few rows. I use the big tables only for reading, and I have to do a full load to fill them (the data comes from a flat file).
My problem is that the load takes a lot of time, and so does emptying the tables.
The performance of the application is OK but could be better. Is there any difference between InnoDB and MyISAM when loading a table? Regards, Joern
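On the InnoDB side, bulk loads and bulk deletes both have cheaper forms; a sketch (file path and table name hypothetical):

```sql
-- Load in one big transaction and skip per-row checks while
-- the data is known to be clean; restore the settings after.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
LOAD DATA INFILE '/tmp/facts.txt' INTO TABLE facts;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;

-- Emptying a table: TRUNCATE drops and recreates it, which is
-- far faster than DELETE's row-by-row work.
TRUNCATE TABLE facts;
```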

Anonymous said...

I added some of the stuff I learned from this page to the sysadmin wiki, if you don't mind:

Tony Murphy said...

Good quality info here. I use InnoDB mainly because of the need for foreign key constraints, and it was interesting to see that the repair process is better/quicker with InnoDB.


Andrew said...

I can't imagine a project that doesn't require foreign key constraints.

Any project I have been involved in has had at least 2 tables and the data in each has always related to data in the others somehow.

How can you get full-text indexing in a table that uses foreign keys?