I'm not an expert, but I've made my own mistakes and run my own trials along the way, so here I am. These are the tips I believe you should follow when optimizing MySQL database performance. Some habits may feel very handy, yet still be bad for performance.
- Normalize first, and de-normalize where appropriate. Table joins are not performance killers when the joined columns are indexed. Use them!
- Choose the appropriate charset / collation. UTF-16 can encode virtually any character, but it needs at least twice the storage. UTF-8 covers far more characters than latin1 but is slower to compare and sort. Unless it's necessary, use latin1.
- utf8_general_ci is slightly faster than utf8_unicode_ci (at the cost of less accurate sorting for some languages), thus better for performance.
- Use NOT NULL wherever possible.
- Use indexes. Create indexes by analyzing the SELECTs you are doing on the table.
- Indexing increases SELECT performance, yet decreases INSERT and UPDATE performance. So don't index everything, only what's necessary, and don't create duplicate indexes.
- Use the smallest data types possible.
- IN (…) with a long value list or a subquery can perform very badly. Avoid it wherever possible.
- ORDER BY RAND() is a performance killer too; it sorts the whole table just to pick a few rows.
- When you would SELECT … first just to make sure a unique key value doesn't already exist before inserting, use INSERT … ON DUPLICATE KEY UPDATE … instead.
- If your app writes a lot, use InnoDB; if it mostly reads, use MyISAM.
- Never use LIMIT offset, count on large tables when the offset is a large number; MySQL still reads and throws away all the skipped rows. Instead, use a stricter WHERE … clause with LIMIT count.
- If a feature is deprecated, stop using it.
- Avoid the wildcards % or _ at the beginning of LIKE patterns, e.g. LIKE '%bloom'; a leading wildcard prevents MySQL from using an index. For instance, store email addresses reversed (with REVERSE()) so searching by domain becomes a fast prefix match.
- If you have a large set of data rows to be inserted at once, build one multi-row INSERT statement in PHP first, that is, bulk insert them in batches rather than one by one. LOAD DATA INFILE is an even better option than individual INSERTs.
- Prefer GROUP BY over DISTINCT.
- Check the slow query log frequently.
- Avoid calculated values in WHERE …, LIMIT …, ORDER BY …, etc.
- Use EXPLAIN … to see how MySQL executes your queries so you can optimize them better. For example, it shows which indexes are used, hinting at the ones you should have created.
- Don't SELECT * on everything; SELECT only the columns you need.
- Be wary of lots of small queries that may be triggered in loops. Instead, combine them and use one large query wherever possible.
- Know the pros and cons of different storage engines and use them accordingly. But try them out in the test environment before making the decision.
- Archive old data such as logs in ARCHIVE tables or MERGE tables.
- The BLACKHOLE engine accepts writes but discards the data locally (while still recording it in the binary log), which makes it a very fast sink for busy write streams like logs that you only need on replicas.
- Increase the default key buffer size: key_buffer = 128M
- Increase the default number of tables that can be kept open: table_cache = 128
- Increase the default sort buffer size:
sort_buffer_size = 32M
myisam_sort_buffer_size = 32M
- Disable binary logging (used for data replication) if you don't need it, by commenting out the line: # log-bin=mysql-bin
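The INSERT … ON DUPLICATE KEY UPDATE tip can be sketched as the statement you would send from application code. A minimal Python sketch, where the counters table and its name / hits columns are hypothetical names for illustration; the SQL is MySQL syntax and is only built here, not executed:

```python
# Instead of SELECT-ing first to check whether a unique key value exists
# and then choosing between INSERT and UPDATE, send one atomic statement.
# `counters` is assumed to have a unique index on `name`.
sql = (
    "INSERT INTO counters (name, hits) VALUES (%s, 1) "
    "ON DUPLICATE KEY UPDATE hits = hits + 1"
)
params = ("homepage",)  # passed to the driver, e.g. cursor.execute(sql, params)
```

This also avoids the race window between the SELECT and the INSERT that two concurrent clients could fall into.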
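The large-offset LIMIT tip amounts to keyset pagination: filter on the last seen key instead of skipping rows. A small runnable sketch, using SQLite as a stand-in for MySQL; the posts table and page size are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 101)])

def page_after(last_id, page_size=10):
    # Instead of LIMIT 50, 10 (which reads and discards 50 rows),
    # filter on the primary key and let the index seek directly.
    return conn.execute(
        "SELECT id, title FROM posts WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

first = page_after(0)               # ids 1..10
second = page_after(first[-1][0])   # ids 11..20
```

The client remembers the last id it saw and passes it back for the next page, so the cost per page stays constant no matter how deep you paginate.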
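The reversed-email trick can be sketched as follows: store email[::-1] (REVERSE() in MySQL) so a search by domain becomes an index-friendly prefix match instead of a leading-wildcard LIKE. SQLite stands in for MySQL here, and the users table and addresses are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email_rev TEXT)")
emails = ["alice@example.com", "bob@example.com", "carol@other.org"]
# Store the reversed address; in MySQL you would insert REVERSE(email)
# and index the column.
conn.executemany("INSERT INTO users VALUES (?)", [(e[::-1],) for e in emails])

# To find everyone at example.com, reverse the domain and prefix-match:
prefix = "@example.com"[::-1]  # 'moc.elpmaxe@'
rows = conn.execute("SELECT email_rev FROM users WHERE email_rev LIKE ?",
                    (prefix + "%",)).fetchall()
found = sorted(r[0][::-1] for r in rows)
```

Because the wildcard is now at the end of the pattern, an index on the reversed column can be used.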
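The bulk-insert tip is done in PHP in the article; the same idea sketched in Python, building one multi-row INSERT while keeping driver placeholders so values are still escaped. The items table and columns are made up for illustration:

```python
# Build one multi-row INSERT instead of many single-row statements.
rows = [(1, "a"), (2, "b"), (3, "c")]
placeholders = ", ".join(["(%s, %s)"] * len(rows))
sql = f"INSERT INTO items (id, name) VALUES {placeholders}"
params = [v for row in rows for v in row]  # flattened to match placeholders
```

In practice you would cap the batch size (a few hundred to a few thousand rows) so the statement stays under max_allowed_packet.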
What to read from here:
- Indexing: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
- "LIMIT" Optimization: http://dev.mysql.com/doc/refman/5.0/en/limit-optimization.html
- "ORDER BY" Optimization: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
- Efficient Paging: http://www.slideshare.net/suratbhati/efficient-pagination-using-mysql-6187107
- Configuration Tweaks: http://docs.cellblue.nl/2007/03/17/easy-mysql-performance-tweaks/