Proper indexing can work wonders for query efficiency in large databases; it's a trade-off of disk storage in exchange for lookup speed. For databases with millions of records, an index takes a considerable amount of disk space and an AGONIZING length of time to create.
I recently compiled a database with one table, mytable, consisting of over 4 million records. Since I want to be able to look up rows by certain columns, I need to index one or two of them.
ALTER TABLE mytable ADD INDEX (category);
Here category is one of the columns by which every record in mytable references a row in another table.
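For context, here is a minimal sketch of the kind of schema and query this index speeds up. The second table and all column names besides category are hypothetical, just for illustration:

```sql
-- Hypothetical lookup table that mytable.category refers to.
CREATE TABLE categories (
    id   INT PRIMARY KEY,
    name VARCHAR(64)
);

-- Without an index on mytable.category, this join has to scan
-- all 4 million rows; with the index, MySQL can look rows up
-- by category value directly.
SELECT c.name, COUNT(*)
FROM mytable m
JOIN categories c ON c.id = m.category
GROUP BY c.name;
```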
You know how long this short query took me?
4 hours. Much longer than the unique index I had created just before it, on the same table.
So there you go: building an ordinary index takes much, much longer than building a unique index or a primary key.
But that's not all of today's tip.
The tip is: before you start inserting records one by one into a table that will end up with a huge number of rows, add the ordinary index on that column first.
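A minimal sketch of declaring the index at table-creation time, so it is maintained incrementally as rows arrive rather than built in one long pass afterwards (the column names and types other than category are assumptions for illustration):

```sql
-- Declare the ordinary index up front, in the CREATE TABLE itself.
CREATE TABLE mytable (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    category INT NOT NULL,
    payload  VARCHAR(255),
    INDEX (category)
);

-- Each insert now updates the index as it goes, instead of
-- leaving a 4-hour ALTER TABLE for later.
INSERT INTO mytable (category, payload) VALUES (42, 'example row');
```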
This is not strictly better than indexing after the table is fully populated: with a pre-existing index, every insert takes a bit longer, because MySQL has to update the index each time it writes a new row into the table.
But in terms of programmer experience, it's much better than waiting 4 hours hoping the indexing will succeed and fearing it won't. I'd rather spread the extra required time across each insertion of the records.