On Friday 14 June 2002 05:30 pm, Alex Daniloff wrote:
Hello SuSE folkz,
Could somebody experienced with SQL databases answer this question? Assume that all tables in the DB have the same number of columns, e.g. 10. What is better for DB performance: a few bigger/longer tables, or many smaller/shorter tables? Thanks in advance for any responses.
Alex
Certainly, tables with many entries can take longer to search (table scans), but this is resolved by indexing the table. Indexes are usually created on primary keys, and you can add indexes on columns that you expect to use frequently as search criteria; that will significantly improve SELECT performance.

If you break up your tables by size, what criterion will you use? Your application will have to know that criterion. What happens as new data is added and the tables grow? Will you create new tables automatically? It is better to design the application with a uniform data model than to break up the data model; let the database handle the rest. Extremely large data sets might be a different story, but in my tests I have gotten good results from MySQL with tables of over 1 million rows.

hth,
-ronc

--
7:13am up 10 days, 22:16, 1 user, load average: 0.08, 0.06, 0.01
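To make the indexing point concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for MySQL (the table, column, and index names are made up for illustration). It builds one uniform table, adds an index on the column used as search criteria, and asks the query planner how it will execute a SELECT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One uniform table, rather than many small tables split by some criterion.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("cust%d" % (i % 100), i * 1.5) for i in range(10000)],
)

# Index the column that will be used frequently as search criteria.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# EXPLAIN QUERY PLAN shows whether the planner scans the whole table
# or searches via the index; the last field is a human-readable detail.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust42'"
).fetchone()
print(plan[-1])  # the detail string names idx_orders_customer
```

Without the CREATE INDEX line, the same query plan reports a full scan of the table, which is the behavior that gets slow as the table grows.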