Go4Expert (http://www.go4expert.com/)
-   MySQL (http://www.go4expert.com/forums/mysql-forum/)
-   -   efficient way to transfer 18MB of data (http://www.go4expert.com/forums/efficient-transfer-18mb-data-t579/)

rvijay17 8Feb2006 23:08

efficient way to transfer 18MB of data

I have an application where I get information about products everyday.
The file's size is about 18MB. Everyday, products may be added, updated
or deleted.

Which is the more efficient way to update the database?
1. Delete all rows and then insert the data again.
2. Programmatically check which products have changed and update only those.

I have to use PHP. Each line in the file describes one product, and the
fields are tab-separated. Any field may change, so I would have to check
each and every field, which brings up another problem: all that data has
to be transferred in and out of the database server.

So I wanted to know which of the two methods is more efficient.


coderzone 9Feb2006 04:49

Re: efficient way to transfer 18MB of data
The debate over which is better, deleting all rows and inserting fresh data or finding which rows changed and updating only those, can go on forever. Here are some pros and cons of each method; after that, it is up to you to decide which trade-offs you can live with and pick accordingly.

1. Deleting all the rows.
  1. No programmatic checking of which data has changed, and no SELECT statements.
  2. All the data is momentarily deleted, so the database is in an inconsistent state for a short time; if something fails between the delete and the re-insert, a lot of data is lost.
  3. This looks faster, but since all the contents of the table are deleted and re-inserted, it can actually take a lot of time.
  4. Since all the data is deleted, the table may also have to be dropped (or truncated) to reset the auto-increment counters; otherwise their values just keep growing.
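A minimal sketch of the full-reload approach, with the delete and re-insert wrapped in a single transaction to shrink the unstable window described above. Python's built-in sqlite3 stands in for MySQL here (the table name, columns, and sample rows are made up for illustration); in PHP you would issue the same SQL through mysqli or PDO:

```python
import sqlite3

# In-memory SQLite stands in for MySQL. Wrapping the DELETE and the
# re-insert in one transaction means readers see either the old data
# or the new data, never an empty table, and a failure mid-way rolls
# everything back instead of losing data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES ('A1', 'widget', 9.99)")
conn.commit()

# Rows parsed from today's tab-separated file (hypothetical sample).
new_rows = [("A1", "widget", 10.49), ("B2", "gadget", 3.00)]

with conn:  # implicit BEGIN ... COMMIT; rolls back on any exception
    conn.execute("DELETE FROM products")
    conn.executemany("INSERT INTO products VALUES (?, ?, ?)", new_rows)

print(conn.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 2
```

Note that in MySQL, TRUNCATE TABLE is faster than DELETE and resets the auto-increment counter, but it cannot be rolled back inside a transaction, which is the trade-off point 2 above warns about.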

2. Updating selected rows.
  1. Fewer update/delete operations on the database, so it stays more consistent.
  2. More code to determine which rows have changed, hence more chances of bugs, though thorough testing can keep this under control.
  3. Each and every row must be retrieved and compared with the file contents, adding overhead.
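One way to avoid retrieving every row from the database (point 3 above) is to keep yesterday's file and diff it against today's, so only the changed products ever touch the server. A sketch, assuming the first tab-separated field is the product key (the function name and sample lines are made up for illustration; the same logic carries over to PHP):

```python
def diff_products(old_lines, new_lines):
    """Compare yesterday's and today's tab-separated dumps.

    Assumes the first field of each line is the product key. Returns
    the lines to INSERT, the lines to UPDATE, and the keys to DELETE,
    so only changed products generate database traffic.
    """
    old = {line.split("\t", 1)[0]: line for line in old_lines}
    new = {line.split("\t", 1)[0]: line for line in new_lines}
    inserts = [new[k] for k in new.keys() - old.keys()]
    deletes = sorted(old.keys() - new.keys())
    updates = [new[k] for k in new.keys() & old.keys() if new[k] != old[k]]
    return inserts, updates, deletes

# Hypothetical two-day sample: A1's price changed, B2 was removed,
# C3 is new.
yesterday = ["A1\twidget\t9.99", "B2\tgadget\t3.00"]
today     = ["A1\twidget\t10.49", "C3\tdoohickey\t1.25"]
ins, upd, dele = diff_products(yesterday, today)
```

An 18 MB file easily fits in memory as two dictionaries, and comparing whole lines as strings sidesteps checking each field individually.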

As the table grows, option 2 becomes the better choice, so the conclusion that comes to my mind is 2 (updating selected rows). Still, in your particular situation option 1 (deleting all the rows) might win out, so you are in the best position to decide.

I hope this solves your query or at least helps you in that direction.
