Which is faster for millions of records: Permanent Table or Temp Tables?
I have to use it only for 15 million records. After processing is complete, we delete the table.
I personally would use a permanent table and truncate it before each use. In my experience it is easier to understand/maintain. However, my best advice to you is to try both and see which one performs better.
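For example, a minimal sketch of the truncate-and-reload pattern (the table and column names here are hypothetical):

    TRUNCATE TABLE dbo.StagingRecords;            -- fast, minimally logged reset before each run

    INSERT INTO dbo.StagingRecords (Id, Payload)
    SELECT Id, Payload
    FROM dbo.SourceRecords;                       -- or BULK INSERT / bcp if the data comes from a file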
A permanent table is faster if the table structure stays 100% the same, since there's no overhead for allocating space and building the table each time.
A temp table is faster in certain cases (e.g. when you don't need the indexes that exist on the permanent table, which would slow down inserts/updates).
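As an illustration, a minimal sketch of loading into an unindexed temp table (names hypothetical):

    CREATE TABLE #Records (Id INT, Payload VARCHAR(200));    -- heap: no indexes to slow the load

    INSERT INTO #Records (Id, Payload)
    SELECT Id, Payload
    FROM dbo.SourceRecords;

    CREATE CLUSTERED INDEX IX_Records_Id ON #Records (Id);  -- add indexes only after the load, if later queries need them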
Temp tables live in tempdb and, when small enough, may be served entirely from memory, so in theory they should be REALLY fast. In practice, though, they usually aren't. As a rule of thumb, try to stay away from temp tables unless they're the only solution. Can you give us some more information about what you're trying to do? It could probably be done with a derived query, as in the sketch below.
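For instance, an aggregation that might tempt you into a temp table can often be written as a single statement with a derived table (all names hypothetical):

    SELECT d.CustomerId, d.Total
    FROM (SELECT CustomerId, SUM(Amount) AS Total
          FROM dbo.Orders
          GROUP BY CustomerId) AS d
    WHERE d.Total > 1000;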
In your situation we use a permanent table called a staging table. This is a common method with large imports. In fact, we generally use two staging tables: one with the raw data and one with the cleaned-up data, which makes researching issues with the feed much easier (the issues are almost always the result of new and varied ways our clients find to send us junk data, but we have to be able to prove that). You also avoid problems like having to grow tempdb, or making other users who need tempdb wait while it grows for you.
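A minimal sketch of the two-stage staging pattern (all names hypothetical):

    -- stage 1: raw feed, everything as text so no row is rejected on load
    CREATE TABLE dbo.StagingRaw   (RowId INT IDENTITY(1,1), CustomerId VARCHAR(50), Amount VARCHAR(50));

    -- stage 2: cleaned and typed
    CREATE TABLE dbo.StagingClean (CustomerId INT, Amount DECIMAL(18,2));

    INSERT INTO dbo.StagingClean (CustomerId, Amount)
    SELECT CAST(CustomerId AS INT), CAST(Amount AS DECIMAL(18,2))
    FROM dbo.StagingRaw
    WHERE ISNUMERIC(CustomerId) = 1
      AND ISNUMERIC(Amount)     = 1;   -- junk rows stay behind in StagingRaw for research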
You can also use SSIS and skip the staging table(s), but I find the ability to go back and research without having to reload a 50,000,000-row table very helpful.
It depends.
Temp tables are stored in the tempdb database, which may or may not be on a different drive than your actual database. So a lot depends on a) the speed of those drives and b) which databases/files share the same drive. (For example, your actual database will be faster if the database files and log files are on different physical drives.)
If you use an availability solution like Database Mirroring, temp tables are probably faster:
At work, we are using synchronous Database Mirroring, which means that if we write to our database, the data is immediately written to the mirror server as well, and the main server waits for the mirror's confirmation before returning to the caller(!).
So if you insert 15 million records into a table, process them (probably involving some big updates on all of them) and delete them afterwards, SQL Server has to propagate all these changes immediately over the network to the mirror server.
On the other hand, doing this in a temp table will stay local on the server, in the tempdb database.
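A minimal sketch of keeping the heavy intermediate work local in tempdb (names hypothetical):

    SELECT Id, Payload
    INTO #Work                          -- created in tempdb, which is not mirrored
    FROM dbo.SourceRecords;

    UPDATE #Work
    SET Payload = UPPER(Payload);       -- big intermediate updates stay on the local server

    INSERT INTO dbo.FinalResults (Id, Payload)
    SELECT Id, Payload FROM #Work;      -- only the final result is propagated to the mirror

    DROP TABLE #Work;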
A permanent table is faster than a temp table in most cases.
Have a look at: http://www.sql-server-performance.com/articles/per/derived_temp_tables_p1.aspx