I ran into an interesting SQL problem this week. My goal was to write a short C# program that took the data in an existing SQL table, processed that data in memory, and then wrote the new data to a new table in the SQL database. I tried several different techniques, and found several that worked, but discovered only one technique that was “good” in the sense that it was fast enough to be practical for large amounts of data.

The first step was to read data from a SQL table into memory. No problem. I used a SqlConnection, and a SqlDataAdapter with a SELECT SqlCommand, to read the SQL data into a DataSet object.

The second step was to create the destination SQL table. Again, no problem, using a SqlCommand with “CREATE TABLE . . .”

The third step was to process the existing data and upload the new data to the new table. Problem. One of my early attempts iterated through the DataTable object in the DataSet one row at a time, doing my processing (removing some data, combining some data, and so on), and then calling an INSERT SqlCommand for each row. This worked, but it was very slow because each INSERT is its own SQL transaction and each is performed one by one. A second attempt iterated through the DataSet/DataTable and built up a second, results SqlDataAdapter and DataSet/DataTable, and then called the Update() method of the SqlDataAdapter. This too was slow.

The best solution was to instantiate a SqlBulkCopy object and call its WriteToServer() method, which sends all of the rows to the server in a single batched operation. My performance improved dramatically: from roughly 100 seconds to about 9 seconds on a test data set.

All in all, a fun and interesting problem.
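The overall pattern can be sketched roughly as below. This is a minimal sketch, not my exact program: the connection string, table names, column schema, and the trivial "processing" step are placeholders I've invented for illustration, and real code would need error handling and a schema matching the actual source table.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class BulkCopyDemo
{
    static void Main()
    {
        // Placeholder connection string -- substitute your own server/database.
        string connStr = "Server=(local);Database=DemoDB;Integrated Security=true;";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Step 1: read the source table into a DataSet with a SqlDataAdapter.
            var adapter = new SqlDataAdapter("SELECT Id, Val FROM dbo.SourceTable", conn);
            var ds = new DataSet();
            adapter.Fill(ds, "SourceTable");

            // Step 2: create the destination table with a CREATE TABLE command.
            var create = new SqlCommand(
                "CREATE TABLE dbo.ResultTable (Id INT, Val NVARCHAR(100))", conn);
            create.ExecuteNonQuery();

            // Step 3: process the rows in memory into a results DataTable.
            var result = new DataTable("ResultTable");
            result.Columns.Add("Id", typeof(int));
            result.Columns.Add("Val", typeof(string));
            foreach (DataRow row in ds.Tables["SourceTable"].Rows)
            {
                // Stand-in for the real processing (removing, combining, etc.).
                result.Rows.Add(row["Id"], ((string)row["Val"]).Trim());
            }

            // Step 4: upload all result rows in one batched operation,
            // instead of one INSERT transaction per row.
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.ResultTable";
                bulk.WriteToServer(result);
            }
        }
    }
}
```

The key difference from the row-by-row INSERT approach is that WriteToServer() streams the entire DataTable to the server at once, which is what accounted for the large speedup.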