I'm trying to find out how to improve insert performance into a SQL Server temporary table using C#. People say I should use SqlBulkCopy, but I must be doing something wrong, because it seems much slower than just building a SQL insert string.
My code that creates the table using SqlBulkCopy is below:
    public void MakeTable(string tableName, List<string> ids, SqlConnection connection)
    {
        SqlCommand cmd = new SqlCommand("CREATE TABLE ##" + tableName + " (ID int)", connection);
        cmd.ExecuteNonQuery();

        DataTable localTempTable = new DataTable(tableName);

        DataColumn id = new DataColumn();
        id.DataType = System.Type.GetType("System.Int32");
        id.ColumnName = "ID";
        localTempTable.Columns.Add(id);

        foreach (var item in ids)
        {
            DataRow row = localTempTable.NewRow();
            row[0] = item;
            localTempTable.Rows.Add(row);
            localTempTable.AcceptChanges();
        }

        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "##" + tableName;
            bulkCopy.WriteToServer(localTempTable);
        }
    }
Done this way, my inserts take a very long time to run. I got the inserts working faster a different way:
I build the insert statements as a string and join it onto my SQL temp-table creation statement.
Building the insert string:
    public string prepareInserts(string tableName, List<string> ids)
    {
        List<string> inserts = new List<string>();

        var total = ids.Select(p => p).Count();
        var size = 1000;
        var insert = 1;
        var skip = size * (insert - 1);
        var canPage = skip < total;

        while (canPage)
        {
            inserts.Add(" insert into ##" + tableName + @" (ID) values "
                + String.Join(",", ids.Select(p => string.Format("({0})", p))
                    .Skip(skip)
                    .Take(size)
                    .ToArray()));
            insert++;
            skip = size * (insert - 1);
            canPage = skip < total;
        }

        string joinedInserts = String.Join("\r\n", inserts.ToArray());

        return joinedInserts;
    }
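For context, the fixed batch size of 1000 matters because SQL Server rejects a single `VALUES` list with more than 1000 row value constructors. The paging logic above can be sketched more compactly; this is a behavior-equivalent rewrite (names are mine, not from the original), not a tested drop-in:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class InsertBuilder
{
    // Same idea as prepareInserts: emit one INSERT per batch of at most
    // 1000 rows, because SQL Server caps a VALUES list at 1000 rows.
    public static string PrepareInserts(string tableName, List<string> ids)
    {
        const int batchSize = 1000;
        var batches = new List<string>();

        for (int skip = 0; skip < ids.Count; skip += batchSize)
        {
            var values = string.Join(",",
                ids.Skip(skip).Take(batchSize).Select(p => "(" + p + ")"));
            batches.Add("insert into ##" + tableName + " (ID) values " + values);
        }

        return string.Join("\r\n", batches);
    }
}
```

Note that either version concatenates the `ids` values straight into SQL text, so it is only safe if the ids are known to be integers.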
After building the queries, I use them in my SQL statement:
    inserts = prepareInserts(tableName, ids);

    var query = @"IF EXISTS
                    (
                        SELECT *
                        FROM tempdb.dbo.sysobjects
                        WHERE ID = OBJECT_ID(N'tempdb..##" + tableName + @"')
                    )
                    BEGIN
                        DELETE FROM ##" + tableName + @"
                    END
                    ELSE
                    BEGIN
                        CREATE TABLE ##" + tableName + @" (ID int)
                    END " + inserts;

    var command = new SqlCommand(query, sqlConnection);
    ...
Since I've seen people telling me (on Stack Exchange: https://dba.stackexchange.com/questions/44217/fastest-way-to-insert-30-thousand-rows-in-sql-server/44222?noredirect=1#comment78137_44222) that I should use SqlBulkCopy and that it would be faster, I believe I should improve the way I do it. So if anyone can suggest how to improve my SqlBulkCopy code, or tell me if there is a better insert statement that can improve my application's performance, that would be great.
Your problem may be in localTempTable.AcceptChanges(), since it commits your changes on every iteration of the loop. If you do the following instead, I think it will run faster:
    foreach (var item in ids)
    {
        DataRow row = localTempTable.NewRow();
        row[0] = item;
        localTempTable.Rows.Add(row);
    }

    localTempTable.AcceptChanges();

    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "##" + tableName;
        bulkCopy.WriteToServer(localTempTable);
    }
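Beyond moving AcceptChanges out of the loop, SqlBulkCopy itself exposes a couple of knobs that often help with larger loads. This is only a sketch of the same bulk copy with common tuning options; the option values are illustrative and not measured against your workload:

```csharp
// Same bulk copy as above, with two common performance options:
// - SqlBulkCopyOptions.TableLock takes a bulk update lock instead of
//   row locks, which usually speeds up large loads.
// - BatchSize controls how many rows go per round trip to the server.
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
    connection, SqlBulkCopyOptions.TableLock, null))
{
    bulkCopy.DestinationTableName = "##" + tableName;
    bulkCopy.BatchSize = 5000;     // illustrative; tune for your data
    bulkCopy.BulkCopyTimeout = 0;  // no timeout for long-running loads
    bulkCopy.WriteToServer(localTempTable);
}
```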
From MSDN - DataSet.AcceptChanges
Commits all the changes made to this DataSet since it was loaded or since the last time AcceptChanges was called.
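To see concretely what that commit does, here is a minimal standalone sketch (not from the original post) showing how AcceptChanges changes the row state. The bulk copy works the same either way, which is why calling it once after the loop, rather than once per row, is enough:

```csharp
using System;
using System.Data;

class Demo
{
    static void Main()
    {
        var table = new DataTable("demo");
        table.Columns.Add("ID", typeof(int));

        DataRow row = table.NewRow();
        row[0] = 1;
        table.Rows.Add(row);
        Console.WriteLine(row.RowState);  // prints "Added"

        // One commit after all rows are added is sufficient.
        table.AcceptChanges();
        Console.WriteLine(row.RowState);  // prints "Unchanged"
    }
}
```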