How to update multiple records at once in SQL Server 2005 using XML
A method for updating multiple records in a single call in SQL Server 2005 by passing XML; if you need to update several records at once, read on.
I suspect many people know that in Oracle you can pass an array (such as int[]) into a stored procedure, which means you can send multiple records to the database and update them together, cutting down the number of round trips. But what about SQL Server? BULK INSERT is well known, and I know about it too, but I have never actually used it; it only comes up when importing data, and for that DTS is more convenient anyway.
In a project I am working on, several features need to update N records at a time. Fortunately, SQL Server provides a new feature for exactly this: the XML data type (SQL Server 2000 apparently does not have it).
Let's assume a requirement like this: a user updates a book, and at the same time N chapters need to be updated.
The usual approach is to update the book first, then loop over the chapters and hit the chapter table N times. You can imagine how that performs.
So let's try XML instead.
The stored procedure that updates via XML
The code is as follows:
CREATE PROCEDURE UP_Book_Insert
(
    @BookId INT,
    @ChapterXml XML
)
AS
BEGIN
    CREATE TABLE #table
    (
        ChapterId INT,
        ChapterName VARCHAR(255),
        Price INT
    );

    INSERT INTO #table
    SELECT *
    FROM (
        SELECT X.C.value('Id[1]', 'int') AS ChapterId,
               X.C.value('Name[1]', 'varchar(255)') AS ChapterName,
               X.C.value('Price[1]', 'int') AS Price
        FROM @ChapterXml.nodes('Chapter') AS X(C) -- note: the X(C) table and column alias is required here
    ) t;

    INSERT INTO tbChapter(BookId, ChapterId, ChapterName, Price)
    SELECT @BookId, ChapterId, ChapterName, Price FROM #table;
END
In fact, the temp table inside the stored procedure could be dropped entirely.
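As a rough sketch of that variant (UP_Book_Insert_NoTemp is just an illustrative name; the table and XML layout are the same as above), the procedure could insert straight from the XML parameter:

CREATE PROCEDURE UP_Book_Insert_NoTemp
(
    @BookId INT,
    @ChapterXml XML
)
AS
BEGIN
    -- insert directly from the parsed XML, no #table needed
    INSERT INTO tbChapter(BookId, ChapterId, ChapterName, Price)
    SELECT @BookId,
           X.C.value('Id[1]', 'int'),
           X.C.value('Name[1]', 'varchar(255)'),
           X.C.value('Price[1]', 'int')
    FROM @ChapterXml.nodes('Chapter') AS X(C);
END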
Now let's run it and see.
Executing the stored procedure
The code is as follows (the chapter values here are sample data):
exec UP_Book_Insert 10000, '<Chapter><Id>1</Id><Name>Chapter 1</Name><Price>10</Price></Chapter><Chapter><Id>2</Id><Name>Chapter 2</Name><Price>20</Price></Chapter>'
How about that? Not bad, right? All you have to do is parse the XML inside the stored procedure.
On the C# side, the XML can simply be passed to the command as a DbType.String parameter; see the call sketch after the helper function below.
Next, write a function that builds the XML string.
The function that generates the XML string
The code is as follows (ChapterInfo is assumed to expose Id, Name and Price properties matching the XML elements):
// requires: using System.Text; using System.Collections.Generic;
public static string FormatXmlInfo(List<ChapterInfo> list)
{
    if (list == null || list.Count == 0)
    {
        return String.Empty;
    }
    StringBuilder sb = new StringBuilder();
    foreach (ChapterInfo info in list)
    {
        // One <Chapter> element per record, matching what the stored procedure parses.
        // Note: Name is not XML-escaped here; escape it if chapter names may contain <, > or &.
        sb.AppendFormat("<Chapter><Id>{0}</Id><Name>{1}</Name><Price>{2}</Price></Chapter>",
            info.Id, info.Name, info.Price);
    }
    return sb.ToString();
}
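To tie the two pieces together, here is a minimal call sketch, assuming ADO.NET with System.Data.SqlClient and that this method sits in the same helper class as FormatXmlInfo; SaveBook and connectionString are illustrative names, not part of the original code:

// requires: using System.Data; using System.Data.SqlClient; using System.Collections.Generic;
// Illustrative sketch: sends the generated XML string to UP_Book_Insert.
public static void SaveBook(int bookId, List<ChapterInfo> chapters, string connectionString)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("UP_Book_Insert", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@BookId", bookId);

        // The XML travels as a plain string (DbType.String);
        // SQL Server converts it to the XML parameter type on arrival.
        SqlParameter xmlParam = cmd.CreateParameter();
        xmlParam.ParameterName = "@ChapterXml";
        xmlParam.DbType = DbType.String;
        xmlParam.Value = FormatXmlInfo(chapters);
        cmd.Parameters.Add(xmlParam);

        conn.Open();
        cmd.ExecuteNonQuery();
    }
}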
OK, that's it.
I have not benchmarked it in detail yet, but one thing is certain: it beats making multiple round trips to the database, or looping over and splitting a delimited string inside the stored procedure.
