Adding auto-increment to an existing PK
The way I understand your question is that you have an existing table with a column that has until now been populated with manually assigned values, and now you want to (1) make this column an `IDENTITY` column, and (2) make sure the `IDENTITY` continues from the highest value already present in the existing rows.
First off, some test data to play with:
CREATE TABLE dbo.ident_test (
id int NOT NULL,
xyz varchar(10) NOT NULL,
CONSTRAINT PK_ident_test PRIMARY KEY CLUSTERED (id)
);
INSERT INTO dbo.ident_test (id, xyz)
VALUES (1, 'test'),
(2, 'test'),
(5, 'test'),
(6, 'test'),
(10, 'test'),
(18, 'test'),
(19, 'test'),
(20, 'test');
The goal is to make the table's primary key column, `id`, an `IDENTITY` column that will start at 21 for the next record that gets inserted. For this example, the column `xyz` represents all of the table's other columns.
Before you do anything, please read the warnings at the bottom of this post.
Then, open a transaction so the whole operation can be undone if something goes wrong:
BEGIN TRANSACTION;
Now, let's add a temporary work column, `id_temp`, and copy the existing `id` values into it:
ALTER TABLE dbo.ident_test ADD id_temp int NULL;
UPDATE dbo.ident_test SET id_temp=id;
Next, we need to drop the existing `id` column (you can't just "add" the `IDENTITY` property to an existing column; the column has to be created as an `IDENTITY`). The primary key constraint has to be dropped first, because it depends on the column.
ALTER TABLE dbo.ident_test DROP CONSTRAINT PK_ident_test;
ALTER TABLE dbo.ident_test DROP COLUMN id;
... and add the column again, this time as an `IDENTITY`, along with the primary key:
ALTER TABLE dbo.ident_test ADD id int IDENTITY(1, 1) NOT NULL;
ALTER TABLE dbo.ident_test ADD CONSTRAINT PK_ident_test PRIMARY KEY CLUSTERED (id);
Here's where it gets interesting. You can enable `IDENTITY_INSERT` on the table, which means that you can manually define the values of an `IDENTITY` column when you're inserting new rows (not updating existing rows, though).
SET IDENTITY_INSERT dbo.ident_test ON;
With that set, `DELETE` all the rows in the table, and use an `OUTPUT` clause to insert the deleted rows straight back into the very same table, this time with explicit values for the `id` column taken from the backup column:
DELETE FROM dbo.ident_test
OUTPUT deleted.id_temp AS id, deleted.xyz
INTO dbo.ident_test (id, xyz);
Once done, turn `IDENTITY_INSERT` back off again.
SET IDENTITY_INSERT dbo.ident_test OFF;
Drop the temporary column that we added:
ALTER TABLE dbo.ident_test DROP COLUMN id_temp;
And finally, reseed the `IDENTITY` column, so the next record's `id` will resume after the highest existing number in the `id` column:
DECLARE @maxid int;
SELECT @maxid=MAX(id) FROM dbo.ident_test;
DBCC CHECKIDENT ('dbo.ident_test', RESEED, @maxid);
Checking the example table, the highest `id` number is 20.
SELECT * FROM dbo.ident_test;
Add another row and check its new `IDENTITY`:
INSERT INTO dbo.ident_test (xyz) VALUES ('New row');
SELECT * FROM dbo.ident_test;
In the example, the new row will have `id=21`. Finally, if you're happy, commit the transaction:
COMMIT TRANSACTION;
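If, on the other hand, something went wrong along the way, the open transaction lets you undo all of the changes instead:
ROLLBACK TRANSACTION;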
**Important**
This is not a trivial operation, and it carries quite a few risks that you should be aware of.
* Do this in a dedicated test environment. Have backups. :)
* I like to use `BEGIN/COMMIT TRANSACTION` because it prevents other processes from messing with the table while you're in the middle of changing it, and it lets you roll everything back if something goes wrong. However, any other process that tries to access the table before you've committed your transaction will end up waiting, which can be pretty bad if you have a large table and/or are in a production environment.
* `OUTPUT .. INTO` won't work if your target table has foreign key constraints or any of a number of other features that I can't remember off the top of my head. You could instead off-load the data into a temporary table and then insert it back into the original table (a sketch of that approach follows this list). You might also be able to use partition switching (even if you don't use partitions).
* Run these statements one by one, not as a batch or in a stored procedure.
* Try to think of other things that may depend on the `id` column that you're dropping and re-creating. Any indexes will have to be dropped and re-created (like we did with the primary key). Remember to script every index and constraint that you will need to recreate beforehand.
* Disable any `INSERT` and `DELETE` triggers on the table before you start, and re-enable them afterwards (see the second sketch after this list).
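Here is a rough sketch of the temporary-table alternative mentioned above. The temp table name `#ident_backup` is made up for this example, and the sketch assumes the same `dbo.ident_test` walkthrough (it would replace the `DELETE ... OUTPUT` step):
-- copy the backed-up ids and the other columns into a temp table
SELECT id_temp AS id, xyz
INTO #ident_backup
FROM dbo.ident_test;
-- empty the original table
DELETE FROM dbo.ident_test;
-- re-insert the rows with their original id values
SET IDENTITY_INSERT dbo.ident_test ON;
INSERT INTO dbo.ident_test (id, xyz)
SELECT id, xyz FROM #ident_backup;
SET IDENTITY_INSERT dbo.ident_test OFF;
DROP TABLE #ident_backup;
Note that foreign keys pointing at the table would still have to be dropped or disabled for the `DELETE` and re-insert to go through.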
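And a minimal sketch for the trigger warning in the last bullet, again using the example table:
-- temporarily disable all triggers on the table
DISABLE TRIGGER ALL ON dbo.ident_test;
-- ... run the migration steps above ...
-- re-enable the triggers once the migration is done
ENABLE TRIGGER ALL ON dbo.ident_test;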
**If re-creating the table is an option:**
If re-creating the table is an option for you, everything is a lot simpler (a sketch follows the list):
* Create the empty table, with the `id` column as an `IDENTITY`,
* Set `IDENTITY_INSERT ON` for the table,
* Populate the table,
* Set `IDENTITY_INSERT OFF`, and
* Reseed the identity.
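A minimal sketch of that approach, using a made-up new table name `dbo.ident_test_new` (the old table would then be dropped and the new one renamed into its place, e.g. with `sp_rename`):
-- 1. create the empty table with the IDENTITY column
CREATE TABLE dbo.ident_test_new (
id int IDENTITY(1, 1) NOT NULL,
xyz varchar(10) NOT NULL,
CONSTRAINT PK_ident_test_new PRIMARY KEY CLUSTERED (id)
);
-- 2. allow explicit values for the IDENTITY column
SET IDENTITY_INSERT dbo.ident_test_new ON;
-- 3. copy the existing rows, keeping their id values
INSERT INTO dbo.ident_test_new (id, xyz)
SELECT id, xyz FROM dbo.ident_test;
-- 4. back to normal IDENTITY behaviour
SET IDENTITY_INSERT dbo.ident_test_new OFF;
-- 5. reseed so the next id continues after the highest existing value
DECLARE @maxid int;
SELECT @maxid = MAX(id) FROM dbo.ident_test_new;
DBCC CHECKIDENT ('dbo.ident_test_new', RESEED, @maxid);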
An alternative: moving data with `UPDATE`, `DELETE` or `INSERT` statements can take quite a lot of time and use resources (IO) on both the data and log files/disks. When working on a big table, it is possible to avoid filling the transaction log with a potentially large number of records: with partition switching, only the metadata is changed.
There is no data movement involved, so the operation completes almost instantly.
### Sample table
The question does not show the original table DDL. The following DDL will be used as an example in this answer:
CREATE TABLE dbo.idT(
id int not null
, uid uniqueidentifier not null
, name varchar(50)
);
ALTER TABLE dbo.idT ADD CONSTRAINT PK_idT PRIMARY KEY CLUSTERED(id);
Six dummy rows with ids between 0 and 15 are added with this query (the `uid` and `name` values are random):
WITH ids(n) AS(
SELECT x1.n+x2.n*4
FROM (values(0), (3)) as x1(n)
CROSS JOIN (values(0), (2), (3)) as x2(n)
)
INSERT INTO idt(id, uid, name)
SELECT n, NEWID(), NEWID()
FROM ids
### Example data in `idT`
| id | uid | name |
|---:|-----|------|
| 0 | 65533096-5007-43EA-88AD-D6776B3B94FA | 6A69D4F2-D682-4168-A92F-4CD2E2DBC21D |
| 3 | CE87F1ED-BE1A-4F2D-8D62-E1ECA822D35B | AF0524D9-0DBB-41E1-883B-003CB4E4F012 |
| 8 | 34A1DBFD-4F92-4F34-9F04-4CDC824AB15A | 02B4BDA4-D515-4262-9031-0BE496AC24CE |
| 11 | 51606C95-9DE8-4C30-B23B-F915EEA41156 | 93258103-9C22-4F9C-85CF-712ED0FB3CE6 |
| 12 | CEC80431-0513-4751-A250-0EB3390DACAB | 2DA6B8AF-3EBC-42B3-A76C-028716E24661 |
| 15 | 5037EA83-286F-4EBC-AD7C-E237B570C1FF | 095E51E9-8C38-4104-858F-D14AA810A550 |
### New table with `IDENTITY(0, 1)`
The only problem with `idT` is the lack of the `IDENTITY(0, 1)` property on `id`. A new table with a similar structure and `IDENTITY(0, 1)` is created:
CREATE TABLE dbo.idT_Switch(
id int identity(0, 1) not null
, uid uniqueidentifier not null
, name varchar(50)
);
ALTER TABLE dbo.idT_Switch ADD CONSTRAINT PK_idT_Switch PRIMARY KEY CLUSTERED(id);
Aside from `IDENTITY(0, 1)`, `idT_Switch` is identical to `idT`.
### Foreign Keys
Foreign keys on `idT` must be removed to allow this technique to be used.
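One way to see which foreign keys reference `idT` is via the catalog views; any such constraints would have to be scripted out before dropping them. The child table and constraint names below are made up purely for illustration:
-- find foreign keys that reference dbo.idT
SELECT name
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('dbo.idT');
-- hypothetical example: drop such a constraint from its child table
ALTER TABLE dbo.idT_Child DROP CONSTRAINT FK_idT_Child_idT;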
### Partition Switch
The `idT` and `idT_Switch` tables have a compatible structure. Instead of using `DELETE`, `UPDATE` and `INSERT` statements to move rows from `idT` to `idT_Switch` or on `idT` itself, `ALTER TABLE ... SWITCH` can be used:
ALTER TABLE dbo.idT
SWITCH TO dbo.idT_Switch;
The single 'partition' of `PK_idT` (the whole table) is moved to `PK_idT_Switch` (and vice versa). `idT` now contains 0 rows and `idT_Switch` contains 6 rows.
You can find the full list of source and destination compatibility requirements here:
[Transferring Data Efficiently by Using Partition Switching][1]
Note that this use of `SWITCH` does not require Enterprise Edition, because there is no explicit partitioning involved. From SQL Server 2005 onward, an unpartitioned table is considered to be a table with a single partition.
### Replace `idT`
`idT` is now empty and useless and can be dropped:
DROP TABLE idT;
`idT_Switch` can be renamed and will replace the old `idT` table:
EXECUTE sys.sp_rename
@objname = N'dbo.idT_Switch',
@newname = N'idT', -- note lack of schema prefix
@objtype = 'OBJECT';
### Foreign keys
Foreign keys can be added again to the new `idT` table. Anything else previously removed from `idT` to make the tables compatible for switching will also need to be redone.
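For illustration, re-creating the same hypothetical constraint that was dropped before the switch (the child table and column names are still made up):
-- hypothetical example, mirroring the constraint dropped earlier
ALTER TABLE dbo.idT_Child
ADD CONSTRAINT FK_idT_Child_idT FOREIGN KEY (idT_id) REFERENCES dbo.idT (id);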
### Reseed
Check the current identity value of the new table:
SELECT IDENT_CURRENT( 'dbo.idT');
This command returns 0. Table idT contains 6 rows with MAX(id) = 15. [DBCC CHECKIDENT ( table_name )][2] can be used:
DBCC CHECKIDENT ('dbo.idT');
Because 15 is greater than the current identity value of 0, the command automatically reseeds to the maximum value found in the identity column, so there is no need to compute MAX(id) yourself:
> If the current identity value for a table is less than the maximum identity value stored in the identity column, it is reset using the maximum value in the identity column. See the 'Exceptions' section that follows.
IDENT_CURRENT now returns **15**.
### Test and add data
A simple `INSERT` statement:
INSERT INTO idT(uid, name) SELECT NEWID(), NEWID();
Adds this row:
| id | uid | name |
|---:|-----|------|
| 16 | B395D692-5D7B-4DFA-9971-A1497B8357A1 | FF210D9E-4027-479C-B5D8-057E77FAF378 |
The `id` column now uses the `IDENTITY` property, and the newly inserted value is indeed 16 (15 + 1).
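As an optional sanity check, the column metadata can confirm that `id` now carries the `IDENTITY` property; `COLUMNPROPERTY` returns 1 for identity columns:
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.idT'), 'id', 'IsIdentity') AS IsIdentity;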
### More information
There is a related question and answer with more background on the `SWITCH` technique here:
https://dba.stackexchange.com/questions/114608/why-is-removing-the-identity-property-on-a-column-not-supported
[1]: https://technet.microsoft.com/en-us/library/ms191160.aspx
[2]: https://msdn.microsoft.com/en-us/library/ms176057(v=sql.120).aspx