Thursday, March 29, 2012

Consolidating Data from 7 similar databases

I'm not quite sure if this is the correct forum to post this; if not, please advise where I should post.

I have 7 databases with the same structure but different data in them. I need a query to consolidate some info from all of them into one report. Is it possible in just one script? How should I do it?

thanks,

Marcus

PS: I'm a beginner at this, so I apologize if the question seems stupid or wrong.

If all the databases are on the same server, you can access tables in other databases using the three-part name [Database].[Owner (usually dbo)].[Table]. It would look something like SELECT * FROM AdventureWorks.Person.Address.

If the databases are on separate servers, you can use the "linked server" feature.
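If the table layout really is identical, the per-database queries can be stitched together with UNION ALL. A minimal sketch, assuming a table named SalesSummary in databases named Db1 through Db7 (all placeholder names, not Marcus's actual ones):

```sql
-- Consolidated report across same-structure databases on one server.
-- Db1..Db7 and dbo.SalesSummary are placeholders for the real names.
SELECT 'Db1' AS SourceDb, s.CustomerID, s.Amount FROM Db1.dbo.SalesSummary AS s
UNION ALL
SELECT 'Db2', s.CustomerID, s.Amount FROM Db2.dbo.SalesSummary AS s
UNION ALL
SELECT 'Db3', s.CustomerID, s.Amount FROM Db3.dbo.SalesSummary AS s
-- ...repeat for Db4 through Db7...
```

The SourceDb literal in each branch tells you which database a row came from, which is handy in a consolidated report.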

-Ryan / Kardax

Consolidate two UPDATE statements into one

I'm working on a stored procedure to age balances. In short, here's the algorithm:
Payments get applied to the oldest 'bucket' first; in my example, BalanceOver180.
Is the value in the Payments column greater than or equal to the value in BalanceOver180?
If so, subtract BalanceOver180 from Payments and set BalanceOver180 to 0.
If not, subtract Payments from BalanceOver180 and set Payments to 0.
Then go on to the next oldest 'bucket' (which would be Balance151To180) and run the same two update statements.
There are a total of 7 columns against which payments need to be applied.
Here are the two statements that execute for each column:
UPDATE AgedBalances
SET Payments = Payments - BalanceOver180, BalanceOver180 = 0
WHERE Payments >= BalanceOver180 AND Payments > 0
UPDATE AgedBalances
SET BalanceOver180 = BalanceOver180 - Payments, Payments = 0
WHERE Payments < BalanceOver180 AND Payments > 0
My question: Can these two statements be combined into one using the ABS function or the CASE construct, or something similar that I don't seem to be able to figure out?
Sample data:
BalanceOver180 Payments
Before: 200 500
After: 0 300
OR
Before: 600 500
After: 100 0
OR
Before: 400 400
After: 0 0
You get the picture.
As always, thanks in advance for all assistance.
Carl Imthurn

Something like that:
update <your table>
set BalanceOver180 =
    case
        when Payments < BalanceOver180 then BalanceOver180 - Payments
        when Payments >= BalanceOver180 then 0
    end,
    Payments =
    case
        when Payments < BalanceOver180 then 0
        when Payments >= BalanceOver180 then Payments - BalanceOver180
    end
where Payments > 0
(This works as a single statement because both SET expressions evaluate against the pre-update column values.)
--
Programmer
Let me know if it works for you :-))
"Carl Imthurn" wrote:

> [quoted text snipped]
|||That worked - thank you very much Sergey.
Sergey Zuyev wrote:

> [quoted text snipped]

consolidate two data files

How can I combine two data files (i.e. .mdf and .ndf)? We had two data files in our old server due to a disk limitation. In the new server we don't have such a limitation, so we would like to consolidate the two files into one. What is the best way to accomplish that?
Thanks

Follow this:
http://groups.google.com.au/group/microsoft.public.sqlserver.server/browse_thread/thread/d7fa8af9c5ca5ca/71636c29fdf2666c%2371636c29fdf2666c
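In case that link goes stale: the usual approach (sketched here with placeholder database and logical file names; take a backup and test on a copy first) is to empty the secondary file into the remaining files with DBCC SHRINKFILE ... EMPTYFILE and then drop it:

```sql
USE MyDatabase;  -- placeholder database name
GO
-- Migrate all data out of the secondary file into the other files
-- in the same filegroup.
DBCC SHRINKFILE (MySecondaryFile, EMPTYFILE);  -- placeholder logical file name
GO
-- Once the file is empty, it can be removed from the database.
ALTER DATABASE MyDatabase REMOVE FILE MySecondaryFile;
GO
```

The logical file name (not the .ndf path) is what SHRINKFILE and REMOVE FILE expect; you can look it up with sp_helpfile.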


Consolidate SQL servers to single server

We have 3 SQL servers we would like to consolidate onto a single
server. While processor usage is not of concern (per perfmon
monitoring and stats on the new hardware), memory usage is.
The Plan:
Migrate three SQL Server 2000 Standard servers to a Dell 6650 (4 way),
each in their own instance. The Server is running Windows 2003
Advanced server.
The Question/Problem:
While I've read about the PAE, AWE and 3/4/gb tunings, I'm still
confused on just what options we have. Each of these DB's use up
around 1 - 1.75GB of memory on a normal basis. They never exceed
1.75GB of usage so we haven't seen the need to go to SQL 2000
Enterprise. Obviously, though, 1.75GB + 1.75GB + 1.75GB !< 4GB of RAM
typically installed on a server. I've used the /3GB tuning on Windows
2000 Advanced, but that doesn't allow for enough memory in this
instance. What are my options to get each of these SQL instances 2GB
of RAM working within the confines of OS the SQL memory management?
Can I simply run Windows in PAE mode and install each copy of SQL in
their own instance to have full run of the 8GB to be installed in this
server? Does the /4gb tuning work with PAE?
Kevin

So processor is ruled out. That's good.
I suggest you look at the chapter we put in the SQL 2K HA
book which explains how memory works in depth. To access
8 GB of memory under 32-bit, you need to put /PAE in
boot.ini. There is no other way. With /3GB, you can get
up to 3 GB of usable memory (different space tho), and 1
GB is always reserved for the OS. Up to 16 GB, both can
technically play together, and some applications need it.
Beyond 16 GB on 32-bit machines, you cannot combine /3GB
and /PAE.
So if you have 8 GB and use /PAE and /3GB, you have 7 GB
of usable memory, of which only 3 GB would be dynamic (so
one instance could potentially be set to dynamic). When
using PAE, it is best to set the AWE settings in SQL as
well as set max mem for the instance. You seem to know
what that is.
So you can run all three instances with separate memory
using /PAE only, and if you can, I'd recommend that
approach.|||Thank you for your reply, though I am still a bit confused.
I understand that using /3GB and /PAE will give me 7GB of usable RAM.
However, you say "When using PAE, it is best to set the AWE settings
in SQL as well as set max mem for the instance. You seem to know what
that is.", and while this is what I have read elsewhere, I was under
the impression that AWE settings were not available on SQL 2000
Standard?
In the next paragraph you say "So you can run all three instances with
separate memory using /PAE only, and if you can, I'd recommend that
approach." I don't think I understand this sentence. Are you saying
I should run /PAE and not /3GB /PAE along with AWE settings in SQL?
Basically, I think I'm just looking to see if I can make SQL Standard
address memory above 4GB. I know you can with Enterprise using AWE, I
just thought AWE was not usable on Standard.
Thanks,
Kevin
"Allan Hirt" <anonymous@.discussions.microsoft.com> wrote in message news:<532201c47407$87fa08c0$a501280a@.phx.gbl>...
> [quoted text snipped]
|||PMJI, but I'm 99.9% sure that SQL Standard will only ever use 2GB RAM
maximum. Doesn't matter what OS you use or what switches you have set.
Mike Kruchten
"Kevin" <kjarrard@.gmail.com> wrote in message
news:4f5b3722.0407281043.471e0365@.posting.google.com...
> [quoted text snipped]
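For what it's worth, the AWE side of the discussion above is configured per instance with sp_configure (Enterprise Edition only, which is Mike's point; the 2048 MB cap below is just an illustrative value):

```sql
-- Sketch: enabling AWE and fixing memory for one SQL Server 2000 instance.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Enterprise Edition only; Standard Edition ignores AWE.
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO
-- With AWE on, memory is static, so cap each instance explicitly (value in MB).
EXEC sp_configure 'max server memory', 2048
RECONFIGURE
GO
```

The /PAE and /3GB switches themselves go on the OS boot line in boot.ini, not in SQL Server.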


Consolidate Rows on Select

I want to condense several rows of a table using a select
statement, function, or stored procedure
For Example
mytable :
1 a
2 b
2 c
2 d
3 a
so that my select should result in
1 a
2 b,c,d
3 a
Any ideas or suggestions?
While this code references the sample database in my book (CHA2), the
pattern will work for your problem as well. In this example, the @EventDates
variable is used to gather, or denormalize, the EventDate column:
USE CHA2
DECLARE @EventDates VARCHAR(1024)
SET @EventDates = ''
SELECT @EventDates = @EventDates + CONVERT(VARCHAR(15), a.d, 107) + '; '
FROM (SELECT DateBegin AS [d] FROM Event
JOIN Tour
ON Event.TourID = Tour.TourID
WHERE Tour.[Name] = 'Outer Banks Lighthouses') AS a
SELECT LEFT(@EventDates, LEN(@EventDates)-1) AS 'Outer Banks Lighthouses Events'
-Paul Nielsen, SQL Server MVP
SQL Server 2000 Bible, Wiley Press
"Dan" <anonymous@.discussions.microsoft.com> wrote in message
news:392801c4aa4b$477a6810$a401280a@.phx.gbl...
> [quoted text snipped]
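To get one concatenated row per key, as in Dan's example, the same variable-concatenation trick can be wrapped in a scalar function and called per group. A sketch against a hypothetical mytable(id INT, val CHAR(1)) -- the table and function names are placeholders:

```sql
CREATE FUNCTION dbo.ConcatVals (@id INT)
RETURNS VARCHAR(1024)
AS
BEGIN
    DECLARE @list VARCHAR(1024)
    SET @list = ''
    -- Append each value for this key, comma-separated.
    SELECT @list = @list + val + ','
    FROM mytable
    WHERE id = @id
    -- Trim the trailing comma.
    RETURN LEFT(@list, LEN(@list) - 1)
END
GO

SELECT id, dbo.ConcatVals(id) AS vals
FROM mytable
GROUP BY id
```

This stays SQL 2000-compatible; later versions offer tidier options, but the pattern above matches the one Paul shows.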

Consolidate records

I consider security A to be equivalent to C so I want to consolidate these
positions. Three scenarios:
If an account holds both A and C I want to add the C quantity to the A
position and delete the C position.
If the account holds A but not C then no action.
If the account holds C but not A I want to update the C position to A.
My requirements are such that I need UPDATE and DELETE statement not a
SELECT statement so that my desired results would be produced by SELECT *
FROM #Positions
Any help would be appreciated.
CREATE TABLE [dbo].[#Positions] (
[AccountID] [int] NOT NULL ,
[SecurityID] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Quantity] [int] NULL
) ON [PRIMARY]
GO
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (1,'A',20)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (1,'B',25)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (1,'C',25)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (2,'A',15)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (2,'B',5)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (3,'B',5)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (3,'C',10)
INSERT INTO #Positions (AccountID,SecurityID,Quantity) VALUES (4,'C',15)
Expected results
1,A,45
1,B,25
2,A,15
2,B,5
3,A,10
3,B,5
4,A,15

Please explain NULL quantity to me; I understand >=0, but not a NULL.
Create a synonym table, then create a VIEW with the base name and totals
and use it.|||Terri
The following UPDATE/DELETE statements should help you out:
-- Update SecurityID A with the values for SecurityID C where the AccountID
is the same
UPDATE a
SET a.quantity = a.quantity + b.quantity
FROM #Positions a JOIN #Positions b
ON a.accountid = b.accountid
AND a.securityid = 'A'
AND b.securityid = 'C'
-- Delete any of the SecurityID C rows that were used in the previous update
DELETE b
FROM #Positions a JOIN #Positions b
ON a.accountid = b.accountid
AND a.securityid = 'A'
AND b.securityid = 'C'
-- Update SecurityID C to SecurityID A
UPDATE #Positions
SET securityid = 'A'
WHERE securityid = 'C'
- Peter Ward
WARDY IT Solutions
"Terri" wrote:

> [quoted text snipped]

Consolidate multiple databases via merge replication

Hi
I am trying to merge a set of tables from 5 separate databases (4 on
separate servers and 2 on the same server) into one 'master' database.
The tables being replicated all have unique ID ranges
How to I organise the replication so that the subscriber database can allow
inserts which fit into the ID ranges for the related Publishers database.
ie All inserts with a Lab_ID = 'L' should ONLY be replicated back to the
associated publisher.
I am not familiar with the automatic identity range handling... can anyone
point me to an article about how this works?
any help would be appreciated
cheers
mike
Mike,
as you've got multiple publishers, just ordinary filters will work to
restrict the flow of data to each publisher. For identity range handling,
you'll need to design each publisher's range yourself. The subscriber table
can be created during the first initialization, and all others are nosync
ones. So, the subscriber's identity range is defined after the first
initialization, and subsequent publications need to be sure not to overlap
with this range. Make the ranges large, because assignment of another range
might cause an overlap, so if the range is so big there will never be any
need to reset then this is best.
Alternatively, all subscriptions can be nosync ones.
These might help:
http://www.replicationanswers.com/No...alizations.asp
http://www.replicationanswers.com/ManualIdentities.asp
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
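Paul's "ordinary filters" can be implemented as a static row filter on each publisher's merge article, so only that site's rows flow back to it. A sketch with placeholder publication and table names (@subset_filterclause is the sp_addmergearticle parameter that takes the filter predicate; adjust the Lab_ID value per publisher):

```sql
-- At the publisher whose rows carry Lab_ID = 'L':
EXEC sp_addmergearticle
    @publication = 'LabMergePub',        -- placeholder publication name
    @article = 'Results',                -- placeholder article name
    @source_object = 'Results',          -- placeholder table name
    @subset_filterclause = 'Lab_ID = ''L''';
```

Each publisher gets the same article definition with its own Lab_ID literal, which restricts what merges back to that site.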

Console.WriteLine generates HostProtectionException

Gotcha!

It took a few minutes of staring at my CLR method (it's actually a ServiceBroker service), trying to figure out why I was getting an exception on something that looked pretty innocuous.

It turned out to be the Console.WriteLine(...) statement. In hindsight, not really much of a surprise. However, for debugging purposes, I'd still like to use Console.WriteLine. Is there a HostProtectionAttribute I can apply that will allow it?

Josh

Yes, you are right, Console.WriteLine is not the appropriate output channel for debugging in SQL Server. What about using a trace source or, to keep it simple, the System.Diagnostics namespace? It has methods for output during the debugging process.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

Console apps work for SA but no other user

Hi

My console applications work for SA and no other user. I can run the stored procedures used in the console application from Query Analyzer when logged in with the username/password that I am attempting to use for the console applications. I am using SQL Server authentication. User access permissions look OK in Enterprise Manager. Access is 'permit' for my user.

Any suggestions?

Thanks

Permissions in SQL are much more than just the access 'permit' setting. You should also grant EXECUTE permission on a stored procedure to a user if you want the user to execute it; or you can create a role, add the user as a member, then grant the proper permissions to the role. To understand permission-related concepts in SQL, you can start from here:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/acdata/ac_8_qd_03_8q2b.asp
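A minimal sketch of both approaches (the procedure, user, and role names are placeholders):

```sql
-- Grant directly to the user:
GRANT EXECUTE ON dbo.MyProcedure TO MyUser;

-- Or via a role, which scales better with many users:
EXEC sp_addrole 'AppRole';
EXEC sp_addrolemember 'AppRole', 'MyUser';
GRANT EXECUTE ON dbo.MyProcedure TO AppRole;
```

Note that database access ('permit') only lets the login into the database; object-level permissions like EXECUTE are granted separately.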


console apps only work if user is SA

Hi

When I try to use a user other than SA, my console apps don't work.

I can run the stored procedures used in the console application from Query Analyser when logged in with the username/password that I'm attempting to use for the console applications.

Under Users in Enterprise Manager Database access is 'permit' for my user.

By the way, my web application, which uses the same username and password as the console applications, is working. I also have DTS packages running using dtsexec, accessing the database with the same username and password, and they work fine.

MDAC 2.8 SP2 on Windows Server 2003 SP1

C:\Program Files\Microsoft SQL Server\80\Tools\Binn>ODBCPING.EXE -S xxx.xxx.xxx.xxx
-U myusername -P mypassword

CONNECTED TO SQL SERVER

ODBC SQL Server Driver Version: 03.86.1830

SQL Server Version: Microsoft SQL Server 2000 - 8.00.2039 (Intel X86)
May 3 2005 23:18:38
Copyright (c) 1988-2003 Microsoft Corporation
Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)

If the connection is available then why is the adapter.fill method failing?

I tried it using a text sql statement and that doesn't work either.

The problem isn't database specific as I did a test .bat on Northwind sample database and got same 'general network error'

Here's the error:

apps\Exports>exporter.bat

Unhandled Exception: System.Data.SqlClient.SqlException: General network error.
Check your network documentation.
   at System.Data.SqlClient.ConnectionPool.CreateConnection()
   at System.Data.SqlClient.ConnectionPool.UserCreateRequest()
   at System.Data.SqlClient.ConnectionPool.GetConnection(Boolean& isInTransaction)
   at System.Data.SqlClient.SqlConnectionPoolManager.GetPooledConnection(SqlConnectionString options, Boolean& isInTransaction)
   at System.Data.SqlClient.SqlConnection.Open()
   at System.Data.Common.DbDataAdapter.QuietOpen(IDbConnection connection, ConnectionState& originalState)
   at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)

//Testing using SELECT statement as command text

D\Test>Test.bat

D:\Test>TestDBAccess.exe "server=xxx.xxx.xxx.xxx;uid=xxxx;pwd=xxx;
database=Northwind;"

Unhandled Exception: System.Data.SqlClient.SqlException: General network error.
Check your network documentation.
   at System.Data.SqlClient.ConnectionPool.CreateConnection()
   at System.Data.SqlClient.ConnectionPool.UserCreateRequest()
   at System.Data.SqlClient.ConnectionPool.GetConnection(Boolean& isInTransaction)
   at System.Data.SqlClient.SqlConnectionPoolManager.GetPooledConnection(SqlConnectionString options, Boolean& isInTransaction)
   at System.Data.SqlClient.SqlConnection.Open()

D:\Test>

Any ideas/help much appreciated!

Hi,

It would be cool if you could show us your code. Sometimes people hardcode certain properties (I confess I did that myself once :-) ), which leads to an error where usually there shouldn't be one (especially in console apps, where connection properties are passed via arguments, which isn't testable while debugging within VS).

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

Hi,

Here's the essentials of the code (without some try catch statements).

Thanks.

KH

.bat file

::arg 0 Connection String
::arg 1 Export Path //just a path where I want the output file to go


Exporter.exe "server=xxx.xxx.xxx.xxx;uid=username;pwd=password;database=DATABASENAME;" "d:\\xxx\\xxx\\ExportOut\\"


Source code


using System;

using System.Data;
using System.Security;
using System.Security.Permissions;
using System.Security.Policy;
using System.Configuration;
using System.Data.SqlClient;
using System.IO;


namespace Exporter
{
class Export
{

private static String ConnectionString;

private static String ExportPath;

private static StreamWriter ExportLog;

[STAThread]
static void Main(string[] args)
{

ConnectionString = args[0];

ExportPath = args[1];

String DateString = System.DateTime.Today.Day.ToString() + "_" + System.DateTime.Today.Month.ToString();

ExportLog = new StreamWriter(ExportPath+"Export_Log_"+DateString+".txt");

DoExport();

ExportLog.Close();

}

private static void DoExport()
{

ExportLog.WriteLine("Beginning export");
GetData();


}



private static void GetData()
{


SqlCommand SelectCommand = new SqlCommand();

SelectCommand.CommandType=(System.Data.CommandType.StoredProcedure);

SelectCommand.CommandText="GetCSVOutput";

SqlConnection Conn = new SqlConnection(ConnectionString);

SelectCommand.Connection=Conn;



SqlDataAdapter ReportAdapter = new SqlDataAdapter();

FaultReportAdapter.SelectCommand=SelectCommand;

DataSet ReportData = new DataSet();


ReportAdapter.Fill(ReportData,"Report");

if (ReportData.Tables["Report"].Rows.Count == 0)
{
Conn.Close();
ExportLog.WriteLine("No reports to export");
return;

}
else
{
Conn.Close();
MakeFile(ReportData);

}

}


private static void MakeFile(DataSet FaultReportData)
{


/* Just prints out the results of Stored procedure to file and closes file */




}

} // end class Export
} // end namespace Exporter
|||Hi,

besides the fact that the FaultReportAdapter doesn't exist (but I guess this is just a typo), you can try disabling the connection pool, to see if the error is related to it, by adding the keyword "Pooling=False" to the connection string.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de
|||

Hi,

Yes that was just a typo. With pooling set to false I still get the same error.

Thanks

KH

|||

What is the call stack of the exception if you disable pooling?

|||

Here's the call stack with pooling set to false.

D:\apps\Export>exporter.bat

D:\content\apps>Exporter.exe "server=xxx.xxx.xxx.xxx;uid=xxxx;pwd=;database=xxxx;pooling=False" "d:\\Content\\xxxx\\ExportOut\\"

Unhandled Exception: System.Data.SqlClient.SqlException: General network error.
Check your network documentation.
at System.Data.SqlClient.SqlInternalConnection.OpenAndLogin()
at System.Data.SqlClient.SqlInternalConnection..ctor(SqlConnection connection
, SqlConnectionString connectionOptions)
at System.Data.SqlClient.SqlConnection.Open()
at System.Data.Common.DbDataAdapter.QuietOpen(IDbConnection connection, Conne
ctionState& originalState)
at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startR
ecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior be
havior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord,
Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)

at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
at Exporter.Export.GetData() in
\\xxxx\dotnet_dll_production
\xxxx\xxxxx\export.cs:line 536
at Exporter.Export.DoExport() in
\\xxxx\dotnet_dll_production\xxxx\xxxx\export.cs:line 143
at Exporter.Export.Main(String[] args) in \\xxxx\dotnet_dll_produc
tion\xxxx\xxxx\export.cs:line 95

D:\apps\Export>

console application for retrieving a large amount of data

i need to retrieve a large amount of data from the sql server database and make changes to one field and put the data back using a console application. how do i do it?|||Well, there are lots of ways, probably the easiest being to retrieve the data as a dataset, make the changes, and update the database.
But let's start with this. Why do you need to retrieve a large amount of data to only change one field? Do you mean one field in all or most of the records, or one field in one record?
If you're doing an update that affects many rows and don't really need to pull the data off the server, you might be able to use a SQL UPDATE statement that updates the data on the database server, saving the network traffic of pulling it down to the client.
Tell us more about what you want to do and we'll try to help.
Don
|||well, i have to retrieve 3 million records from the sql server database. i need to encrypt one field present in all records and put that encrypted field back into the db.
thanks.|||Okay. Is this a one-time thing or will you be doing it regularly? If a one-time thing then you probably don't care too much about performance or being inefficient with regards to memory usage. So I would probably pull down the data into a dataset, probably in ranges of data using some field that is reasonably well-distributed, make the changes, and then update the database.
For example, if the data had a last name field, you could do it for last names that begin with A to E, then F to M, and so on. Or whatever ranges make sense. And bring down ONLY the data you need, presumably the one field with the data to be encrypted, and perhaps the second field that is the destination for the encrypted data. Or does the encrypted data go into a different table? Then you'll need to generate the insert statements or use a second data table in the data set.
This is going to be horribly inefficient, however, so you won't want to go this route if this is anything but a one-time thing. If it's something you'll need to do regularly, I'd try to find a way to do this entirely on the server. In that case you could write a stored procedure that uses OLE Automation (the sp_OA* system stored procedures) to do the encryption. Since that uses COM it's going to have its own performance issues, but at least you're not schlepping three million rows of data to the client across the network.
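The server-side route can be sketched roughly as follows; this is only an illustration, and the table (Customers), the columns (SSN, SSN_Encrypted) and the dbo.fn_Encrypt function are all hypothetical stand-ins for whatever encryption mechanism you end up with:

```sql
-- Process the table in chunks so each transaction stays small.
-- SET ROWCOUNT limits how many rows each UPDATE touches (SQL 2000 idiom).
SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    UPDATE Customers                              -- hypothetical table
    SET    SSN_Encrypted = dbo.fn_Encrypt(SSN)    -- hypothetical UDF
    WHERE  SSN_Encrypted IS NULL                  -- only unprocessed rows
    IF @@ROWCOUNT = 0 BREAK                       -- nothing left to do
END
SET ROWCOUNT 0
```

Chunking this way keeps the transaction log from ballooning and lets other work interleave between batches.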
Depending on exactly how you need to do this, there are plenty of other ways to get it done.
Don

Consolidating data across databases

Thanks in advance for your help,
Is there an easy way to "link" data from one SQL database to another, preferably within a view?
More specifically, can one "link" to data from 2 separate databases to a 3rd without importing the data?
Thanks,
Lee

Sure, if they are on the same server then just use the 3-part naming convention in your queries:

select t1.f2, t2.f2, t3.f2
from db1.dbo.t t1
inner join db2.dbo.t t2 on t1.f1=t2.f1
inner join db3.dbo.t t3 on t1.f1=t3.f1 and t2.f1=t3.f1

...and if on different servers then create linked servers and use the 4-part naming convention by preceding the database name with the alias of the server.|||Thanks so much! It was so easy that I am embarrassed
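For the different-servers case, the one-time setup plus a 4-part query looks roughly like this; the server alias SRV2, the data source, and the table names are placeholders:

```sql
-- Register the remote server once (alias and data source are placeholders)
EXEC sp_addlinkedserver
    @server = 'SRV2',
    @srvproduct = '',
    @provider = 'SQLOLEDB',
    @datasrc = 'remoteserver.mydomain.com'

-- Then use 4-part names alongside local 3-part names
SELECT t1.f2, t2.f2
FROM db1.dbo.t t1
INNER JOIN SRV2.db2.dbo.t t2 ON t1.f1 = t2.f1
```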

Consolidate columns in a SELECT

I have a vendor supplied orders table that tracks the fee types associated
with an order. The allowable fee types are A, B, C, D, and E. These fee
types can be in any one of the six fee code fields. In the corresponding fee
amount field is the fee amount. The application allows NULL fee amounts even
when there is a corresponding fee code entered. See OrderID 7 where fee code
'E' in field FEE_6_CD has a NULL value in FEE_6_AMT.
For the purposes of my SELECT I want to consolidate and sum all fee types
into 2 master types:
AB_Consolidation-All A and B fee types
CDE_Consolidation--All C, D, and E fee types.
See CREATE TABLE and INSERT statements as well as expected results below.
Thanks to anyone who could help.
CREATE TABLE [dbo].[Orders] (
[OrderID] [smallint],
[OrderType] [char] (4),
[OrderDate] [Datetime],
[FEE_1_CD] [char] (1)NULL,
[FEE_2_CD] [char] (1)NULL,
[FEE_3_CD] [char] (1)NULL,
[FEE_4_CD] [char] (1)NULL,
[FEE_5_CD] [char] (1)NULL,
[FEE_6_CD] [char] (1)NULL,
[FEE_1_AMT] [decimal](18, 2) NULL,
[FEE_2_AMT] [decimal](18, 2) NULL,
[FEE_3_AMT] [decimal](18, 2) NULL,
[FEE_4_AMT] [decimal](18, 2) NULL,
[FEE_5_AMT] [decimal](18, 2) NULL,
[FEE_6_AMT] [decimal](18, 2) NULL
) ON [PRIMARY]
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (1,'Buy','20060117','A','C',NULL,NULL,NULL,NULL,8,9,NULL,NULL,NULL,NULL)
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (2,'Buy','20060117',NULL,NULL,'D',NULL,NULL,NULL,NULL,NULL,11,NULL,NULL,NULL)
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (3,'Sell','20060117',NULL,NULL,NULL,NULL,NULL,'B',NULL,NULL,NULL,NULL,NULL,15)
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (4,'Buy','20060116','A',NULL,NULL,'B',NULL,NULL,18,NULL,NULL,85,NULL,NULL)
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (5,'Sell','20060117',NULL,NULL,NULL,NULL,NULL,'D',NULL,NULL,NULL,NULL,NULL,54)
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (6,'Buy','20060116','A','C','D','B',NULL,NULL,8,9,11,12,NULL,NULL)
INSERT INTO ORDERS
(OrderID,OrderType,OrderDate,FEE_1_CD,FEE_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FEE_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE_4_AMT,FEE_5_AMT,FEE_6_AMT)
VALUES (7,'Buy','20060117','A','C',NULL,NULL,'D','E',3,99,NULL,NULL,14,NULL)
Expected results:
OrderID,OrderType,OrderDate,AB_Consolidation,CDE_Consolidation
1,Buy,20060117,8,9
2,Buy,20060117,0,11
3,Sell,20060117,15,0
4,Buy,20060116,103,0
5,Sell,20060117,0,54
6,Buy,20060116,20,20
7,Buy,20060117,3,113
Partial SELECT statement
SELECT
OrderID,
OrderType,
OrderDate,
Perform calcs here AS AB_Consolidation,
Perform calcs here AS CDE_Consolidation
FROM ORDERS

As you might have noticed, the vendor table schema is mediocre. In any case,
you can use a derived table to transpose the data & get the results you
need. If this table is being used in multiple areas, a view might be better.
You can create one as:
CREATE VIEW vw ( order_id, order_type, order_date, seq, code, amount )
AS SELECT OrderID, OrderType, OrderDate, seq,
CASE seq WHEN 1 THEN FEE_1_CD
WHEN 2 THEN FEE_2_CD
WHEN 3 THEN FEE_3_CD
WHEN 4 THEN FEE_4_CD
WHEN 5 THEN FEE_5_CD
WHEN 6 THEN FEE_6_CD
END,
CASE seq WHEN 1 THEN FEE_1_AMT
WHEN 2 THEN FEE_2_AMT
WHEN 3 THEN FEE_3_AMT
WHEN 4 THEN FEE_4_AMT
WHEN 5 THEN FEE_5_AMT
WHEN 6 THEN FEE_6_AMT
END
FROM Orders, ( SELECT 1 UNION
SELECT 2 UNION
SELECT 3 UNION
SELECT 4 UNION
SELECT 5 UNION
SELECT 6) S ( seq )
GO
Now you can do the query for consolidation like:
SELECT order_id, order_type, order_date,
SUM( CASE WHEN code IN ( 'A', 'B' )
THEN amount ELSE 0 END ) AS "AB_",
SUM( CASE WHEN code IN ( 'C', 'D', 'E' )
THEN amount ELSE 0 END ) AS "CDE_"
FROM vw
GROUP BY order_id, order_type, order_date
ORDER BY order_id ;
Anith|||Can't you just do this, or am I missing something?
SELECT
OrderID,
OrderType,
OrderDate,
coalesce(FEE_1_AMT,0)+coalesce(FEE_2_AMT,0) AS AB_Consolidation,
coalesce(FEE_3_AMT,0) + coalesce(FEE_4_AMT,0) + coalesce(FEE_5_AMT,0) AS CDE_Consolidation
FROM ORDERS
Basically, just add the columns using coalesce to handle the nulls.
"Terri" <terri@.cybernets.com> wrote in message
news:dqovlt$v0d$2@.reader2.nmix.net...
> I have a vendor supplied orders table that tracks the fee types associated
> with an order. The allowable fee types are A, B, C, D, and E. These fee
> types can be in anyone of the six fee code fields. In the corresponding
fee
> amount field is the fee amount. The application allows NULL fee amounts
even
> when there is a corresponding fee code entered. See OrderID 7 where fee
code
> 'E' in field FEE_6_CD has a NULL value in FEE_6_AMT.
> For the purposes of my SELECT I want to consolidate and sum all fee types
> into 2 master types:
> AB_Consolidation-All A and B fee types
> CDE_Consolidation--All C, D, and E fee types.
> See CREATE TABLE and INSERT statements as well as expected results below.
> Thanks to anyone who could help.
>
> CREATE TABLE [dbo].[Orders] (
> [OrderID] [smallint],
> [OrderType] [char] (4),
> [OrderDate] [Datetime],
> [FEE_1_CD] [char] (1)NULL,
> [FEE_2_CD] [char] (1)NULL,
> [FEE_3_CD] [char] (1)NULL,
> [FEE_4_CD] [char] (1)NULL,
> [FEE_5_CD] [char] (1)NULL,
> [FEE_6_CD] [char] (1)NULL,
> [FEE_1_AMT] [decimal](18, 2) NULL,
> [FEE_2_AMT] [decimal](18, 2) NULL,
> [FEE_3_AMT] [decimal](18, 2) NULL,
> [FEE_4_AMT] [decimal](18, 2) NULL,
> [FEE_5_AMT] [decimal](18, 2) NULL,
> [FEE_6_AMT] [decimal](18, 2) NULL
> ) ON [PRIMARY]
>
> INSERT INTO ORDERS
>
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES
> (1,'Buy','20060117','A','C',NULL,NULL,NU
LL,NULL,8,9,NULL,NULL,NULL,NULL)
> INSERT INTO ORDERS
>[/color]
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES
>[/color]
(2,'Buy','20060117',NULL,NULL,'D',NULL,N
ULL,NULL,NULL,NULL,11,NULL,NULL,NULL[col
or=darkred]
> )
> INSERT INTO ORDERS
>[/color]
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES
>[/color]
(3,'Sell','20060117',NULL,NULL,NULL,NULL
,NULL,'B',NULL,NULL,NULL,NULL,NULL,1[col
or=darkred]
> 5)
> INSERT INTO ORDERS
>[/color]
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES
> (4,'Buy','20060116','A',NULL,NULL,'B',NU
LL,NULL,18,NUll,NULL,85,NULL,NULL)
> INSERT INTO ORDERS
>[/color]
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES
>[/color]
(5,'Sell','20060117',NULL,NULL,NULL,NULL
,NULL,'D',NULL,NULL,NULL,NULL,NULL,5[col
or=darkred]
> 4)
> INSERT INTO ORDERS
>[/color]
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES (6,'Buy','20060116','A','C','D','B',NULL
,NULL,8,9,11,12,NULL,NULL)
> INSERT INTO ORDERS
>[/color]
(OrderID,OrderType,OrderDate,FEE_1_CD,FE
E_2_CD,FEE_3_CD,FEE_4_CD,FEE_5_CD,FE[col
or=darkred]
> E_6_CD,FEE_1_AMT,FEE_2_AMT,FEE_3_AMT,FEE
_4_AMT,FEE_5_AMT,FEE_6_AMT)
> VALUES[/color]
(7,'Buy','20060117','A','C',NULL,NULL,'D
','E',3,99,NULL,NULL,14,NULL)red">
>
> Expect Results:
> OrderID,OrderType,OrderDate,ABConsolidat
ion,DEConsolidation
> 1,Buy,20060117,8,9
> 2,Buy,20060117,0,11
> 3,Sell,20060117,15,0
> 4,Buy,20060116,103,0
> 5,Sell,20060117,0,54
> 6,Buy,20060116,20,20
> 7,Buy,20060117,3,113
> Partial SELECT statement
> SELECT
> OrderID,
> OrderType,
> OrderDate,
> Perform calcs here AS AB_Consolidation,
> Perform calcs here AS CDE_Consolidation
> FROM ORDERS
>
>|||"Jim Underwood" <james.underwoodATfallonclinic.com> wrote in message
news:OPVJGPUHGHA.2668@.tk2msftngp13.phx.gbl...
> cant you just do this, or am I missing something?
> SELECT
> OrderID,
> OrderType,
> OrderDate,
> coalesce(FEE_1_AMT,0)+coalesce(FEE_2_AMT
,0) AS AB_Consolidation,
> coalesce(FEE_3_AMT,0) + coalesce(FEE_4_AMT,0) + coalesce(FEE_5_AMT,0) AS
> CDE_Consolidation
> FROM ORDERS
> Basically, just add the collumns using coaless to handle the nulls.
Fee type A can be in any of the 6 fee fields so I don't think that will
work. I'm looking at Anith's solution to see if that will work.
thanks|||>
> Fee type A can be in any of the 6 fee fields so I don't think that will
> work. I'm looking at Anith's solution to see if that will work.
>
Ahhh... I misunderstood that in your first post. The problem is much more
interesting than I thought.
I like how Anith turns the 6 fee fields of one row into 6 rows. I'll have to
remember this join technique in case I ever run into a similar situation.
If your final code differs from his, please let us know what you come up
with.
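For reference, since each FEE_n_CD pairs with FEE_n_AMT, the consolidation can also be written as a single SELECT over the original table, without the view. A sketch using the column names from the CREATE TABLE above:

```sql
SELECT OrderID, OrderType, OrderDate,
    -- amounts whose paired code is A or B
    COALESCE(CASE WHEN FEE_1_CD IN ('A','B') THEN FEE_1_AMT END, 0)
  + COALESCE(CASE WHEN FEE_2_CD IN ('A','B') THEN FEE_2_AMT END, 0)
  + COALESCE(CASE WHEN FEE_3_CD IN ('A','B') THEN FEE_3_AMT END, 0)
  + COALESCE(CASE WHEN FEE_4_CD IN ('A','B') THEN FEE_4_AMT END, 0)
  + COALESCE(CASE WHEN FEE_5_CD IN ('A','B') THEN FEE_5_AMT END, 0)
  + COALESCE(CASE WHEN FEE_6_CD IN ('A','B') THEN FEE_6_AMT END, 0) AS AB_Consolidation,
    -- amounts whose paired code is C, D, or E
    COALESCE(CASE WHEN FEE_1_CD IN ('C','D','E') THEN FEE_1_AMT END, 0)
  + COALESCE(CASE WHEN FEE_2_CD IN ('C','D','E') THEN FEE_2_AMT END, 0)
  + COALESCE(CASE WHEN FEE_3_CD IN ('C','D','E') THEN FEE_3_AMT END, 0)
  + COALESCE(CASE WHEN FEE_4_CD IN ('C','D','E') THEN FEE_4_AMT END, 0)
  + COALESCE(CASE WHEN FEE_5_CD IN ('C','D','E') THEN FEE_5_AMT END, 0)
  + COALESCE(CASE WHEN FEE_6_CD IN ('C','D','E') THEN FEE_6_AMT END, 0) AS CDE_Consolidation
FROM Orders
```

The COALESCE around each CASE handles the NULL-amount case (OrderID 7's 'E' fee) by treating it as 0.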

Tuesday, March 27, 2012

Consistent Formatting

If I create a report using the wizard, there are some color choices
available; how can I pick from this same list when I create a report
without the wizard?
How can I easily apply the same color theme to multiple reports?
TIA
Dean

On Jun 13, 2:34 pm, "Dean" <deanl...@.hotmail.com.nospam> wrote:
> If I create a report using the wizard, there are some color choices
> available; how can I pick from this same list when I create a report
> without the wizard?
> How can I easily apply the same color theme to multiple reports?
> TIA
> Dean
One option would be to physically copy the report and then modify the
controls, parameters, datasets, etc in it. Something else worth
looking into is whether you can tie a report to a style sheet (CSS). This
option I don't have experience with, but it's worth a shot. Hope this helps.
Regards,
Enrique Martinez
Sr. Software Consultant

Consistent Deadlock, need to identify the rogue PAGE lock

I'm having trouble tracking down a deadlock in a large multi-threaded application. I have been able to gather some information, but not quite enough to identify the two queries at fault.

I collected the following by turning on the -T1204 and -T3605 parameters:

Deadlock encountered .... Printing deadlock information
2006-04-13 09:13:48.70 spid3
2006-04-13 09:13:48.70 spid3 Wait-for graph
2006-04-13 09:13:48.70 spid3
2006-04-13 09:13:48.70 spid3 Node:1
2006-04-13 09:13:48.70 spid3 KEY: 10:1993058136:1 (aa0069f30857) CleanCnt:3 Mode: U Flags: 0x0
2006-04-13 09:13:48.70 spid3 Grant List 1::
2006-04-13 09:13:48.71 spid3 Owner:0x1a0d5b80 Mode: U Flg:0x0 Ref:1 Life:00000000 SPID:61 ECID:0
2006-04-13 09:13:48.71 spid3 SPID: 61 ECID: 0 Statement Type: UPDATE Line #: 1
2006-04-13 09:13:48.71 spid3 Input Buf: RPC Event: sp_execute;1
2006-04-13 09:13:48.71 spid3 Requested By:
2006-04-13 09:13:48.71 spid3 ResType:LockOwner Stype:'OR' Mode: U SPID:53 ECID:0 Ec:(0x1B4E54F0) Value:0x1a0d5ae0 Cost:(0/BA0)
2006-04-13 09:13:48.71 spid3
2006-04-13 09:13:48.71 spid3 Node:2
2006-04-13 09:13:48.71 spid3 PAG: 10:4:14898 CleanCnt:2 Mode: U Flags: 0x2
2006-04-13 09:13:48.71 spid3 Grant List 0::
2006-04-13 09:13:48.71 spid3 Owner:0x1a0c6bc0 Mode: U Flg:0x0 Ref:1 Life:00000000 SPID:58 ECID:0
2006-04-13 09:13:48.71 spid3 SPID: 58 ECID: 0 Statement Type: UPDATE Line #: 1
2006-04-13 09:13:48.71 spid3 Input Buf: RPC Event: sp_execute;1
2006-04-13 09:13:48.71 spid3 Requested By:
2006-04-13 09:13:48.71 spid3 ResType:LockOwner Stype:'OR' Mode: IU SPID:61 ECID:0 Ec:(0x1C2B54F0) Value:0x1a0d5440 Cost:(0/B98)
2006-04-13 09:13:48.71 spid3
2006-04-13 09:13:48.71 spid3 Node:3
2006-04-13 09:13:48.71 spid3 PAG: 10:4:14899 CleanCnt:2 Mode: IX Flags: 0x2
2006-04-13 09:13:48.71 spid3 Grant List 1::
2006-04-13 09:13:48.71 spid3 Owner:0x1a0c6e80 Mode: IX Flg:0x0 Ref:2 Life:02000000 SPID:55 ECID:0
2006-04-13 09:13:48.71 spid3 SPID: 55 ECID: 0 Statement Type: UPDATE Line #: 1
2006-04-13 09:13:48.71 spid3 Input Buf: RPC Event: sp_execute;1
2006-04-13 09:13:48.71 spid3 Requested By:
2006-04-13 09:13:48.71 spid3 ResType:LockOwner Stype:'OR' Mode: U SPID:58 ECID:0 Ec:(0x1C0114F0) Value:0x1a0d5a20 Cost:(0/B98)
2006-04-13 09:13:48.71 spid3
2006-04-13 09:13:48.71 spid3 Node:4
2006-04-13 09:13:48.71 spid3 KEY: 10:1993058136:1 (aa0069f30857) CleanCnt:3 Mode: U Flags: 0x0
2006-04-13 09:13:48.71 spid3 Wait List:
2006-04-13 09:13:48.71 spid3 Owner:0x1a0d5ae0 Mode: U Flg:0x0 Ref:1 Life:00000000 SPID:53 ECID:0
2006-04-13 09:13:48.71 spid3 SPID: 53 ECID: 0 Statement Type: UPDATE Line #: 1
2006-04-13 09:13:48.71 spid3 Input Buf: RPC Event: sp_execute;1
2006-04-13 09:13:48.71 spid3 Requested By:
2006-04-13 09:13:48.71 spid3 ResType:LockOwner Stype:'OR' Mode: U SPID:55 ECID:0 Ec:(0x1BFB94F0) Value:0x1a0c6fc0 Cost:(0/BA0)
2006-04-13 09:13:48.71 spid3 Victim Resource Owner:
2006-04-13 09:13:48.71 spid3 ResType:LockOwner Stype:'OR' Mode: U SPID:58 ECID:0 Ec:(0x1C0114F0) Value:0x1a0d5a20 Cost:(0/B98)

I then ran the following queries: -

> select name from sysobjects where id=1993058136;
result: ACCOUNTARRANGEMENTSCHEDULES

> select name from sysindexes where indid=1 and id=1993058136;
result: PK_ACCOUNTARRANGEMENTSCHEDULES

> sp_help PK_ACCOUNTARRANGEMENTSCHEDULES
result: PK_ACCOUNTARRANGEMENTSCHEDULES , dbo , primary key cns , 2005-11-01 13:51:01.650

It looks like SQL Server selected a row lock for the ACCOUNTARRANGEMENTSCHEDULES table (sounds good), but for the UPDATEs being done by nodes 2 and 3 a page lock is being used. Is there any way I can get more info on what the statement is or what table it is updating?
I found some more useful info, using the following command: -

dbcc traceon (3604)
dbcc page(10, 4, 14898,2)
dbcc traceoff (3604)

I received the following (amongst other things): -
m_objId = 1993058136

Which would indicate the page lock is on the same table (ACCOUNTARRANGEMENTSCHEDULES).|||

The page that the deadlock occurs on has little to do with what led to the deadlock.

This is the deadlock scenario:

Spid 61 -> owns KEY: 10:1993058136:1 (aa0069f30857)
-> requesting 10:4:14898
Spid 58 -> owns 10:4:14898
-> Requesting 10:4:14899
Spid 55 -> Owns 10:4:14899
-> requesting 10:1993058136:1 (aa0069f30857)

So spid 61 used an index to find a row it was interested in and then needed additional data off page 10:4:14898.

Spid 55 appears to have found a row it intends to update on page 10:4:14899, and now needs a key lock to perform the necessary index maintenance.

Spid 58 appears to be scanning as it is requesting the page with the next ID, which likely just happens to be the next page in the list.

All 3 of these spids are executing some kind of prepared statement that does an update, as evidenced by the "sp_execute" statement in the input buffer.

To get to the bottom of this you're going to need to look at the execution plan of these statements. Use Profiler and gather the Showplan All, SP:StmtStarting, SP:StmtCompleted, RPC:Starting and RPC:Completed events. If the plan shows (as I suspect it will) something like CLUSTERED INDEX SCAN, then you need to think about proper indexes to support these statements.

|||Unfortunately, whenever I use the profiler (even if I only turn on Deadlock and Deadlock Chain events) it seems to change the timings enough that the deadlock is not produced.

I don't know how I can change the indexes. The table has a primary key, which is a composite of two columns. Since a primary key exists, why is a page lock being used? The only thing that I can think that could be causing it, is the fact that an updateable jdbc ResultSet is being used by one of the queries. So it's not strictly an UPDATE, it's a SELECT (that selects only 1 row) then an updateRow.

This deadlock only appeared when introducing sp4. With sp3, no deadlocks occur. Has the lock selection algorithm changed?
The lock selection algorithm has not changed. It's possible that you have a different plan now. Even if you don't hit the deadlock, the plans themselves can be instructive. If you see scans in the plan, you want to understand why. Of course a scan is going to cause each page in an index to be read and locked. The fewer pages that are locked, the fewer chances there are of encountering a blocking, or deadlocking, scenario.
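To look at a plan without running Profiler, the suspect statement can also be run by hand with showplan enabled; the UPDATE below is only a placeholder for whatever the application's prepared statement actually does:

```sql
SET SHOWPLAN_ALL ON
GO
-- Placeholder: substitute the real UPDATE issued via sp_execute here
UPDATE ACCOUNTARRANGEMENTSCHEDULES
SET    SomeColumn = 0        -- hypothetical column
WHERE  SomeKeyColumn = 1     -- hypothetical key column
GO
SET SHOWPLAN_ALL OFF
GO
```

If the plan output shows a Clustered Index Scan rather than a seek on PK_ACCOUNTARRANGEMENTSCHEDULES, that points at missing or unusable indexes.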

Consistent Backup Failures - Please Help

I am not quite a DBA, and this issue has me at the end of my rope.
We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
This backup takes over 8 hours to complete. Each night toward the end, our
backup job fails with the following error:
BackupMedium::ReportIoError: write failure on backup device
'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
the file because another process has locked a portion of the file.).
For the life of me, I cannot determine what process is locking the file. I
have tried executing the backup with the database in Single User mode, and
it still fails.
Are there any utilities that I can use to spy on the access to the file, so
I can turn them off?
ANY help is really appreciated. Thanks.
I've discovered the drives are all compressed (and the accompanying KBs
say it's bad to back up to compressed drives)...
"codejockey" <cj@.hotmail.com> wrote in message
news:#nAnWiISEHA.2468@.TK2MSFTNGP11.phx.gbl...
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server
2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot
access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file,
so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>
|||You might try FileMon from sysinternals (http://www.sysinternals.com).
Peter Yeoh
http://www.yohz.com
Need smaller backup files? Try MiniSQLBackup
"codejockey" <cj@.hotmail.com> wrote in message
news:%23nAnWiISEHA.2468@.TK2MSFTNGP11.phx.gbl...
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server
2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot
access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file,
so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>
|||codejockey,
Annoying isn't it? I have had this a lot and 99% of the time it has been
a tape backup job that has the file locked copying it to tape. Quite
often the tape backup program is awaiting someone to change the tape as
the tape is full.
sysinternals.com is a great resource as Peter mentioned. FileMon should
highlight the culprit.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
codejockey wrote:
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file, so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>
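One more thing worth trying for the 8-hour backup window itself: striping the backup across several files, ideally on different physical disks, often shortens it considerably. The database name and paths below are placeholders:

```sql
-- Stripe one backup across three files; WITH INIT overwrites existing files
BACKUP DATABASE OurDatabase              -- placeholder database name
TO  DISK = 'F:\Backups\ourbackup_1.BAK',
    DISK = 'G:\Backups\ourbackup_2.BAK',
    DISK = 'H:\Backups\ourbackup_3.BAK'
WITH INIT
```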

Consistent Backup Failures - Please Help

I am not quite a DBA, and this issue has me at the end of my rope.
We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
This backup takes over 8 hours to complete. Each night toward the end, our
backup job fails with the following error:
BackupMedium::ReportIoError: write failure on backup device
'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
the file because another process has locked a portion of the file.).
For the life of me, I cannot determine what process is locking the file. I
have tried executing the backup with the database in Single User mode, and
it still fails.
Are there any utilities that I can use to spy on the access to the file, so
I can turn them off?
ANY help is really appreciated. Thanks.Ive discovered the drives are all compressed (and the accompanying KBs that
say its bad to backup to compressed drives)...
"codejockey" <cj@.hotmail.com> wrote in message
news:#nAnWiISEHA.2468@.TK2MSFTNGP11.phx.gbl...
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server
2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot
access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file,
so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>|||You might try FileMon from sysinternals (http://www.sysinternals.com).
Peter Yeoh
http://www.yohz.com
Need smaller backup files? Try MiniSQLBackup
"codejockey" <cj@.hotmail.com> wrote in message
news:%23nAnWiISEHA.2468@.TK2MSFTNGP11.phx.gbl...
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server
2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot
access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file,
so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>|||codejockey,
Annoying isn't it? I have had this a lot and 99% of the time it has been
a tape backup job that has the file locked copying it to tape. Quite
often the tape backup program is awaiting someone to change the tape as
the tape is full.
sysinternals.com is a great resource as Peter mentioned. FileMon should
highlight the culprit.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
codejockey wrote:
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000
.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot acces
s
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file, s
o
> I can turn them off?
> ANY help is really appreciated. Thanks.
>

Consistent Backup Failures - Please Help

I am not quite a DBA, and this issue has me at the end of my rope.
We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
This backup takes over 8 hours to complete. Each night toward the end, our
backup job fails with the following error:
BackupMedium::ReportIoError: write failure on backup device
'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
the file because another process has locked a portion of the file.).
For the life of me, I cannot determine what process is locking the file. I
have tried executing the backup with the database in Single User mode, and
it still fails.
Are there any utilities that I can use to spy on the access to the file, so
I can turn them off?
ANY help is really appreciated. Thanks.

I've discovered the drives are all compressed (and the accompanying KBs that
say it's bad to back up to compressed drives)...
"codejockey" <cj@.hotmail.com> wrote in message
news:#nAnWiISEHA.2468@.TK2MSFTNGP11.phx.gbl...
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file, so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>

You might try FileMon from sysinternals (http://www.sysinternals.com).
Peter Yeoh
http://www.yohz.com
Need smaller backup files? Try MiniSQLBackup
"codejockey" <cj@.hotmail.com> wrote in message
news:%23nAnWiISEHA.2468@.TK2MSFTNGP11.phx.gbl...
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file, so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>

codejockey,
Annoying isn't it? I have had this a lot and 99% of the time it has been
a tape backup job that has the file locked copying it to tape. Quite
often the tape backup program is awaiting someone to change the tape as
the tape is full.
sysinternals.com is a great resource as Peter mentioned. FileMon should
highlight the culprit.
--
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
codejockey wrote:
> I am not quite a DBA, and this issue has me at the end of my rope.
> We have a large (150 GB) database on Win2K3 Standard Ed w/ SQL Server 2000.
> This backup takes over 8 hours to complete. Each night toward the end, our
> backup job fails with the following error:
> BackupMedium::ReportIoError: write failure on backup device
> 'F:\ourbackupfile.BAK'. Operating system error 33(The process cannot access
> the file because another process has locked a portion of the file.).
> For the life of me, I cannot determine what process is locking the file. I
> have tried executing the backup with the database in Single User mode, and
> it still fails.
> Are there any utilities that I can use to spy on the access to the file, so
> I can turn them off?
> ANY help is really appreciated. Thanks.
>

Consistency of row when reading with READ UNCOMMITTED

Hello.
I have a question about the internal consistency of a single row during an
update statement when another connection is reading the data with a READ
UNCOMMITTED (also known as WITH(NOLOCK) ) isolation level.
Let me give a small example to illustrate my question.
Say I have a table T1 with some data in it:
CREATE TABLE [T1] (
[optLockVersion] rowversion NULL,
[Name] nvarchar(256) NULL,
[Note] nvarchar(256) NULL
)
INSERT INTO [T1] (Name, Note) VALUES ('Bob', 'Good guy')
With connection #1, we perform an update statement:
UPDATE [T1] SET Note = 'Decent guy' WHERE [Name] = 'Bob'
The above statement will update our row in two ways: it will update the
rowversion and the Note.
Now suppose a second connection tries to read in this row with an isolation
level of READ UNCOMMITTED/NOLOCK:
select * from [T1] WITH(NOLOCK)
Is it possible that the results returned from this select statement would
have updated Notes, but the rowversion would not have been updated in the
returned results? Or perhaps that the rowversion would be updated and the
new Notes would not have been updated yet?
What I'm trying to determine is if SQL server still maintains the
consistency of a single row when reading data in another connection with the
read uncommitted isolation level.
Thanks in advance for your time,
Eli Tucker
"Eli Tucker" <EliTucker@.discussions.microsoft.com> wrote in message
news:0C3A0C35-26D2-4578-9A8D-69B8353F29E4@.microsoft.com...
> Hello.
> I have a question about the internal consistency of a single row during an
> update statement when another connection is reading the data with a READ
> UNCOMMITTED (also known as WITH(NOLOCK) ) isolation level.
>
There is no guarantee about the internal consistency of a single row with
READ UNCOMMITTED/NOLOCK. None. Period.

> Let me give a small example to illustrate my question.
> Say I have a table T1 with some data in it:
> CREATE TABLE [T1] (
> [optLockVersion] rowversion NULL,
> [Name] nvarchar(256) NULL,
> [Note] nvarchar(256) NULL
> )
> INSERT INTO [T1] (Name, Note) values ("Bob", "Good guy")
> With connection #1, we perform an update statement:
> UPDATE [T1] set Note = "Decent guy" where [Name] = "Bob"
> The above statement will update our row in two ways: it will update the
> rowversion and the Note.
> Now suppose a second connection tries to read in this row with an
> isolation
> level of READ UNCOMMITTED/NOLOCK:
> select * from [T1] WITH(NOLOCK)
> Is it possible that the results returned from this select statement would
> have updated Notes, but the rowversion would not have been updated in the
> returned results? Or perhaps that the rowversion would be updated and the
> new Notes would not have been updated yet?
> What I'm trying to determine is if SQL server still maintains the
> consistency of a single row when reading data in another connection with
> the
> read uncommitted isolation level.
>
Yes. It is possible. Why? Because there are no guarantees about the
correctness or consistency of results in READ UNCOMMITTED/NOLOCK. Whether
it actually happens depends on details of the algorithms used to update the
physical database pages, and on the details of the table structure.
It is very, very possible, for instance, for such a select query to do an
index seek on a nonclustered index followed by a bookmark lookup on the
clustered index. Some of the output columns will be read from the index and
others from the row data. Here there are two separate physical structures
used to construct one output row, and it will very often happen that a
NOLOCK query will return a row that never existed at any point in time.
Consider setting the database to READ COMMITTED SNAPSHOT ISOLATION to
provide non-blocking and correct query access to changing data.
David
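
To make David's suggestion concrete, here is a minimal sketch (the database name is a placeholder; READ_COMMITTED_SNAPSHOT requires SQL Server 2005 or later, and switching it on needs no other connections active in the database):

```sql
-- Turn on read committed snapshot isolation for the database.
-- Readers then see the last committed version of each row instead of
-- blocking or reading uncommitted data, so a row is never seen half-updated.
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;

-- After that, plain READ COMMITTED reads are non-blocking and row-consistent:
SELECT * FROM [T1];  -- no NOLOCK hint needed
```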

Consistency errors/repair_allow_data_loss question

I've been receiving consistency errors for the past few weeks
whenever the indexes on 1 table are rebuilt (which they are each
evening). I restored the production database to our development
server and ran DBCC CHECKDB and DBREINDEX to confirm that the errors
were the same (which would rule out hardware errors, right?)
DBREINDEX results:
A possible database consistency problem has been detected on database
'database_name'. DBCC CHECKDB and DBCC CHECKCATALOG should be run on
database 'database_name'.
Connection Broken
Then I ran CHECKDB with repair_allow_data_loss (results below) and that
fixed the consistency errors (CHECKDB came back with no errors the
next time).
I read online (not BOL) that only index, header and consistency
information are removed with repair_allow_data_loss, no actual data.
From the info below, can you tell if the only thing that changed was
the clustered index? And if so, would it be safe (i.e., no data loss)
to run repair_allow_data_loss on the production database? I'd like
to clean this error up without resorting to a month old backup, which
would definitely mean data loss.
Thanks!!
CHECKDB with repair_allow_data_loss results:
CHECKDB found 0 allocation errors and 9 consistency errors in table
'dbowner.f8' (object ID 1648724926).
Ran DBCC CHECKDB ('database_name',repair_allow_data_loss) WITH
ALL_ERRORMSGS
DBCC results for 'dbowner.f8'.
Server: Msg 8944, Level 16, State 4, Line 1
Table error: Object ID 1648724926, index ID 0, page (1:155093), row 25.
Test (offsetNull >= BASEOFFSET) failed. Values are 0 and 4.
Server: Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 1648724926, index ID 1. Page (1:155093) was not
seen in the scan although its parent (1:155073) and previous (1:155092)
refer to it. Check any previous errors.
Server: Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 1648724926, index ID 1. Page (1:155094) is
missing a reference from previous page (1:155093). Possible chain
linkage problem.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 1648724926. The text, ntext, or image node at
page (1:313603), slot 10, text ID 41680896 is not referenced.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 1648724926. The text, ntext, or image node at
page (1:313603), slot 11, text ID 1340932096 is not referenced.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 1648724926. The text, ntext, or image node at
page (1:313603), slot 12, text ID 23986176 is not referenced.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 1648724926. The text, ntext, or image node at
page (1:313604), slot 0, text ID 96010240 is not referenced.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 1648724926. The text, ntext, or image node at
page (1:313604), slot 1, text ID 817430528 is not referenced.
The error has been repaired.
The error has been repaired.
The error has been repaired.
The error has been repaired.
The error has been repaired.
The error has been repaired.
The error has been repaired.
The error has been repaired.
The error has been repaired.
Repair: Page (1:155093) has been deallocated from object ID 1648724926,
index ID 0.
Clustered index successfully restored for object 'dbowner.f8' in
database 'database_name'.
There are 1140058 rows in 33131 pages for object 'dbowner.f8'.
CHECKDB found 0 allocation errors and 9 consistency errors in table
'dbowner.f8' (object ID 1648724926).
CHECKDB fixed 0 allocation errors and 9 consistency errors in table
'dbowner.f8' (object ID 1648724926).
Select count (*) from dbowner.f8 on development server shows: 1140058
Select count (*) from dbowner.f8 on production server shows: 1145787
> I restored the production database to our development
> server and ran DBCC CHECKDB and DBREINDEX to confirm that the errors
> were the same (which would rule out hardware errors, right?)
I doubt it will rule out hw errors. I can imagine that hw errors were introduced on the prod
machine, corrupted the db, and the corruption was carried over to the test machine by your backup.

> I read online (not BOL) that only index, header and consistency
> information are removed with repair_all_data_loss, no actual data.
Que! Can you post that link? AFAIK, the option does what it says: it will remove all corrupted
pages, resulting in possible data loss. In your case, you seem to have problems in index 0 (a data
page) for a user object (just reading the first error you posted). That page will be removed = data loss.
I strongly encourage you to open a case with MS Support, assuming that you value the database > the
few hundred bucks a support issue costs. They might not be able to help you further, but you have
tried. And make sure you do far more frequent CHECKDBs. Also, you might want to check
http://www.karaszi.com/SQLServer/info_corrupt_suspect_db.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"nmsm" <naomimsm@.gmail.com> wrote in message
news:1127328673.857606.24480@.f14g2000cwb.googlegroups.com...
> I've been receiving consistency errors for the past few weeks
> whenever the indexes on 1 table are rebuilt (which they are each
> evening). I restored the production database to our development
> server and ran DBCC CHECKDB and DBREINDEX to confirm that the errors
> were the same (which would rule out hardware errors, right?)
> DBREINDEX results:
> A possible database consistency problem has been detected on database
> 'database_name'. DBCC CHECKDB and DBCC CHECKCATALOG should be run on
> database 'database_name'.
> Connection Broken
>
> Then I ran CHECKDB with repair_allow_data_loss (results below) and that
> fixed the consistency errors (CHECKDB came back with no errors the
> next time).
> I read online (not BOL) that only index, header and consistency
> information are removed with repair_all_data_loss, no actual data.
> the clustered index? And if so, would it be safe (i.e., no data loss)
> to run repair_allow_data_loss on the production database? I'd like
> to clean this error up without resorting to a month old backup, which
> would definitely mean data loss.
> Thanks!!
> CHECKDB with repair_allow_data_loss results:
> CHECKDB found 0 allocation errors and 9 consistency errors in table
> 'dbowner.f8' (object ID 1648724926).
> Ran DBCC CHECKDB ('database_name',repair_allow_data_loss) WITH
> ALL_ERRORMSGS
> DBCC results for 'dbowner.f8'.
> Server: Msg 8944, Level 16, State 4, Line 1
> Table error: Object ID 1648724926, index ID 0, page (1:155093), row 25.
> Test (offsetNull >= BASEOFFSET) failed. Values are 0 and 4.
> Server: Msg 8976, Level 16, State 1, Line 1
> Table error: Object ID 1648724926, index ID 1. Page (1:155093) was not
> seen in the scan although its parent (1:155073) and previous (1:155092)
> refer to it. Check any previous errors.
> Server: Msg 8978, Level 16, State 1, Line 1
> Table error: Object ID 1648724926, index ID 1. Page (1:155094) is
> missing a reference from previous page (1:155093). Possible chain
> linkage problem.
> Server: Msg 8964, Level 16, State 1, Line 1
> Table error: Object ID 1648724926. The text, ntext, or image node at
> page (1:313603), slot 10, text ID 41680896 is not referenced.
> Server: Msg 8964, Level 16, State 1, Line 1
> Table error: Object ID 1648724926. The text, ntext, or image node at
> page (1:313603), slot 11, text ID 1340932096 is not referenced.
> Server: Msg 8964, Level 16, State 1, Line 1
> Table error: Object ID 1648724926. The text, ntext, or image node at
> page (1:313603), slot 12, text ID 23986176 is not referenced.
> Server: Msg 8964, Level 16, State 1, Line 1
> Table error: Object ID 1648724926. The text, ntext, or image node at
> page (1:313604), slot 0, text ID 96010240 is not referenced.
> Server: Msg 8964, Level 16, State 1, Line 1
> Table error: Object ID 1648724926. The text, ntext, or image node at
> page (1:313604), slot 1, text ID 817430528 is not referenced.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> The error has been repaired.
> Repair: Page (1:155093) has been deallocated from object ID 1648724926,
> index ID 0.
> Clustered index successfully restored for object 'dbowner.f8' in
> database 'database_name'.
> There are 1140058 rows in 33131 pages for object 'dbowner.f8'.
> CHECKDB found 0 allocation errors and 9 consistency errors in table
> 'dbowner.f8' (object ID 1648724926).
> CHECKDB fixed 0 allocation errors and 9 consistency errors in table
> 'dbowner.f8' (object ID 1648724926).
> Select count (*) from dbowner.f8 on development server shows: 1140058
> Select count (*) from dbowner.f8 on production server shows: 1145787
>
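
Following up on Tibor's advice about more frequent checks: a nightly job typically runs something like the sketch below (the database name is a placeholder). Running the repair against a restored copy first, as the original poster did, is a good way to rehearse it without touching production:

```sql
-- Report every error without the informational chatter.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS

-- Repairs require single-user mode; REPAIR_ALLOW_DATA_LOSS can and will
-- deallocate corrupted pages, so prefer restoring a clean backup when you have one.
ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('YourDatabase', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE [YourDatabase] SET MULTI_USER
```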