Showing posts with label app. Show all posts

Tuesday, March 27, 2012

Considerations for packaging MSDE app

Hi
I have a C# Winform app which runs using SQL Server.
I want to package up the app, so that it deploys with MSDE.
Can anyone offer any pointers or references to help?
Thanks
Hi Paul,
I hope my suggestions will help you, because I have gone through pure h#$l
getting this to work right.
I have a VB.Net winforms app and I use the Wise Installation System to install
the .Net framework, MSDE, and the app.
A few things we tried (some of which failed):
1. Within Wise, we installed MSDE with the command line option where it sets
the password. We decided to install MSDE into its own instance because we
don't want anyone coming behind us and hosing it up by installing an
untested service pack or anything.
2. At the end of the install, we run a batch file to stop and restart the
SQL service. In retrospect, this may not be necessary because of our
decision to reboot before the app is executed the first time.
3. We attempted to run a batch file to attach the database, but no matter
what we tried, it just would not work. It didn't matter if we rebooted and
then ran the batch, nothing worked. So, I added the code to my app that does
this:
a. Check to see if the SQL service is running; if it's not, warn the user and
shut down the app.
b. If this is the first time the app is executed:
b1. Check to see if the database exists (of course it does not), so using
ADO.NET, execute a command against the master database to attach the
database. Test again to make sure the database is attached; if not, throw an
exception.
b2. Update my configuration file to show that the app has successfully
initialized.
4. Now here's the real McCoy - in order to get all this to work, the first
time I connect to the database to check the version I had to use
POOLING=False in the connection string. Problem: the app then ran as slow as a
turtle because it wasn't using connection pooling anywhere. Solution:
two connection strings, one for initial startup (the first-time connection) and
one for the rest of the time (without POOLING=False).
5. Another issue came up of how we would update our database; we surely
didn't want to have to detach and then re-attach another database and blow
people's data away, so I added a version table to the database and I run a
check on the current version each time the app is executed. If the versions
are the same, I don't execute the update executable. So you may ask, how do
you know if you have a new database version and an update needs to take
place? Answer: When a patch is applied (or an executable update) I have a
Version.xml file where we will put the new version number to be compared
against. The executable that runs the update is also new since it will have
code in it to do the update.
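The first-run logic in steps a-b2, together with the two-connection-string trick in step 4, can be sketched roughly as follows. Python is used here purely for illustration (the original app was VB.Net); the database name, file paths, flag file, and instance name are all made up, and `run_sql`/`db_exists` stand in for ADO.NET commands against the server:

```python
import os

FIRST_RUN_FLAG = "app_initialized.flag"   # hypothetical marker file
CURRENT_VERSION = "1.2"                   # would come from Version.xml in the scheme above

def connection_string(first_run):
    """Two connection strings: Pooling=False only for the very first connection."""
    base = "Server=.\\MYAPP;Database=MyAppDb;Integrated Security=SSPI"
    return (base + ";Pooling=False") if first_run else base

def first_run_init(run_sql, db_exists):
    """Attach the database on first run, then record success in a flag file.

    run_sql(sql)  -- executes a statement against the master database
    db_exists()   -- reports whether the app database is attached
    Returns the list of SQL statements that were executed.
    """
    if os.path.exists(FIRST_RUN_FLAG):
        return []                         # step b2 already ran on a previous start
    executed = []
    if not db_exists():
        # step b1: attach the data files by executing against master
        sql = ("EXEC sp_attach_db @dbname = N'MyAppDb', "
               "@filename1 = N'C:\\Data\\MyAppDb.mdf', "
               "@filename2 = N'C:\\Data\\MyAppDb_log.ldf'")
        run_sql(sql)
        executed.append(sql)
        if not db_exists():
            raise RuntimeError("database attach failed")
    # step b2: record that initialization succeeded
    with open(FIRST_RUN_FLAG, "w") as f:
        f.write(CURRENT_VERSION)
    return executed
```

This is only a sketch of the control flow described above, not the poster's actual code.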
But, here's the big question, after you install MSDE, do you need to reboot
or not? The answer is that I was able to get my app to work without having
to reboot, but we have decided to require a reboot because I just believe
there are things in MSDE that need to be set correctly and I think rebooting
is the only way to do it. Again, my app works without it, but I just feel
it's more proper to reboot. I mean, after all, you are installing the SQL
desktop engine, which is no minor thing, not to mention the .Net framework
install.
Now, there are probably many MSDE experts out there reading my steps
thinking, man, he did this totally wrong, but I'm telling you, these are
things you are, in the end, probably going to have to do too (especially the
POOLING=False setting).
There are many things we haven't even gotten to yet like:
1. What happens if a person doesn't reboot the machine, how do you stop them
from running the app?
2. What if a person runs the install, uninstalls, then reinstalls and
reboots, how does that affect the app?
3. Do you want the uninstall to uninstall the .Net framework and MSDE? I
personally don't think so but then both these items are rather large.
Hope this helps.
STom
"Paul Aspinall" <paul@.aspy.co.uk> wrote in message
news:ctuKd.9606$n9.2569@.fe3.news.blueyonder.co.uk. ..
> Hi
> I have a C# Winform app which runs using SQL Server.
> I want to package up the app, so that it deploys with MSDE.
> Can anyone offer any pointers or references to help?
> Thanks
>

Tuesday, March 20, 2012

connections not closed by app

we inherited this java web app that uses a jdbc-odbc bridge for connecting
to 2 sql server 2000 instances. there are probably hundreds of places in
the application where connections are not being closed (resultsets and
statements not being closed as well).
my question:
1) i assume that as DBAs you would all frown on this? do you have
suggestions about doing something on the db side to address this?
2) what i thought was something like: does sql server automatically (or can
i make it) close connections after a period of time? would that work?

Don't know that I would actually do this, but if all else fails:
1. sp_who2 into a temp table
2. have a cursor loop through the LastBatch column and, using dynamic sql,
KILL everything idle > 5 minutes. (Or whatever time you decide on.)
I would really try to exhaust all other options before using this method;
it's just an idea to keep in mind in case you get this desperate.
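The two steps above amount to filtering sp_who2 output and generating KILL statements. A sketch of that filtering, in Python so the logic is explicit; `rows` stands in for the temp table, and skipping spids of 50 and below follows the SQL Server 2000 convention that those are reserved for system sessions:

```python
from datetime import datetime, timedelta

def kill_statements(rows, now, idle_minutes=5):
    """Build KILL commands for sessions idle longer than the cutoff.

    rows -- iterable of (spid, last_batch) pairs, as captured from sp_who2
    now  -- current time, passed in so the logic is easy to test
    """
    cutoff = now - timedelta(minutes=idle_minutes)
    stmts = []
    for spid, last_batch in rows:
        # spids <= 50 are reserved for system sessions on SQL Server 2000
        if spid > 50 and last_batch < cutoff:
            stmts.append(f"KILL {spid}")
    return stmts
```

On the real server this would be dynamic SQL inside the cursor loop; the sketch just shows which sessions get killed.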
--
TIA,
ChrisR
"usenetjb" wrote:
> we inherited this java web app that uses a jdbc-odbc bridge for connecting
> to 2 sql server 2000 instances. there are probably hundreds of places in
> the application where connections are not being closed (resultsets and
> statements not being closed as well).
> my question:
> 1) i assume that as DBAs you would all frown on this? do you have
> suggestions about doing something on the db side to address this?
> 2) what i though was something like: does sql server automatically (or can
> i make it) close connections after a peroid of time? would that work?
>

Wow. The program must be fixed but for now, I would consider setting up a
schedule to reboot the server at night.
"usenetjb" <usenet.20.jimbo-black@.antichef.net> wrote in message
news:Xns97335385B8126usenetjb@.207.115.17.102...
> we inherited this java web app that uses a jdbc-odbc bridge for connecting
> to 2 sql server 2000 instances. there are probably hundreds of places in
> the application where connections are not being closed (resultsets and
> statements not being closed as well).
> my question:
> 1) i assume that as DBAs you would all frown on this? do you have
> suggestions about doing something on the db side to address this?
> 2) what i though was something like: does sql server automatically (or can
> i make it) close connections after a peroid of time? would that work?

"Grant" <email@.nowhere.com> wrote in
news:uUcMfmzBGHA.3572@.TK2MSFTNGP14.phx.gbl:
> Wow. The program must be fixed but for now, I would consider setting
> up a schedule to reboot the server at night.
>
>
wow is not what i said, but then again ... believe it or not the app has
to be up 24/7. so we generally reboot the box after a week to make the app
speed up, sigh. it is leaking resources all over the place on the java app
side.
anyway, can sql server reboot itself? do you have any suggestions in terms
of tools to automate this sql server reboot?
thx


connections from app

Hi All,
I need some help from the experts. We have a VB.Net app using SQL Server.
I reviewed the app and saw a function that declares a connection, opens it,
executes an sp, and closes it. This app runs once a day for a couple of minutes.
Now I suggested that this app should have one defined connection, and in the
program we can open and close the connection many times. But the developer
keeps saying that it doesn't matter.
My question is: what is the best way to handle connections inside an app so
as to make better use of SQL Server's resources?
I hope this is enough description for you.
Tks in advance.
Johnny

Best practice is to open a connection as late as possible before you use it,
and close it as early as possible after you have finished with it.
This does not mean that you need to destroy and re-create the objects used to
access the DB, or that you should close the connection between each operation
in a serial chain of database commands; just don't leave a connection open if
your application is idle.
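A minimal illustration of that open-late/close-early pattern, using Python's sqlite3 purely as a stand-in for SQL Server (with ADO.NET the Close() is cheap because the underlying connection returns to the pool rather than being torn down):

```python
import sqlite3
from contextlib import closing

def record_count(db_path):
    """Open just before use, close right after: the pattern described above."""
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute("SELECT COUNT(*) FROM items")
            (count,) = cur.fetchone()
    return count          # connection is already closed before we return
```

The key point is the scope: the connection lives only as long as the unit of work, not as long as the application.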
Mr Tea
"JFB" <help@.jfb.com> wrote in message
news:%234dogSmLFHA.700@.TK2MSFTNGP10.phx.gbl...
> Hi All,
> I need a help from experts. We have an vb.net app using sqlserver.
> I review the app and I saw a function declaring a connection, open,
> execute
> sp and close. This app runs once a day for couple of minutes.
> Now I suggest that this app should have one defined connection and in the
> program we can open and close de connection many times. But the developer
> keep saying that it doesn't matter.
> My question is how is the best way to handle connections inside of an app
> in
> a way to use better the resource of sqlserver server?
> I hope this is enough description for you.
> Tks in advance.
> Johnny
>

Monday, March 19, 2012

Connections

I've got a third party download app that creates ten connections to Sql
Server 2000 per user! We've got an average of 40 users/day and so we have
around 400 connections floating around all day just for this one small app!
We asked the vendor and all they will say is "it's working as designed".
So here are my questions:
1. Is there ANY reason on planet earth, performance or otherwise, that
someone would require ten connections?!?
2. How can I get it across that this is unacceptable? How can I calculate
how many connections Sql Server can handle? Are there any rules of thumb
that I can use for leverage?
Thx.
CLM wrote:
> I've got a third party download app that creates ten connections to
> Sql Server 2000 per user! We've got an average of 40 users/day and
> so we have around 400 connections floating around all day just for
> this one small app! We asked the vendor and all they will say is
> "it's working as designed".
> So here are my questions:
> 1. Is there ANY reason on planet earth, performance or otherwise,
> that someone would require ten connections?!?
Possibly, but not likely.

> 2. How can I get it across that this is unacceptable? How can I
> calculate how many connections Sql Server can handle? Are there any
> rules of thumb that I can use for leverage?
>
Maybe you can turn on connection pooling in control panel if the app is
using ODBC connections to SQL Server. However, you may need to enable
connection pooling in the application using SQLSetEnvAttr and you
probably can't do this. ADO.Net provides connection pooling
automatically.
Maybe you can find out from the vendor if they support connection
pooling. Then send or email someone high up at the company (like the
CTO) your concerns about the product and the way it uses connections
very liberally.
Each connection consumes 12 KB + (3 * Network Packet Size). So, even
with 400 connections using a 4K packet size, you're only consuming about
10MB of memory.
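As a quick sanity check on that arithmetic:

```python
def connection_memory_bytes(packet_size=4096, fixed_kb=12):
    """Per-connection cost quoted above: 12 KB plus three network packets."""
    return fixed_kb * 1024 + 3 * packet_size

per_connection = connection_memory_bytes()   # 24,576 bytes at a 4 KB packet size
total = 400 * per_connection                 # 9,830,400 bytes, i.e. roughly 10 MB
```

So 400 mostly-idle connections cost under 10 MB of memory on the server.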
David Gugick
Quest Software
www.imceda.com
www.quest.com
Yes, but I've been reading that if Sql Server goes over 255 threads, Sql
Server will start to go over 2 connections per thread and resources will get
shared.
Plus, in addition to what you wrote, isn't Sql Server using some cpu
resources to manage these 400 inactive connections?
"David Gugick" wrote:

> CLM wrote:
> Possibly, but not likely.
>
> Maybe you can turn on connection pooling in control panel if the app is
> using ODBC connections to SQL Server. However, you may need to enable
> connection pooling in the application using SQLSetEnvAttr and you
> probably can't do this. ADO.Net provides connection pooling
> automatically.
> Maybe you can find out from the vendor if they support connection
> pooling. Then send or email someone high up at the company (like the
> CTO) your concerns about the product and the way it uses connections
> very liberally.
> Each connection consumes 12 KB + (3 * Network Packet Size). So, even
> with 400 connections using a 4K packet size, you're only consuming about
> 10MB of memory.
>
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>
CLM wrote:
> Yes, but I've been reading that if Sql Server goes over 255 threads,
> Sql Server will start to go over 2 connections per thread and
> resources will get shared.
If they're mostly idle, then you probably won't reach the default of 255
worker threads. Even if you do (it would require about 6+ active batches
run on each of your 40 clients), SQL Server will temporarily have
batches wait until a worker thread is available for use. From BOL:
"Having all worker threads allocated does not mean that the performance
of SQL Server will degrade. Typically, a new batch has only a short wait
for a free thread. Allocating more threads may degrade performance
because of the increased work required to coordinate resources among the
threads. Many SQL Server systems running in production reach this state
and run with very high performance levels."

> Plus, in addition to what you wrote, isn't Sql Server using some cpu
> resources to manage these 400 inactive connections?
Not much if they are inactive.
David Gugick
Quest Software
www.imceda.com
www.quest.com
|||"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:7A9AD5AD-E23C-469F-B8C6-0124D2D2788C@.microsoft.com...
> Yes, but I've been reading that if Sql Server goes over 255 threads, Sql
> Server will start to go over 2 connections per thread and resources will
> get
> shared.
That's not true. Or rather, it's not relevant. A connection is assigned
not to a thread, but to a scheduler. Each scheduler has a pool of workers
(threads) to carry out work. So there is not a 1-1 relationship between the
number of connections and the number of threads on the server.

> Plus, in addition to what you wrote, isn't Sql Server using some cpu
> resources to manage these 400 inactive connections?
No.
David

connections

im writing a service that will be on an app server
this is one of many services to process Recordsets from Sql Server...
1 service may have 20 Processes to Complete
example
main class PreProcess()
method CleanNames();
method updateNames();
method ValidateContracts();
Various things must be completed
Pseudo example below
My question is:
in the main class make my connection and keep it for all sub members,
or connect in each member?
A. This way opens the connection, closes it, then reopens it to commit:

method cleanNames()
1. get connection
2. get recordset
3. close connection
4. process recordset
5. open connection
6. commit changes
7. close connection
finished cleanNames()
or
method cleanNames()
1. get connection
2. get recordset
3. process recordset
4. commit changes
5. close connection
finished cleanNames()
or
this final way: the parent class or calling function
sends in the connection and holds it open until the service has
completed all the jobs it's supposed to do,
then releases the connection

method cleanNames(oconn)   // connection passed as parameter
1. get recordset
2. process recordset
3. commit changes
finished cleanNames()

Thanks
DaveP

DaveP (dvs_bis@.sbcglobal.net) writes:

Quote:

Originally Posted by

im writing a service that will be on a app server
this is one of many service's to process Recordsets from Sql Server...
1 service may have 20 Processes to Complete
example
main class PreProcess()
method CleanNames();
method updateNames();
methodr ValidateContracts();
Various things must be completed
Suedo Example below
My question is
in the main class make my connection and keep it for all sub members
or connect in each member


I'm not really sure I understand, but if the members are intended to
be separate threads, you should definitely have one connection per
member.

If they are just different tasks that a single-threaded service will
perform, it's more of a toss-up, but I think the idiom today is to stick
with local connections. Keep in mind that ADO .Net maintains a connection
pool where it holds on to disconnected connections and then reuses them
if there is a request for a connection with the same properties within
some timeframe (60 seconds, I believe).
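The pool behaviour described here can be illustrated with a toy implementation. This is not ADO.Net's actual code, and the 60-second window is just the figure mentioned above; the sketch only shows the idea of handing back a recently released connection instead of opening a new one:

```python
import time

class ToyPool:
    """Reuse a recently released connection; discard ones idle too long."""

    def __init__(self, factory, linger_seconds=60, clock=time.monotonic):
        self.factory = factory        # opens a fresh physical connection
        self.linger = linger_seconds
        self.clock = clock            # injectable for testing
        self.idle = []                # (connection, released_at) pairs

    def acquire(self):
        now = self.clock()
        # drop connections that have sat idle past the linger window
        self.idle = [(c, t) for c, t in self.idle if now - t <= self.linger]
        if self.idle:
            conn, _ = self.idle.pop()
            return conn               # reuse: no new physical connection
        return self.factory()         # pool empty: open a fresh one

    def release(self, conn):
        self.idle.append((conn, self.clock()))
```

This is why closing connections eagerly in the app is cheap: a close usually just returns the connection to the pool, and the next open with the same properties gets it back.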

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Connections

I've got a third party download app that creates ten connections to Sql Server 2000 per user! We've got an average of 40 users/day and so we have around 400 connections floating around all day just for this one small app! We asked the vendor and all they will say is "it's working as designed".

So here are my questions:

1. Is there ANY reason on planet earth, performance or otherwise, that someone would require ten connections?!?
2. How can I get it across that this is unacceptable? How can I calculate how many connections Sql Server can handle? Are there any rules of thumb that I can use for leverage?

Thx.

It is most probably not required. Typically it is efficient to make and break connections as required from the client-side code. The driver/provider maintains a pool of connections anyway, which helps with reuse of connections. This also means that you manage resources more efficiently on the client side. It is possible to leak connection resources if they are used incorrectly on the client side, and this may require frequent restarts of the application or cause bad performance over time. Some connections could also be orphaned on either the client or the server, depending on the complexity of the application. So it is generally not a good idea to keep too many open connections from an application to SQL Server. And lastly, connections do take some resources on the server (see the capacity specifications topic in Books Online).

Connections

I've got a third party download app that creates ten connections to Sql
Server 2000 per user! We've got an average of 40 users/day and so we have
around 400 connections floating around all day just for this one small app!
We asked the vendor and all they will say is "it's working as designed".
So here are my questions:
1. Is there ANY reason on planet earth, performance or otherwise, that
someone would require ten connections?!?
2. How can I get it across that this is unacceptable? How can I calculate
how many connections Sql Server can handle? Are there any rules of thumb
that I can use for leverage?
Thx.CLM wrote:
> I've got a third party download app that creates ten connections to
> Sql Server 2000 per user! We've got an average of 40 users/day and
> so we have around 400 connections floating around all day just for
> this one small app! We asked the vendor and all they will say is
> "it's working as designed".
> So here are my questions:
> 1. Is there ANY reason on planet earth, performance or otherwise,
> that someone would require ten connections?!?
Possibly, but not likely.

> 2. How can I get it across that this is unacceptable? How can I
> calculate how many connections Sql Server can handle? Are there any
> rules of thumb that I can use for leverage?
>
Maybe you can turn on connection pooling in control panel if the app is
using ODBC connections to SQL Server. However, you may need to enable
connection pooling in the application using SQLSetEnvAttr and you
probably can't do this. ADO.Net provides connection pooling
automatically.
Maybe you can find out from the vendor if they support connection
pooling. Then send or email someone high up at the company (like the
CTO) your concerns about the product and the way it uses connections
very liberally.
Each connection consumes 12 KB + (3 * Network Packet Size). So, even
with 400 connections using a 4K packet size, you're only consuming about
10MB of memory.
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Yes, but I've been reading that if Sql Server goes over 255 threads, Sql
Server will start to go over 2 connections per thread and resources will get
shared.
Plus, in addition to what you wrote, isn't Sql Server using some cpu
resources to manage these 400 inactive connections'
"David Gugick" wrote:

> CLM wrote:
> Possibly, but not likely.
>
> Maybe you can turn on connection pooling in control panel if the app is
> using ODBC connections to SQL Server. However, you may need to enable
> connection pooling in the application using SQLSetEnvAttr and you
> probably can't do this. ADO.Net provides connection pooling
> automatically.
> Maybe you can find out from the vendor if they support connection
> pooling. Then send or email someone high up at the company (like the
> CTO) your concerns about the product and the way it uses connections
> very liberally.
> Each connection consumes 12 KB + (3 * Network Packet Size). So, even
> with 400 connections using a 4K packet size, you're only consuming about
> 10MB of memory.
>
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>|||CLM wrote:
> Yes, but I've been reading that if Sql Server goes over 255 threads,
> Sql Server will start to go over 2 connections per thread and
> resources will get shared.
If they're mostly idle, then you probably won't reach the default of 255
worker threads. Even if you do (it would require about 6+ active batches
run on each of your 40 clients), SQL Server will temporarily have
batches wait until a worker thread is available for use. From BOL:
"Having all worker threads allocated does not mean that the performance
of SQL Server will degrade. Typically, a new batch has only a short wait
for a free thread. Allocating more threads may degrade performance
because of the increased work required to coordinate resources among the
threads. Many SQL Server systems running in production reach this state
and run with very high performance levels."

> Plus, in addition to what you wrote, isn't Sql Server using some cpu
> resources to manage these 400 inactive connections'
Not much if they are inactive.
David Gugick
Quest Software
www.imceda.com
www.quest.com|||"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:7A9AD5AD-E23C-469F-B8C6-0124D2D2788C@.microsoft.com...
> Yes, but I've been reading that if Sql Server goes over 255 threads, Sql
> Server will start to go over 2 connections per thread and resources will
> get
> shared.
That's not true. Or rather, it's not relavent. A connection is assigned
not to a thread, but to a scheduler. Each scheduler has a pool of workers
(threads) to carry out work. So there is not a 1-1 relationship between the
number of connections and the number of threads on the server.

> Plus, in addition to what you wrote, isn't Sql Server using some cpu
> resources to manage these 400 inactive connections'
No.
David

Connections

I've got a third party download app that creates ten connections to Sql
Server 2000 per user! We've got an average of 40 users/day and so we have
around 400 connections floating around all day just for this one small app!
We asked the vendor and all they will say is "it's working as designed".
So here are my questions:
1. Is there ANY reason on planet earth, performance or otherwise, that
someone would require ten connections?!?
2. How can I get it across that this is unacceptable? How can I calculate
how many connections Sql Server can handle? Are there any rules of thumb
that I can use for leverage?
Thx.CLM wrote:
> I've got a third party download app that creates ten connections to
> Sql Server 2000 per user! We've got an average of 40 users/day and
> so we have around 400 connections floating around all day just for
> this one small app! We asked the vendor and all they will say is
> "it's working as designed".
> So here are my questions:
> 1. Is there ANY reason on planet earth, performance or otherwise,
> that someone would require ten connections?!?
Possibly, but not likely.
> 2. How can I get it across that this is unacceptable? How can I
> calculate how many connections Sql Server can handle? Are there any
> rules of thumb that I can use for leverage?
>
Maybe you can turn on connection pooling in control panel if the app is
using ODBC connections to SQL Server. However, you may need to enable
connection pooling in the application using SQLSetEnvAttr and you
probably can't do this. ADO.Net provides connection pooling
automatically.
Maybe you can find out from the vendor if they support connection
pooling. Then send or email someone high up at the company (like the
CTO) your concerns about the product and the way it uses connections
very liberally.
Each connection consumes 12 KB + (3 * Network Packet Size). So, even
with 400 connections using a 4K packet size, you're only consuming about
10MB of memory.
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Yes, but I've been reading that if Sql Server goes over 255 threads, Sql
Server will start to go over 2 connections per thread and resources will get
shared.
Plus, in addition to what you wrote, isn't Sql Server using some cpu
resources to manage these 400 inactive connections'
"David Gugick" wrote:
> CLM wrote:
> > I've got a third party download app that creates ten connections to
> > Sql Server 2000 per user! We've got an average of 40 users/day and
> > so we have around 400 connections floating around all day just for
> > this one small app! We asked the vendor and all they will say is
> > "it's working as designed".
> >
> > So here are my questions:
> >
> > 1. Is there ANY reason on planet earth, performance or otherwise,
> > that someone would require ten connections?!?
> Possibly, but not likely.
> > 2. How can I get it across that this is unacceptable? How can I
> > calculate how many connections Sql Server can handle? Are there any
> > rules of thumb that I can use for leverage?
> >
> Maybe you can turn on connection pooling in control panel if the app is
> using ODBC connections to SQL Server. However, you may need to enable
> connection pooling in the application using SQLSetEnvAttr and you
> probably can't do this. ADO.Net provides connection pooling
> automatically.
> Maybe you can find out from the vendor if they support connection
> pooling. Then send or email someone high up at the company (like the
> CTO) your concerns about the product and the way it uses connections
> very liberally.
> Each connection consumes 12 KB + (3 * Network Packet Size). So, even
> with 400 connections using a 4K packet size, you're only consuming about
> 10MB of memory.
>
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>|||CLM wrote:
> Yes, but I've been reading that if Sql Server goes over 255 threads,
> Sql Server will start to go over 2 connections per thread and
> resources will get shared.
If they're mostly idle, then you probably won't reach the default of 255
worker threads. Even if you do (it would require about 6+ active batches
run on each of your 40 clients), SQL Server will temporarily have
batches wait until a worker thread is available for use. From BOL:
"Having all worker threads allocated does not mean that the performance
of SQL Server will degrade. Typically, a new batch has only a short wait
for a free thread. Allocating more threads may degrade performance
because of the increased work required to coordinate resources among the
threads. Many SQL Server systems running in production reach this state
and run with very high performance levels."
> Plus, in addition to what you wrote, isn't Sql Server using some cpu
> resources to manage these 400 inactive connections?
Not much if they are inactive.
--
David Gugick
Quest Software
www.imceda.com
www.quest.com|||"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:7A9AD5AD-E23C-469F-B8C6-0124D2D2788C@.microsoft.com...
> Yes, but I've been reading that if Sql Server goes over 255 threads, Sql
> Server will start to go over 2 connections per thread and resources will
> get
> shared.
That's not true. Or rather, it's not relevant. A connection is assigned
not to a thread, but to a scheduler. Each scheduler has a pool of workers
(threads) to carry out work. So there is not a 1-1 relationship between the
number of connections and the number of threads on the server.
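You can see this for yourself on SQL Server 2000 from sysprocesses: kpid is the Windows thread currently servicing a spid (0 when none is assigned), so idle connections hold no thread. A sketch (column behavior as documented for SQL 2000):

```sql
-- Connections vs. threads actually in use (SQL Server 2000).
-- spid > 50 skips system processes; kpid = 0 means no worker assigned.
SELECT COUNT(*) AS connections,
       COUNT(DISTINCT NULLIF(kpid, 0)) AS threads_in_use
FROM master..sysprocesses
WHERE spid > 50;
```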
> Plus, in addition to what you wrote, isn't Sql Server using some cpu
> resources to manage these 400 inactive connections?
No.
David

Wednesday, March 7, 2012

Connection Timeout - Need Help

Hello,

My app uses SQL Express on the client workstation; it is installed by our application installer as part of the prerequisites. The SQL Express install is standard, and we've not changed any settings or applied any tweaks.

All is well in our dev center. However, some of our clients are experiencing a problem when the app starts up. After the PC has been restarted, running the application causes it to crash with a connection timeout. However, running it a second time works with absolutely no issues.

Overall - after a restart, the first sql express connection will time out. I had a post on this earlier here -> http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=93279&SiteID=1

MS Lee had suggested -- > "As for the first connection failing and subsequent connections succeeding, one potential cause is the cached last connect information. If you're connecting to a named instance and not specifying a port, then the client will determine the port by querying SQL Browser on the server"

I'm not using a remote sql express. It is running on the local machine. My connection string looks like -->

"Integrated Security=True;User Instance=True;Data Source=.\SQLExpress;AttachDBFilename=<%USERS_LOCAL_FOLDER%>;"

What may be the problem? Any help/advice will be great. I'll need to sort this out ASAP.

Thanks,

Avinash

System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)

at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)

at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)

at System.Data.SqlClient.TdsParserStateObject.ReadSni(DbAsyncResult asyncResult, TdsParserStateObject stateObj)

at System.Data.SqlClient.TdsParserStateObject.ReadPacket(Int32 bytesExpected)

at System.Data.SqlClient.TdsParserStateObject.ReadBuffer()

at System.Data.SqlClient.TdsParserStateObject.ReadByte()

at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)

at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)

at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)

at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)

at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)

at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)

at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)

at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)

at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)

at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)

at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)

at System.Data.SqlClient.SqlConnection.Open()

at ***.**.Base.DataServices.DataFactory.SQLProvider.GetConnection(String connectionString) in c:\DevNet\SCAH\***.**.Base.DataServices\DataFactory\SQLProvider.cs:line 30

at ***.**.Base.DataServices.DalContext.OpenConnection() in c:\DevNet\SCAH\***.**.Base.DataServices\DALContext.cs:line 166

at ***.**.Base.DataServices.DataManager.GetTable(String sql, String tableName, DataSet dataSet) in c:\DevNet\SCAH\***.**.Base.DataServices\DataManager.cs:line 93

Hi Avinash,

I'd recommend setting the Connect Timeout (also accepted as Connection Timeout) value in the connection string to a higher value. I believe the default is only 15 seconds.

The most likely cause of the problem you are seeing is a result of how User Instances work. User Instances launch a new SQL Server process that runs in the context of the application user. In order to do this, we need to set up some directories in the user's profile and copy some files into that directory. This takes some time, and it can cause a timeout to occur the first time the User Instance is created.

That work only needs to be done once, so subsequent connections succeed, as you've found.
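For example, the original connection string with a longer timeout added (the 120 seconds here is just an illustration; the placeholder path is left as posted):

```
Integrated Security=True;User Instance=True;Data Source=.\SQLExpress;AttachDBFilename=<%USERS_LOCAL_FOLDER%>;Connect Timeout=120;
```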

Regards,

Mike Wachal
SQL Express team

Mark the best posts as Answers!

|||

Hello Mike W,

Many thanks for your response. I'll go ahead and try that out.

Question - will the folders/files need to be set up each time the SQL process is started? Won't it use the ones created from a previous run?

Also - some of the clients experiencing this problem are running very modern laptops with generous amounts of RAM and hard disk space available. They were found to be rather lightly loaded, with no heavy apps/processes running when this occurred. In that light - I suppose 60 seconds is a reasonable time to get started. Thoughts?

But, I'll go ahead and try it - we'll see how it goes with our next build.

Thanks again.

Regards,

Avinash

|||

Hi Avinash,

The files and folders only need to be created once per user. The second time you try to connect, the folders and files are there and those are used. This is why you're seeing the behavior of a timeout the first time, but success the second (and subsequent) times.

It doesn't really have much to do with how modern your computer is; it has to do with HD access time and how long it takes to copy a file. Simply put, the default timeout is too short to accomplish both the file copy and the connection. I believe the default timeout is only 15 seconds, so I'd go with something longer. A little trial and error should get you the right value.

Regards,

Mike

Mark the best posts as Answers!

|||

One other possibility I can think of is: You have some network issue with slow DNS.

Every time you make a connection, we need to do a couple of DNS resolves (in some cases in both directions, IP <--> name). The connection may time out if your DNS is slow. The second time, Windows will pick up the cached DNS entries and you can get connected much faster. If you know a machine that experiences this first-time-timeout issue for sure, you can try to ping the name of the server machine before you connect to the server. If the issue is gone in that case, you know the reason is slow DNS.

Another way to verify: run "ipconfig /flushdns" from a command window and try to connect to the server afterwards. See whether you see the timeout issue this time.

|||

This may be a dumb question. But I'll go ahead and ask anyway.

Is this valid in the case of SQL Express, particularly when both the client application (a C# smart client) accessing the Express database and SQL Express itself are running on the same machine? Because this is the mode in which we are seeing this issue.

Will a DNS resolve be needed when we use ".\SQLExpress" in the connect string ?

Thanks,

Avinash

|||

I would guess that DNS has nothing to do with accessing a local instance of SQL Express. As I said in an earlier post, the most likely issue here is the delay caused by the file copy that is required the first time (and only the first time) you create a User Instance for a given user.

Has adjusting the timeout eliminated the problem?

Regards,

Mike

|||

Hello Mike,

The last build has just been shipped out. I've set the time out to 120 seconds. Should hear from our clients in a few days.

Will let you know how it goes.

Thanks for all the help,

Regards,

Avinash

|||

Hello Mike,

Just to let you know that increasing the connection time out has indeed solved the problem.

Thanks,

Avinash

Saturday, February 25, 2012

Connection Throttle

I have an application where some of the queries hit the server hard (I
cannot rewrite the app). Some of the queries take 5 or more minutes to run,
and during this time the CPU is pegged and SQL no longer services other
queries until the hog query is complete. Is there a way I can throttle a
connection - like not allowing a connection to use more than 25% of the CPU?
Thanks,
Troy
Try setting the MAXDOP to something less than the total number of
processors. But it sounds like you probably need to just tune your query.
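For reference, MAXDOP can be capped server-wide like this (requires sysadmin; a per-query OPTION (MAXDOP 1) hint isn't an option if you can't change the app's queries):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 1 = no parallelism; try a value below the number of processors.
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```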
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>I have an application that where some of the queries hit the server hard (I
>cannot rewrite the app). Some of the queries take 5 or more minutes to run
>and during this time the CPU is pegged and SQL no longer services other
>queries until the the hog query is complete. I there a way I can throttle
>a connection - like not allow a connection to use more than 25% of the CPU?
> Thanks,
> Troy
>
|||Troy Murphy wrote:
> I have an application that where some of the queries hit the server
> hard (I cannot rewrite the app). Some of the queries take 5 or more
> minutes to run and during this time the CPU is pegged and SQL no
> longer services other queries until the the hog query is complete. I
> there a way I can throttle a connection - like not allow a connection
> to use more than 25% of the CPU?
> Thanks,
> Troy
Maybe you could add appropriate indexes to the database to help the
query run more efficiently without worrying about touching the
application.
David Gugick
Imceda Software
www.imceda.com
|||Thanks, I will look into that. The queries are fairly optimized, but we
have 10's of millions of records. Even a simple select sum() takes several
minutes. However the main culprit is a morning update against a few hundred
thousand records that keeps the software unusable during that time.
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
> Try setting the MAXDOP to something less than the total number of
> processors. But it sounds like you probably need to just tune your
> query.
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>
|||Several minutes for a simple SUM() is pretty slow. I suspect you lack
enough memory and have the wrong disk configurations. Any chance you can
create a covering index for that SUM()? As for the updates are you trying
to do all of them in one transaction? If so you may want to break them up
into smaller batches. It's pretty easy to loop and update say 10,000 rows at
a time, and it will usually be faster and certainly less intrusive.
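As a sketch of both ideas (the table and column names below are made up for illustration):

```sql
-- Covering index so SUM(Amount) can be answered from the index alone:
CREATE INDEX IX_Orders_Cust_Amount ON dbo.Orders (CustomerID, Amount);

-- Batched update, SQL Server 2000 style: SET ROWCOUNT limits each UPDATE
-- to 10,000 rows; loop until nothing is left to update.
SET ROWCOUNT 10000;
WHILE 1 = 1
BEGIN
    UPDATE dbo.Orders SET Processed = 1 WHERE Processed = 0;
    IF @@ROWCOUNT = 0 BREAK;
END
SET ROWCOUNT 0;  -- restore the default
```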
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
> Thanks, I will look into that. The queries are fairly optimized, but we
> have 10's of millions of records. Even a simple select sum() takes
> several minutes. However the main culprit is a morning update against a
> few hundred thousand records that keeps the software unusable during that
> time.
> Troy
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>
|||On Thu, 14 Oct 2004 07:12:17 -0700, "Troy Murphy"
<thm92630@.hotmail.com> wrote:
>I have an application that where some of the queries hit the server hard (I
>cannot rewrite the app). Some of the queries take 5 or more minutes to run
>and during this time the CPU is pegged and SQL no longer services other
>queries until the the hog query is complete. I there a way I can throttle a
>connection - like not allow a connection to use more than 25% of the CPU?
No.
Better would be a way to assign a lower priority to the offending
connection/query.
But the thinking is that this kind of thing is very dangerous, as the
throttled thread may be holding resources that block other users.
Unfortunately, you can get conditions where the range of the heavy
query is logically limited, it just soaks up all the database
capability.
If your server is otherwise large enough memory-wise and your disk
usage is appropriate (log files on separate disk from data files) and
you have two or more processors, then short of recoding, which you say
you can't do, that's it.
J.
|||Thanks to all for the feedback. The SUM() is just an example and the
application is a 3rd party solution so changing the source would be
difficult to say the least.
Still, the behavior seems to be that only a single query will run at a time
and queries from other connections are on hold while the query is running.
I played with the MAXDOP as well as the CURSOR THRESHOLD setting without
any change in behavior (reboot of server after each change).
I can reproduce this behavior using query analyzer from 2 workstations where
WS1 will run a long query and while that is running WS2 will run a very
quick query. While WS1 is running WS2 has the world turning, but as soon as
WS1 completes, the result set from WS2 is served.
The server configuration is a dual processor with hyper-threading and 2 gb
RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and SQL
Server 2000 Enterprise Edition. Running Performance Monitor on Processor
Time and SqlServer Cache Pages does not indicate an overload.
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
> Several minutes for a simple SUM() is pretty slow. I suspect you lack
> enough memory and have the wrong disk configurations. Any chance you can
> create a covering index for that SUM()? As for the updates are you trying
> to do all of them in one transaction? If so you may want to break them up
> into smaller batches. It's pretty easy to lop and update say 10,000 rows
> at a time and will usually be faster and certainly less intrusive.
>
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
>
|||Troy,
What you are describing sounds more like a blocking issue and not so much
CPU. If you set MAXDOP to 1 and the query were to use all of the first
processor the others would still be available. Have you looked at sp_who2
to see if any blocking is going on while this is happening? It also still
can be that the tables are poorly indexed. Even though the app is 3rd party
you can most likely add indexes.
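A quick way to spot blocking while the slow query runs (SQL Server 2000; a nonzero blocked value is the spid doing the blocking):

```sql
EXEC sp_who2;  -- look at the BlkBy column

-- Or query sysprocesses directly for sessions being blocked:
SELECT spid, blocked, waittype, lastwaittype, cmd, loginame
FROM master..sysprocesses
WHERE blocked <> 0;
```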
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:ufFY8ttsEHA.3556@.TK2MSFTNGP10.phx.gbl...
> Thanks to all for the feedback. The SUM() is just an example and the
> application is a 3rd party solution so changing the source would be
> difficult to say the least.
> Still, the behavior seems to be that only a single query will run at a
> time and queries from other connections are on hold while the query is
> running. I played with the MAXDOP as well as the CURSOR THRESHOLD setting
> with out any changes in behavior (reboot of server after each change).
> I can reproduce this behavior using query analyzer from 2 workstations
> where WS1 will run a long query and while that is running WS2 will run a
> very quick query. While WS1 is running WS2 has the world turning, but as
> soon as WS1 completes, the result set from WS2 is served.
> The server configuration is a dual processor with hyper-threading and 2 gb
> RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and SQL
> Server 2000 Enterprise Edition. Running Performance Monitor on Processor
> Time and SqlServer Cache Pages does not indicate an overload.
> Troy
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
>
|||Ok, I reproduced the scenario described below and ran sp_who (I was actually
expecting this to wait for the query to complete, but it ran as expected).
Anyway, the user who was running the long query showed a status of 'Ready to
Run' while executing and the process that was waiting in limbo showed as
'Sleeping'. All values in the BLK field were 0 and each had a unique SPID.
I had reset all the Configuration settings back to default. We exported the
database to another server (single processor, 1 gb Ram, SCSI) and that
server did not seem to have this problem in that the small queries ran as
expected while the large query was still running. HELP!!!!
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uPahbQusEHA.2072@.tk2msftngp13.phx.gbl...
> Troy,
> What you are describing sounds more like a blocking issue and not so much
> CPU. If you set MAXDOP to 1 and the query were to use all of the first
> processor the others would still be available. Have you looked at sp_who2
> to see if any blocking is going on while this is happening? It also still
> can be that the tables are poorly indexed. Even though the app is 3rd
> party you can most likely add indexes.
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:ufFY8ttsEHA.3556@.TK2MSFTNGP10.phx.gbl...
>
|||What about disk queues? Is the first process creating a massive disk queue?
sp_who would not be affected as much by that. What about wait types? Try
running DBCC SQLPERF(Waitstats) or select * from sysprocesses and see what
the wait types and times are for the 2 spids.
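For example (the spid numbers 51 and 52 below are placeholders for the two sessions):

```sql
SELECT spid, waittype, waittime, lastwaittype, waitresource
FROM master..sysprocesses
WHERE spid IN (51, 52);

DBCC SQLPERF(waitstats);
```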
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:efIBI7usEHA.3788@.TK2MSFTNGP15.phx.gbl...
> Ok, I reproduced the scenerio described below and ran SP_WHO (I was
> actually expecting this to wait for the query to complete, but it ran as
> expected). Anyway, the user who was running the long query showed a status
> of 'Ready to Run' while executing and the process that was waiting in
> limbo showed as 'Sleeping'. All values in the BLK field were 0 and each
> had a unique SPID. I had reset all the Configuration settings back to
> default. We exported the database to another server (single processor, 1
> gb Ram, SCSI) and that server did not seem to have this problem in that
> the small queries ran as expected while the large query was still running.
> HELP!!!!
> Troy
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uPahbQusEHA.2072@.tk2msftngp13.phx.gbl...
>

Connection Throttle

I have an application that where some of the queries hit the server hard (I
cannot rewrite the app). Some of the queries take 5 or more minutes to run
and during this time the CPU is pegged and SQL no longer services other
queries until the the hog query is complete. I there a way I can throttle a
connection - like not allow a connection to use more than 25% of the CPU?
Thanks,
TroyTry setting the MAXDOP to something less than the total number of
processors. But it sounds like you probably need to just tune your query.
--
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>I have an application that where some of the queries hit the server hard (I
>cannot rewrite the app). Some of the queries take 5 or more minutes to run
>and during this time the CPU is pegged and SQL no longer services other
>queries until the the hog query is complete. I there a way I can throttle
>a connection - like not allow a connection to use more than 25% of the CPU?
> Thanks,
> Troy
>|||Troy Murphy wrote:
> I have an application that where some of the queries hit the server
> hard (I cannot rewrite the app). Some of the queries take 5 or more
> minutes to run and during this time the CPU is pegged and SQL no
> longer services other queries until the the hog query is complete. I
> there a way I can throttle a connection - like not allow a connection
> to use more than 25% of the CPU?
> Thanks,
> Troy
Maybe you could add appropriate indexes to the database to help the
query run more efficiently without worrying about touching the
application.
--
David Gugick
Imceda Software
www.imceda.com|||Thanks, I will look into that. The queries are fairly optimized, but we
have 10's of millions of records. Even a simple select sum() takes several
minutes. However the main culprit is a morning update against a few hundred
thousand records that keeps the software unusable during that time.
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
> Try setting the MAXDOP to something less than the total number of
> processors. But it sounds like you probably need to just tune your
> query.
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>I have an application that where some of the queries hit the server hard
>>(I cannot rewrite the app). Some of the queries take 5 or more minutes to
>>run and during this time the CPU is pegged and SQL no longer services
>>other queries until the the hog query is complete. I there a way I can
>>throttle a connection - like not allow a connection to use more than 25%
>>of the CPU?
>> Thanks,
>> Troy
>|||Several minutes for a simple SUM() is pretty slow. I suspect you lack
enough memory and have the wrong disk configurations. Any chance you can
create a covering index for that SUM()? As for the updates are you trying
to do all of them in one transaction? If so you may want to break them up
into smaller batches. It's pretty easy to lop and update say 10,000 rows at
a time and will usually be faster and certainly less intrusive.
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
> Thanks, I will look into that. The queries are fairly optimized, but we
> have 10's of millions of records. Even a simple select sum() takes
> several minutes. However the main culprit is a morning update against a
> few hundred thousand records that keeps the software unusable during that
> time.
> Troy
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>> Try setting the MAXDOP to something less than the total number of
>> processors. But it sounds like you probably need to just tune your
>> query.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>I have an application that where some of the queries hit the server hard
>>(I cannot rewrite the app). Some of the queries take 5 or more minutes
>>to run and during this time the CPU is pegged and SQL no longer services
>>other queries until the the hog query is complete. I there a way I can
>>throttle a connection - like not allow a connection to use more than 25%
>>of the CPU?
>> Thanks,
>> Troy
>>
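Andrew's two suggestions above - a covering index for the SUM() and breaking the big update into batches - might look roughly like this in SQL Server 2000-era T-SQL. The table, columns, and index name here are all made up for illustration; the batching pattern itself is the point:

```sql
-- Hypothetical covering index: lets SUM(Amount) per Region be answered
-- from the index alone, without scanning the base table.
CREATE INDEX IX_Orders_Region_Amount ON Orders (Region, Amount)

-- Batched update: touch ~10,000 rows per statement instead of a few
-- hundred thousand in one giant transaction (SQL Server 2000 style).
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    UPDATE Orders
    SET    Status = 'Processed'
    WHERE  Status = 'Pending'
    IF @@ROWCOUNT = 0 BREAK   -- nothing left to update
END
SET ROWCOUNT 0   -- always reset, or later statements stay capped
```

Because each UPDATE commits on its own in autocommit mode, locks are held only for the duration of a 10,000-row batch, which is why this is less intrusive to other connections.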
>
|||
On Thu, 14 Oct 2004 07:12:17 -0700, "Troy Murphy"
<thm92630@.hotmail.com> wrote:
>I have an application that where some of the queries hit the server hard (I
>cannot rewrite the app). Some of the queries take 5 or more minutes to run
>and during this time the CPU is pegged and SQL no longer services other
>queries until the the hog query is complete. I there a way I can throttle a
>connection - like not allow a connection to use more than 25% of the CPU?
No.
Better would be a way to assign a lower priority to the offending
connection/query.
But the thinking is that this kind of thing is very dangerous, as the
throttled thread may be holding resources that block other users.
Unfortunately, you can get conditions where the scope of the heavy
query is logically limited, yet it still soaks up all the database's
capacity.
If your server is otherwise large enough memory-wise and your disk
usage is appropriate (log files on separate disk from data files) and
you have two or more processors, then short of recoding, which you say
you can't do, that's it.
J.
|||
Thanks to all for the feedback. The SUM() is just an example, and the
application is a 3rd-party solution, so changing the source would be
difficult to say the least.
Still, the behavior seems to be that only a single query will run at a time,
and queries from other connections are on hold while the query is running.
I played with the MAXDOP as well as the CURSOR THRESHOLD setting without
any changes in behavior (rebooting the server after each change).
I can reproduce this behavior using Query Analyzer from 2 workstations, where
WS1 will run a long query and, while that is running, WS2 will run a very
quick query. While WS1 is running, WS2 sits with the hourglass turning, but
as soon as WS1 completes, the result set from WS2 is served.
The server configuration is a dual processor with hyper-threading and 2 GB of
RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and SQL
Server 2000 Enterprise Edition. Running Performance Monitor on Processor
Time and SQL Server Cache Pages does not indicate an overload.
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
> Several minutes for a simple SUM() is pretty slow. I suspect you lack
> enough memory and have the wrong disk configurations. Any chance you can
> create a covering index for that SUM()? As for the updates are you trying
> to do all of them in one transaction? If so you may want to break them up
> into smaller batches. It's pretty easy to lop and update say 10,000 rows
> at a time and will usually be faster and certainly less intrusive.
>
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
>> Thanks, I will look into that. The queries are fairly optimized, but we
>> have 10's of millions of records. Even a simple select sum() takes
>> several minutes. However the main culprit is a morning update against a
>> few hundred thousand records that keeps the software unusable during that
>> time.
>> Troy
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>> Try setting the MAXDOP to something less than the total number of
>> processors. But it sounds like you probably need to just tune your
>> query.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>I have an application that where some of the queries hit the server hard
>>(I cannot rewrite the app). Some of the queries take 5 or more minutes
>>to run and during this time the CPU is pegged and SQL no longer services
>>other queries until the the hog query is complete. I there a way I can
>>throttle a connection - like not allow a connection to use more than 25%
>>of the CPU?
>> Thanks,
>> Troy
>>
>>
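The MAXDOP experiments Troy describes can be done two ways in SQL Server 2000: server-wide via sp_configure, or per statement with a query hint if the SQL can be edited at all. The SELECT below uses a made-up table and column purely to show the hint syntax:

```sql
-- Server-wide cap on parallelism (an advanced option):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 1
RECONFIGURE

-- Or per statement, for any query you can actually modify:
SELECT SUM(Amount)        -- hypothetical column and table
FROM   Orders
OPTION (MAXDOP 1)
```

Note that changing 'max degree of parallelism' takes effect immediately after RECONFIGURE; no server reboot is needed, so the reboots mentioned in the thread were not required for these tests.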
>
|||
Troy,
What you are describing sounds more like a blocking issue and not so much
CPU. If you set MAXDOP to 1 and a query were to use all of the first
processor, the others would still be available. Have you looked at sp_who2
to see if any blocking is going on while this is happening? It could also
be that the tables are poorly indexed. Even though the app is 3rd party,
you can most likely add indexes.
--
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:ufFY8ttsEHA.3556@.TK2MSFTNGP10.phx.gbl...
> Thanks to all for the feedback. The SUM() is just an example and the
> application is a 3rd party solution so changing the source would be
> difficult to say the least.
> Still, the behavior seems to be that only a single query will run at a
> time and queries from other connections are on hold while the query is
> running. I played with the MAXDOP as well as the CURSOR THRESHOLD setting
> with out any changes in behavior (reboot of server after each change).
> I can reproduce this behavior using query analyzer from 2 workstations
> where WS1 will run a long query and while that is running WS2 will run a
> very quick query. While WS1 is running WS2 has the world turning, but as
> soon as WS1 completes, the result set from WS2 is served.
> The server configuration is a dual processor with hyper-threading and 2 gb
> RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and SQL
> Server 2000 Enterprise Edition. Running Performance Monitor on Processor
> Time and SqlServer Cache Pages does not indicate an overload.
> Troy
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
>> Several minutes for a simple SUM() is pretty slow. I suspect you lack
>> enough memory and have the wrong disk configurations. Any chance you can
>> create a covering index for that SUM()? As for the updates are you
>> trying to do all of them in one transaction? If so you may want to break
>> them up into smaller batches. It's pretty easy to lop and update say
>> 10,000 rows at a time and will usually be faster and certainly less
>> intrusive.
>>
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
>> Thanks, I will look into that. The queries are fairly optimized, but we
>> have 10's of millions of records. Even a simple select sum() takes
>> several minutes. However the main culprit is a morning update against a
>> few hundred thousand records that keeps the software unusable during
>> that time.
>> Troy
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>> Try setting the MAXDOP to something less than the total number of
>> processors. But it sounds like you probably need to just tune your
>> query.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>I have an application that where some of the queries hit the server
>>hard (I cannot rewrite the app). Some of the queries take 5 or more
>>minutes to run and during this time the CPU is pegged and SQL no longer
>>services other queries until the the hog query is complete. I there a
>>way I can throttle a connection - like not allow a connection to use
>>more than 25% of the CPU?
>> Thanks,
>> Troy
>>
>>
>>
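A quick sketch of the blocking check Andrew suggests, assuming SQL Server 2000 (sp_who2's BlkBy column and the sysprocesses `blocked` column report the SPID doing the blocking):

```sql
-- Who is blocking whom? Non-zero values in the BlkBy column are the culprits.
EXEC sp_who2

-- The same information straight from the system table:
SELECT spid, blocked, lastwaittype, waittime, cpu, physical_io
FROM   master..sysprocesses
WHERE  blocked <> 0
```

If this returns no rows while the small query is stalled, the problem is contention for a resource (CPU, disk, memory) rather than lock blocking.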
>
|||
Ok, I reproduced the scenario described below and ran sp_who (I was actually
expecting this to wait for the query to complete, but it ran as expected).
Anyway, the user who was running the long query showed a status of 'Ready to
Run' while executing, and the process that was waiting in limbo showed as
'Sleeping'. All values in the BLK field were 0, and each had a unique SPID.
I had reset all the configuration settings back to default. We exported the
database to another server (single processor, 1 GB RAM, SCSI), and that
server did not seem to have this problem, in that the small queries ran as
expected while the large query was still running. HELP!!!!
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uPahbQusEHA.2072@.tk2msftngp13.phx.gbl...
> Troy,
> What you are describing sounds more like a blocking issue and not so much
> CPU. If you set MAXDOP to 1 and the query were to use all of the first
> processor the others would still be available. Have you looked at sp_who2
> to see if any blocking is going on while this is happening? It also still
> can be that the tables are poorly indexed. Even though the app is 3rd
> party you can most likely add indexes.
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:ufFY8ttsEHA.3556@.TK2MSFTNGP10.phx.gbl...
>> Thanks to all for the feedback. The SUM() is just an example and the
>> application is a 3rd party solution so changing the source would be
>> difficult to say the least.
>> Still, the behavior seems to be that only a single query will run at a
>> time and queries from other connections are on hold while the query is
>> running. I played with the MAXDOP as well as the CURSOR THRESHOLD setting
>> with out any changes in behavior (reboot of server after each change).
>> I can reproduce this behavior using query analyzer from 2 workstations
>> where WS1 will run a long query and while that is running WS2 will run a
>> very quick query. While WS1 is running WS2 has the world turning, but as
>> soon as WS1 completes, the result set from WS2 is served.
>> The server configuration is a dual processor with hyper-threading and 2
>> gb RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and SQL
>> Server 2000 Enterprise Edition. Running Performance Monitor on Processor
>> Time and SqlServer Cache Pages does not indicate an overload.
>> Troy
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
>> Several minutes for a simple SUM() is pretty slow. I suspect you lack
>> enough memory and have the wrong disk configurations. Any chance you can
>> create a covering index for that SUM()? As for the updates are you
>> trying to do all of them in one transaction? If so you may want to
>> break them up into smaller batches. It's pretty easy to lop and update
>> say 10,000 rows at a time and will usually be faster and certainly less
>> intrusive.
>>
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
>> Thanks, I will look into that. The queries are fairly optimized, but
>> we have 10's of millions of records. Even a simple select sum() takes
>> several minutes. However the main culprit is a morning update against
>> a few hundred thousand records that keeps the software unusable during
>> that time.
>> Troy
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>> Try setting the MAXDOP to something less than the total number of
>> processors. But it sounds like you probably need to just tune your
>> query.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>I have an application that where some of the queries hit the server
>>hard (I cannot rewrite the app). Some of the queries take 5 or more
>>minutes to run and during this time the CPU is pegged and SQL no
>>longer services other queries until the the hog query is complete. I
>>there a way I can throttle a connection - like not allow a connection
>>to use more than 25% of the CPU?
>> Thanks,
>> Troy
>>
>>
>>
>>
>
|||
What about disk queues? Is the first process creating a massive disk queue?
sp_who would not be affected as much by that. What about wait types? Try
running DBCC SQLPERF(WAITSTATS) or SELECT * FROM sysprocesses and see what
the wait types and times are for the 2 SPIDs.
--
Andrew J. Kelly SQL MVP
"Troy Murphy" <thm92630@.hotmail.com> wrote in message
news:efIBI7usEHA.3788@.TK2MSFTNGP15.phx.gbl...
> Ok, I reproduced the scenerio described below and ran SP_WHO (I was
> actually expecting this to wait for the query to complete, but it ran as
> expected). Anyway, the user who was running the long query showed a status
> of 'Ready to Run' while executing and the process that was waiting in
> limbo showed as 'Sleeping'. All values in the BLK field were 0 and each
> had a unique SPID. I had reset all the Configuration settings back to
> default. We exported the database to another server (single processor, 1
> gb Ram, SCSI) and that server did not seem to have this problem in that
> the small queries ran as expected while the large query was still running.
> HELP!!!!
> Troy
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uPahbQusEHA.2072@.tk2msftngp13.phx.gbl...
>> Troy,
>> What you are describing sounds more like a blocking issue and not so much
>> CPU. If you set MAXDOP to 1 and the query were to use all of the first
>> processor the others would still be available. Have you looked at
>> sp_who2 to see if any blocking is going on while this is happening? It
>> also still can be that the tables are poorly indexed. Even though the
>> app is 3rd party you can most likely add indexes.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:ufFY8ttsEHA.3556@.TK2MSFTNGP10.phx.gbl...
>> Thanks to all for the feedback. The SUM() is just an example and the
>> application is a 3rd party solution so changing the source would be
>> difficult to say the least.
>> Still, the behavior seems to be that only a single query will run at a
>> time and queries from other connections are on hold while the query is
>> running. I played with the MAXDOP as well as the CURSOR THRESHOLD
>> setting with out any changes in behavior (reboot of server after each
>> change).
>> I can reproduce this behavior using query analyzer from 2 workstations
>> where WS1 will run a long query and while that is running WS2 will run a
>> very quick query. While WS1 is running WS2 has the world turning, but
>> as soon as WS1 completes, the result set from WS2 is served.
>> The server configuration is a dual processor with hyper-threading and 2
>> gb RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and
>> SQL Server 2000 Enterprise Edition. Running Performance Monitor on
>> Processor Time and SqlServer Cache Pages does not indicate an overload.
>> Troy
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
>> Several minutes for a simple SUM() is pretty slow. I suspect you lack
>> enough memory and have the wrong disk configurations. Any chance you
>> can create a covering index for that SUM()? As for the updates are you
>> trying to do all of them in one transaction? If so you may want to
>> break them up into smaller batches. It's pretty easy to lop and update
>> say 10,000 rows at a time and will usually be faster and certainly less
>> intrusive.
>>
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
>> Thanks, I will look into that. The queries are fairly optimized, but
>> we have 10's of millions of records. Even a simple select sum() takes
>> several minutes. However the main culprit is a morning update against
>> a few hundred thousand records that keeps the software unusable during
>> that time.
>> Troy
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>> Try setting the MAXDOP to something less than the total number of
>> processors. But it sounds like you probably need to just tune your
>> query.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>>I have an application that where some of the queries hit the server
>>>hard (I cannot rewrite the app). Some of the queries take 5 or more
>>>minutes to run and during this time the CPU is pegged and SQL no
>>>longer services other queries until the the hog query is complete. I
>>>there a way I can throttle a connection - like not allow a connection
>>>to use more than 25% of the CPU?
>>>
>>> Thanks,
>>> Troy
>>>
>>
>>
>>
>>
>>
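The wait-type check Andrew proposes might be run like this on SQL Server 2000; the two SPID values are placeholders for the actual SPIDs of the two workstation sessions:

```sql
-- Cumulative wait statistics since the counters were last cleared:
DBCC SQLPERF(WAITSTATS)

-- Or watch the two sessions directly while the test is running
-- (replace 51 and 52 with the real SPIDs reported by sp_who):
SELECT spid, status, lastwaittype, waittime, cpu, physical_io
FROM   master..sysprocesses
WHERE  spid IN (51, 52)
```

A long-running lastwaittype of PAGEIOLATCH_SH on the heavy query, combined with the un-striped IDE disk mentioned earlier, would point at the disk subsystem as the shared bottleneck.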
>
|||
Thanks for the assistance, but this is getting way over my head...
I was not planning on becoming a SQL guru, but was hoping for good
'out-of-the-box' performance.
Anyway, we are going to throw more hardware at it to see if the situation
resolves itself. If that does not work, we will refactor to Pervasive SQL,
which seems to perform better on this application.
Troy
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OeX8qGwsEHA.3412@.TK2MSFTNGP14.phx.gbl...
> What about disk queues? Is the first process creating a massive disk
> queue? sp_who would not be affected as much by that. What about wait
> types? Try running DBCC SQLPERF(Waitstats) or select * from sysprocesses
> and see what the wait types and times are for the 2 spids.
> --
> Andrew J. Kelly SQL MVP
>
> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
> news:efIBI7usEHA.3788@.TK2MSFTNGP15.phx.gbl...
>> Ok, I reproduced the scenerio described below and ran SP_WHO (I was
>> actually expecting this to wait for the query to complete, but it ran as
>> expected). Anyway, the user who was running the long query showed a
>> status of 'Ready to Run' while executing and the process that was waiting
>> in limbo showed as 'Sleeping'. All values in the BLK field were 0 and
>> each had a unique SPID. I had reset all the Configuration settings back
>> to default. We exported the database to another server (single
>> processor, 1 gb Ram, SCSI) and that server did not seem to have this
>> problem in that the small queries ran as expected while the large query
>> was still running. HELP!!!!
>> Troy
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uPahbQusEHA.2072@.tk2msftngp13.phx.gbl...
>> Troy,
>> What you are describing sounds more like a blocking issue and not so
>> much CPU. If you set MAXDOP to 1 and the query were to use all of the
>> first processor the others would still be available. Have you looked at
>> sp_who2 to see if any blocking is going on while this is happening? It
>> also still can be that the tables are poorly indexed. Even though the
>> app is 3rd party you can most likely add indexes.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:ufFY8ttsEHA.3556@.TK2MSFTNGP10.phx.gbl...
>> Thanks to all for the feedback. The SUM() is just an example and the
>> application is a 3rd party solution so changing the source would be
>> difficult to say the least.
>> Still, the behavior seems to be that only a single query will run at a
>> time and queries from other connections are on hold while the query is
>> running. I played with the MAXDOP as well as the CURSOR THRESHOLD
>> setting with out any changes in behavior (reboot of server after each
>> change).
>> I can reproduce this behavior using query analyzer from 2 workstations
>> where WS1 will run a long query and while that is running WS2 will run
>> a very quick query. While WS1 is running WS2 has the world turning,
>> but as soon as WS1 completes, the result set from WS2 is served.
>> The server configuration is a dual processor with hyper-threading and 2
>> gb RAM. Unfortunately the drive is un-striped IDE. Windows 2003 and
>> SQL Server 2000 Enterprise Edition. Running Performance Monitor on
>> Processor Time and SqlServer Cache Pages does not indicate an overload.
>> Troy
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:O9GpoIksEHA.2320@.TK2MSFTNGP12.phx.gbl...
>> Several minutes for a simple SUM() is pretty slow. I suspect you lack
>> enough memory and have the wrong disk configurations. Any chance you
>> can create a covering index for that SUM()? As for the updates are
>> you trying to do all of them in one transaction? If so you may want
>> to break them up into smaller batches. It's pretty easy to lop and
>> update say 10,000 rows at a time and will usually be faster and
>> certainly less intrusive.
>>
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>> news:Onn12wisEHA.2780@.TK2MSFTNGP09.phx.gbl...
>> Thanks, I will look into that. The queries are fairly optimized, but
>> we have 10's of millions of records. Even a simple select sum()
>> takes several minutes. However the main culprit is a morning update
>> against a few hundred thousand records that keeps the software
>> unusable during that time.
>> Troy
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uezVitfsEHA.3572@.tk2msftngp13.phx.gbl...
>>> Try setting the MAXDOP to something less than the total number of
>>> processors. But it sounds like you probably need to just tune your
>>> query.
>>>
>>> --
>>> Andrew J. Kelly SQL MVP
>>>
>>>
>>> "Troy Murphy" <thm92630@.hotmail.com> wrote in message
>>> news:O%23em8ifsEHA.224@.TK2MSFTNGP15.phx.gbl...
>>>I have an application that where some of the queries hit the server
>>>hard (I cannot rewrite the app). Some of the queries take 5 or more
>>>minutes to run and during this time the CPU is pegged and SQL no
>>>longer services other queries until the the hog query is complete.
>>>I there a way I can throttle a connection - like not allow a
>>>connection to use more than 25% of the CPU?
>>>
>>> Thanks,
>>> Troy
>>>
>>>
>>>
>>
>>
>>
>>
>>
>

Connection Throttle

I have an application where some of the queries hit the server hard (I
cannot rewrite the app). Some of the queries take 5 or more minutes to run,
and during this time the CPU is pegged and SQL no longer services other
queries until the hog query is complete. Is there a way I can throttle a
connection - like not allow a connection to use more than 25% of the CPU?
Thanks,
Troy