I have had a few calls with a Dynamics GP customer who has been having performance issues.
They have two separate Dynamics GP environments for two separate entities. Environment A runs fine--GP performance is good and it takes only about 15 seconds to log in. Environment B takes 10 to 12 minutes to load!!! Environment A has about 160 company databases, and Environment B has about 150 company databases.
When this type of Dynamics GP performance issue occurs, I usually notice that the client misdiagnoses the problem and focuses on possible causes that are based on coincidence rather than analysis or data.
This client had recently installed Mekorma MICR in Environment B. They believed that the performance issues started after they installed Mekorma MICR, so they had been calling Mekorma support and spending hours trying to troubleshoot the issue. Not surprisingly, they told me that Mekorma was baffled by the performance issues.
Knowing how Mekorma MICR works, and having worked with many companies that use Mekorma with a lot of company databases, I was very skeptical that Mekorma MICR was causing the performance issue in Environment B, especially since Environment A also used Mekorma MICR. It just didn't make sense.
The contact at the client explained that they have had several internal meetings with DBAs and network engineers analyzing activity and packets, and they claim that they aren't able to observe any indicators of a performance issue. Yet I can guarantee that having to wait 12 minutes for GP to log in definitively indicates a severe performance issue--I don't care what the CPU utilization stats say.
Environment A is a physical environment, with SQL Server running on a powerful server with 48GB of RAM. It could probably use more RAM, but performance seems fine. Environment B is a virtual environment with 24GB of RAM. Why the difference in RAM? I've found that for some reason, people think that virtual environments only need a fraction of the RAM that they would normally allocate to a physical machine. 24GB for a SQL Server that is hosting over 150 company databases seems grossly inadequate. My desktop computer has 32GB of RAM, so I am baffled that a company is looking to conserve RAM by starving their production SQL Server virtual machine.
I recommended that the client try a simple test of allocating 48GB of RAM to the SQL VM in Environment B and restarting the server to allow SQL Server to reallocate its memory usage.
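As a sanity check before and after that change, it's also easy to confirm how much memory SQL Server itself is configured to use, and whether the buffer pool is under pressure. This is just a sketch--the 44GB cap is an illustrative number for a 48GB VM, not a recommendation:

-- Check the current max server memory setting (reported in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Example only: cap SQL Server at roughly 44GB on a 48GB VM,
-- leaving some headroom for the OS (adjust for your environment)
EXEC sp_configure 'max server memory (MB)', 45056;
RECONFIGURE;

-- Page Life Expectancy: persistently low values (a few hundred seconds
-- or less) are a classic sign the buffer pool is starved for RAM
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';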
I then thought about checking to see how large the company databases were in both environments. I had the client run this query to get a list of all databases and their sizes.
http://stackoverflow.com/questions/5945360/sql-server-2008-how-to-query-all-databases-sizes
-- List every database on the instance with its data and log file sizes
-- (sys.master_files reports size in 8 KB pages, so size * 8 / 1024 = MB)
with fs
as
(
    select database_id, type, size * 8.0 / 1024 size
    from sys.master_files
)
select
    name,
    (select sum(size) from fs where type = 0 and fs.database_id = db.database_id) DataFileSizeMB,  -- type 0 = data files
    (select sum(size) from fs where type = 1 and fs.database_id = db.database_id) LogFileSizeMB    -- type 1 = log files
from sys.databases db
The client sent me the results and one thing jumped out right away.
The tempdb database is 87 GB! As the MythBusters would say, "Now THERE'S your problem!"
I don't know for sure that the massive tempdb is the cause, but at the moment it's at the top of my list of suspects.
This Microsoft KB article explains that the easiest way to clear and shrink the tempdb database is to restart the SQL Server service. So I have recommended that the client schedule a time to restart their SQL Server.
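In the meantime, a couple of quick queries can show how big the tempdb files are on disk and how much of that space is actually in use. This is a sketch using the standard DMVs, nothing GP-specific:

-- tempdb file sizes on disk (size is reported in 8 KB pages)
SELECT name, type_desc, size * 8.0 / 1024 AS SizeMB
FROM tempdb.sys.database_files;

-- How that space is currently being used
SELECT
    SUM(user_object_reserved_page_count) * 8.0 / 1024 AS UserObjectsMB,
    SUM(internal_object_reserved_page_count) * 8.0 / 1024 AS InternalObjectsMB,
    SUM(version_store_reserved_page_count) * 8.0 / 1024 AS VersionStoreMB,
    SUM(unallocated_extent_page_count) * 8.0 / 1024 AS FreeMB
FROM tempdb.sys.dm_db_file_space_usage;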
But what would cause the tempdb to become so large? I speculate that one possible cause might be...wait for it...insufficient memory! I haven't yet found specific documentation that links the two, but it is mentioned on a few forum threads such as this one:
http://www.sqlservercentral.com/Forums/Topic1454495-1550-1.aspx
So I speculate that the root cause is insufficient memory, which causes excessive use of tempdb. And as tempdb grows, it becomes a less and less effective substitute for memory, since all of that work is now hitting disk instead of RAM.
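If the client wants to see which activity is actually consuming tempdb while the problem is occurring, the session space usage DMVs can help. Internal objects (sorts, hash joins, spools) are the typical symptom of queries spilling out of memory. Again, just a sketch:

-- Which sessions are allocating space in tempdb right now?
SELECT
    u.session_id,
    s.login_name,
    s.host_name,
    u.user_objects_alloc_page_count * 8.0 / 1024 AS UserObjectsMB,
    u.internal_objects_alloc_page_count * 8.0 / 1024 AS InternalObjectsMB
FROM tempdb.sys.dm_db_session_space_usage u
JOIN sys.dm_exec_sessions s ON s.session_id = u.session_id
ORDER BY InternalObjectsMB DESC;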
I'm waiting to hear back from the client to see what changes they make and whether it improves performance.
12 minutes!!!! That's just crazy.
Steve Endow is a Dynamics GP Certified IT Professional in
Los Angeles. He is also the owner of Precipio Services, which provides
Dynamics GP integrations, customizations, and automation solutions.
2 comments:
Steve,
Take a look at the following script; it shows the space allocated for each SQL table in your database, which might point to your customer's issue:
http://mitchelsellers.com/blogs/2007/07/27/determing-sql-server-table-size.aspx
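A similar per-table space check can also be done directly against the standard DMVs--a rough sketch, run in the database being inspected:

-- Approximate reserved space per user table in the current database
SELECT
    OBJECT_NAME(ps.object_id) AS TableName,
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS ReservedMB,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [Rows]
FROM sys.dm_db_partition_stats ps
JOIN sys.tables t ON t.object_id = ps.object_id
GROUP BY ps.object_id
ORDER BY ReservedMB DESC;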
Thanks Mohammad.
I'll follow up with the client to see if they resolved the issue or if they found any other clues.
I'm guessing it is something lower level than a large table, since simple queries against relatively small tables (like SY00500) are also taking longer than I would expect.
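One quick way to quantify that is to time one of those simple queries directly. This is just a sketch--TWO is the GP sample company database, so substitute one of the actual company databases:

USE TWO;  -- substitute a real company database name
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- A trivial query against a small system table; on a healthy server
-- this should come back effectively instantly
SELECT TOP 100 * FROM SY00500;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;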
Thanks,
Steve