Monday, January 30, 2017

Is a Test environment required anymore for Dynamics GP upgrades?

By Steve Endow

I've worked with several customers recently who have upgraded Dynamics GP to a new version without doing any prior testing.  The upgrade was performed in their production environment.  No test environment. No test database upgrade. No testing integrations or customizations prior to the upgrade.  Today I was informed of another customer that will be upgrading to GP 2016 without a test environment--just an upgrade of production.  Which made me wonder...


While there are probably many people who would rail against such an approach, I'm now asking a serious question: Do you really need a Test environment anymore for a "typical" Dynamics GP upgrade?  I suspect that many GP customers could upgrade their production environment in place without any significant issues.

Yes, there are customers with complex environments that would definitely benefit from a Test environment, and yes, there are cases where upgrades encounter errors that cause the upgrade to fail.  But I suspect there are many GP customers with pretty simple environments where a separate environment and extensive testing are not required and would be difficult to justify.

Years ago, before Microsoft purchased Dynamics GP, GP upgrades could be a harrowing experience.  Both the GP install and upgrade processes involved many steps and the GP installers weren't nearly as refined as they are now.  One of the things I noticed following the Microsoft acquisition was that the GP installation and upgrade process became much simpler, easier, and more reliable.  Whereas I used to always recommend setting up a separate test server and performing a test upgrade first, I have worked with several customers recently who have simply upgraded their production environment without any prior testing of a new version of GP.

If you take good database backups, keep a few GP client backups, and have a thorough upgrade plan with a solid rollback contingency, is it really necessary to set up a separate Test environment and perform a full test upgrade first?
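On the backup point, here is a minimal sketch of the pre-upgrade backup step in T-SQL.  The database and file names are examples only--substitute your own system and company databases:

BACKUP DATABASE DYNAMICS
TO DISK = N'D:\Backups\DYNAMICS_PreUpgrade.bak'
WITH COPY_ONLY, CHECKSUM, INIT;  -- COPY_ONLY preserves any existing backup chain

BACKUP DATABASE TWO
TO DISK = N'D:\Backups\TWO_PreUpgrade.bak'
WITH COPY_ONLY, CHECKSUM, INIT;  -- CHECKSUM verifies pages as they are written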

Are there particular modules, customizations, environment considerations, or other factors that you think make a Test environment more important?  Third party modules?  Customizations?  Integrations?  Web client?  On premise vs. hosted?  Lots of data or numerous company databases that cause the upgrade to take a long time?


Update: Tim Wappat made a good point on Twitter: Since many companies now run GP on virtual machines, you can easily back up the relevant VMs for quick rollback, and you can also easily clone VMs to quickly set up a "test" environment, greatly reducing the administrative cost of using a test environment to validate a GP upgrade.



Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+




Saturday, January 21, 2017

Riddle Me This: Fixed Assets and Long Fiscal Year

This one left me scratching my head, so I am up at 2am on a Saturday and thought I would share.  Here is the scenario...


  1. Customer has a long fiscal year due to a change in their fiscal year
  2. The long year has 14 periods; all years prior and after have 12 periods
So we adjusted the Fixed Assets calendar (Setup-Fixed Assets-Calendar) to have 14 periods for the current year.  We also marked the "Short/Long Year" option and specified 116.67% depreciation (14 periods ÷ 12 periods = 116.67%, so that the 13th and 14th periods depreciate normally).


All ran great when the client depreciated period 13.  It's when we get to period 14 that things go haywire.  When we run depreciation for period 14, it backs out the depreciation for period 13, creating a complete reversal entry.  The only items that depreciate properly are those placed in service in periods 12, 13, and 14.  Odd, right?  Well, wait, it gets better...


I can replicate all of this in sample data on GP 2015 (the client is on GP 2013, so I wanted to be as close to that version as possible).  So I started wondering what would happen if I backed out the period 14 depreciation.  So I did that.  I re-ran depreciation for period 13, and it backed out the incorrect entry.  But then when I re-ran depreciation for period 14, it calculated correctly.  What?  Why?  Simply backing it out and rerunning it appears to fix the problem.  Not normal, right?

From what I can tell, it has to do with reset life, and perhaps the back out process triggers a recalculation of sorts.  If I pre-emptively run Reset Life, period 14 depreciates properly the first time around.  I think there is some conflicting info out there about the need to run Reset Life when creating a long year, but you heard it here first: always run Reset Life if you alter (even just lengthen) a year in Fixed Assets.


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Wednesday, January 18, 2017

A less than ideal Dynamics GP SQL Server setup

By Steve Endow

I recently wrote a post about a customer where Dynamics GP took 10 times longer to post a GL batch than one of my development virtual machines.  So a GL batch with 1200 lines that took 6 seconds to post on my server would take 60 seconds in the customer's environment.

I had another call with the GP partner today to confirm the symptoms and get some information about the customer's SQL Server.  During the call, I saw another GL batch with 1500 lines that took 88 seconds to post.  Not very good.  That's only 17 records per second, which is abysmal performance for SQL Server.

The SQL Server is a quad core machine with 16GB RAM.  The consultant didn't know if the machine was physical or virtual.  The customer has a single production company database with an MDF file that is 20.5GB, and an LDF file that is 14GB.

But they also have a TEST database, a recent copy of production, with a 20.5GB MDF and a 7GB LDF.

And then they have an additional backup copy of their production database for some reason, which has a 25GB MDF and a 14GB LDF.  They also have an old copy of their production database from 2015, which has a 17GB MDF and a 14GB LDF.  And there's another random test copy that has a 14GB MDF.


But wait, there's more!  There is the active Dynamics database, which has a 5.6GB MDF and 4.6GB LDF.  And there is not just one, but TWO other copies of the Dynamics database--one 3.2GB and the other 2.7GB.

So the server has only 16GB of RAM, but well over 100GB of databases online.  If we're optimistic, let's say that only two databases actually have any activity: the main production and test companies.  Those two databases, plus the Dynamics database, total over 45GB.

So 45GB of active databases on a server with 16GB of physical RAM.
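If you want to tally this yourself, a quick query against sys.master_files will total the data and log file sizes for every database on the instance (the size column is stored in 8KB pages):

SELECT d.name AS DatabaseName,
    SUM(CASE WHEN mf.type_desc = 'ROWS' THEN mf.size ELSE 0 END) * 8 / 1024 AS DataMB,
    SUM(CASE WHEN mf.type_desc = 'LOG' THEN mf.size ELSE 0 END) * 8 / 1024 AS LogMB
FROM sys.master_files mf
JOIN sys.databases d ON d.database_id = mf.database_id
GROUP BY d.name
ORDER BY DataMB DESC;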

I then checked the SQL Server Maximum Server Memory setting, and no surprise, it had not been changed from the default value.


The combination of insufficient RAM and lack of a reasonable Maximum Server Memory value is likely putting significant memory pressure on Windows, which then contributes to abysmal SQL Server performance.  I've seen a similar SQL Server with just 4 GP users become unresponsive, lock up GP clients, and drop network connections when under load.

The Dynamics GP consultant I spoke with was not familiar with SQL Server configuration or memory management, so I recommended that the consultant speak with his team and the customer about increasing the RAM on the server and setting the Maximum Server Memory setting to a reasonable value.
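For reference, here is a sketch of checking and setting Maximum Server Memory in T-SQL.  The 12288 (12GB) value is purely illustrative for a 16GB server--the right number depends on what else runs on the machine:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';  -- view the current setting

EXEC sp_configure 'max server memory (MB)', 12288;  -- example: cap SQL Server at 12GB
RECONFIGURE;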

Unfortunately, I can't be certain that those two items will dramatically improve their GP batch posting performance--although I'm pretty sure they won't hurt.  Maybe the databases need to be reindexed or optimized, or maybe there is some other issue causing the poor performance.  If they do upgrade the server memory, I'll try to follow up with them to see if the changes improve Dynamics GP posting performance.

If this topic is of interest to you, I recommend checking out the book Troubleshooting SQL Server by Jonathan Kehayias and Ted Kreuger.  There is a link on the page to download a free PDF of the book.  It's a few years old, but many of the SQL Server fundamentals remain the same.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+








6 Tips to Help You Get More Out of SQL Server Management Studio

By Steve Endow

If you use SQL Server Management Studio, I've assembled a few tips that can help you save time and work more efficiently.

Here's a video where I discuss and demonstrate the tips.



  1. By far the most valuable time-saving tip is to use the Object Explorer Details window in SSMS.  I have had hundreds of GoToMeeting sessions with customers and consultants who only used the Object Explorer pane and weren't familiar with the benefits of the Object Explorer Details window.  If you use the Object Explorer pane to locate tables or stored procedures, press F7 to open the Details window and save yourself a ton of time.  Check out the video to see how to quickly navigate and search using Object Explorer Details.



Tuesday, January 17, 2017

Benchmarking GL batch posting times in Dynamics GP using DEX_ROW_TS?

By Steve Endow

I just finished a call with a customer who seems to be experiencing relatively slow GL batch posting in Dynamics GP.

We were reviewing records for the GL batch in the GL20000 table, and out of curiosity, I happened to look at the DEX_ROW_TS values.  For a GL batch that had a total of 1,200 lines, the difference between the minimum and maximum DEX_ROW_TS values was just over 60 seconds.  So my interpretation is that it took over 60 seconds for GP to perform the posting and copy the records from GL10000 to GL20000, with the TS field time stamps reflecting that processing time.

There could be many reasons why DEX_ROW_TS isn't the most accurate measure of actual batch posting times, but I was curious if it could be used as a way to roughly and quickly benchmark GL batch posting times.

I didn't know if 60 seconds for a 1,200 line JE was fast or slow, so I performed a few tests on one of my development VMs.  I created two test batches:  One had 150 JEs with 8 lines each, and the other had 300 JEs with 4 lines each.  So each batch had 1,200 lines.  I posted both batches, and then ran this query on them:


SELECT MAX(ORGNTSRC) AS Batch, COUNT(*) AS [Rows],
    MIN(DEX_ROW_TS) AS StartTime, MAX(DEX_ROW_TS) AS EndTime,
    DATEDIFF(ss, MIN(DEX_ROW_TS), MAX(DEX_ROW_TS)) AS SecondsElapsed
FROM GL20000
WHERE ORGNTSRC = 'TEST150'
UNION
SELECT MAX(ORGNTSRC) AS Batch, COUNT(*) AS [Rows],
    MIN(DEX_ROW_TS) AS StartTime, MAX(DEX_ROW_TS) AS EndTime,
    DATEDIFF(ss, MIN(DEX_ROW_TS), MAX(DEX_ROW_TS)) AS SecondsElapsed
FROM GL20000
WHERE ORGNTSRC = 'TEST300'

(If you use this query, note that if the same batch ID has been used more than once, you will need to filter the query to ensure you only measure a single posting of the given batch ID.)
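For example, if you know roughly when the posting of interest occurred, one way to isolate a single posting is to add a DEX_ROW_TS window to the WHERE clause.  The dates below are placeholders:

SELECT MAX(ORGNTSRC) AS Batch, COUNT(*) AS [Rows],
    MIN(DEX_ROW_TS) AS StartTime, MAX(DEX_ROW_TS) AS EndTime,
    DATEDIFF(ss, MIN(DEX_ROW_TS), MAX(DEX_ROW_TS)) AS SecondsElapsed
FROM GL20000
WHERE ORGNTSRC = 'TEST150'
    AND DEX_ROW_TS >= '2017-01-17' AND DEX_ROW_TS < '2017-01-18';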

Here are the results:


As you can see, my test batches showed DEX_ROW_TS elapsed times of 6 and 8 seconds, respectively.  So my test JEs appear to have posted significantly faster--in as little as one tenth the time of the customer's batch.

It's no surprise that my test in the virtually empty TWO database will show faster times than a large production database, but 6 seconds vs. 60 seconds is a pretty big difference.  And having worked with hundreds of customers to automate their Dynamics GP posting processes using Post Master, I am pretty sure that this customer is seeing less than optimal SQL performance, and that I'll be having a few more support calls with them in the future.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+




Dynamics GP obscurity when voiding AP payments in a high volume environment

By Steve Endow

I seem to frequently work on unusual and obscure tasks and issues in Dynamics GP, and I discovered another one recently.

I have a large Dynamics GP customer that issues thousands of AP payments every month.  The payments are issued to dozens of countries using every payment mechanism imaginable.  Checks, ACH, wires, debit cards, PayPal--you name it.  The payments are issued from both Dynamics GP and through at least one third party international payment processing service.

The company issues so many payments in so many forms that they have a very interesting and challenging problem.  The problem stems from the fact that they regularly encounter situations where the payment is not successfully delivered to the vendor.  Maybe the check was returned as undeliverable.  Perhaps the ACH info wasn't correct.  Maybe the PayPal email address was wrong.  Given the number of different payment methods they use, sometimes they discover this in a few days, while sometimes it takes a few months to be notified that a payment was not successfully delivered.  Given their high payment volume, the challenge this creates is having to void hundreds of payments a month in Dynamics GP so that they can re-issue the payment.

The void process is so time consuming for them that they asked me to develop a solution that could automatically void payments in GP.  I developed that solution, which is a very long story on its own, but in the process of testing, I discovered an unusual scenario that made it difficult to automatically void a payment.

The issue is that it is possible to issue the same check or payment number from multiple checkbooks in Dynamics GP.  This isn't something I had considered before.  If I pay a vendor with Check 100 from Checkbook 1, and later happen to pay that same vendor with Check 100 from Checkbook 2, the vendor now has two payments with the same check number.  Given the number of GP checkbooks, the number of payment methods used, and the fact that a third party payment processor is involved, I couldn't rule out this possibility.
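If you're curious whether this has already happened in your own data, a query along these lines should flag candidates.  I'm assuming historical payments are in PM30200 with DOCTYPE = 6 (payment); adjust for your environment:

SELECT VENDORID, DOCNUMBR,
    COUNT(*) AS PaymentCount,
    COUNT(DISTINCT CHEKBKID) AS CheckbookCount
FROM PM30200
WHERE DOCTYPE = 6  -- payment documents
GROUP BY VENDORID, DOCNUMBR
HAVING COUNT(*) > 1;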

Here's an example of what that scenario looks like in the Void Historical Payables Transactions window.


Even if you filter by vendor and document number, the window displays multiple payments.  In the screen shot, I used the extreme example of payments with the same date and amount.  In this case, the only way to tell the difference between the two payments is by the internal GP Payment Number value.

A user manually performing a void would have to select a row in the scrolling window and click on the Document Number link to drill into the payment and see which checkbook was used to issue it.  But because the Checkbook ID is not shown on the window, an automated solution that only looks at the data in the scrolling window cannot tell which payment should be voided.  So I'm probably going to have to enhance the automated solution to verify the date and amount shown in the grid record, and also look up the payment number to determine which checkbook issued the payment.
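As a sketch of that disambiguation step, the automation could list each matching payment with its checkbook, date, and amount, and pick the correct record from there.  Same assumptions as above, and the vendor ID and document number are hypothetical values (VCHRNMBR should hold the internal payment number for payment documents, but verify against your data):

SELECT VCHRNMBR, CHEKBKID, DOCDATE, DOCAMNT
FROM PM30200
WHERE VENDORID = 'ACME0001'  -- hypothetical vendor ID
    AND DOCNUMBR = '100'     -- the duplicated payment number
    AND DOCTYPE = 6;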

One could reasonably say that it is unlikely that a vendor would be issued two payments from two checkbooks with the same payment number.  I would have previously agreed, but the fact that this issue happened to come up in my limited testing on my development server would seem to indicate that it could be more likely than you might think.  And if you've worked with ERP systems long enough, you know that if an obscure problematic situation can arise, it usually will.

I thought this was a good example of how GP's flexible functionality, combined with unexpected or complex scenarios, can require custom solutions to handle unusual situations, even unlikely ones.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+

Monday, January 16, 2017

What Are Your Software Resolutions?

Some of my favorite clients, when I walk in their door every few months, ask "What's new out there?" and "What are people doing?"  I will admit, I just love the continual growth mindset.  Although it does take time and energy (and money) to leverage your software to its fullest potential, I find that clients who take this on as part of software ownership are generally happier and more satisfied than those who tend to stagnate--never looking at new approaches or add-ons, or taking care to expand their use of new functionality as appropriate.


So along these lines, I thought I would put together my top 5 software resolutions.  Although written with Dynamics GP and CRM in mind, these really can apply to a myriad of software solutions and vendor relationships you may have.


  1. Stop expecting software to do more without you contributing more: Whether it is time, expertise, or money (in the form of consulting dollars or add-on software), your software package will only expand and do more for you if you are willing to contribute.  Some of my clients who do the best with this resolution have monthly GP user meetings (internally) to discuss issues and goals and also participate in GPUG and other groups to knowledge-share.  In organizations that don't regularly do this, it's not unusual to hear about them simply implementing another product a few years down the road and starting the cycle again.
  2. Build a partnership with your VAR/consultant.  No one likes a combative relationship (consultants included).  Understand that your partner is there to help you, and in most cases wants to make sure you are happy with them as well as the software.  So look at how you engage with them: Do you do it in a proactive way?  Do you ask them what they think of how you are using the software?  Ask for their help in more strategic ways, like how you might better use support or even cut your support costs through training or other avenues.
  3. Set a budget for ongoing software enhancement.  And I am not just talking about service packs and upgrades, although it can be bundled in with those costs.  With each new release there is new functionality, and we (partners/consultants) want you to be able to take advantage of it.  But in a lot of cases, clients simply budget upgrades like service packs, with no consulting services beyond the upgrade.  Consider inviting your consultant/partner out once a year for the sole purpose of asking, "What could we be doing better with our software and processes?"  You might be surprised by their answer.
  4. Reset your approach to training to be an ongoing process, not a one-time event.  I know users who have used GP for 10+ years but still find training classes, webinars, and other events to attend every year, and leave excited about how they can improve their use of the software.  Join GPUG.  Go to conferences.  Treat training as something you do every year, not just when you add a new employee or implement a new module.
  5. Recognize that software won't solve all of your issues.  Above I mentioned clients who have monthly internal GP user meetings. These opportunities can also be opened up to include accounting and business processes, even those that fall outside of the system.  What is working?  What isn't?  And can software help? Or do you need to consider internal changes?  Approaching issues with an open mind, and recognizing that sometimes internal/institutional change is needed (with or without software) can help you make positive change in your organization.
What would be on your resolution list? I am interested to hear from you all!


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Sunday, January 8, 2017

Portable DIY Surface Pro Table Top Stand / Lectern: Computer Woodworking

By Steve Endow

Last week MVP Jen Kuntz posted a neat update on Twitter with some photos of a cool sliding door that she built.

Following her lead on the woodworking post, I thought I would write a post about a small woodworking project that I worked on today.  Computer related, no less!

I needed some type of table top stand for my Surface Pro 4.  I have a situation where I need to work on my Surface Pro while standing, but the space where I'll be working only has a small table.

I didn't want, or need, a typical boxy table-top lectern.  I wanted something simple, compact, portable, and light that I could quickly set up for use, and then easily fold up and put away.  Unlike a typical lectern with an angled top, I wanted a flat surface so that my Surface Pro and my mouse would not slide off. (If you've done presentations with a typical angled lectern, you know what I'm talking about.)

I fired up SketchUp and quickly came up with this simple design, which has a flat top and folding support legs.  I wanted to keep it as simple as possible so that I could quickly build it this afternoon with as little wood and as little effort as possible.

The folding legs would be attached to the back piece with hinges so that they could be moved into place to support the top, and the top would be on a hinge as well, allowing it to fold down.


After some initial testing, I realized I needed to add a folding stand in the back to prevent it from tipping backwards.


With the legs folded flat, the hinged top folds down flat, and the top has a convenient carry handle.  I figured this would make it very easy to set up, and then I could fold it up in 2 seconds and easily store it out of the way, taking up minimal room.


With my rough design in hand, I headed out to the wood pile. Um, I mean my garage.  If you are a woodworker, or know any woodworkers, you probably know that we hate to throw away perfectly good scraps of wood.  You never know when that small offcut will come in handy!

Fortunately, I had the perfect scraps for the project.  I had a scrap of maple plywood that was almost exactly the dimensions of the top, a nice piece of poplar for the center back, and I had just enough select pine scraps for the folding legs.

The select pine was slightly narrower than my SketchUp design, so I had to adjust my dimensions a bit on the fly, but it worked out just fine.


I cut the pieces to length on the miter saw, and things were looking good.  To save time, I didn't bother to taper the legs as shown in the design.


To join the folding legs, I used my Festool Domino, but pocket hole screws would probably work fine as well.


The Domino is a bit tedious to set up, but the results are Extra Fancy.


With the legs glued and assembled, I clamped them up and then moved on to work on the top piece.


The scrap of plywood was so close to my design dimensions that I didn't even have to cut it--it was ready to go.  I just needed to cut the handle out.

I sketched out the area for the handle and used a large Forstner bit to start the handle hole.



At this point, most people would use a jigsaw to cut the piece between the two holes, but 1) I absolutely hate the jigsaw, and 2) I got a new compact router recently, so I figured I would take the path less traveled and cut out the handle with a spiral upcut bit.


So the router was an interesting choice.  The cut didn't turn out perfect, but it was convenient and good enough for this project.

I then got Extra Fancy and chamfered the edges of the handle--again, another excuse to use the router.


Then, every woodworker's least favorite task--sanding--to remove any rough edges.


More chamfering around the edges of the top. Because new router!


With all of the pieces done, I did a dry fit of sorts, just to make sure everything looked right.


Then a quick run to Home Depot to pick up some hinges.  If you want to get Extra Fancy, you could go with a piano hinge for just a few dollars more, but I didn't want to spend time cutting it to length, so I opted for the ugly utilitarian hinges.


And with all of the hinges in place, the stand worked perfectly.


And it folded up nice and flat.


It's very lightweight, and the handle makes it really easy to carry.


A quick test on a table confirmed that it worked great with my Surface Pro and mouse.


During my initial testing, I noticed that it could potentially tip backwards, so I grabbed another small scrap of plywood (perfect size!) and with the one remaining hinge, added the extra stand on the back to prevent it from tipping over.


To finish it off with a touch of Extra Fancy, I'm going to countersink a few neodymium magnets into the tops of the legs and the bottom of the table so that the legs will pop into place and be held by the magnets.  I'll probably also add a magnet to the stand on the back to keep it folded flat when closed.

I hope you enjoyed this computer woodworking fusion project!



Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter