Wednesday, March 25, 2015

One sign of a really bad API

By Steve Endow

The last several months, I've been working with a third party product that integrates with Dynamics GP.

While the product sometimes technically works, it is absolutely riddled with bugs.  These aren't subtle bugs where you need to perform 26 steps with specific data in perfect sequence to trigger them.  These are glaring, obvious, brick-wall-in-the-middle-of-the-road bugs that will cause an error with the simplest data entry example.  In 10 minutes I found a half dozen bugs in the product--and that was on just 1 window.

The product has a web service that serves as the back end to their Dynamics GP integration--so their custom GP windows call the web service, rather than calling DLLs.  That web service also serves as the API if you want to integrate with their product.  Great, makes sense.

After working through numerous bugs in the product, we finally got it working and I got my integration working with it as well.  All seemed peaceful.

Until one day we noticed that some records that were sent to the product through my integration weren't in the database.  Records existed in GP, but didn't exist in the third party product tables.

After being puzzled for several minutes, we eventually tried to manually enter the same data into the product window in GP.  We entered the necessary data and clicked on Save.

So the problem wasn't my integration per se, it was that this particular data was failing to save.  We looked at the data, pondered for a minute, and saw that the data in one of the fields was relatively long--51 characters.

Curious, we made that value shorter and clicked Save.  The save was then successful.  I then opened the diagnostic logs for the web service and saw that it had logged an error.
String or binary data would be truncated
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
I then checked the SQL table, and sure enough, the field length was 50 characters.  Our 51 character field value was triggering the error.

Developers should immediately recognize the issue, and alarm bells should be ringing.

This tells us that the developer of the web service API is not validating input values or lengths in their API when they attempt to save data to their own tables.  So if a field is 50 characters and I submit 51 characters, their method will fail with a SQL exception.
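The failure is trivial to reproduce directly in SQL (table and column names here are hypothetical, not the vendor's schema):

```sql
-- A 51 character value inserted into a VARCHAR(50) column fails
-- with "String or binary data would be truncated" (error 8152),
-- and nothing is saved.
CREATE TABLE #Demo (CompanyName VARCHAR(50));

INSERT INTO #Demo (CompanyName)
VALUES (REPLICATE('X', 51));
```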

Digging through the log file, it looks like they are building SQL statements by concatenating a string.

INSERT INTO SomeTable (TransactionType, Amount, TransactionTime, IsProcessed, CurrencyId, InvoiceNum, PurchaseOrderNum, SubmittedBy, CompanyId, CustomerId, CompanyName, AgreementId)
VALUES (1,10.99,1,'3/23/2015 2:37:08 PM',2,1,1, 'STDINV3588', '', '1', 'sa','Test Company','','-1','TI TESTCO100',2,2,'Test Company', 0)

While there might be some rare situations where I build a SQL statement like this, I do it reluctantly as an exception, and I never do it with user generated data.  The potential for errors from unvalidated data is bad enough, not to mention issues like SQL injection.

What they should be doing is something like this:

// Define each parameter with an explicit SQL data type and, for strings,
// a maximum length that matches the destination column.
SqlParameter[] sqlParameters = new SqlParameter[2];
sqlParameters[0] = new SqlParameter("@CustomerID", System.Data.SqlDbType.VarChar, 15);
sqlParameters[0].Value = gpCustomerID.Trim();
sqlParameters[1] = new SqlParameter("@CompanyID", System.Data.SqlDbType.Int);
sqlParameters[1].Value = gpCompanyID;

In this example, my code is building parameters for a SQL command and is defining data types and maximum value lengths.  Customer ID must be a VarChar of no more than 15 characters, and Company ID must be an integer.  This doesn't handle all of the potential problems of someone submitting a 20 character customer ID (which should be caught and handled earlier in the process), but it at least prevents a SQL truncation exception.

This isn't rocket science and isn't a developer secret.  It's pretty basic .NET development.

After finding this issue, I had to modify my integration code to truncate input values before submitting them to the web service API.
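The workaround amounted to a few lines like this (a sketch; the helper name and usage are mine, not part of the vendor's API):

```csharp
// Hypothetical helper: defensively trim and truncate a value to the
// destination column length before submitting it to the web service API.
public static string TruncateTo(string value, int maxLength)
{
    if (string.IsNullOrEmpty(value))
        return value;

    value = value.Trim();
    return value.Length <= maxLength ? value : value.Substring(0, maxLength);
}
```

Each string gets capped at the destination column's length (for example, 50 characters for the field that triggered the error) before being sent to the API.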

All because the developer of an API for a "commercial" software product that is being sold to Dynamics GP customers for thousands of dollars doesn't know how to validate inputs.  And the "QA department" clearly didn't do a whole lot of testing.

Developer, please pick up the chalk and start writing.

I make coding mistakes all the time, but this clearly isn't a typo or a small mental lapse.  It's a fundamental design issue in the code that correlates well with all of the bugs we have found in the product.

On the plus side, this developer is making me feel like a coding genius.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Monday, March 23, 2015

BadImageFormatException Error after upgrading eConnect import to GP 2015: Don't forget to update your config file!

By Steve Endow

I recently upgraded a .NET eConnect Dynamics GP integration from GP 2013 to GP 2015.

In Dynamics GP 2015, the eConnect libraries were updated to use .NET 4.5.1, so when you upgrade any existing eConnect projects to GP 2015, you will need to use Visual Studio 2012 or higher.  I personally recommend skipping Visual Studio 2012 and going to Visual Studio 2013, which has a better user interface and some nice auto complete features, and also has native support for Git, which I recently discussed.

After you open your project, replace your eConnect references, and update to .NET 4.5.1, you can rebuild and produce your new eConnect integration for Dynamics GP 2015.

Easy peasy lemon squeezee, right?

In theory.

After completing this upgrade process for my integration (a scheduled Console application), I copied my upgraded EXE and DLL files to the client's workstation.  We then tested the integration.  As soon as we tried to run the EXE using Task Scheduler, the import crashed and we got an error message.

The only meaningful error text that was displayed was "BadImageFormatException".

Huh.  When you see the BadImageFormatException error, that almost always means that you have an issue mixing 32-bit and 64-bit libraries or projects.  So if I reference a 64-bit DLL but have my app targeted to 32-bit in Visual Studio, that can cause the BadImageFormat error.

Puzzled, I double checked my settings, and everything was consistently 32-bit.  My projects targeted x86, and I was definitely referencing only 32-bit DLLs. Hmmm.

As a test, I converted my projects and references to 64-bit.  I rebuilt and tested on the client's machine, but we got the same BadImageFormatException error.  Argh.

After some more research, I then opened the Visual Studio Configuration Manager to see if it might be overriding my Target Platform settings and mixing 32-bit and 64-bit elements.  Sure enough, when I opened the window, the settings were a mess--I don't know why, as I normally never touch that window.

I finally figured out how to clean up all of the configuration settings, and thought I had found and resolved the issue, for sure!

All nice and purrrrdy now...
I confidently delivered the new EXE and DLL files and we tested and...

It crashed right away with the same error.

Un. Be. Lievable.

I had run out of ideas.  I finally tried testing via the Command Prompt window to see if I could get any additional error info.

Unhandled Exception:  System.BadImageFormatException: Could not load file or assembly 'myapp.exe' or one of its dependencies.  This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Hmmm, this additional information would seem to indicate that the problem is a .NET version issue, and not a typical 32 vs. 64 bit conflict. I went back and triple checked my .NET versions.  My projects were definitely using .NET 4.5.1.  The eConnect libraries were definitely GP 2015 and definitely required .NET 4.5.1, so that wasn't the issue.  A third party library that I was using was definitely 4.5.x compatible.  And I couldn't reproduce the issue on my development machine, so clearly the components worked together.

I then tested my app on...not four...not five...but SIX different computers on my network: Windows 7, Windows 8, Server 2008, and Server 2012.  Of course it worked perfectly on all six.  This would seem to rule out a .NET version issue in my application.


I then asked the client if they had another computer we could test with.  Fortunately they did, so we tested on that computer.  I was hoping the issue was workstation specific, and while I might never know the cause of the problem, as long as we could get it working, I would be happy.

We tested on the other computer...but...crash.  BadImageFormatException.

Lacking any other options, I dug in deep.  I downloaded the Sysinternals Process Monitor and the client recorded the activity from the import from launch to crash.  I also ran Process Monitor on my server and recorded the activity when it worked successfully.

Finally, I had a small clue.  It wasn't obvious or specific, but it definitely told me that something was different on the workstation.  I put the two Process Monitor data files side by side in Excel and saw an obvious issue.

Comparing the two traces (my server on the left, the client's workstation on the right), my machine was clearly loading the .NET 4 runtime, while the client machine was loading .NET 2.  That would explain the error message and the problem.

So I had found the "cause" of the error, but didn't have an explanation as to why this was happening.

The .NET version is set in Visual Studio, and Windows just follows instructions and loads the appropriate .NET version.  It's not like Windows will randomly choose the wrong .NET version.

I then searched for "windows runs wrong .net version" or something like that.  I didn't see any obvious results relating to my issue, but somehow "config file" popped into my head.  Hmmm.

When I provided the new GP 2015 version of the import, I only updated the DLL and EXE files.  The exe.config configuration file was already set up, so we didn't need to make any changes.

And what is contained in a .NET configuration file?

That is an excellent question, I'm glad you asked!

Behold, it is a .NET version setting!

<startup><supportedRuntime version="v2.0.50727"/></startup>

And so that is what was still in the exe.config file, and that is why Windows was dutifully trying to launch the application with .NET 2.0, which is in turn why the application crashed.

Face.  Palm.

While it isn't an excuse, I at least have an explanation.

I have certainly upgraded integrations before and changed .NET versions, but with upgrades from .NET 2 to .NET 3.5, no change was required in the configuration file--.NET 3.0 and 3.5 still run on the version 2.0 CLR, so the setting stayed at v2.0.

But an upgrade to .NET 4 or higher moves to a new version of the CLR, so the reference in the configuration file needs to be changed.  While I have developed and deployed .NET 4+ integrations, I believe this was the first time that I upgraded an existing integration from .NET 2 to .NET 4.  It just didn't occur to me that the configuration file would have to be updated.

Here is what my configuration file should have looked like:

<startup><supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1"/></startup>
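In a complete exe.config, that setting sits inside the configuration element (a minimal sketch; a real config file will typically have other sections as well):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <!-- Tells Windows to load the .NET 4 CLR for this EXE -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1"/>
  </startup>
</configuration>
```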

As soon as I modified the config file on the client workstation, the GP 2015 integration ran fine.

And I shook my head.

But hey, good news...I eventually figured it out...

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Friday, March 20, 2015

Source Code Control: Converting from VisualSVN to Git on BitBucket

By Steve Endow

I do a lot of development with Visual Studio, creating eConnect integrations and VS Tools integrations for Microsoft Dynamics GP.

Back in 2012 when I decided to finally setup a source code control solution, Subversion was one of the few source code systems I had heard of, other than the infamous and ancient Microsoft Visual SourceSafe.  I wasn't about to resurrect SourceSafe, and I found the 1.7 GB download for Team Foundation Server made quite a statement, telling me that I shouldn't bother with it.

After some searching, I found VisualSVN--a Windows implementation of the open source Subversion that seamlessly integrated into Visual Studio.  I was up and running in minutes and had a project checked in to my new repository shortly thereafter.

I've been using VisualSVN since February 2012 and love it.  It is simple, relatively obvious, very easy to use for typical tasks, and has been 100% reliable.  I haven't had a single issue with VisualSVN on the server or on my many development servers.  It has been flawless.

Best of all, VisualSVN has a free license for certain qualified users, so you can't beat the price.  For simple, linear development with a small number of users on a project, I highly recommend it.

With VisualSVN, typical day-to-day check-in and check-out operations are very simple and generally require one or two clicks in my single user environment.  However, I've had a few cases where I've needed to revert to older code or research an issue in a prior version, and in those cases, the VisualSVN interface and tools (such as the TortoiseSVN tool that VisualSVN relies on) become much less intuitive and less friendly.

I recently had a situation where a client discovered a bug in a version 1.0 production integration after I had already started on making version 2.0 changes and new features to the code.  Version 1.0 had been in production for a few months before the bug was discovered.

It went like this:

  • Develop version 1.0
  • Release version 1.0 to QA environment
  • Test and refine version 1.0
  • Release 1.0 to production environment
  • All is well

  • Develop version 2.0
  • Release version 2.0 to QA environment
  • Test and refine version 2.0

  • Customer finds bug in version 1.0 in production
  • Chaos ensues

So there I was, with my half-baked version 2.0 code released in QA, faced with a bug in version 1.0 that needed to be fixed ASAP.

I had never faced this situation before, so I didn't know how to best handle it.  I knew that I had all of my code and revisions safely checked in VisualSVN, so I knew I could roll back to my 1.0 release code.

But what about my current 2.0 code?  Even though my 2.0 code was checked in, I didn't want to overwrite my current project and roll back to 1.0.  I guess I was afraid that if I pulled down the old 1.0 code and checked in any changes, it would revert my code and mess something up.

My fundamental problem, I later realized, was that beyond checking in code, I didn't understand source code control best practices.  Surely there was a way to handle this situation, right?

I posted the question to Experts Exchange and quickly received a response that made total sense in hindsight.  I needed Branching.

I had been developing exclusively against my "trunk" code, in a completely linear manner, without ever branching.  My projects are typically relatively small, easy to manage, and usually don't require branching, but for my current project, the code was moderately complex and had enough features that I really needed to branch with each release, and perhaps with each sub-release.

With that obvious clue in hand, I hit the books.  I downloaded the free SVN Book and read every page.  I needed to understand how to properly branch in SVN / VisualSVN and manage my branches.

The SVN Book is fantastic, providing clear examples in a very easily understood and readable guide.

But I stopped in my tracks when I read this statement:
Once a --reintegrate merge is done from branch to trunk, the branch is no longer usable for further work. It's not able to correctly absorb new trunk changes, nor can it be properly reintegrated to trunk again. For this reason, if you want to keep working on your feature branch, we recommend destroying it and then re-creating it from the trunk.
This refers to the process of merging your "branch" back into the main "trunk" of the code.  Once you perform this merge, the branch is essentially deprecated.  Once the trunk is changed after that point, the branch should no longer be used and should be "destroyed".

What???  So that means I can't persist branches and would have to jump through some hoops if I needed to "back port" some fixes into a prior release.  While the book offers a few workarounds for handling such situations, it appeared that SVN wasn't particularly good at, or really designed for, such "dynamic" branching and back-porting.

The book explained the SVN architecture well, so it was pretty obvious why this limitation existed.  But I was disappointed nonetheless.

I posted another question to Experts Exchange about my concern, asking how to deal with my situation in SVN.  The answer I received wasn't one I was expecting:
You may not like to hear it, but you should switch to Git. 
At first I thought this might be one of those Windows vs. Mac vs. Linux religious opinions, but after a few minutes of reading about Git, the recommendation made a lot of sense.

Git, despite the horrible name, is an amazingly powerful source control system designed to handle the completely crazy development process of the Linux kernel.  If it is good enough for Linus Torvalds and Linux kernel development, I have no doubt it would handle my simple needs.

I downloaded the free Pro Git book and started reading that night.  Mind. Blown.

Git is source code control and branching and merging on steroids.  Unlike the nice, simple, obvious, and intuitive SVN book, the Git book was harder to follow and digest.  Branch, branch again, branch your branches, then merge, rebranch, merge your merges to your merged merge and then magically just have it all end up in a single cohesive pile of code at the end of the day.

The design and capabilities of Git seemed like some crazy magic.

Despite not fully understanding all of the features that Git provided, it seemed clear that it was more than capable of handling my branching needs.
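For example, the release 1.0 hotfix scenario above becomes just a few commands in Git (a sketch, assuming the 1.0 release was tagged v1.0 and 2.0 work lives on the master branch):

```shell
# Start a fix branch from the 1.0 release tag
git checkout -b release-1.0 v1.0
# ...edit the code to fix the production bug...
git commit -am "Fix production bug in 1.0"   # commit the fix on the branch

# Switch back to 2.0 development and carry the fix forward
git checkout master
git merge release-1.0
```

The hotfix branch stays usable, and the fix flows into the 2.0 code with a single merge--exactly the back-porting scenario that SVN made awkward.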

So, I then needed to figure out how to use it.  I realized that the ubiquity of Git meant that there were quite a few providers of hosted Git servers, so I could outsource my source code control server.  While my VisualSVN server has been virtually zero maintenance, I won't mind having one less VM running.

I looked into GitHub, the well known online code repository.  While it seems to be a great service, there was one significant issue, given my business model.  GitHub charges by "repository", which, in Git lingo, is essentially a project.  So if I have 30 clients with an average of 3 projects, I'll have 90 repositories.

Hosted Git services differentiate between a public (open source) and private repository--in my case, all of my client projects will need private repositories, so I would need 90 private repositories.  On GitHub, that would cost me $200 a month.  Gulp.  That would be tough to justify.

Fortunately, there is an alternative service called Bitbucket that has a completely different pricing model that is much better suited to my needs.  Bitbucket charges by user, not by private repository, so I was able to get their free plan with unlimited private repositories and up to 5 users.  Perfect.

(BTW, Atlassian, the provider of Bitbucket, also offers Git server software called Stash that can be installed and hosted on an internal server in case that is appealing)

So now I was interested in using Git and had set up an account on Bitbucket.  How does it work, and how do I use it with Visual Studio?

This is one area where I think VisualSVN is a clear winner and Git, well, it's a distant second.  Because Git has a different architecture, a local distributed repository model, more flexibility, and more features, it takes effort to figure out how to use it and how to work with it in Visual Studio.  The local / distributed repository design of Git adds an extra step in the code management process that takes some adjustment when coming from SVN.  But on the flipside, having a local offline repository provides fantastic flexibility for distributed and remote development.

One nice plus is that Visual Studio 2013 has native support for Git, and for online services such as Bitbucket, built right into the Team Explorer window in Visual Studio.

While it is very handy to have the source control built right into Visual Studio, versus a client install and regular updates with VisualSVN, I find the user interface somewhat unintuitive.  I'm slowly learning how to use it, and for now I'm having to regularly refer to my notes every time I have to create a new repository and sync it to Bitbucket.

I'm sure I'll get more comfortable with Git, just as it took me a while to get familiar with VisualSVN, and I hope to take advantage of Git's rich branching and merging functionality.

Whether you consider SVN, Git, or some other solution, one general thing I learned about this project is that source code control is an entire discipline--like networking, system administration, or even software development.  It's one of those things where you really should invest time to learn and understand the discipline in addition to the specific technology that you choose to use.

If I read the Git book a few more times, or a dozen, I hope I'll get more comfortable with the Git features and how to use them.

Now go Git coding!

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Get GPConnNet to connect to SQL Server instance with a TCP port number using SQL Alias

By Steve Endow

I have a very large Dynamics GP customer with a very large, complex, and heavily secured IT environment.  In their Test Dynamics GP environment, they have the unusual situation of a firewall sitting between their Test SQL Server and their Test Dynamics GP application server.

To make things more complex, they have multiple SQL Server installations and instances on their Test SQL Server.  This, combined with the firewall blocking UDP traffic, prevents them from using standard SQL Server instance names and dynamic TCP ports.  In their Test environment, they have been using the SERVERNAME\INSTANCENAME,PORT naming convention with GP to connect to their Test SQL Server instance through the firewall.

SQL Server takes the unusual approach of using a comma to specify a port number, and I admit that this was the first time I had ever seen a connection string format that included the SQL Server port number.

For example, an address like SERVERNAME\INSTANCENAME,49156 (actual server and instance names omitted here) allows them to connect to the SQL Server instance on port 49156.

Except when that doesn't work.

This server\instance,port format works with SQL Server Management Studio and also works fine with ODBC DSNs, allowing Dynamics GP to work.  (BTW, they indicated that Management Reporter is unable to use the connection string with the port number)

However, what we discovered is that a Dynamics GP integration will not work when the connection string contains a comma.  You will get SQL Connection Error 26:
The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
No matter what we tried, we couldn't get a connection with the .NET Dynamics GP integration, and only after creating a test SQL connection app was I able to identify that GPConnNet was the culprit, and not a network or firewall issue.

My guess is that GPConnNet is either stripping out the comma/port, or is unable to connect using the comma/port connection string.  A native SQL ADO.NET connection works fine using SQL authentication, but if you have a Dynamics GP related application that relies on GP username and password, you're stuck with GPConnNet.
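For what it's worth, a plain ADO.NET connection test (server, instance, database, and credentials below are placeholders) confirms that the comma/port syntax itself is valid:

```csharp
// Hypothetical test: a native ADO.NET connection accepts the
// server\instance,port syntax with SQL authentication, even though
// the same address fails when going through GPConnNet.
using System.Data.SqlClient;

string connString = @"Data Source=SERVERNAME\INSTANCENAME,49156;" +
                    "Initial Catalog=DYNAMICS;User ID=sqluser;Password=xxxxx;";

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();  // opens successfully against the instance on port 49156
}
```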

After fruitlessly testing numerous workarounds, I found a discussion about SQL Server network Aliases.  I've never had to use them previously, but they offer a way to assign a simple name to a SQL Server instance name, including a specific port number.

SQL Aliases are setup in the infrequently used SQL Server Configuration Manager.  If you aren't familiar with the Configuration Manager application, I highly recommend understanding its role and capabilities.  It is a critical tool for troubleshooting SQL Server connectivity issues.

In Configuration Manager, you should see both a 32-bit and a 64-bit client configuration, each with its own Aliases node.  You will want to work with the one that matches your application: if your .NET app is compiled to target x64, use the 64-bit Alias, and vice versa for 32-bit apps.

Creating an Alias is very simple--just give it a name, specify the port, choose the protocol, and then enter the SQL instance name.

After you save the Alias, it should start working immediately without restarting the SQL Server service.

With the 32-bit Alias setup, my test application, using GPConnNet, was finally able to connect to the SQL Server instance on port 49156.

So in the highly unlikely situation where you have to use GPConnNet with a SQL port number, there is a solution!

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Thursday, March 19, 2015

Updating Payroll Transactions for FUTA and SUTA

I had previously posted some scripts to identify transactions where the FUTA and SUTA fields are set differently than the setup records for the pay codes involved.  Remember, FUTA and SUTA are calculated at the time the reports are run (not on a per-payroll basis), so you can make changes (in the SUTA setup in GP, or in the database) and see the impact on the reports.

--This first script compares FUTA between the transaction and the Employee Pay Code record in Dynamics GP, and shows those records where Subject to FUTA is marked differently between the Employee Pay Code record and the transaction

SELECT UPR30300.EMPLOYID, UPR30300.CHEKDATE, UPR30300.PAYROLCD, UPR30300.SBJTFUTA,
       UPR00400.PAYRCORD, UPR00400.SBJTFUTA
FROM UPR30300
INNER JOIN UPR00400
    ON UPR30300.EMPLOYID = UPR00400.EMPLOYID
    AND UPR30300.PAYROLCD = UPR00400.PAYRCORD
WHERE UPR30300.SBJTFUTA <> UPR00400.SBJTFUTA

--This script updates the payroll transaction history table to set the Subject to FUTA flag where it is marked on the Employee Pay Code record
UPDATE UPR30300
SET UPR30300.SBJTFUTA = 1
FROM UPR30300
INNER JOIN UPR00400
    ON UPR30300.EMPLOYID = UPR00400.EMPLOYID
    AND UPR30300.PAYROLCD = UPR00400.PAYRCORD
WHERE UPR00400.SBJTFUTA = 1
    AND UPR30300.SBJTFUTA <> UPR00400.SBJTFUTA
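Since the post covers both taxes, the same compare should work for SUTA by swapping the subject-to-SUTA column (a sketch, assuming the SBJTSUTA column sits alongside SBJTFUTA in both tables):

```sql
--Sketch: compare Subject to SUTA between the transaction history
--and the Employee Pay Code record (assumes the SBJTSUTA column)
SELECT UPR30300.EMPLOYID, UPR30300.CHEKDATE, UPR30300.PAYROLCD, UPR30300.SBJTSUTA,
       UPR00400.PAYRCORD, UPR00400.SBJTSUTA
FROM UPR30300
INNER JOIN UPR00400
    ON UPR30300.EMPLOYID = UPR00400.EMPLOYID
    AND UPR30300.PAYROLCD = UPR00400.PAYRCORD
WHERE UPR30300.SBJTSUTA <> UPR00400.SBJTSUTA
```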

Happy correcting! And, as always, make a backup and/or test this out in a test company first!
Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a senior managing consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Reopening Closed Years!

Okay, so my new year's resolution has completely gone astray, as I have been absent from this blog for way way way too long.  So here I am, coming out of the post-year-end haze, hoping to get back in the habit.  So I thought I would start out with some easy stuff that I can share in terms of Dynamics GP 2015 and the exciting things out there for those of you who are considering upgrading (BTW, our new implementations are going on to GP 2015 and we have started upgrading folks as well).

So, in terms of exciting stuff in GP 2015, how about this little gem: the ability to reopen the most recently closed year?

This is a super exciting one, because it eliminates all of the time-consuming options we had (reversing stuff manually in the tables, or sending the data to Microsoft, or yadda yadda yadda).

So why is this exciting?  Well, we all know that GP lets you post to the most recent historical year, right?  So, here in 2015, we can post to 2014 (if it is closed) but not 2013.  So, let's say that we accidentally closed 2014 and we really do need to post to 2013 (this sort of thing happens most commonly during implementation, when you are loading several years of history and closing years in succession).

Now, you can click this lovely little button…

And you can opt to reopen the most recently closed year.  Exciting stuff.  Always make a backup first, of course!  All users must be out of GP when you do this.  Equally exciting is that it will also move analytical accounting information (if you are using it) back to the open year.  Once you run the reversal, it is recommended that you reconcile (Utilities-Financial-Reconcile) all open years, starting with the oldest.

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a senior managing consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Wednesday, March 4, 2015

Two quick tips on Dynamics GP SQL Server backups

By Steve Endow

I recently learned two interesting things about SQL Server backups from the esteemed Victoria Yudin.

FIRST, I'm embarrassed to say that I didn't know that SQL Server 2008 added a native Backup Compression option.  Admittedly, I'm not a full time SQL DBA and I don't spend much (any?) time on SQL maintenance or backups for the TWO databases on my development servers, but I probably should have picked up on the feature some time in the last SEVEN years, you would think.

Anyway, now that Victoria schooled me, I found that there are two places where you can set backup compression.  The first is at the SQL Server instance level, under Properties -> Database Settings.  Setting this option causes all database backups to compress by default.

The second location where you can select backup compression is under the database backup window on the Options page.

Here you can use the default server setting, or choose to manually enable or disable the backup compression.

I'm now going to enable the option for all of my SQL Servers.
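The same option is available per backup in T-SQL (database name and backup path below are placeholders):

```sql
-- Back up a database with native compression (SQL Server 2008+).
-- The database name and disk path are hypothetical.
BACKUP DATABASE [TWO]
TO DISK = 'D:\Backups\TWO.bak'
WITH COMPRESSION, INIT;
```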

Even if you didn't know about this option, if you have done SQL Server backups, you should already know that the SQL bak files are HIGHLY compressible.  In my humble opinion, you should always compress SQL backup files because of the massive reduction in disk space usage that it provides.

I previously have had customers use WinZip or WinRAR or 7-Zip to compress backups if they needed to send me a copy, and the difference in file size is astounding. (If you are dealing with really large files, 7-Zip offers the best compression, in my experience)  Another thing I've done is set the SQL backup folder to use Windows folder compression.  That works well for backups that only need to sit on disk.  But having SQL Server automatically compress the backup files is the most convenient option, as it also makes file copying or archiving much faster.

SECOND, Victoria and I learned a very interesting lesson about 32-bit vs. 64-bit SQL Server.  She has a customer with a GP 10 install that is running SQL Server on a 64-bit version of Windows.  The server has 24 GB of  RAM and solid state drives, so you would think it should be pretty speedy.  But Victoria noticed that the SQL backups for a 20 GB database took about an hour.  Again, since I don't do much DBA work and all of my test databases are small, I didn't really have any idea of how long it should take to backup a 20 GB database, but Victoria assured me that there was something wrong.

After a few calls with the customer, we learned that the GP 10 SQL Server was running a 32-bit version of SQL 2005.  Obviously the 32-bit version can't begin to utilize the 24 GB of RAM, but we weren't sure how backups were affected by 32-bit vs. 64-bit SQL Server--it has been so long since I've used 32-bit SQL that I no longer have any virtual machines with it installed.  Fortunately, the client had a second test server with specs identical to production, so he installed the 64-bit version of SQL 2005 on that server, restored a copy of the 20 GB production database, and ran a backup.

It took 2 minutes.  TWO MINUTES.  From one hour to two minutes for a 20 GB backup.

Stunning.  I expected some improvement, but wow.  So that told us that the server hardware was just fine--it was the limited memory of 32-bit SQL Server that was causing the long backups.

And I assume that 64-bit SQL Server will also improve performance for GP users.  They may not have realized that their processes were slow, but they should benefit from the upgrade to 64-bit as well.

So there ya go--there's always something new to learn.

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter

Sunday, March 1, 2015

Dynamics GP GPConnNet ReturnCode 131074: One possible cause

By Steve Endow

Last week a customer contacted me saying that an eConnect integration suddenly stopped working.

They had been using the integration for a few years without issue, but with no apparent explanation, it started returning an error of "ExecuteNonQuery requires an open and available connection".

The error message is pretty straightforward--the integration was failing to get a connection to SQL Server, but we didn't know why.

The client said that the only change they made was resetting the password for the GP user used by the integration.  Despite numerous attempts, I was unable to figure out why GPConnNet was not returning an open database connection.

Well today, while working on a different eConnect integration, I suddenly had the same problem.  GPConnNet was not returning an open connection.

After stepping through my code line by line, I found that GPConnNet was returning a ReturnCode value of 131074.

    GPConnection GPConnObj = new GPConnection();
    GPConnObj.Init("key1", "key2");

    //SqlConnection that GPConnNet will open for us
    SqlConnection gpConn = new SqlConnection();
    gpConn.ConnectionString = "DATABASE=" + gpDatabase;
    GPConnObj.Connect(gpConn, gpServer, gpUser, gpPassword);

    //Check for error from GPConnNet
    if ((GPConnObj.ReturnCode & (int)GPConnection.ReturnCodeFlags.SuccessfulLogin) != (int)GPConnection.ReturnCodeFlags.SuccessfulLogin)
    {
        Log.Write("Failed to get SQL connection from GPConnNet", true);
        return null;
    }


I found the GPConnNet documentation, which listed the following return codes:

Constant                  Value                Description
SuccessfulLogin           1                    A connection was created
FailedLogin               2                    A connection could not be created
ExceptionCaught           131072 (&H20000)     An exception occurred during the connection attempt
PasswordExpired           65536 (&H10000)      The user’s password has expired

So my ReturnCode of 131074 appears to be a combination of FailedLogin and ExceptionCaught (131072 + 2 = 131074): an exception was thrown during a failed login attempt.  But even if that's the case, what does it actually mean?

I knew the GP login was valid, since I had just created it.

And then it dawned on me.  What if I had forgotten to give the user access to the GP company databases?

Sure enough, I had forgotten to give my "gpimport" user access to my company databases.  I normally use the "Copy Access" button on the user window, but I think I got distracted while creating this user and forgot to copy the access.

Once I gave the user access, GPConnNet provided an open database connection.

I'm now wondering if my client has the same issue.  I'll be following up with them to see if they created a new company database or a test company database without granting access to the GP user that my integration uses.
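As a side note, if you ever want to check database access from SQL rather than through the GP user window, one option is to look for the login in each database's sys.database_principals.  This is just a sketch--the "gpimport" login name is a placeholder, and sp_MSforeachdb is an undocumented (but widely used) system procedure:

```sql
-- List the databases where the 'gpimport' login has a mapped database user
-- (login name is a placeholder)
EXEC sp_MSforeachdb '
    USE [?];
    SELECT DB_NAME() AS DatabaseName, name AS DatabaseUser
    FROM sys.database_principals
    WHERE name = ''gpimport'';';
```

A database that returns no row for the login is one the user cannot access, which would produce exactly the GPConnNet failure described above.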

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter