Wednesday, January 31, 2018

Are you "closing the loop" with your Dynamics GP system integrations?

By Steve Endow

I've been developing system integrations for so long that I sometimes forget that some aspects of integration design may not be obvious to the customers I work with.

I have been working with a company that is integrating their document management solution with Dynamics GP.  Their software captures a scan of a document, such as a vendor invoice; their workflow routes the document image; a user verifies the vendor and the expense accounts; and once the workflow is complete, the software imports the invoice data into Dynamics GP using eConnect.  After the invoice is imported, a GP user can view the Payables Transaction invoice in Dynamics GP, and also open the scanned invoice image directly from GP.

Okay, that sounds pretty straightforward...

The initial version of the integration with Dynamics GP is asynchronous, using XML files.  The document management system exports an XML file containing the "metadata" for the vendor invoice, and the GP import is a scheduled task that regularly picks up those XML files and imports the data into Dynamics GP as transactions.  (Aside: This design was chosen for its simplicity, as the initial version of the integration was a prototype--a proof of concept to explore how the system could be integrated with Dynamics GP.)
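To make "metadata" concrete, here's a hypothetical sketch of what one of those XML files might contain.  The element names are invented for illustration--this is not the vendor's actual schema:

<VendorInvoice>
    <VendorID>ACME001</VendorID>
    <InvoiceNumber>INV-10234</InvoiceNumber>
    <InvoiceDate>2018-01-15</InvoiceDate>
    <InvoiceAmount>1250.00</InvoiceAmount>
    <ExpenseAccount>01-6200-00</ExpenseAccount>
    <ImageLink>\\docserver\scans\INV-10234.pdf</ImageLink>
</VendorInvoice>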

Okay, so...your point is...?

Saturday, January 20, 2018

Are SQL Server subqueries bad? Let's find out!

By Steve Endow

For the past several years, I've noticed that I have generally avoided using subqueries, based on a suspicion that they are probably less efficient than a JOIN.  I do still use them for one-time or ad-hoc queries where performance isn't a concern, but I have been avoiding them for any production queries that I release.  But I haven't done any research to support my suspicion.

This Saturday morning, while working on another SQL query optimization issue, I figured I would try a test to compare the performance of a simple subquery vs. a JOIN.

What do you think?  Do you think that subqueries are typically slower than JOINs?  If so, how much slower?

Here's a YouTube video where I test my sample queries and review the results.



Before doing my test, I searched the interwebs and found a post (unfortunately the images aren't showing for me) that appears to definitively demonstrate that correlated subqueries perform much worse than similar JOIN queries.  The explanation made sense.

To qualify my test setup: these queries were run on SQL Server 2014 SP1, using Management Studio 17.4, against the GP 2016 Fabrikam/TWO database.  Also notable: I have 73,752 records in my SOP30300 table, which is probably quite a bit more than a vanilla TWO database.  I suspect this matters, as the results may differ on other SQL Server versions and may vary based on the number of tables and records.
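The actual queries I tested are in the video, but the general shape of the comparison looks like this.  This is just a sketch against the Fabrikam sales history tables, not the exact queries from my test:

-- Correlated subquery: the inner SELECT is evaluated for each header row
SELECT h.SOPTYPE, h.SOPNUMBE,
    (SELECT COUNT(*) FROM SOP30300 l
     WHERE l.SOPTYPE = h.SOPTYPE AND l.SOPNUMBE = h.SOPNUMBE) AS LineCount
FROM SOP30200 h

-- Equivalent JOIN: one pass over the line table, aggregated with GROUP BY
SELECT h.SOPTYPE, h.SOPNUMBE, COUNT(l.LNITMSEQ) AS LineCount
FROM SOP30200 h
LEFT JOIN SOP30300 l
    ON l.SOPTYPE = h.SOPTYPE AND l.SOPNUMBE = h.SOPNUMBE
GROUP BY h.SOPTYPE, h.SOPNUMBE

Comparing the actual execution plans and the STATISTICS IO output for the two versions is the quickest way to see whether the optimizer treats them differently.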


Wednesday, January 17, 2018

Dynamics GP Integrations: Eliminate the need for perfection

By Steve Endow

I had a call with a customer this morning to review an error they were seeing with their AP Invoice integration.  The custom integration is moderately complex, importing invoices from 90 retail locations into 30 different Dynamics GP companies, with Intercompany distributions.

There's an interesting twist to this integration.  The invoice data contains a retail store number, but it doesn't tell the import which GP company the invoice belongs to.  And there is nothing in the GP companies indicating which retail stores belong to which database.  Some retail stores have their own dedicated GP company database, while other retail stores are managed together in a GP company database.  The users just know which retail store belongs to which GP company.

So how does the integration figure out which company an invoice belongs to?

We could have created a mapping table, listing each retail store ID number and the corresponding GP company database.  But the problem with mapping tables is that they have to be maintained.  When a new retail store is opened, or a new GP database is created, users will invariably forget to update the custom mapping table.

So for this integration, I tried something new.  The one clue in the invoice data file is a GL account number.  The first segment of the GL account is a two-digit number that uniquely identifies the Dynamics GP company.  Like this:

01 = Company A
03 = Company B
06 = Company C
25, 31, 49 = Company D

So, the integration reads the GL expense account assigned to the invoice, and uses that to determine which company the invoice belongs to.
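Extracting that company key is just a string split on the segment separator.  A trivial sketch in T-SQL, with a placeholder account string:

-- Pull Segment 1 from a flattened account string like '25-6200-00'
DECLARE @Account varchar(75) = '25-6200-00'
SELECT LEFT(@Account, CHARINDEX('-', @Account) - 1) AS Segment1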

When the integration launches, it queries all of the GP databases to determine which Segment 1 values are used in each database.

-- Build one dynamic UNION query that returns, for each company database,
-- how many distinct Segment 1 values it uses, plus a sample value
DECLARE @INTERID varchar(10) = ''
DECLARE @SQL varchar(MAX) = ''

-- SY01500 in the DYNAMICS system database lists every GP company database
DECLARE INTERID_cursor CURSOR FOR
SELECT INTERID FROM DYNAMICS..SY01500

OPEN INTERID_cursor
FETCH NEXT FROM INTERID_cursor INTO @INTERID

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @SQL <> '' BEGIN SET @SQL += ' UNION '; END
    -- GL00100 is the account master in each company database
    SET @SQL += ' SELECT ''' + @INTERID + ''' AS INTERID, (SELECT COUNT(DISTINCT ACTNUMBR_1) FROM ' + @INTERID + '..GL00100) AS Segment1Values, (SELECT TOP 1 ACTNUMBR_1 FROM ' + @INTERID + '..GL00100) AS CompanyID';
    FETCH NEXT FROM INTERID_cursor INTO @INTERID
END

CLOSE INTERID_cursor
DEALLOCATE INTERID_cursor

-- Execute the assembled UNION query: one row per company
EXEC(@SQL)


It is then able to use this "mapping" to match invoices to databases based on the GL expense account.

But this scheme is based on the critical assumption that in Company A, every single GL account will always have a first segment value of 01.  That Company B will always have a Segment 1 value of 03.  And that Segment 1 values of 25, 31, and 49 will only ever exist in Company D.  For every account.  No exceptions.

I'll let you guess what happens next.

A user enters a "06" account in Company A.  And another user enters a "01" account in Company B.

Despite the customer's insistence that this would never happen, and that they always make sure that only one unique Segment 1 value is used in each company, someone ends up entering a Segment 1 value in the wrong company.

Am I surprised by this?  Not at all.  Whenever the word "never" is used during integration design discussions, that's always a clue.  I substitute it with "usually" or "mostly".  There are almost always exceptions, whether intentional or unintentional.

So now what?  If the program can't ensure that the Segment 1 values are unique to each company, what can it do?

Well, the second layer of defense is that during the import process, the integration checks the destination company database to verify that the GL account exists.  If it queries Company A for a 06 GL account and doesn't find it, it logs an error and that invoice isn't imported.  This is the error that was logged this morning.
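That verification is a simple query against the destination company database.  A minimal sketch, assuming the invoice carries a flattened account string matching the ACTNUMST format in GL00105--the company database and account number here are placeholders:

-- Does the invoice's GL account exist in the destination company?
-- GL00105 stores the flattened account string (ACTNUMST) for each account
DECLARE @Account varchar(75) = '06-5555-00'

IF NOT EXISTS (SELECT 1 FROM TWO..GL00105 WHERE ACTNUMST = @Account)
BEGIN
    -- Log the error and skip the invoice rather than import a bad distribution
    PRINT 'Account ' + @Account + ' not found--invoice not imported'
END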

But then what?  The customer insists, again, that they only use the 06 accounts in Company C, so the import must be wrong.  So we run the above query again and find that someone accidentally entered a 06 account in Company A, which confused the import.  And the customer is shocked that such a mistake could happen.  For the third time.

But I'm not satisfied with this process.  Because 6 months from now, it's going to happen again.  And they'll blame the integration again.  And we'll have to manually run the query again and find which account was added to the wrong company.

So let's just assume that this mistake is going to continue to happen and deal with it.  I'm thinking that I need to modify the integration to have it review the results of the query above.  If it finds that 06 is present in more than one GP database, it needs to log an error and let the user know.

"Hey, I found account 06-5555-00 in Company A. That doesn't look right. Please look into it."

This will proactively identify that an issue exists, identify the specific account, identify the company, and give the user enough information to research and resolve the problem.

It assumes the mistake will happen.  It eliminates the need for perfection in a complex process, in a moderately complex environment, where employees have 90 other things on their minds.  And it should only take a few lines of code--a one-time investment that will save time for years into the future.
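Here's a minimal sketch of that check.  It reuses the cursor pattern from the query above, but collects every distinct Segment 1 value per company into a temp table and flags any value that appears in more than one database--the table and variable names are mine, not production code:

DECLARE @INTERID varchar(10) = ''
DECLARE @SQL varchar(MAX) = ''

-- One row per company database / distinct Segment 1 value
CREATE TABLE #Segment1Map (INTERID varchar(10), ACTNUMBR_1 varchar(20))

DECLARE INTERID_cursor CURSOR FOR
SELECT INTERID FROM DYNAMICS..SY01500

OPEN INTERID_cursor
FETCH NEXT FROM INTERID_cursor INTO @INTERID

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = 'INSERT INTO #Segment1Map SELECT ''' + @INTERID + ''', ACTNUMBR_1 FROM ' + @INTERID + '..GL00100 GROUP BY ACTNUMBR_1'
    EXEC(@SQL)
    FETCH NEXT FROM INTERID_cursor INTO @INTERID
END

CLOSE INTERID_cursor
DEALLOCATE INTERID_cursor

-- Any Segment 1 value used in more than one company is a mapping conflict
SELECT m.INTERID, m.ACTNUMBR_1
FROM #Segment1Map m
WHERE m.ACTNUMBR_1 IN (SELECT ACTNUMBR_1 FROM #Segment1Map GROUP BY ACTNUMBR_1 HAVING COUNT(*) > 1)
ORDER BY m.ACTNUMBR_1, m.INTERID

DROP TABLE #Segment1Map

If that query returns any rows, the integration can log the "account in the wrong company" warning before it tries to import any invoices.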

So why not do this for other possible exceptions and issues?  If you can identify other potential mistakes or errors, why not just code for all of them?  Because there are endless possible exceptions, and it would cost a fortune to handle all of them, most of which will never occur.

I usually release an initial version of an integration, identify the exceptions, and quickly handle the errors that do occur.  When a new exception comes up, I handle it.  It's usually a surprisingly small number--maybe 3 to 5 different data issues that cause problems.

So that's my philosophy: Eliminate the need for perfection in integrations whenever possible or practical.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+