Monday, October 19, 2009

Save the Day with Regular Expressions

xkcd has a great comic about regular expressions:

It rings so true--if you have ever seen the look of amazement on people's faces after you use regular expressions, you definitely understand the comic.

For those of you who aren't familiar with regular expressions, have you ever received a data file as part of a Dynamics GP data migration or import that had horrible formatting, or had extra or messy data in it that made it nearly impossible for you to use?

Or, to pull a skeleton out of our own GP closet, have you ever had to try and extract just the data from a Dynamics GP report that was saved to a text file? Or attempted to get the GP report to import into Excel? Fun, huh? All those massive, useless page headers, and the fields positioned all over the multi-line detail data make those quite a chore.

I've seen this plenty over the years, and surprisingly, a typical reaction is, "Sorry, we can't work with that format." People just don't know how to deal with such messy looking or inconsistent data layouts.

Although it can't solve every data formatting problem, regular expressions can be a life saver in many situations, and using regular expressions is usually much cheaper (often free!) than high-end data extraction applications.

Regular expressions, or "regex" (also written "reg ex" or "regexp") for short, are special search strings, or patterns, that allow you to search for specific, complex, or inconsistent and varying patterns in a text file.
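To make that concrete, here is a minimal sketch in Python (whose re module uses Perl-style syntax). The "one letter plus five digits" shape is just my assumption for illustration, not an actual GP pay code rule:

```python
import re

# A pattern describing a pay-code-like value: one capital letter
# followed by exactly five digits (illustrative assumption only)
pattern = re.compile(r"^[A-Z]\d{5}$")

print(bool(pattern.match("H18750")))   # a well-formed code
print(bool(pattern.match("18750")))    # missing the leading letter
```

The `^` and `$` anchors pin the pattern to the start and end of the value, so extra characters anywhere cause the match to fail.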

One of my most basic and frequent uses of regex is to take a column of data and convert it for use in a SQL statement. I had to use this today for a client--they provided me a list of Dynamics GP Payroll pay codes, and I had to develop a query to find all payroll transactions that used the pay codes.

The list of 35 pay codes in Excel looked something like this:


And I needed to be able to use them in an "IN" clause in a SQL query.

I fired up UltraEdit, my preferred text editor, pasted in the column of data,
and then ran a Replace with the following options:

Find What: ^p (new paragraph)
Replace With: ', '

I instantly had converted the list into this:

'C09900 ', 'C11500 ', 'H16500 ', 'H18000', 'H18750 ', 'H19500 ', 'H21000 ', 'H21750 ', 'R06000 ', 'R08000', 'R08550 '

Close, but the trailing spaces after some pay codes bugged me, so I wanted to get rid of them. Once again, regex to the rescue. Some of the pay codes had no trailing space, and others had one or more spaces. So I reverted to my original column of pay codes and ran this replace first:

Find What: [space]*^p (find zero or more spaces followed by a new paragraph)
Replace With: ', '

(where [space] was a single space)

Voila! A clean list that I can use in a SQL "IN" statement:

'C09900', 'C11500', 'H16500', 'H18000', 'H18750', 'H19500', 'H21000', 'H21750', 'R06000', 'R08000', 'R08550'

In just a few seconds, I was able to convert the data into exactly what I needed and paste it right into my SQL statement.
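If you prefer scripting to a text editor, the same cleanup can be sketched in a few lines of Python using the re module instead of UltraEdit syntax (the sample codes below are made up):

```python
import re

# Pay codes as pasted from Excel: one per line, some with trailing spaces
raw = "C09900 \nC11500\nH16500  \nH18000"

# Replace zero or more trailing spaces plus the line break with a
# quote/comma/quote separator -- the same idea as the UltraEdit replace --
# then wrap the whole result in quotes.
body = re.sub(r" *\n", "', '", raw.strip())
in_list = "'" + body + "'"

print(in_list)  # 'C09900', 'C11500', 'H16500', 'H18000'
```

The result drops straight into a query such as `WHERE PAYRCORD IN (...)`.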

Another common use of regular expressions is to validate or clean data input. For example, let's say you have to import social security numbers, credit card numbers, or other all-numeric data. How can you tell if an alpha character, symbol, or other non-numeric character exists in the data? And what if you wanted to strip out invalid characters? What if you needed to validate the format of a Visa vs. an Amex credit card number? Or the proper format of a Dutch postal code?

All easily accomplished with regular expressions.
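As a sketch of what that looks like in practice, here are a few Python patterns. A hedge: the card-number shapes below are the commonly cited simplified rules (real validation would also include a Luhn checksum), and the postal code pattern ignores some edge cases:

```python
import re

# Is the value all digits? (e.g. an SSN with separators already stripped)
assert re.fullmatch(r"\d+", "078051120")
assert not re.fullmatch(r"\d+", "078-05-1120")

# Strip out anything that is not a digit
assert re.sub(r"\D", "", "078-05-1120") == "078051120"

# Simplified card-number shapes: Visa starts with 4 and has 13 or 16
# digits; Amex starts with 34 or 37 and has 15 digits.
visa = re.compile(r"^4\d{12}(\d{3})?$")
amex = re.compile(r"^3[47]\d{13}$")
assert visa.match("4111111111111111")
assert amex.match("378282246310005")

# A Dutch postal code: four digits (no leading zero), optional space,
# then two capital letters
dutch = re.compile(r"^[1-9]\d{3} ?[A-Z]{2}$")
assert dutch.match("1012 AB")
```

Each of these is a one-liner once you know the pattern syntax, which is exactly the point.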

Here's the catch: unfortunately, there are multiple "standards" for regular expression syntax. Although I may say I "know" regular expressions, what that really means is that I understand the concept and how and when to use them, and I happen to know the UltraEdit and .NET syntaxes. But if I had to use a tool with a different syntax, I would have to refer to the help pages and learn the specific expression syntax for that tool.

The examples I've given so far use the basic UltraEdit expression syntax. There is also Unix syntax. And Perl syntax. And of course Microsoft could never follow an existing standard, so .NET has its own syntax with its own corresponding quirks. (.NET also has dedicated validation classes, such as RegexStringValidator.)

Naturally, you can Google the topic and find quite a few resources for learning about regular expressions, how they work, the syntax, examples, and tricks. But, ultimately you will have to use an application, tool, or specific programming language that supports regular expressions, and that tool will use a particular regex syntax. So I would recommend picking a tool first, at least temporarily while you learn, and then use its documentation to get familiar with regex.

I found this free add-on that appears to let you use regex in Excel. I haven't tried it yet, but it would be a great capability to have in Excel. There are some things that make regex difficult in Excel (like needing to combine or search text that spans multiple cells), but it wouldn't hurt to have it for data within cells.

If you are new to regular expressions, here is one web site that may be a good starting point to better understand the concept and see some interesting examples:

After a few minutes of study, you'll have a new tool to save the day!

Tuesday, October 13, 2009

Dynamics Salary Survey

As you have probably already seen on everyone else's blog posts, Nigel Frank International is sponsoring its Annual Microsoft Dynamics Salary Survey.

It's a quick survey, so if you have a few minutes to spare, click on through and tell them how wealthy you are getting working with Dynamics GP.

Interesting Purchasing Returns Project Issue

So, I had an issue pop up yesterday that I wanted to share, because it is a newly identified quality report (bug) in GP 10.0. The issue comes up when using the Purchasing Returns functionality with Project Accounting, Transactions>>Purchasing>>Returns Trx Entry. In our case, we were working with non-inventory items.

It occurs when a user enters a return for a project-related transaction, receives an error (something like "you cannot exceed accrued revenue"), and then triggers the same error more than once. When they do, the QTYRESERVED and QTYINVRESERVE fields in POP10500 do not get set back to zero when the return is deleted. This can then manifest itself in a couple of different ways (as I witnessed with the client):

1. The system will not let you return a non-inventory item, saying that you never received that item from the vendor (even though you did).
2. The system will not display the receipts for a non-inventory item, acting like there are no available receipts to return against.

It was easy enough to fix: we just set the values for QTYRESERVED and QTYINVRESERVE back to zero for the receipts we were trying to return against. Anyway, I thought I would share the scoop in case anyone else comes across this before the quality report is published and a resolution scheduled :)

Item Setup: The source of it all

Well, first I should apologize for being buried in Sure Step curriculum lately, so I have been blog-absent :) But recently I have been reminded of the importance of item setup. It seems like the bulk of issues and misunderstandings in Sales Order Processing and Purchase Order Processing can be traced back to decisions made during the setup of items. Sometimes, users assume the system functions one way or another, but really it is the item setup that is controlling the behavior-- which could easily be changed in a lot of instances. I chalk this up to the fact that the users who set up items are often not the same users entering transactions, or so much time has passed since item setup was initially done (and understood) that no one remembers why certain decisions were made (yet another argument for documentation...can you hear the collective groan from my coworkers?).

When I teach the Microsoft Dynamics GP Partner Academy for Distribution, new GP consultants are often overwhelmed by the number of item setup screens in inventory, as well as which ones may or may not be required based on how users may or may not be using SOP or POP (confused yet?). So a couple of years ago, I put together a handout that I share with students that breaks out the different setup screens and which modules they impact. I will summarize that handout here (it's nothing fancy), but I am happy to share the actual handout with anyone who is interested-- just post a comment on this blog or email me. Keep in mind, this is meant to be a basic breakdown, kept simple and clear for students, as there is a lot of grey area :)

Module Settings:
Options to control how module behaves as well as defaults
Inventory Page>>Setup>>Inventory Control
Purchasing Page>>Setup>>Purchase Order Processing
Sales Page>>Setup>>Sales Order Processing

General Item Setup:
Item Class: Inventory Page>>Setup>>Item Class
Provides defaults for:
--Item: Inventory Page>>Cards>>Items
--Item Currency: Inventory Page>>Cards>>Item Currency
--Used to determine which currencies are valid for an item in POP and SOP
Count Cycle Assignment: Inventory Page>>Cards>>Count Cycle Assignment

General Site Setup:
Site Maintenance: Inventory Page>>Cards>>Sites
Site Resource Planning: Inventory Page>>Cards>>Site Resource Planning
Provides order policy and order point defaults from site for item resource planning (below)

General Landed Cost Setup:
Landed Cost: Inventory Page>>Cards>>Landed Cost
Individual landed costs to be grouped together
Landed Cost Group: Inventory Page>>Cards>>Landed Cost Group
Landed cost groups to be assigned to item/site combinations below

Purchasing Item Setup:
Item Quantities Maintenance: Inventory Page>>Cards>>Quantities/Sites
Provides item default site, primary vendor, and landed cost group
Item Vendor Maintenance: Inventory Page>>Cards>>Vendors
Provides vendor item number, vendor item description, default unit of measure, last originating invoice cost, and EOQ for Purchase Order Generator
Item Purchasing Options: Inventory Page>>Cards>>Purchasing Options
Provides revalue option/tolerance and U of M options
Item Resource Planning: Inventory Page>>Cards>>Item Resource Planning
Provides order policy (use PO gen), order point, order up to level, and PO gen options (along with mass update capability)

Sales Item Setup:
Item Price List: Inventory Page>>Cards>>Price List
Provides default price level (if customer or RM setup not defined with price level), default selling unit of measure, and U of M options

So, that's it :) Easy enough, right? Have a great Tuesday!

Monday, October 5, 2009

Develop with XML? You had better know XPath!!!

I recently had to work on a VB .NET app that made me feel great about my modest development skills. I know I have weaknesses and gaps in my development skills portfolio, but one thing that makes me shake my head and roll my eyes is when I see code that tries to work with XML without using XPath. If you work with XML, and if you don't know about XPath and when you should use it, you really need to spend a few hours learning it.

I started working with XML in 1999 with MSXML3, using DOM and XPath extensively, and then later using extremely elaborate XSLT to do presentation layer transformations for a large web site. Admittedly, back then XML was extremely overhyped, but since then, I haven't met a developer that is comfortable with XPath, let alone XSLT. I'm sure they are out there, I just haven't had the luck to meet them.

The app in question (an add on window for Dynamics GP) was developed by someone a few years ago, but was never really finished, and therefore the client never implemented it. I got the honor of finishing the required functionality and getting it to work properly.

I ignored the fact that this extremely simple, single window "application" with the most rudimentary functionality was completely over-engineered (i.e. 3 tiers). But when I came across code that read a hand coded XML configuration file, I stared in disbelief.

First, why the developer created a custom XML file instead of using the My.Settings feature in Visual Studio 2005 puzzles me. I'll give him the benefit of the doubt and assume that he was using an old version of Visual Studio for some reason. But then I saw this code, which should induce horror in any developer that knows how to work with XML:

Public Function getSQLConnection() As String

    Dim xmlRdr As XmlTextReader
    Dim sqlConn As String = ""
    Dim sDirectory As String = System.Windows.Forms.Application.StartupPath()
    Dim sXmlPath As String = sDirectory & "\xmlTrainTemplateConfig.xml"

    Try
        'create xml reader
        xmlRdr = New XmlTextReader(sXmlPath)
        'Disable whitespace so that you don't have to read over whitespaces
        xmlRdr.WhitespaceHandling = WhitespaceHandling.None
        'read the xml declaration
        xmlRdr.Read()
        'read the 'Configuration' tag
        xmlRdr.Read()
        'Read the 'Config' tag
        xmlRdr.Read()
        'Get the 'Config' Attribute Value
        If xmlRdr.GetAttribute("type") = "SQL" Then
            'Read element 'ConnStr'
            'Get the 'ConnStr' Element Value
            sqlConn = xmlRdr.ReadElementString("ConnStr")
        End If
        'close the reader
        xmlRdr.Close()
    Catch EX As Exception
        Dim exCustom As New Exception("Error reading xmlTrainTemplateConfig.xml: " & sXmlPath, EX)
        Throw exCustom
    End Try

    getSQLConnection = sqlConn

End Function

Okay, very funny, I get it, practical joke, right? Seriously, so where is the real code that reads the config file?

So, let me summarize. Reading an XML file line by line, making assumptions about the order of each node/line, and then having to test attribute and element values to see if you are reading the correct node completely defeats the purpose of using an XML file. You might as well just use a text INI file and save yourself time and embarrassment.

The code sample above is, simply, how not to work with XML.

Now, let's try another approach. There are probably a dozen better ways to get the job done, but I'll throw out just one that didn't require any thought or design. Doesn't this look just a tad bit better?

Dim xmlDoc As New XmlDocument
xmlDoc.Load(sXmlPath)
sqlConn = xmlDoc.SelectSingleNode("//ConnStr").InnerText.Trim

This approach uses the most rudimentary XPath node selection expression ("//") that searches the entire XML document for any matching nodes. Technically, this is a very sloppy technique, but like I said, no thought required, and we're dealing with a private config file, so we don't need to be very rigorous.
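To see the difference between the lazy "//" search and a properly anchored path, here is a quick Python sketch (xml.etree supports a useful subset of XPath; the config layout is my guess based on the reader code above):

```python
import xml.etree.ElementTree as ET

# A made-up config file shaped like the one the reader code implies
xml_text = """<Configuration>
  <Config type="SQL">
    <ConnStr>Data Source=GPSQL;Initial Catalog=TWO</ConnStr>
  </Config>
</Configuration>"""

root = ET.fromstring(xml_text)

# The sloppy-but-easy way: search the whole document for any ConnStr
anywhere = root.find(".//ConnStr")

# The precise way: anchor the path to the known structure,
# including a predicate on the type attribute
anchored = root.find("./Config[@type='SQL']/ConnStr")

print(anywhere.text)
print(anchored.text)
```

Both one-liners replace the entire line-by-line reader above, and the anchored version won't silently match a ConnStr node that happens to live somewhere else in the document.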

If you aren't familiar with XPath but want to learn more, I would recommend reading this very good, very simple tutorial:

And here is the XPath chapter for the O'Reilly book "XML in a Nutshell" that appears to be available for free online:

There are plenty of other XPath references, so it shouldn't be hard to at least learn the basics and bookmark some good references for later use.

To close, I'll leave you with a practical example of how XPath can be used with eConnect. I recently fielded an eConnect question on Experts Exchange where someone asked how they could delete specific taSopLineIvcInsert nodes from an XML document. So if you have an XML document with 500 SOP line items, and you need to delete just 5 specific items, how would you do it?

One way is to try looping through all of the 500 line nodes and check each one to see if it matches the 5 that you need to delete. But that means your code is wasting time checking 495 lines that do not need to be deleted. Wouldn't it be nice to just jump into the XML and select the 5 that you need to delete without having to read the other 495? How does this look?

xmlNode = xmlDoc.SelectSingleNode("//taSopLineIvcInsert[ITEMNMBR='X205']")
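Here's a sketch of that idea in Python, using a deliberately miniature stand-in for the eConnect document (the element nesting is simplified; in the real .NET code you would use SelectNodes with the same predicate):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for an eConnect SOP document: three line items
doc = ET.fromstring(
    "<SOPTransactionType>"
    "<taSopLineIvcInsert><ITEMNMBR>X100</ITEMNMBR></taSopLineIvcInsert>"
    "<taSopLineIvcInsert><ITEMNMBR>X205</ITEMNMBR></taSopLineIvcInsert>"
    "<taSopLineIvcInsert><ITEMNMBR>X300</ITEMNMBR></taSopLineIvcInsert>"
    "</SOPTransactionType>"
)

# Select only the lines we care about -- no loop over all 500 --
# then remove them from their parent
for node in doc.findall("./taSopLineIvcInsert[ITEMNMBR='X205']"):
    doc.remove(node)

print(len(doc.findall("./taSopLineIvcInsert")))  # 2
```

The predicate in square brackets does the filtering for you, so the only nodes your code ever touches are the ones being deleted.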

Pop Quiz: Knowing what you know about SOP lines in Dynamics GP, and now that you are an expert on XPath, what is potentially wrong with the sample code above?

I'll reward the person who gives the first correct answer with a cold beer, or hot cup of coffee, in Fargo at the GP Technical Conference in November. (Winner must be present to receive prize!)

Saturday, October 3, 2009

Using and Debugging SQL Triggers with Dynamics GP

SQL database triggers are an interesting, powerful, and potentially dangerous tool. On the one hand, they offer the appealing ability to execute a SQL script automatically whenever a change is made to a database table. The trigger is fired immediately by SQL Server, and frees you from having to separately monitor database tables to detect changes to data.

On the other hand, triggers can be difficult to implement, debug, and manage, and can cause unintended consequences. Since I don't use them very often, I don't claim to be a trigger guru, but my interpretation is that SQL Server does not provide very good tools for managing, diagnosing, and debugging triggers. And when you are dealing with a moderately complex application like Dynamics GP, triggers can get you into trouble very quickly--for several different reasons. You have to really understand not only all of the tables, records, and underlying data, but you also need to very clearly understand the sequence of events that occur in those tables if you want to successfully implement a trigger.

I was recently asked to customize GP to automatically populate a single field on a SOP transaction. The client wanted a SOP User Defined field filled out automatically based on the customer and address ID assigned to the order. But they wanted this field to be populated without requiring additional user intervention, and they needed it populated even for orders that were imported from their e-commerce web site.

After weighing the pros and cons of different approaches, I chose to use triggers. I reasoned that triggers would be the most reliable, as they could most easily populate the field regardless of how the order was entered in GP, and a trigger could be fired without requiring any user intervention. One field, right? How hard could that be?

I think that triggers turned out to be a good choice, but I ran into a few surprising issues that required me to learn a few things about GP, and also how to debug triggers on SQL Server.

It turns out that modifying the SOP User Defined fields table was trickier than I thought. I chose to put a trigger on the SOP10100 table, because that table would contain the customer ID and address ID that I needed to look up my user defined value, which was stored in the Customer Master Address File (RM00102). But, of course, it wasn't that easy. There are dozens of things that occur in the database when GP creates a new SOP transaction, so I had to reverse engineer each step of that process to resolve a few errors that were occurring. Here is the trigger that I ended up creating.

-- =============================================
-- Author: Steve Endow
-- Create date: 7/22/2009
-- Description: On insert of SOP transaction, updates
-- the SOP UDF table (SOP10106) to store the 'End User ID'
-- value in the SOP UDF Text 3 field.
-- The End User ID value is stored in the UDF1 field of
-- the customer address record in RM00102.
-- Rev 1.0: 7/22/2009
-- Rev 1.0f: 7/23/2009
-- Corrected error that would cause SOP get next number to fail (SET @ENDUSERID = '')
-- Corrected error during Order transfer to Invoice: Trigger would insert SOP10106
-- record before GP did, causing GP to get dup index error
-- =============================================

IF EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[ctrStoreEndUserIDSOP10100]'))
DROP TRIGGER ctrStoreEndUserIDSOP10100;
GO

CREATE TRIGGER ctrStoreEndUserIDSOP10100
ON SOP10100
AFTER INSERT, UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    --Retrieve the values from the inserted / updated record

    --Do not fire if customer or address ID are empty

    --Query the end user ID from the customer Ship To Address ID

    --Check to see if the UDF record exists for this SOP transaction
    --Also check to see if the UDF record exists for an originating transaction (in case of transfer to invoice)

    --Update the UDF Text 3 field with the End User ID based on the Ship To Address ID

    --Insert a new record into the SOP UDF table, including the end user ID in UDF Text 3
    INSERT INTO SOP10106
    VALUES (@SOPTYPE, @SOPNUMBE, '1900.01.01', '1900.01.01', '', '', '', '', '', @ENDUSERID, '', '', '', '', '', '', '')

END




After the trigger was installed, when transferring an order to an invoice, GP would throw a 'duplicate index' error. And when trying to create a new sales order, GP complained that it was unable to get the next document number. When we disabled the trigger, GP worked fine, so it was clear that the trigger code was causing the errors.

The errors were not very specific, so I had little to go on. After some digging, I found that I could debug the trigger code using Visual Studio. This allowed me step through each line of code in my trigger, much like a .NET program, and discover the cause of the errors.

To debug triggers on SQL 2005, you will need Visual Studio 2005 Professional. I'm only going to describe a basic process for debugging T-SQL triggers, but here is an MSDN web page that describes many options and caveats for debugging T-SQL and CLR with SQL 2005. There are several ways to debug T-SQL with Visual Studio, but this is one method that I stumbled on based on the limited instructions I could find.

1. Create a new stored procedure in your database that performs a very simple operation that will cause the trigger to fire. In my case, I had an INSERT, UPDATE trigger, so I created the following stored proc in TWO:

CREATE PROCEDURE DebugTrigger
AS
-- The body is a trivial statement that causes the SOP10100 trigger
-- to fire; a no-op update such as this works
UPDATE SOP10100 SET CUSTNMBR = CUSTNMBR

2. Open Visual Studio 2005 Professional and create a new "Database Project".

3. When asked to choose a database reference, either add a new connection, or select an existing connection for your GP database.

4. Now that you have created a new project, expand your database connection on the left, and expand Stored Procedures. It will take a while and may warn you that there are a lot of procedures to display (just click Yes).

5. Right mouse click on your test stored procedure and select "Step Into Stored Procedure"

6. A new tab should open displaying your stored procedure code in debug mode, ready to step into the first line of your procedure.

7. Press F11 to Step Into the procedure line

8. A new tab will open showing your trigger code in debug mode

9. You can now press F11 to step through each line of your trigger

10. Note that the Locals window will allow you to easily see the values of all of the trigger parameters

In the case of my trigger, it turns out that I had two bugs. The first bug caused the insert of the SOP10106 record to fail (my code was trying to insert a NULL value). Because my trigger was on the SOP10100 table, this caused GP's update of the SOPNUMBE field to fail, resulting in the next number error. This 'cascade' effect of triggers is just one of the problems that can make them potentially dangerous, and difficult to debug in production environments. If your trigger fails, it may cause a train wreck as it causes other higher level operations to fail.

The second problem: if an order is transferred to an invoice, GP attempts to create a new record in SOP10106 for the invoice, but my trigger had already created one as soon as the SOP10100 record was updated. This caused GP to receive a duplicate index error; my trigger was fighting with GP to insert the record. In this case, understanding the exact sequence of events that occur under different circumstances is required to ensure that the trigger does not interfere with GP's normal operations.