Wednesday, December 30, 2009

Display VS Tools Assembly Version

(Update: Reader Stephan offers a much more refined solution in the comments section of this post)

Recently I've been buried writing a VS Tools AddIn that has been a bit more complex than any previous AddIn that I've had to develop. Not terribly complex, just more involved than the typical AddIn widget that I write frequently. It's basically like a mini-module: A custom window, with a custom GP menu item, its own custom table, and lookups that query from GP tables.

And since the client uses C# internally, I've been transitioning to C# as well, so it's only the second project that I've done in C#.

Anyway, one of the things that I like to have on any user interface application that I develop is a visible build number. I have never used Help -> About dialogs (although maybe I should), so I just prefer to use a label or status strip to discreetly show the app version.

Well, with VS Tools AddIns, this is a bit tricky.

There are at least 3 different ways that I've found to try and get the version number of your .NET application. Which approach you use depends on the type of app you've written (WinForm vs. non-WinForm, and other subtleties that I don't fully understand). With a VS Tools AddIn, the 3 approaches produce different results, only one of which is truly the Assembly Version of my DLL.

One approach (Application.ProductVersion) displayed 10.0.0357 (or something like that), as it was apparently referencing some GP component that was invoking my DLL.

Another (System.Reflection.Assembly.GetCallingAssembly) simply displayed 2.0.0.0. I'm not entirely sure what that was referring to, but from what I could tell, it might have been a base .NET assembly version, like Windows.Forms.

And finally, #3, the winner, produced the version info I was looking for. After googling through several results, I found it on this web page in VB syntax.

Having taken the C# plunge, I ended up writing it this way:

StringBuilder buildNumber = new StringBuilder("Build: ");
buildNumber.Append(System.Diagnostics.FileVersionInfo.GetVersionInfo(System.Reflection.Assembly.GetExecutingAssembly().Location).FileMajorPart);
buildNumber.Append(".");
buildNumber.Append(System.Diagnostics.FileVersionInfo.GetVersionInfo(System.Reflection.Assembly.GetExecutingAssembly().Location).FileMinorPart);
buildNumber.Append(".");
buildNumber.Append(System.Diagnostics.FileVersionInfo.GetVersionInfo(System.Reflection.Assembly.GetExecutingAssembly().Location).FileBuildPart);
buildNumber.Append(".");
buildNumber.Append(System.Diagnostics.FileVersionInfo.GetVersionInfo(System.Reflection.Assembly.GetExecutingAssembly().Location).FilePrivatePart);

lblBuild.Text = buildNumber.ToString();


There are probably a few other ways to do it, and probably more elegant ways of assembling the string, but I had a deadline, so this was good enough for now.
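For what it's worth, here is a slightly tighter version of the same thing (just a sketch, not how the production code ended up): it caches the FileVersionInfo object and uses its FileVersion property, which normally already contains the four-part version string.

System.Diagnostics.FileVersionInfo versionInfo =
    System.Diagnostics.FileVersionInfo.GetVersionInfo(
        System.Reflection.Assembly.GetExecutingAssembly().Location);

// FileVersion is typically "major.minor.build.private"
lblBuild.Text = "Build: " + versionInfo.FileVersion;

Assembly.GetExecutingAssembly().GetName().Version.ToString() is another common shortcut, though it returns the assembly version rather than the file version, and the two may differ depending on how the project's version attributes are set.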

Over the next week I will try and share some tips and tricks that I used to get around some VS Tools limitations and a VS Tools bug.

Tuesday, December 29, 2009

Dynamics GP Custom VBA Lookups

I've been asked several times to add custom lookups to user defined fields in Dynamics GP.

Clients want to utilize user defined or custom fields, but they want to have the values validated, and they want to allow the user to select a valid value from a list. Makes sense.

I briefly checked out the drop down and list box controls available in Modifier, but they are pretty limited, and aren't really oriented towards displaying dynamic result sets from a database query.

So I ventured out and created my own custom lookup window in VBA.

As a side note, my biggest gripe with the standard GP lookup windows is their lack of a simple, "real-time" search field. By "real-time", I mean that I want the lookup window to instantly filter the displayed records based on what I type in the lookup search box--as I type it. And I want to have the window simultaneously search multiple fields at the same time--not just one field. For standard GP lookup fields, Rockton SmartFill does an excellent job in this regard, but sadly, in my experience, few clients have been willing to pay for SmartFill.

I'm going to show an example of a lookup that displays Sales Prospect records. In this case, the client called prospects "End Users", so it is called the "End User Lookup" window.

I first created a custom User Form in VBA, and added a Search field at the top. Because VBA is pretty limited in terms of controls, I had to use a ListBox to display my results. Finally, I have a Select and Cancel button.



When the form loads, I use UserInfoGet to query the GP company database and retrieve all prospects. I store those prospects in a client-side disconnected ADO recordset so that they remain in memory. I then have a simple procedure that displays the recordset results.



When a user types in the Search box, I send the equivalent of a SQL WHERE clause to the ADO recordset Filter property. This dynamically changes the records available in the recordset, allowing me to quickly redisplay only matching records as the user types each character.



As you can see in this example, when I type "wav" in the search box, I am simultaneously searching on the ID, name, and city of the prospect records, narrowing my list down from 5,000 records to 10 records as fast as I can type the three letters. The search results change almost instantaneously as I type each character.

At first I was concerned that this type of dynamic, real-time searching would run into performance issues. But, to my surprise, it's very responsive. One reason is that as the user types each character in the search box, the number of records that meet the search criteria and need to be displayed gets smaller and smaller. The ADO Filter property is so fast that the search itself is instantaneous, leaving only the work of displaying the resulting records. So even if I started with 10,000 records, after I type just one word, I will have quickly narrowed down my result set to a much smaller number of records that can be displayed quickly in the listbox.

There are a couple of minor limitations with this solution, both imposed by VBA. First, because I am limited to a list box instead of a ListView or grid, I chose to use static labels for column headers and fixed columns, and have therefore decided to create a separate user form for each lookup window that I need. You could invest some time to add code to make a single user form work for multiple lookups, but since I have only needed one or two for any given client, I didn't bother.

The second limitation is that the list box control doesn't have a sorting feature. Based on my experience, because the lookup is only used to select one record, and because the Search feature is so flexible and fast, there was never a requirement to sort records. But if you needed to sort, you could add some buttons at the top of each column and then use the ADO recordset Sort property.

Based on this VBA lookup, I have subsequently created a similar .NET class that I use for lookups with the VS Tools AddIns that I have been working on. For that class, I did spend the time to design it to work for multiple lookup types, so I can now use it easily to look up customers, addresses, inventory items, sales orders, and even records in custom tables. It also has a dynamic real-time search capability, but with the benefit of the better controls and capabilities offered by .NET. If anyone is interested in that, let me know and I'll try and find the time to create another post discussing it.
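To give a rough idea of how the real-time search translates to .NET (this is not the actual class, just a minimal sketch assuming a WinForms form with a TextBox named txtSearch, a DataGridView named grdResults, and a DataView named prospectView that has already been filled from a DataTable of prospect rows):

// Assumes: using System; using System.Data; using System.Windows.Forms;
private DataView prospectView;   // filled elsewhere from a DataTable of prospects

private void txtSearch_TextChanged(object sender, EventArgs e)
{
    // Escape single quotes so the RowFilter expression stays valid
    string search = txtSearch.Text.Trim().Replace("'", "''");

    if (search.Length == 0)
    {
        prospectView.RowFilter = string.Empty;
    }
    else
    {
        // Filter on the ID, name, and city simultaneously, just like the VBA version
        prospectView.RowFilter = string.Format(
            "PROSPID LIKE '%{0}%' OR CUSTNAME LIKE '%{0}%' OR CITY LIKE '%{0}%'",
            search);
    }

    grdResults.DataSource = prospectView;
}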

Below is the code behind the VBA User Form.


Private rsProspects As New ADODB.Recordset

Private Sub cmdCancel_Click()
Me.Hide
End Sub

Private Sub cmdSelect_Click()
CustomerAddressMaintenance.EndUserID.Value = ProspectList.Value
Me.Hide
End Sub

Private Sub ProspectList_DblClick(ByVal Cancel As MSForms.ReturnBoolean)
If CustomerAddressMaintenance.IsLoaded Then
CustomerAddressMaintenance.EndUserID.Value = ProspectList.Value
ElseIf SalesUserDefinedFieldsEnt.IsLoaded Then
SalesUserDefinedFieldsEnt.UserDefined3.Value = ProspectList.Value
End If
Me.Hide
End Sub

Private Sub SearchFilter_Change()
Dim strFilter As String
strFilter = Trim(SearchFilter.Value)
'Escape apostrophes so they do not break the Filter expression
strFilter = Replace(strFilter, "'", "''")
If strFilter <> "" Then
'Filter on the prospect ID, name, and city simultaneously
rsProspects.Filter = "PROSPID LIKE '%" & strFilter & "%' OR CUSTNAME LIKE '%" & strFilter & "%' OR CITY LIKE '%" & strFilter & "%'"
Else
rsProspects.Filter = adFilterNone
End If
Call PopulateProspectList
End Sub

Private Sub UserForm_Activate()
SearchFilter.Value = ""
rsProspects.Filter = adFilterNone
Call QueryProspects
Call PopulateProspectList
SearchFilter.SetFocus
End Sub

Private Sub UserForm_Initialize()
ProspectList.ColumnCount = 5
ProspectList.ColumnWidths = "80;160;90;30;80"
End Sub

Private Sub PopulateProspectList()
Dim lngRecords As Long
Dim recordCount As Long
Dim strRecords() As String
'Copy the filtered recordset into a string array for the list box
lngRecords = rsProspects.RecordCount
If lngRecords > 0 Then
ReDim strRecords(lngRecords - 1, 4)
recordCount = 0
While Not rsProspects.EOF
strRecords(recordCount, 0) = rsProspects.Fields("PROSPID").Value
strRecords(recordCount, 1) = rsProspects.Fields("CUSTNAME").Value
strRecords(recordCount, 2) = rsProspects.Fields("CITY").Value
strRecords(recordCount, 3) = rsProspects.Fields("STATE").Value
strRecords(recordCount, 4) = rsProspects.Fields("ZIP").Value
recordCount = recordCount + 1
rsProspects.MoveNext
Wend
ProspectList.List() = strRecords
Else
ProspectList.Clear
End If
End Sub

Private Sub QueryProspects()
'Close the recordset if it is already open (adStateOpen = 1)
If rsProspects.State = adStateOpen Then
rsProspects.Close
End If

Dim oConn As ADODB.Connection
Dim strSQL As String

'UserInfoGet provides an ADO connection to the current GP company database
Set oConn = UserInfoGet.CreateADOConnection
oConn.DefaultDatabase = UserInfoGet.IntercompanyID
oConn.CursorLocation = adUseClient

strSQL = "SELECT RTRIM(PROSPID) AS PROSPID, RTRIM(CUSTNAME) AS CUSTNAME, RTRIM(CITY) AS CITY, RTRIM(STATE) AS STATE, RTRIM(ZIP) AS ZIP FROM SOP00200 ORDER BY PROSPID"
Set rsProspects = oConn.Execute(strSQL)
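'Disconnect the client-side recordset so it survives after the connection is closed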
rsProspects.ActiveConnection = Nothing
oConn.Close
End Sub

Integration Migration Frustration

About a month ago I was asked to review a client's integrations in preparation for their upgrade from GP 8 to GP 10. They run 4 integrations in Integration Manager on a regular basis to import employees, payroll, and invoices from their operational system.

I looked at the integrations and saw that they were relatively simple. They read from a few SQL tables in a staging database on the GP SQL Server, and only had a few simple scripts to call stored procedures to prepare the data prior to import by IM.

Pretty simple, right? I documented my findings and declared that the upgrade should have no impact on the integrations.

A month later, and many projects later, I had completely forgotten the details of the review that I performed. In the meantime, the client discovered that their old SQL server was underpowered and that they would need to install GP 10 on a new SQL server machine. I didn't get that particular memo, so I was oblivious to this new detail.

On the scheduled date, the partner performed the upgrade to GP 10 on the new server, and all was swell, until the client tried to run their integrations.

Integration Manager complained that the source table could not be found. Hmmm. That's odd. And remember at this point, I didn't realize that a new SQL server had been installed.

When I checked the table list, a bunch of random objects were listed--none of which looked like a staging table. The integrations were using a DSN within IM, so when I checked the DSN, I found the problem. By default, DSNs created in IM do not have the database name set in the connection string. Although a new DSN had been set up to point to the new SQL server, the staging database was not listed. So I added the "Database=GPImport" parameter to the connection string, and voila, the appropriate tables appeared in the Data Source table list.
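To illustrate, the corrected connection string ends up looking something like this (the DSN name here is hypothetical; the Database parameter is the only piece that was actually added):

DSN=NewGPServer; Database=GPImport;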

Problem solved! Right? Not so quick. IM showed that there was no data in the Employee table. And all of the invoices in the invoice table had already been imported into GP. Umm.....

Funny thing about integrations: The source data doesn't just materialize magically. Something or someone needs to prepare or make the data available for IM. Well, let's just say that I neglected to dig deep enough in that direction when I performed my initial assessment.

I saw that the integration was reading from a SQL staging table, and that table would not be affected by the upgrade to GP 10, so no worries. And the stored procedure that was being called in the Before Integration script looked good, and would also not be affected by the GP 10 upgrade.

But what I didn't consider was what would happen if GP was moved to a new SQL server. At the time, neither I, the partner, nor the client knew they would be using a new SQL server, so we didn't discuss the issues such a move would cause.

Naturally, it caused issues, and lots of them.

If you've ever heard of a Rube Goldberg machine, you'll appreciate this particular integration.

It turns out that multiple static HTML intranet web pages, circa 1994, on a separate web server, were using IDC and HTX files to call SQL stored procedures. Never heard of IDC and HTX files? Neither had I, since I started my data-driven web development with ASP 1.0. So the IDC file has a DSN, login info, and a SQL command, which was a stored procedure. When the user accesses these IDC files via IIS, the stored procedure is called.
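If you have never seen one, an IDC file is just a small text file of directives. A hypothetical example (the data source, template, and procedure names are made up) looks something like this:

Datasource: GPIntranet
Username: webuser
Password: xxxxxxxx
Template: prospectreport.htx
SQLStatement:
+ EXEC dbo.csp_BuildStagingTables

IIS reads the directives, runs the SQL against the listed data source, and merges the results into the HTX template.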

Sounds simple, right? Well, that stored procedure calls several other stored procedures. Those stored procedures query the GP database and join to a linked SQL server to query data and fill cursors which ultimately populate data in staging tables. But wait, there's more! The stored procedures also manually build and write HTML report files to the SQL server local disk! We're talking hundreds and hundreds of lines of SQL in over a dozen stored procedures. Wait, don't touch that dial! They also use the xp_cmdshell extended stored procedure to map network drives back to the intranet web server and then copy the HTML files to the web server! Yessiree, a stored procedure to map network drives and copy files!

If you aren't scared yet, you are truly a stoic.

Thankfully the IDC files all referenced the same DSN, so I was able to change the DSN on the intranet server to point to the new GP SQL Server. And then, of course, I had to setup a linked server on the new GP SQL server. Whew, all done, right?

Nope. It turns out that the BeforeIntegration scripts in IM had hard-coded references to the old GP SQL server, and did not use RetrieveGlobals. So those scripts had to be updated as well (it was much faster to just change the server name, so I didn't bother to implement the GPConnection object in IM 10).

What was really fun was trying to figure all of this out without any documentation. Also, the integration had been developed and modified extensively by at least two different firms over the course of the last FOURTEEN YEARS, so there were pieces of code all over the place. I had to search for clues just to figure out what I should be searching for as I dug through files and settings on three different servers.

As of today, I think I finally have all of the integrations working. I didn't dare touch a line of code, lest the entire house of cards collapse, so I ended up just having to find all of the database connection settings and change them appropriately.

The real lesson was that although I performed an initial assessment of the integrations prior to the upgrade, I clearly wasn't thorough enough, and made several implicit assumptions--the biggest of which was that GP 10 would be installed on the same server.

So after going through this experience, I created an "integration review template" to help me try and detect some of these issues in advance next time. It's by no means complete, but it has several questions that I could think of that might have helped me better anticipate some of the issues that I ran into, by at least understanding the complexity of the processes that generate the source data for Integration Manager.

I've posted it for download in case anyone is interested.

Please, integrate safely.

Thursday, December 24, 2009

Table Import is not always the ugly stepsister

I shy away from table import whenever possible. I prefer integration manager, or using SQL directly, or eConnect if I can. But sometimes I am reminded of how simple things can be done simply.

For the past few weeks, I have been working on an implementation that goes live Monday 12/28. We have had a fairly tight timeframe, so I have been trying to avoid manual data entry whenever possible. However, I have run up against more than a few things that I needed to import but could not with Integration Manager. Table Import to the rescue, with imports into some simple tables like...

  • Direct Deposit Header and Lines for Payroll
  • Shipping Methods
  • Customer Items

All of these table structures are fairly friendly to table import, and result in simple file uploads. Shipping methods even came up late in the game, and it took less than 10 minutes to build and load the file.

So why am I writing about this? Well, I know I could use the reminder that older tools that I regularly dismiss can actually come in quite handy :)

Please share any of your table import success stories, and I will be happy to update the post to include them.

A few other uses courtesy of Steve Chapman and Frank Harnelly:

  • High volume imports even if the destinations are available in Integration Manager, like Customers (1,000,000+ records in minutes) or Chart of Accounts
  • Fixed Assets General Info

SmartList Builder and SQL Tables/Views

I am sure many of you already are aware of this, but I ran across a bit of an issue this week with what I thought should be a simple deployment of a SmartList Builder item. Here are the steps I went through....
  1. Created two views in SQL based on a combination of GP and custom tables
  2. Granted access to the views for SmartList Builder, Tools>>SmartList Builder>>Security>>SQL Table Security
  3. Created my Smartlist, Tools>>Smartlist Builder>>Smartlist Builder
  4. Tested my Smartlist, all looked good
  5. Deployed views to client site
  6. Exported Smartlist Builder to XML
  7. Imported to client site
  8. Granted access to the views for SmartList Builder at the site
  9. Error when viewing Smartlist, "You do not have security privileges to view all of the tables used in this SmartList"
  10. Follow steps in KB Article #967731 to configure a Security Task and Security Role that includes the operation Smartlist Builder\Smartlist Builder Permissions\Smartlist Builder\View Smartlists with SQL Tables
  11. Still get error
  12. Grant DYNGRP SELECT Permissions on both views
  13. Still get error
  14. Run Grant.SQL
  15. Still get error

Ugh. And the error happened for both regular users and sa. So what was the issue? Well, you have to have the permissions set BEFORE you import the XML file. So make sure you have those steps completed before you do your import. I did not narrow it down, but I would think steps 8, 12, and 14 would be important. But I would do 8, 10, 12, and 14 just to be safe.

Once I had the permissions set, and then imported the XML file, all was good :) It seems that it does not recognize the permissions fully if they are set afterwards.

Happy holidays to all!

Tuesday, December 8, 2009

Don't you go closing my year...

Just thought I would send out a gentle reminder of the major myths regarding year end closing in Dynamics GP, as I find they surface every year, even among users who have used the system for years. So here is my top five list; please post your comments and I will update the list accordingly.

I have to wait until I have ALL my adjusting entries before I can close the GL
This is the biggest myth of all. You can actually have your cake and eat it too. GP allows you to post to the most recent historical year. So, for example, if I close 2009 I can still post adjusting entries to it. I just can no longer post adjustments to 2008. The two requirements to post to the most recent historical year are that you must mark "allow posting to history" in GL setup (Tools>>Setup>>Financial>>General Ledger) and the fiscal period must be open in Fiscal Period Setup (Tools>>Setup>>Company>>Fiscal Periods).

I am running my general ledger year end close, and it is stuck
I promise you the odds are that it is not stuck. The year end close is a pretty intensive process, so it is fairly common for a workstation to show as not responding. Please be patient. Using Ctrl-Alt-Delete to cancel out of the process is NOT recommended, and will REQUIRE you to restore a backup. And if you don't have a backup, well, that is just not fun...and will involve a nice little database fixing service from Professional Services at Microsoft. So, two lessons: have a backup and be patient.

I can view all my audit entries because I set up an audit period
This is a popular trick: set up a 13th period in your Fiscal Period Setup with the same date range as the December period. You then close December, and with the 13th audit period open, transactions will post into that audit period. Sounds brilliant, right? Well, it is...as long as you never run a financial reconcile (Tools>>Utilities>>Financial>>Reconcile) on that year, as it will reattribute the postings to the earliest period with the same start date (assuming both periods were closed when you ran reconcile), or all postings (including your original Dec postings) will be attributed to the open period (if the 13th is open, and the 12th is closed). Ugh. Better choice? Set up a source document code (Tools>>Setup>>Posting>>Source Document) called AJ for Adjusting Entries and select it whenever you enter an adjusting entry instead of GJ. Then use the Cross Reference by Source Document report to view your AJs for a time period (Reports>>Financial>>Cross Reference>>Source Document).

Fixed Assets year end is so easy, I just change the year on the book right?
Nooooo! You must run the fixed asset year end closing routine (Tools>>Routines>>Fixed Assets>>Year End) in order to close the year properly. Changing the year directly on the books themselves is not recommended, and if you do so and then process transactions in the new year, you will have to restore a backup. Also, keep in mind, Fixed Assets is one module where you must complete all activity for the prior year and close the year BEFORE you can process activity in the new year.

I can close Receivables and Payables whenever I get around to it
Unfortunately, the payables management and receivables management year end closes are not date sensitive. This means that you need to run them as close to the actual end of the year as possible (after posting as much of the prior year activity as possible, but before you have posted to the new year) to get the most accurate close. Now, the good news is that these closes only impact the "Amounts Since Last Close" summary figures for current year -vs- last year that are tracked for reports, inquiries, and Smartlist. The close will move all posted transactions to last year (regardless of their actual date) for the summaries. And the current year bucket will start over with any transactions posted after the close, again regardless of their actual posting date.

For example, let's say that on 1/1/2010 I post an entry of $500 to payables and date it 1/1/2010. I then close the year for payables on 1/2/2010. That $500 will be moved to the last year column in the "Amounts Since Last Close" summary figures, even though it was dated 1/1/2010, simply because it was actually posted BEFORE the year end close. By that same logic, let's say on 1/3/2010 you enter another payables entry for $1000 and post it back to 12/20/2009. Since the close has already been performed, that $1000 will be tracked in the current year column for the "Amounts Since Last Close" summary figures.

For most folks, these amount to minor differences that they generally do not notice. However, keep in mind that if you are looking at a vendor or customer summary field in SmartList, it will be using the "Amounts Since Last Close" summary figures. And if you are looking at current year -vs- last year, there may be discrepancies for the actual detail.

Well, there is my mythbusting. Please share your own myths, and how you like to bust them :) Not sure how many more blog posts I will make before the little one arrives (January 7th is coming quick) so I will go ahead and wish everyone happy holidays!

Monday, November 23, 2009

PartnerSource Funny

I'm sure you never thought you would hear those two words together.

So tonight I go to login to PartnerSource, and apparently Microsoft has decided that I'm now a customer and that my company has moved to Hong Kong!



I don't look forward to trying to figure out how this happened and how to get it fixed!


11/24/09 Update: I sent a message this morning at 8:20am to Voice Support explaining the issue. 40 minutes later, I received a response saying that they would look into the problem.

At 3:30pm I received an update that they had resolved the issue by setting my language preference in my profile to English. Unfortunately, this didn't resolve the issue.

The rep replied immediately and asked me for step by step details on how I was accessing the site. I responded with the step by step instructions, and they resolved the issue quickly.

4:15pm: Problem solved!

Friday, November 20, 2009

Formatting Note Lines with GP Web Services

When importing customer notes into GP 10 via Web Services, a client discovered that typical 'new line' escape sequences were not working properly for Notes. It turns out that Web Services handles formatting differently than eConnect, at least when it comes to new lines in Notes.

With GP 10 Web Services, \r\n, \r, and \n (or vbCrLf, vbCr, and vbLf in Visual Basic) all produce the same results in notes:

Note the vertical bar at the beginning of lines 2 and 3. This is a line feed character, which GP Notes do not like.

In eConnect, you can use \r or vbCr to produce a new line, but in Web Services, any of the new line characters are being transformed into a CR and LF.

The solution to this is to use the XML character reference for CR: & #13 ;

.Notes = "Note line #1 & #13 ; Line #2 & #13 ; This is line #3"

(In the above examples, I have added spaces between the characters so that they aren't parsed out of the blog post as HTML. When you use them, you will remove the spaces between the ampersand, pound sign, and before the semi-colon.)
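For example, in C# the note might be built like this (with the spaces removed, and assuming customer is the customer object being submitted through the Web Services proxy):

customer.Notes = "Note line #1&#13;Line #2&#13;This is line #3";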

With this change, the notes are imported with proper line breaks, but without the extra vertical bar characters.

And there you go!

Thursday, November 19, 2009

Don't be a Propellerhead

I like technology. I like computers and software. I like that they are constantly changing, meaning that there is always something new. I like the innovation, which brings new concepts and ideas to the marketplace and gives me endless opportunities to learn new things. It's constant work to continuously learn and stay current, but I'm certainly never bored.

Sometimes innovation brings simplicity and time savings. But sometimes innovation occurs because there are complex problems that require complex solutions to solve. When those complex problems occur, it's great to have a solution that is suited for the task.

But I would argue that many business issues, even if unique, are fundamentally not terribly complex. In the Dynamics GP marketplace, we aren't designing terrain mapping radar, modelling global weather patterns, designing a jumbo jet, or creating new pharmaceuticals. We're helping businesses meet business requirements, fulfill obligations, make promises, and manage information related to their accounting, inventory, sales, payroll, and purchasing, among other things.

Sure, things like sales commissions can become a tangled mess because of convoluted incentive programs, but ultimately they come down to fundamentally simple principles that use basic math to perform calculations on basic data sets.

The vast majority of the customizations, automations, and integrations that I provide to clients are fundamentally pretty simple. Import customers. Display estimated shipping weight of an order. Allow the customer to manage their item costs. Automate the creation of a project. Import journal entries.

Most of the individual Dynamics GP projects I work on are under 40 hours. They just aren't that complex, and they generally don't need complex solutions. My concern is offering a reasonable estimate, getting the project done on time and under budget, and exceeding the customer's expectations. Using cool new tools or technologies is not one of my primary concerns.

When designing and developing these solutions, I have many tools at my disposal. SQL Server, Modifier & VBA, VS Tools and .NET, eConnect, Web Services, Dexterity, SQL Reporting Services, Excel, SmartList Builder, Extender, and the list goes on. And each tool can usually be used in several different ways. In the case of SQL Server, I can create a query, a view, a stored procedure, a trigger, an SSIS package, a user defined function, or even use CLR.

Sometimes there is a particular design approach that is obvious, and is clearly the best for the given requirements. But oftentimes, there are two or three, or many different design options that might work, with no obvious winner. In those cases, you have to make choices based on the information you have available.

What are the stated requirements? What is the goal or intended outcome of the solution? Who will be using the solution and what is their skill level? How often does it need to be used? Is the solution temporary or will it be used long term? What are my skills and competence with the tools at my disposal? What is my budget? When does the solution need to be delivered?

These and many other factors usually play into your design choices, assuming you are putting the client's concerns first.

But several times now, I have inherited projects from other contractors, consultants, or firms that, I believe, did not put the client's concerns first. In my humble opinion, they were "propellerheads", making choices based on technological 'cool' factor rather than business concerns.

One extreme example was a simple eConnect integration that had an impressive user interface, with lots of tabs and controls and status fields. It even had fancy graphics with their consulting company logo prominently displayed on the user interface. (Since I don't spend a lot of time on elaborate user interface design in .NET, I honestly couldn't figure out how they did the graphics.) It looked great. But there was a funny thing about the consulting firm that developed this pretty program: The client had fired them.

Why? The project was over budget and the integration didn't work. It couldn't successfully import a single transaction into GP.

So the GP partner for the client calls me and asks me to fix the integration. When I look into the .NET code, I see lots of user interface, and a fair amount of code, but I see things like distribution lines where the DR and CR are being set to the same account number. And I saw hard coded values, such as GL accounts. And invalid SQL statements that weren't properly querying the source data. And writing XML out to files instead of using a Memory Stream.

This particular project demonstrated many forms of incompetence, but it also demonstrated that the developer spent too much time on technical details that didn't matter, and too little on the business concerns that did matter.

Another example was a project that required a single window that would allow the GP client to categorize their HR training courses into groups. A single window used by maybe 3 GP users. I inherited a Visual Studio solution with multiple projects, lots of classes, and a 3-tier architecture. What was going on in the mind of that developer that they thought they needed all of that technology for such a simple business requirement? Did they used to work at NASA or JPL? To give you an idea of what was delivered to the client, the installation instructions for the application were 14 pages long. Did I mention it was ONE WINDOW that worked with two SQL tables? After having it for a few years, the client had never implemented it. Why? It was never completed and did not meet their original business requirements. And, maybe you're seeing the theme already: The consulting firm that developed the solution was fired.

Finally, I recently helped a client modify an eConnect integration that they had been using for a few years. When I looked at the .NET code, I had to call a friend to do a code review with me to make sure I wasn't imagining things. The integration used inheritance on every eConnect class, reproducing every eConnect class in a custom class, which then ended up in a class library, which then ended up as a reference in the integration project. And what was the wild and crazy business requirement that produced this over-engineered design? Importing journal entries. Yup, 34 megabytes (zipped!!) of project files and supporting files, all to import a journal entry.

As far as we could tell, not a single practical feature of inheritance had actually been used, but the eConnect objects went through so many classes and transformations that it was nearly impossible to trace through the code. Many convenient features of eConnect and its serialization libraries had effectively been re-written. Literally thousands of lines of code in the entire project were replaced by a single subroutine with just 275 lines, re-written in one hour. No inheritance required, and using a single project. And, here's a shocker for you: The original project was significantly over budget, and the client will never use that consulting firm again.

So the next time you work on a project and start to get excited about using a new tool or technology, please ask yourself: Are you being a propellerhead?

Monday, November 16, 2009

Dance With The One Who Brung 'Ya - Correcting Transactions in Dynamics GP

So often it seems like many of the questions on the Dynamics GP forums deal with how to correct transactions. Or, even more often, how correcting a transaction has led to more issues. As a trainer, I teach my students to "Dance with the one who brung 'ya", which is my way of emphasizing the importance of correcting transactions in the module where they originated. And when I am lucky enough to find myself on phone support duty for the day, I always end up with a few "how do I fix this or that" cases. So I always ask my standard questions...

  • What steps have been taken so far? Often times, I find that attempts to correct the issue (sometimes even multiple posted corrections) have contributed to the current issue.
  • What is the current state of the issue? What remains to be corrected (e.g., the GL is incorrect but the RM aging is correct)? Getting to this bottom line can often be helpful in solidifying the real issue.

But we want to avoid these issues in the first place, right? So getting back to the dancing. Here are the common correction points that I emphasize to students and users, to point out the importance of correcting transactions where they originated.

Sales Order Processing

  • To correct a posted invoice (Transactions>>Sales>>Sales Transaction Entry).
  • Enter and post a return (Transactions>>Sales>>Sales Transaction Entry).
  • Updates inventory (if applicable), sales history by item, receivables management, and general ledger.
  • Do not void using Receivables Management (Transactions>>Sales>>Posted Transactions), as this will only update the general ledger and receivables management.

Purchase Order Processing

  • To correct a posted receipt or invoice (Transactions>>Purchasing>>Receivings Trx Entry, Enter/Match Invoices).
  • Enter and post a return (Transactions>>Purchasing>>Returns Trx Entry).
  • Updates inventory (if applicable), purchasing history by item, payables management, and general ledger.
  • Do not void using Payables Management (Transactions>>Purchasing>>Void Open), as this will only update the general ledger and payables management.

Project Accounting (Employee Expenses, Miscellaneous Logs, Timesheets, and Equipment Logs)

  • To correct a non-purchasing transaction (purchasing transactions are corrected in POP per the example above), return to the original entry window (Transactions>>Project>>Timesheet Entry, Employee Expense Entry, etc).
  • Enter a Referenced type transaction and use negative quantities to decrease the original posting. You can use the PA Trx Adjustment tool to assist with this process (Transactions>>Project>>PA Trx Adjustment). Using either method will result in an update to project accounting, payables management (if applicable), payroll (if applicable), and the general ledger.
  • Do not correct the transactions in payroll, or by voiding the transaction in payables, as these methods will not update the project accounting module.

Well, that's it for tonight. I promise that my next post will get in to the finer points of correcting project accounting billings and inventory transactions, which have more variations than the examples I have provided above.

But, in the meantime, please share any tips and tricks you have gathered over the years to assist users in understanding how to dance with the one who brung 'em :)

Friday, November 6, 2009

No Longer (Still) a Mystery: Invalid column name 'ACTNUMBR_6' when creating TWO / Fabrikam

(Update: Reader Jessica experienced the same issue, and provided feedback indicating that using Run As Administrator for GP Utilities should resolve this issue.)


So while installing GP 10 SP4 for the millionth time on a new development server, I ran into these four fantastically fun error messages while trying to create the TWO company:


The following SQL statement produced an error:
declare @MXNUMSEG int, @Counter int, @AcctSQLCmd varchar(500), @StrLinked char(20), @StrNotLinked char(20) select @Counter = 0 set @AcctSQLCmd = space(1) set @StrLinked = '''Linked''' set @StrNotLinked = '''Not Linked''' select @MXNUMSEG = MXNUMSEG from DYNAMICS..SY003001 while @Counter <> @MXNUMSEG begin select @AcctSQLCmd = @AcctSQLCmd +',' +'GL00100.ACTNUMBR_' +ltrim(str(@Counter+1)) set @Counter = @Counter + 1 end exec('create view AAG00200FL as SELECT GL00100.ACTINDX'+@AcctSQLCmd + ', GL00100.ACTDESCR, GL00100.ACCTTYPE, GL00100.ACTIVE, AAG00200.aaAcctClassID, AAG00201.aaAccountClass

ERROR [Microsoft][SQL Native Client][SQL Server]Invalid column name 'ACTNUMBR_6'.

The following SQL statement produced an error:
GRANT SELECT , INSERT , DELETE , UPDATE ON [dbo].[AAG00200FL] TO [DYNGRP]

ERROR [Microsoft][SQL Native Client][SQL Server]Cannot find the object 'AAG00200FL', because it does not exist or you do not have permission.


Whoa there! That's quite a mouthful.

Since I've had many, many, many random errors, crashes, and problems occur over the years when creating the TWO / Fabrikam database, when I see an error, I just roll my eyes, then re-launch GP Utilities, and have it resume the database setup, or recreate TWO. Well, I tried that approach, but after the 3rd time, it was clear that this wasn't your average TWO setup failure.

I tried resuming, I tried re-creating, and I tried deleting both the Dynamics and TWO databases and starting from scratch, to no avail. But despite what I tried, I noticed that at least the error was consistent.

As an additional test, I tried to create a new empty company to see if it was something about TWO, or if it was a more fundamental issue. But sure enough, even creating a new, blank company produced the same errors.

So I finally resorted to SQL Profiler so that I could capture the entire SQL statement that was causing the initial error. The relevant portion is:

select @MXNUMSEG = MXNUMSEG from DYNAMICS..SY003001
while @Counter <> @MXNUMSEG
begin


So the SQL first queries the number of GL segments from the SY003001 table, and then performs a loop based on the number of segments.

So why was the statement later failing to find the column ACTNUMBR_6? When I launched GP Utilities, I specified an account framework of 66 characters and 10 segments, so there should be nothing special about segment 6.

But when I queried GL00100, to my surprise, there were only 5 account segments in the TWO database, which explains why the statements were failing.

Since I've installed GP tons of times and never had this problem, I didn't know where to start. Several more times I tried deleting Dynamics and TWO then launching GP Utilities to set them back up. During those subsequent tries, I figured that there are only two things that could possibly be affecting the segment field setup and causing the issue.

First, I was entering the account framework, specifying max length of 66 characters and 10 segments. The second item was that I was electing to not sort by account segment. Neither of these settings should cause the install to fail, but I couldn't see what else it could be.

So I then tried 60 and 10, with no segment sorting. No luck. I then tried several permutations of 60 and 10 with and without sorting, and 66 and 10 with and without sorting.

At some point, after over a dozen attempts and having completely lost track of what I had already tried, for some reason, it suddenly worked. TWO setup completed successfully.

Puzzled, I tried deleting both Dynamics and TWO, and ran GP Utilities again with my original options: 66 and 10, with no segment sorting.

But, once again, the TWO setup was successful. No errors.

From this, I can only assume that the account framework and segment sorting may not have been the cause, or may have only been one of several factors that was contributing to the error.

I wish I knew what the cause was, but if I never see that error again, I won't complain either.

During my testing, I noticed something that gave me at least a tiny bit of insight into the process of how TWO is created, and what else may have contributed to the problem.

When the GL00100 table is first created in TWO, it contains only 5 segments. But a few steps later in the setup process, apparently a separate script alters the table to insert the additional 5 segment fields. Based on this sequence, one theory is that the alter script was failing, being skipped, or wasn't completing successfully for some reason--but no error message occurs. If that alter script is not run successfully, when GP Utilities tries to create some analytical accounting views, the create view scripts will fail.

And with that nail biter of a story, have a good weekend, and I'll hopefully see some of you at the Dynamics GP Technical Conference in Fargo next week!

VBA Error: Class not registered. Looking for object with CLSID:{AC9F2F90-E877-11CE-9F68-00AA00574A4F}

I know, I know, it's such an obvious error, right? I mean, really, is there anybody left in the Dynamics GP world that wouldn't scoff at you for asking about this obvious class ID?

So I recently created a new development virtual server for a new project, something I've done dozens of times without issue. But for some reason, this time I ran into a few strange problems with Dynamics GP. One of them was this VBA error:



This was on a fresh Windows Server 2008 x64 virtual server, SQL Server 2005 SP3, and Dynamics GP 10 FP1, with SP4 installed. I did not have Visual Studio or Office installed, which I'm pretty sure is the reason for the error.

The error occurred whenever I tried to import a user form file into VBA, or whenever I tried to insert a new user form.

I later found that this error is apparently a variant of the one described in MBS KB Article 961568:

"Errors occurred during load" or when you import a package file that has a user form: Class {C62A69F0-16DC-11CE-9E98-00AA00574A4F} of control frmXXX was not a loaded control class.

At the time, I knew that this likely meant that a DLL wasn't registered on the server, but although those lovely GUIDs are a joy to see, they convey nothing to the average human as to which DLL is not registered.

As a shortcut, I switched over to a server that I knew did not have any problems with user forms in VBA and searched the registry for that GUID. That search told me that the problem was that the FM20.dll file was not registered.

After Googling for that file name, I learned that FM20.dll is a file needed to allow VBA to work with user forms. This lovely KB article describes its usage, but emphasizes that FM20.dll is "NOT redistributable" (they had to make the NOT all caps and bold for those of us who were unclear and confused about the lowercase non-bold version of the word 'not'), and that it can only be obtained by installing one of several Microsoft applications that happens to distribute it.

It must have taken a team of high priced lawyers thousands of dollars of billable hours to come up with that brilliant scheme that would surely prevent someone from copying a single DLL file and manually registering it. Looking to avoid the wrath of those same summer intern legal scholars, I did not simply copy the DLL from another machine and register it on my new server. Instead, I used the link in that same KB article to download the ever-so-popular "Microsoft ActiveX Control Pad", which installs the much coveted FM20.dll for me.

(Of course, the MBS KB article says to simply copy the FM20.dll file from an MS Office CD. Skirting the rules, are we, MBS?)

And, with that, the error went away and I was able to insert user forms and import form files into my VBA project.

One note--if you are working on Windows Server 2008 x64, the 32-bit file is installed in the C:\Windows\SysWOW64 directory.

Wednesday, November 4, 2009

Emerging from the Curriculum Caverns

So, I have to start out with an apology of sorts, as I have been heads down for the past 6 weeks working on curriculum updates for Microsoft. So this has become an orphan blog, which Steve so gently reminded me of. So here I am to give you a bit of an update and share what has been keeping me away for so long. I have been updating courseware for the upcoming release of Microsoft Dynamics Sure Step 2010, primarily working on the courseware for Project Managing Microsoft Dynamics® AX, Microsoft Dynamics® GP, and Microsoft Dynamics® NAV Implementations with Microsoft Dynamics® Sure Step (which will be course 80199) and also doing some support around the Using Microsoft Dynamics Sure Step course (80047).

The Project Managing course is a case study based course, which I think is a great collaborative learning experience with students participating in the implementation of a fictional company and working with many of the key tools, templates, and guidance provided in Sure Step. I have experience teaching the course on earlier releases, and I found the shift in perception very interesting. Students often arrived in class aware of Sure Step and its capabilities, but unaware of how to apply the capabilities to their implementation process. After two days with the case study, however, the possibilities became clear and students would leave excited to take what they had learned and apply it practically (and quickly). As an instructor, I love to see that sort of return on the training investment!

The Using Sure Step course is meant to serve as a more general Sure Step training, great for those that need the nuts and bolts approach to understanding the components and benefits of the Sure Step model. This is the courseware we have used internally to train our team members, and I find myself referring to it regularly when I need to refresh my own memory :)

Being the geek girl that I am, I have to say that I came out of the project with a surprising (even for me, an eternally enthusiastic person when it comes to new releases) level of anticipation for the new release. For those of you not yet working with Sure Step, it presents a variety of additional touchpoints to engage you that I hope you will find immediately applicable to your business. And for those of you already working with it, I think you will find that Microsoft listened to the feedback and incorporated more tools and guidance to allow you to take the tool even further into your implementation processes.

Some key new features (borrowed from the US Partner Executive Program News for Microsoft Dynamics - October 2009):
  • Industry and Cross-Product Content - Positioning and deploying Microsoft Dynamics solutions in a given Industry or its related segments. Industry Pilots for AX Process Manufacturing and CRM Public Sector including the launch of an Industry Playbook for solution selling. Also available will be guidance on xRM platform due-diligence and deployment.
  • Agile Project Type – The addition of this project type which will be especially suitable for implementations where the Microsoft Dynamics solution is positioned as a platform rather than as a commercial-off-the-shelf (COTS) system.
  • Organizational Change Management Discipline – New guidance on managing organizational change, to address this common challenge in any ERP or CRM deployment.
  • ISV Guidance – New content that applies to ISV solution positioning and deployment and engagement with the VARs and SIs during sales and implementation. Also, introducing Certified for Microsoft Dynamics alignment recommendations.
  • More collaboration and project tracking capabilities through enhanced SharePoint integration based on Sales, Implementation and Optimization engagements.
  • Enhancements to the Diagnostic Phase including a link between Sure Step and Unleash Your Potential, an Industry Playbook to assist the Sales Roles focused on selling Microsoft Dynamics solutions catered to a specific industry/vertical, a consolidated set of Decision Accelerator Offerings, estimation tools and an enhanced ROI calculator.
  • Enhancements to the Optimization Offerings including further alignment and integration to Support and Services from Microsoft and additional proactive and post go-live offerings.

If you are interested in checking out the upcoming release, I encourage you to check out the latest information available on PartnerSource under Partner Essentials>>Sure Step.

Also, don't forget about these other resources for Sure Step:

I would love to hear back from those of you who are using Sure Step, thinking about using it, or wondering how/if it can apply to your organization and implementation practices!

In the meantime, I promise more frequent posts in the upcoming couple months (at least until my due date, 1/7/10...it's a girl for those of you who don't already know).

Take care,

Christina

Tuesday, November 3, 2009

Creating a Dynamics GP DSN on 64-bit Windows

aka "Why isn't my DSN showing up in GP? It's right there in the Data Sources window!"

This is one of those things that I am familiar with, but since I haven't run into it in a long time, I had to do some digging to find specifics of the solution. I'll post it here in case anyone else has the issue for the first time, and for myself when I forget again.

So far, I have used 32-bit Windows for my development virtual servers, but Windows 2008 R2 is only available in 64-bit, so I'll probably be using 64-bit Windows going forward for any new virtual servers.

As you know, Dynamics GP requires an ODBC Data Source Name (DSN) to be setup on the Windows machine where it is installed. It uses this to connect to SQL Server, and is usually very simple to setup.

But, if you install GP on a 64-bit version of Windows and manually set up a DSN, when you try and launch GP or GP Utilities for the first time, the "Server" drop down list will be blank. You'll then go back and make sure you aren't imagining things and confirm that you did set up the DSN.

If you set up the DSN by navigating to Administrative Tools --> Data Sources, you actually went into the 64-bit ODBC Administrator tool. GP is only able to 'see' 32-bit System DSNs, and will not see the 64-bit System DSNs.

In order to set up the DSN for GP, you will need to launch the 32-bit ODBC Administrator tool. The two windows appear to be identical, with no visible differences between them.

Here is a Microsoft Support KB article discussing the two versions.

From what I have found, you have to launch it by navigating to:

%systemdrive%\Windows\SysWoW64\odbcad32.exe

Confusingly, the 64-bit and 32-bit versions of the tool have the same file name, which includes "32". Yet the 32-bit version is in the SysWoW64 folder. Crystal clear, right?

Once you locate the file, I would recommend creating a shortcut to it on the desktop or start menu, or somewhere else where it will be pretty visible. You should also consider naming it something like "32-bit Data Sources" to differentiate it, and might also want to rename the Data Sources icon under Administrative tools to "64-bit Data Sources" to further reduce confusion.

Once you launch the 32-bit Administrator tool and setup your DSN, it should show up in GP and GP Utilities.

And for those that are wondering why I don't just use the DSN setup option that is part of the GP 10 installation: I tried using it when GP 10 was first released, but it didn't work for some reason and gave me errors, so I haven't tried it again since. Since it only takes a few seconds to set up a DSN, I just continue to do it manually, which also gives me the opportunity to test it and make sure it is working properly and that I didn't mistype anything.

Lastly, as the MS KB article suggests, if you need to use both 64-bit and 32-bit DSNs, consider naming them with 32 or 64 in the name to differentiate them.

Monday, October 19, 2009

Save the Day with Regular Expressions

xkcd has a great comic about regular expressions:



It rings so true--if you have ever seen the look of amazement on people's faces after you use regular expressions, you definitely understand the comic.

For those of you who aren't familiar with regular expressions, have you ever received a data file as part of a Dynamics GP data migration or import that had horrible formatting, or had extra or messy data in it that made it nearly impossible for you to use?

Or, to pull a skeleton out of our own GP closet, have you ever had to try and extract just the data from a Dynamics GP report that was saved to a text file? Or attempted to get the GP report to import into Excel? Fun, huh? All those massive, useless page headers, and the fields positioned all over the multi-line detail data make those quite a chore.

I've seen this plenty over the years, and surprisingly, a typical reaction is, "Sorry, we can't work with that format." People just don't know how to deal with such messy looking or inconsistent data layouts.

Although it can't solve every data formatting problem, regular expressions can be a life saver in many situations, and using regular expressions is usually much cheaper (often free!) than high-end data extraction applications.

Regular Expressions, or "reg ex" (or regex or regexp) for short, are a technique of using special search strings, or patterns, to search for specific, complex, or possibly inconsistent or varying patterns in a text file.

One of my most basic and frequent uses of regex is to take a column of data and convert it for use in a SQL statement. I had to use this today for a client--they provided me a list of Dynamics GP Payroll pay codes, and I had to develop a query to find all payroll transactions that used the pay codes.

The list of 35 pay codes in Excel looked something like this:

C09900
C11500
H16500
H18000
H18750
H19500
H21000
H21750
R06000
R08000
R08550
...

And I needed to be able to use them in an "IN" clause in a SQL query.

I fired up UltraEdit, my preferred text editor, pasted in the column of data, and then ran a Replace with the following options:

Find What: ^p (new paragraph)
Replace With: ', '

I instantly had converted the list into this:

'C09900 ', 'C11500 ', 'H16500 ', 'H18000', 'H18750 ', 'H19500 ', 'H21000 ', 'H21750 ', 'R06000 ', 'R08000', 'R08550 '

Close, but the trailing spaces after some pay codes bugged me, so I wanted to get rid of them. Once again, regex to the rescue. But some of the pay codes have no trailing space, and others had one or more spaces. So I reverted back to my original column of pay codes and ran this replace routine first:

Find What: [space]*^p (find one or more spaces followed by a new paragraph)
Replace With: ', '

(where [space] was a single space)

Voila! A clean list that I can use in a SQL "IN" statement:

'C09900', 'C11500', 'H16500', 'H18000', 'H18750', 'H19500', 'H21000', 'H21750', 'R06000', 'R08000', 'R08550'

In just a few seconds, I was able to convert the data into exactly what I needed and paste it right into my SQL statement.
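
For anyone who would rather do this in .NET than in a text editor, here is a minimal C# sketch of the same idea using the .NET Regex class; the hard-coded sample list is just for illustration. It trims each line's trailing spaces and converts the line breaks into quote-comma-quote separators in one pass.

using System;
using System.Text.RegularExpressions;

class PayCodeListBuilder
{
    static void Main()
    {
        // Column of pay codes as pasted from Excel (note the stray trailing spaces)
        string rawList = "C09900 \r\nC11500\r\nH16500  \r\nH18000\r\nR08550 ";

        // Replace any trailing spaces plus the line break with ', ' in a single pass
        string inList = "'" + Regex.Replace(rawList.Trim(), @" *\r?\n", "', '") + "'";

        Console.WriteLine("IN (" + inList + ")");
        // Prints: IN ('C09900', 'C11500', 'H16500', 'H18000', 'R08550')
    }
}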

Another common use of regular expressions is to validate or clean data input. For example, let's say you have to import social security numbers, credit card numbers, or other all-numeric data. How can you tell if an alpha character, symbol, or other non-numeric character exists in the data? And what if you wanted to strip out the invalid characters? What if you needed to validate the format of a Visa vs. an Amex credit card number? Or the proper format of a Dutch postal code?

All easily accomplished with regular expressions.
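
To make that a little more concrete, here is a short C# sketch along those lines; the sample value and the simplified Visa rule (13 or 16 digits starting with 4) are just illustrative assumptions, not production validation logic.

using System;
using System.Text.RegularExpressions;

class ImportValidation
{
    static void Main()
    {
        string ssn = "123-45-678O";   // a letter 'O' is hiding where a digit should be

        // Does the value contain anything other than digits and dashes?
        bool hasInvalidChars = Regex.IsMatch(ssn, @"[^0-9-]");

        // Strip out everything that is not a digit before importing
        string digitsOnly = Regex.Replace(ssn, @"[^0-9]", "");

        // Rough shape check for a Visa number: starts with 4, 13 or 16 digits total
        bool looksLikeVisa = Regex.IsMatch(digitsOnly, @"^4(\d{12}|\d{15})$");

        Console.WriteLine("{0} | {1} | {2}", hasInvalidChars, digitsOnly, looksLikeVisa);
    }
}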

Here's the catch: unfortunately, there are multiple "standards" for regular expression syntax. Although I may say I "know" regular expressions, what I really mean is that I understand the concept, know how and when to use them, and happen to know the UltraEdit and .NET syntaxes. But if I had to use a tool with a different syntax, I would have to refer to the help pages and learn the specific expression syntax for that tool.

The examples I've given so far use the basic UltraEdit expression syntax. There is also Unix syntax. And Perl syntax. And of course Microsoft could never follow an existing standard, so .NET has its own syntax with its own corresponding quirks. (.NET also has dedicated validation classes, such as RegexStringValidator.)

Naturally, you can Google the topic and find quite a few resources for learning about regular expressions, how they work, the syntax, examples, and tricks. But, ultimately you will have to use an application, tool, or specific programming language that supports regular expressions, and that tool will use a particular regex syntax. So I would recommend picking a tool first, at least temporarily while you learn, and then use its documentation to get familiar with regex.

I found a free add-on that appears to let you use regex in Excel. I haven't tried it yet, but it would be a great feature to have in Excel. There are some things that make regex difficult in Excel (like needing to combine or search text that spans multiple cells), but it wouldn't hurt to have it for data within cells.

If you are new to regular expressions, here is one web site that may be a good starting point to better understand the concept and see some interesting examples:

http://www.regular-expressions.info/tutorial.html

After a few minutes of study, you'll have a new tool to save the day!

Tuesday, October 13, 2009

Dynamics Salary Survey

As you have probably already seen in everyone else's blog posts, DynamicsCareers.com is sponsoring the Nigel Frank International Annual Microsoft Dynamics Salary Survey.

http://www.surveymonkey.com/s.aspx?sm=ciKJqQpLB5JgWU3jPvVQhw_3d_3d

It's a quick survey, so if you have a few minutes to spare, click on through and tell them how wealthy you are getting working with Dynamics GP.

Interesting Purchasing Returns Project Issue

So, I had an issue pop up yesterday that I wanted to share, because it relates to a newly identified quality report in GP 10.0. The issue comes up when using the Purchasing Returns functionality with Project Accounting, Transactions>>Purchasing>>Returns Trx Entry. In our case, we were working with non-inventory items.

It occurs when a user enters a return for a project-related transaction, gets an error (something like "you cannot exceed accrued revenue"), and then recreates the error more than once. When they do this, the QTYRESERVED and QTYINVRESERVE fields in POP10500 do not get set back to zero when the return is deleted. This can then manifest itself in a couple of different ways (as I witnessed with the client):

1. The system will not let you return a non-inventory item, saying that you never received that item from the vendor (even though you did).
2. The system will not display the receipts for a non-inventory item, acting like there are no available receipts to return against.

It was easy enough to fix: we just set the QTYRESERVED and QTYINVRESERVE values back to zero for the receipts we were trying to return against. Anyway, I thought I would share the scoop in case anyone else comes across this before the quality report is published and a resolution is scheduled :)
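
For reference, the data fix amounted to an UPDATE along these lines. This is only a sketch: the connection string and receipt number are placeholders, and I am assuming POPRCTNM is the receipt number column on POP10500, so verify the affected receipts before running anything like this against your data.

using System.Data.SqlClient;

class FixStuckReturnQuantities
{
    static void Main()
    {
        // Placeholder connection string and receipt number -- for illustration only
        using (SqlConnection conn = new SqlConnection("Data Source=GPSQL;Initial Catalog=TWO;Integrated Security=True"))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE POP10500 SET QTYRESERVED = 0, QTYINVRESERVE = 0 WHERE POPRCTNM = @receipt", conn))
        {
            cmd.Parameters.AddWithValue("@receipt", "RCT00001234");
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}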

Item Setup: The source of it all

Well, first I should apologize for being buried in Sure Step curriculum lately, so I have been blog-absent :) But recently I have been reminded of the importance of item setup. It seems like the bulk of issues and misunderstandings in Sales Order Processing and Purchase Order Processing can be traced back to decisions made during the setup of items. Sometimes, users assume the system functions one way or another, but really it is the item setup that is controlling the behavior--which could easily be changed in a lot of instances. I chalk this up to the fact that the users who set up items are often not the same users entering transactions, or so much time has passed since item setup was initially done (and understood) that no one remembers why certain decisions were made (yet another argument for documentation...can you hear the collective groan from my coworkers?).

When I teach the Microsoft Dynamics GP Partner Academy for Distribution, new GP consultants are often overwhelmed by the number of item setup screens in inventory, as well as by which ones may or may not be required based on how users may or may not be using SOP or POP (confused yet?). So a couple of years ago, I put together a handout that I share with students that breaks out the different setup screens and which modules they impact. I will summarize that handout here (it's nothing fancy), but I am happy to share the actual handout with anyone who is interested--just post a comment on this blog or email me at christinap@theknastergroup.com. Keep in mind, this is meant to be a basic breakdown to keep it simple and clear for students, as there is a lot of grey area :)

Module Settings:
Options to control how module behaves as well as defaults
Inventory Page>>Setup>>Inventory Control
Purchasing Page>>Setup>>Purchase Order Processing
Sales Page>>Setup>>Sales Order Processing

General Item Setup:
Item Class: Inventory Page>>Setup>>Item Class
Provides defaults for:
--Item: Inventory Page>>Cards>>Items
--Item Currency: Inventory Page>>Cards>>Item Currency
--Used to determine which currencies are valid for an item in POP and SOP
Count Cycle Assignment: Inventory Page>>Cards>>Count Cycle Assignment

General Site Setup:
Site Maintenance: Inventory Page>>Cards>>Sites
Site Resource Planning: Inventory Page>>Cards>>Site Resource Planning
Provides order policy and order point defaults from site for item resource planning (below)

General Landed Cost Setup:
Landed Cost: Inventory Page>>Cards>>Landed Cost
Individual landed costs to be grouped together
Landed Cost Group: Inventory Page>>Cards>>Landed Cost Group
Landed cost groups to be assigned to item/site combinations below

Purchasing Item Setup:
Item Quantities Maintenance: Inventory Page>>Cards>>Quantities/Sites
Provides item default site, primary vendor, and landed cost group
Item Vendor Maintenance: Inventory Page>>Cards>>Vendors
Provides vendor item number, vendor item description, default unit of measure, last originating invoice cost, and EOQ for Purchase Order Generator
Item Purchasing Options: Inventory Page>>Cards>>Purchasing Options
Provides revalue option/tolerance and U of M options
Item Resource Planning: Inventory Page>>Cards>>Item Resource Planning
Provides order policy (use PO gen), order point, order up to level, and PO gen options (along with mass update capability)

Sales Item Setup:
Item Price List: Inventory Page>>Cards>>Price List
Provides default price level (if customer or RM setup not defined with price level), default selling unit of measure, and U of M options

So, that's it :) Easy enough, right? Have a great Tuesday!

Monday, October 5, 2009

Develop with XML? You had better know XPath!!!

I recently had to work on a VB .NET app that made me feel great about my modest development skills. I know I have weaknesses and gaps in my development skills portfolio, but one thing that makes me shake my head and roll my eyes is when I see code that tries to work with XML without using XPath. If you work with XML, and if you don't know about XPath and when you should use it, you really need to spend a few hours learning it.

I started working with XML in 1999 with MSXML3, using DOM and XPath extensively, and then later using extremely elaborate XSLT to do presentation layer transformations for a large web site. Admittedly, back then XML was extremely overhyped, but since then, I haven't met a developer that is comfortable with XPath, let alone XSLT. I'm sure they are out there, I just haven't had the luck to meet them.

The app in question (an add-on window for Dynamics GP) was developed by someone a few years ago, but was never really finished, and therefore the client never implemented it. I got the honor of finishing the required functionality and getting it to work properly.

I ignored the fact that this extremely simple, single window "application" with the most rudimentary functionality was completely over-engineered (i.e. 3 tiers). But when I came across code that read a hand coded XML configuration file, I stared in disbelief.

First, it puzzles me why the developer created a custom XML file instead of using the My.Settings feature in Visual Studio 2005. I'll give him the benefit of the doubt and assume that he was using an old version of Visual Studio for some reason. But then I saw this code, which should induce horror in any developer who knows how to work with XML:

Public Function getSQLConnection() As String

Dim xmlRdr As XmlTextReader
Dim sqlConn As String = ""
Dim sDirectory As String = System.Windows.Forms.Application.StartupPath()
Dim sXmlPath As String = sDirectory & "\xmlTrainTemplateConfig.xml"
Try
'create xml reader
xmlRdr = New XmlTextReader(sXmlPath)
'Disable whitespace so that you don't have to read over whitespaces
xmlRdr.WhitespaceHandling = WhitespaceHandling.None
'read the xml declaration
xmlRdr.Read()
'read the 'Configuration' tag
xmlRdr.Read()
'Read the 'Config' tag
xmlRdr.Read()
'Get the 'Config' Attribute Value
If xmlRdr.GetAttribute("type") = "SQL" Then
'Read element 'ConnStr'
xmlRdr.Read()
'Get the 'ConnStr' Element Value
sqlConn = xmlRdr.ReadElementString("ConnStr")
End If
'close the reader
xmlRdr.Close()
Catch EX As Exception
Dim exCustom As New Exception("Error reading xmlTrainTemplateConfig.xml: " & sXmlPath, EX)
Throw exCustom
Finally
getSQLConnection = sqlConn
End Try

End Function


Okay, very funny, I get it, practical joke, right? Seriously, so where is the real code that reads the config file?

So, let me summarize. Reading an XML file line by line, making assumptions about the order of each node/line, and then having to test attribute and element values to see if you are reading the correct node completely defeats the purpose of using an XML file. You might as well just use a text INI file and save yourself time and embarrassment.

The code sample above is, simply, how not to work with XML.

Now, let's try another approach. There are probably a dozen better ways to get the job done, but I'll throw out just one that didn't require any thought or design. Doesn't this look just a tad bit better?

Dim xmlDoc As New XmlDocument
xmlDoc.Load(sXmlPath)
sqlConn = xmlDoc.SelectSingleNode("//ConnStr").InnerText.Trim

This approach uses the most rudimentary XPath node selection expression ("//") that searches the entire XML document for any matching nodes. Technically, this is a very sloppy technique, but like I said, no thought required, and we're dealing with a private config file, so we don't need to be very rigorous.
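
If you want to be a little more precise than "//", a targeted path works just as well. Here is a rough C# equivalent that walks the structure implied by the original reader code (a Configuration root, a Config element with a type attribute, and a ConnStr element); the exact nesting is my assumption, so adjust the path to match the real file.

using System;
using System.Xml;

class ConfigReader
{
    // Returns the SQL connection string from the config file, or an empty string if not found
    static string GetSqlConnection(string xmlPath)
    {
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.Load(xmlPath);

        // Select the ConnStr element only under the Config node flagged as type="SQL"
        XmlNode node = xmlDoc.SelectSingleNode("/Configuration/Config[@type='SQL']/ConnStr");

        return node != null ? node.InnerText.Trim() : string.Empty;
    }

    static void Main()
    {
        Console.WriteLine(GetSqlConnection("xmlTrainTemplateConfig.xml"));
    }
}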

If you aren't familiar with XPath but want to learn more, I would recommend reading this very good, very simple tutorial:

http://www.w3schools.com/XPath/default.asp

And here is the XPath chapter for the O'Reilly book "XML in a Nutshell" that appears to be available for free online:

http://oreilly.com/catalog/xmlnut/chapter/ch09.html

There are plenty of other XPath references, so it shouldn't be hard to at least learn the basics and bookmark some good references for later use.

To close, I'll leave you with a practical example of how XPath can be used with eConnect. I recently fielded an eConnect question on Experts Exchange where someone asked how they could delete specific taSopLineIvcInsert nodes from an XML document. So if you have an XML document with 500 SOP line items, and you need to delete just 5 specific items, how would you do it?

One way is to loop through all 500 line nodes and check each one to see if it matches the 5 that you need to delete. But that means your code is wasting time checking 495 lines that do not need to be deleted. Wouldn't it be nice to just jump into the XML and select the 5 that you need to delete without having to read the other 495? How does this look?

xmlDoc.LoadXml(myXML)
xmlNode = xmlDoc.SelectSingleNode("//taSopLineIvcInsert[ITEMNMBR='X205']")
xmlNode.ParentNode.RemoveChild(xmlNode)



Pop Quiz: Knowing what you know about SOP lines in Dynamics GP, and now that you are an expert on XPath, what is potentially wrong with the sample code above?

I'll reward the person who gives the first correct answer with a cold beer, or hot cup of coffee, in Fargo at the GP Technical Conference in November. (Winner must be present to receive prize!)

Saturday, October 3, 2009

Using and Debugging SQL Triggers with Dynamics GP


SQL database triggers are an interesting, powerful, and potentially dangerous tool. On the one hand, they offer the appealing ability to execute a SQL script automatically whenever a change is made to a database table. The trigger is fired immediately by SQL Server, and frees you from having to separately monitor database tables to try to detect changes to data.

On the other hand, triggers can be difficult to implement, debug, and manage, and can cause unintended consequences. Since I don't use them very often, I don't claim to be a trigger guru, but my interpretation is that SQL Server does not provide very good tools for managing, diagnosing, and debugging triggers. And when you are dealing with a moderately complex application like Dynamics GP, triggers can get you into trouble very quickly--for several different reasons. You have to really understand not only all of the tables, records, and underlying data, but you also need to very clearly understand the sequence of events that occur in those tables if you want to successfully implement a trigger.

I was recently asked to customize GP to automatically populate a single field on a SOP transaction. The client wanted a SOP User Defined field filled out automatically based on the customer and address ID assigned to the order. But they wanted this field to be populated without requiring additional user intervention, and they needed it populated even for orders that were imported from their e-commerce web site.

After weighing the pros and cons of different approaches, I chose to use triggers. I reasoned that triggers would be the most reliable, as they could most easily populate the field regardless of how the order was entered in GP, and a trigger could be fired without requiring any user intervention. One field, right? How hard could that be?

I think that triggers turned out to be a good choice, but I ran into a few surprising issues that required me to learn a few things about GP, and also how to debug triggers on SQL Server.

It turns out that modifying the SOP User Defined fields table was trickier than I thought. I chose to put a trigger on the SOP10100 table, because that table would contain the customer ID and address ID that I needed to look up my user defined value, which was stored in the Customer Master Address File (RM00102). But, of course, it wasn't that easy. There are dozens of things that occur in the database when GP creates a new SOP transaction, so I had to reverse engineer each step of that process to resolve a few errors that were occurring. Here is the trigger that I ended up creating.

-- =============================================
-- Author: Steve Endow
-- Create date: 7/22/2009
-- Description: On insert of SOP transaction, updates
-- the SOP UDF table (SOP10106) to store the 'End User ID'
-- value in the SOP UDF Text 3 field.
-- The End User ID value is stored in the UDF1 field of
-- the customer address record in RM00102.
--
-- Rev 1.0: 7/22/2009
--
-- Rev 1.0f: 7/23/2009
-- Corrected error that would cause SOP get next number to fail (SET @ENDUSERID = '')
-- Corrected error during Order transfer to Invoice: Trigger would insert SOP10106
-- record before GP did, causing GP to get dup index error
--
-- =============================================

IF EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[ctrStoreEndUserIDSOP10100]'))
BEGIN
DROP TRIGGER ctrStoreEndUserIDSOP10100;
END
GO

CREATE TRIGGER ctrStoreEndUserIDSOP10100
ON SOP10100
AFTER INSERT,UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;

-- Insert statements for trigger here

DECLARE @SOPTYPE AS smallint
DECLARE @SOPNUMBE AS char(21)
DECLARE @ORIGTYPE AS smallint
DECLARE @ORIGNUMB AS char(21)
DECLARE @CUSTNMBR AS char(15)
DECLARE @ADRSCODE AS char(15)
DECLARE @ENDUSERID AS char(15)
DECLARE @UDFEXISTS AS smallint
DECLARE @ORIGUDFEXISTS AS smallint

SET @SOPTYPE = 0
SET @SOPNUMBE = ''
SET @ORIGTYPE = 0
SET @ORIGNUMB = ''
SET @CUSTNMBR = ''
SET @ADRSCODE = ''
SET @ENDUSERID = ''

--Retrieve the values from the inserted / updated record
SELECT @SOPTYPE = SOPTYPE, @SOPNUMBE = SOPNUMBE, @ORIGTYPE = ORIGTYPE, @ORIGNUMB = ORIGNUMB, @CUSTNMBR = CUSTNMBR, @ADRSCODE = PRSTADCD FROM inserted

--Do not fire if customer or address ID are empty
IF @CUSTNMBR <> '' AND @ADRSCODE <> ''
BEGIN

--Query the end user ID from the customer Ship To Address ID
SELECT @ENDUSERID = RTRIM(USERDEF1) FROM RM00102 WHERE CUSTNMBR = @CUSTNMBR AND ADRSCODE = @ADRSCODE

--Check to see if the UDF record exists for this SOP transaction
SELECT @UDFEXISTS = COUNT(*) FROM SOP10106 WHERE SOPTYPE = @SOPTYPE AND SOPNUMBE = @SOPNUMBE
--Also check to see if the UDF record exists for an originating transaction (in case of transfer to invoice)
SELECT @ORIGUDFEXISTS = COUNT(*) FROM SOP10106 WHERE SOPTYPE = @ORIGTYPE AND SOPNUMBE = @ORIGNUMB

IF @UDFEXISTS = 1
BEGIN
--Update the UDF Text 3 field with the End User ID based on the Ship To Address ID
UPDATE SOP10106 SET USRDEF03 = @ENDUSERID WHERE SOPTYPE = @SOPTYPE AND SOPNUMBE = @SOPNUMBE
END
ELSE
BEGIN

IF @ORIGUDFEXISTS = 0
BEGIN
--Insert a new record into the SOP UDF table, including the end user ID in UDF Text 3
INSERT INTO SOP10106
(SOPTYPE, SOPNUMBE, USRDAT01, USRDAT02, USRTAB01, USRTAB09, USRTAB03, USERDEF1, USERDEF2, USRDEF03, USRDEF04, USRDEF05, COMMENT_1, COMMENT_2, COMMENT_3, COMMENT_4, CMMTTEXT)
VALUES (@SOPTYPE, @SOPNUMBE, '1900.01.01', '1900.01.01', '', '', '', '', '', @ENDUSERID, '', '', '', '', '', '', '')
END

END

END

END
GO



After the trigger was installed, when transferring an order to an invoice, GP would throw a 'duplicate index' error. And when trying to create a new sales order, GP complained that it was unable to get the next document number. When we disabled the trigger, GP worked fine, so it was clear that the trigger code was causing the errors.

The errors were not very specific, so I had little to go on. After some digging, I found that I could debug the trigger code using Visual Studio. This allowed me to step through each line of code in my trigger, much like stepping through a .NET program, and discover the cause of the errors.

To debug triggers on SQL 2005, you will need Visual Studio 2005 Professional. I'm only going to describe a basic process for debugging T-SQL triggers, but here is an MSDN web page that describes many options and caveats for debugging T-SQL and CLR with SQL 2005. There are several ways to debug T-SQL with Visual Studio, but this is one method that I stumbled on based on the limited instructions I could find.

1. Create a new stored procedure in your database that performs a very simple operation that will cause the trigger to fire. In my case, I had an INSERT, UPDATE trigger, so I created the following stored proc in TWO:

CREATE Procedure DebugTrigger
AS
UPDATE SOP10100 SET ORIGTYPE = ORIGTYPE WHERE SOPNUMBE = 'ORDST2230'
GO



2. Open Visual Studio 2005 Professional and create a new "Database Project".



3. When asked to choose a database reference, either add a new connection, or select an existing connection for your GP database.




4. Now that you have created a new project, expand your database connection on the left, and expand Stored Procedures. It will take a while and may warn you that there are a lot of procedures to display (just click Yes).



5. Right-click your test stored procedure and select "Step Into Stored Procedure"



6. A new tab should open displaying your stored procedure code in debug mode, ready to step into the first line of your procedure.



7. Press F11 to Step Into the procedure line



8. A new tab will open showing your trigger code in debug mode



9. You can now press F11 to step through each line of your trigger

10. Note that the Locals window will allow you to easily see the values of all of the trigger's variables



In the case of my trigger, it turns out that I had two bugs. The first bug caused the insert of the SOP10106 record to fail (my code was trying to insert a NULL value). Because my trigger was on the SOP10100 table, this caused GP's update of the SOPNUMBE field to fail, resulting in the next number error. This 'cascade' effect of triggers is just one of the problems that can make them potentially dangerous, and difficult to debug in production environments. If your trigger fails, it may cause a train wreck as it causes other higher level operations to fail.

The second problem was that if an order is transferred to an invoice, GP attempts to create a new record in SOP10106 for the invoice, but my trigger had already created one as soon as the SOP10100 record was updated, so GP received a duplicate index error--my trigger was fighting with GP to insert the record. In this case, understanding the exact sequence of events that occurs under different circumstances was required to ensure that the trigger did not interfere with GP's normal operations.