Wednesday, December 24, 2008

Inline Scalar Function Performance (T-SQL geeks only)

If you are technical and work a lot with SQL Server, I highly recommend subscribing to SQL Server Magazine. One of the contributing authors, Itzik Ben-Gan, is a T-SQL expert in the extreme.

Itzik recently posted an excellent article about performance issues using inline scalar functions (functions that return only 1 value and are used within a larger query). He demonstrates the performance of a typical inline scalar function, and then offers a relatively simple alternative approach that dramatically improves performance.

http://www.sqlmag.com/articles/index.cfm?articleid=101104&

In his sample, he demonstrates how this change makes his 14-second query run in less than 1 second. That's a massive performance improvement for changing just a few keywords, without affecting the structure of your query.

Since I wrote an earlier post about UDFs, I thought this would make a great follow-up. To summarize his approach (a rough sketch follows the steps below):

1) Add the "RETURNS TABLE" clause to your UDF
2) Change your inline function call to use a SELECT statement against the function
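
As a rough sketch of the pattern (the function and table names here are made up for illustration, not Itzik's actual example):

-- Typical scalar UDF: evaluated row by row, which is what makes it slow
CREATE FUNCTION dbo.fn_Discount (@Price MONEY, @Pct DECIMAL(5,2))
RETURNS MONEY
AS
BEGIN
    RETURN @Price * (1 - @Pct / 100);
END;
GO

-- Inline table-valued rewrite: RETURNS TABLE with a single SELECT,
-- which the optimizer can fold into the outer query plan
CREATE FUNCTION dbo.fn_DiscountTVF (@Price MONEY, @Pct DECIMAL(5,2))
RETURNS TABLE
AS
RETURN (SELECT @Price * (1 - @Pct / 100) AS DiscountedPrice);
GO

-- Old call:  SELECT ItemID, dbo.fn_Discount(Price, 10) FROM dbo.SalesLines;
-- New call:  a SELECT against the function, typically via CROSS APPLY
SELECT s.ItemID, d.DiscountedPrice
FROM dbo.SalesLines AS s
CROSS APPLY dbo.fn_DiscountTVF(s.Price, 10) AS d;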

Well worth the minimal effort, even if you aren't querying a million rows.

Thursday, December 18, 2008

Dynamics GP and SharePoint Resources

SharePoint is a popular discussion item with clients lately; it seems to come up alongside discussions of Business Portal and deploying Dynamics GP information to a larger group of users (who do not necessarily have access to Dynamics GP directly). So I thought I would share some of the resources I use regularly as support in these discussions. Please share your popular links as well, and I will update the blog :)

SharePoint Edition Comparison, to understand differences between Windows SharePoint Services (WSS) and Microsoft Office SharePoint Server (MOSS), along with other editions:
http://office.microsoft.com/en-us/sharepointtechnology/FX101758691033.aspx

For Dynamics Partners, information on the Dynamics Client for Office. The DCO is a bundled licensing of Microsoft Office SharePoint Server and other Dynamics features. Includes a FAQ, Fact Sheet, and Presentation:
https://mbs.microsoft.com/partnersource/newsevents/news/newsgeneral/microsoftdynamicsclientofficesharepoint.htm?printpage=false

High level information on the Dynamics GP and Microsoft Office SharePoint Server integration points:
http://www.microsoft.com/dynamics/gp/product/sharepointofficeintegration.mspx

For Dynamics Partners, fact sheets on specific integration points between MOSS and Dynamics GP:


Project Accounting Default Account Sourcing

Currently I am working on several project accounting implementations. When I have several similar projects going on at the same time like this, I tend to focus on the common themes. So, lately, I have been thinking about the default project account setup through Tools>>Setup>>Project>>Project>>Accounts. This is always an interesting thing to explain when clients ask: how does Project Accounting determine which accounts to use? My answer is, normally, that it depends on your setup.

In the core modules of Dynamics GP, the defaulting of posting accounts is fairly straightforward. The system looks first to a master record, and if no accounts are specified, it then looks to the posting account setup. So, for example, when entering a payables transaction, the system will first look to the vendor card for default posting accounts and whatever it cannot find there, it will attempt to pull from the posting accounts setup (Tools>>Setup>>Posting>>Posting Accounts).

Project accounting presents users with a more flexible defaulting model, where default locations can be specified by transaction type and by account. Additionally, segment overrides can be specified at the contract and project levels. This creates tremendous flexibility in terms of how the system can default accounts, and where accounts need to be set up in the first place.

So, to outline the different options...

For each cost transaction type in project accounting:
  • Miscellaneous Logs
  • Equipment Logs
  • Purchases/Materials
  • Employee Expenses
  • Timesheets

And then for each account type (I only list a few of the basics below for the example that follows):

  • Cost of Goods Sold/Expense
  • Contra Account for Cost
  • Project Revenue/Sales
  • Accounts Receivable

For each account type, you can specify a "source" to pull the default accounts from:

  • None (do not create distributions for this account)
  • Customer (use Cards>>Sales>>Customer>>Project>>Accounts)
  • Contract (use Cards>>Project>>Contract>>Accounts)
  • Project (use Cards>>Project>>Project>>Accounts)
  • Cost Category (use Cards>>Project>>Cost Category>>Accounts)
  • Trx Owner (depending on transaction type, can pull from Employee, Equipment, Miscellaneous, or Vendor)
  • Specific (enter one specific account to always be used)

So, to take you through an example from a project I am working on: in this case we are working with timesheets. The client expressed the following requirements:
  • The expense account used on timesheets should vary by the cost category, but the project determines which department segment should be used.
  • The offset to the expense (the contra account for cost) is determined by the employee's home department.
  • When the labor is billed, the revenue account should vary by the cost category, but the project determines which department segment should be used.
  • When the labor is billed, the AR account will be different based on the customer.

Sometimes it takes a bit of discussion to distill the requirements down to bullet points like those above. I actually start with a cost category worksheet, which I have clients complete as homework to document the cost categories and accounts they need in project accounting; it helps us think through the requirements. Then, from that worksheet, we can discuss how best to set up the account sourcing. Feel free to post a comment on this blog if you are interested, and I am happy to share the worksheet I use.

So, the outcome we came to for timesheets in the example above:

  • Cost of Goods Sold/Expense - Source: Cost Category
  • Contra Account for Cost - Source: Trx Owner
  • Project Revenue/Sales - Source: Cost Category
  • Accounts Receivable - Source: Customer

Then, at the project level, we will specify a sub-account format (Cards>>Project>>Project>>Sub Account Format Expansion Arrow>>Account Segment Overrides) that includes the department segment value, and we will mark to override the segment on the Cost of Goods Sold/Expense and Project Revenue/Sales accounts. This setting substitutes the department value specified on the project while pulling the remaining account segments from the cost category. So, for example, if a cost category's expense account were 000-5100-00 and the project's department override were 200, the resulting distribution would post to 200-5100-00.

With the sources set as noted above, the client only needs to specify accounts as follows:

  • On the cost category (Cards>>Project>>Cost Category>>Accounts), enter the Cost of Goods Sold/Expense and Project Revenue/Sales accounts.
  • On the employee (Cards>>Payroll>>Employee>>Project>>Accounts), enter a Contra Account for Cost.
  • On the customer (Cards>>Sales>>Customer>>Project>>Accounts), enter an Accounts Receivable account for timesheets.

And, more importantly, the accounts do not have to be set up repeatedly in multiple places. And, because the project determines the department segment, there is no need to set up cost categories for each department, only for each main account needed.

Although the effort involved in reaching this conclusion may tempt you to simply let all accounts default from the cost category, taking the time to set this up as efficiently as possible will minimize posting mistakes and simplify the setup in the long run.

As always, please feel free to share your experiences and perspectives. Happy holidays!

Monday, December 15, 2008

More HyperV Resources

Today I tried in earnest to migrate a Virtual Server 2005 VHD file to HyperV. Things looked good until I realized the networking in the virtual server did not work. That's when I found this article that offers some steps on preparing a virtual server before you move the VHD to HyperV. Of course, I would recommend making a backup of your VHD and VMC before making these changes, just in case.

http://blogs.technet.com/askcore/archive/2008/09/19/migrating-to-hyper-v-from-virtual-server-or-virtual-pc-tips-and-suggestions.aspx


That article led me to this tool, which is apparently a simple means of converting the VMC file for HyperV. Since I only have a few servers to migrate, I don't know that this tool is particularly valuable to me--it only takes a few minutes for me to create a new virtual server and then just attach the VHD. If you have a lot of servers to migrate, the article mentions that System Center Virtual Machine Manager is probably a better option.

http://blogs.technet.com/matthts/archive/2008/09/12/vmc-to-hyper-v-import-tool-available.aspx


And finally, here is a post with an amazing list of HyperV links and resources. I will be looking over these during the next few weeks.

http://blogs.technet.com/virtualworld/archive/2008/11/24/pfe-hyper-v-ops-day-october.aspx

Sunday, December 14, 2008

Transitioning to HyperV

For the last few years, I have used Virtual Server 2005 to host virtual machines for testing different Dynamics GP environments, and for hosting different development environments. Some clients still use SQL 2000 and GP 8, while others are using SQL 2008 with the latest GP 10 service pack. Maintaining those configurations on a single physical machine typically isn't practical, so Virtual Server offered a great way to easily maintain and back up multiple server configurations. Virtual Server uses the same VHD files as Virtual PC, but I found that it was more convenient to use, made it easier to manage VMs, and seemed to be faster than Virtual PC.

I've finally built a new Windows 2008 Server, so I'm going to be transitioning my virtual machines over to HyperV, which is the replacement for Virtual Server 2005. Because of the memory requirements of virtual machines, I've installed the 64-bit (x64) version of Server 2008, which allows me to get past the 3GB / 4GB limit of 32-bit Windows. The desktop machine I used for Server 2008 only supports up to 8GB of RAM, but I figure that should allow me to pretty comfortably host up to 4 virtual servers (typically using 1.5GB of memory each)--more than I need running at any one time.


I'm still learning about HyperV and how it is different from Virtual Server 2005, and in which ways it is the same, so for the time being I'll offer some links to articles that I found helpful.


My focus is not just learning and understanding the HyperV features, but also understanding how to perform equivalent maintenance operations and processes in HyperV to replace the tricks I learned with Virtual Server 2005. For instance, what is the best way to back up a HyperV image? The files created by HyperV are slightly different from those of VS 2005, so I need to learn what to back up, when, and how (i.e., can I write a script to pause the VM and run a backup like I did with VS 2005?). Also, can I migrate an existing VS 2005 VHD file to HyperV (the answer is yes), and if so, what steps do I need to complete to get all of the server features working properly in HyperV? (still working on this one...)


To get started, here are a few articles on HyperV by Janique Carbone on the VirtualizationAdmin.com web site that do a great job of explaining the basics of HyperV and some of the new HyperV features.


HyperV Integration Services: My interpretation is that this is effectively the replacement / upgrade for the old "Virtual Machine Additions" in Virtual Server and Virtual PC. You will definitely want to install these in your VMs (it makes the mouse 'usable', just like VM Additions).

http://www.virtualizationadmin.com/articles-tutorials/microsoft-hyper-v-articles/general/windows-server-2008-hyper-v-integration-services.html



Understanding and Using HyperV Snapshots: I still have a lot of questions about HyperV snapshots, but so far, it seems like an amazing feature that works great.

http://www.virtualizationadmin.com/articles-tutorials/microsoft-hyper-v-articles/general/understanding-microsoft-windows-server-2008-hyper-v-snapshots.html

And here is a nice video she created showing how simple it is to create and use snapshots:

http://www.virtualizationadmin.com/articles-tutorials/videos/microsoft-hyper-v-articles/general/using-hyper-v-snapshots.html


What is pretty wild to me is how HyperV maintains 'branches' of snapshots, so it seems you can revert to a prior snapshot, then take new snapshots to start a new 'branch'. This is a pretty powerful feature, especially considering it only takes seconds to create or apply a snapshot.

But the caveat is that it can produce a lot of permutations of your virtual server, so you'll have to learn how to manage those snapshots effectively so that they stay meaningful and don't pile up into a mess over time.

One small recommendation I have so far for taking snapshots is to try to log out of all sessions on the server before taking the snapshot. If you don't log out, your remote desktop connections will be saved as disconnected sessions that you'll end up having to clear if you ever revert.

Wednesday, December 10, 2008

Exceeding maximum eConnect char field lengths

So today a developer at a customer posed a seemingly simple question: What is the max length of a SOP Order number? 17, correct?

It turns out that the answer is: It depends.

If you manually enter an order into GP, yes, you are limited to 17 characters. But the eConnect documentation says that the field length is 21. And if you check the database tables, the SOPNUMBE field is a char(21). What gives?

So two clarifying questions came up:

1) If you are importing transactions using eConnect, what is the max length for a SOP Order number?

2) And, what happens if you send eConnect a document number that is greater than the max field length?


It turns out that you can send a 21 character SOP Number to eConnect. The number will be faithfully stored in the SOP tables. If you attempt to look up that SOP Order in GP, all 21 characters will display in the lookup window, but only 18 characters will show in the Sales Transaction Entry window. Not 17, not 21, but 18 characters.

And, more interestingly, at least to me, you can send a SOP document number of any length to eConnect. 30 characters, 50 characters, you name it. You will not receive an error, or any indication of a problem. It appears that eConnect silently truncates all char values at the max field length. This appears to apply to any char field, as I tested a 30 character batch number, which was truncated to 15. Maybe this is common knowledge, but since I've never knowingly exceeded a char field length, I really didn't know how eConnect would behave.

So the lesson is to check char field lengths before sending data to eConnect, since silently truncated values could eventually cause problems if you accidentally send values that are too long.
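
A quick way to guard against this is to check the declared column lengths before mapping your source data. Here is a rough sketch (SOP10100 is the SOP transaction header table; adjust the table and column names for whatever you are importing):

-- List the declared max lengths of the char columns you plan to populate
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SOP10100'
  AND COLUMN_NAME IN ('SOPNUMBE', 'BACHNUMB');

Your integration can then validate or truncate source values against those lengths explicitly, rather than relying on eConnect's silent truncation.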

Friday, December 5, 2008

eConnect Performance Benchmarks

Over the last several years, I've used eConnect quite a bit because of its features, capabilities, flexibility, and performance. A few times I've written an integration to replace an Integration Manager import due to the relatively poor performance of IM with large imports. I know that eConnect performance is much better than Integration Manager, but it has been a long time since I benchmarked its speed.

When I first used eConnect, I developed a simple prototype to import a very basic payables voucher. Back then I was using GP 7.5, and developed the integration using VB 6. I recall that my prototype was able to import about 20 transactions per second. Since then, that benchmark has been the general number that I use when speaking with clients who are concerned about the performance of an eConnect integration. Having written integrations that have processed millions of transactions, I have yet to work with a client who really needs to import transactions into GP in real time at 20 per second, let alone any faster, so performance has never really been an issue.

But I recently worked on a data migration project where performance was a concern, simply because of the volume of data to be transferred. It involved migrating nearly 27,000 customers and 58,000 addresses from multiple GP companies into a single GP company. In this particular case, I suppose that DTS or SSIS might have been an option, but because of changes to various IDs and customer classes, it was a lot safer and much easier to use eConnect and get the benefits of its validation and error handling (which were put to good use). The performance concern wasn't that the migration needed to be completed in a matter of minutes--it was that simply testing the migration of that many records takes quite a long time, especially if you have to run the test multiple times.

The database server where I ran the eConnect integration had two Xeon E5345 Quad Core processors running at 2.33 GHz, 4GB of RAM, and was running 32-bit Windows 2003 Server Standard. The data source was a separate SQL server with 5 GP company databases.

For this integration and testing, I didn't perform any database or code optimization to try to speed things up (I didn't have time)--I just wanted to see the performance of a typical integration written without regard for performance.

Anyway, here are the statistics from the totals of my largest data sets.

25,444 customers (with main address) imported in 2,325 seconds = 11 customers per second average

56,520 customer address IDs imported in 1,136 seconds = 50 addresses per second average

46,459 inet info records imported in 1,019 seconds = 46 records per second average


These averages were pretty consistent regardless of the size of the source data set (21 records to 15,000 records).

You can see from the averages that there can be significant variability in speed depending on the transaction, which makes sense. These particular transactions aren't the most meaningful to me, as I would really like to see stats on SOP Orders and PM Invoices, but at least it's one data point to consider.

12/5/08 - UPDATE: One thing I forgot to mention is that while this integration was running on the server, only 1 CPU core indicated any activity, and it generally stayed under 10%. So processing power was definitely not a bottleneck for performance. And given the data sources and size, I'm inclined to completely rule out disk and network as bottlenecks. Given this, I'm not yet sure why the integration didn't run faster--my current assumption is that some elements of my integration application have a lot of latency, or there is something about the eConnect calls that produce latency or a bottleneck. One test I'm thinking of performing is creating a test integration with 'source' data values that are hard-coded, and see how fast that data can be inserted into GP a few hundred times. That should completely eliminate any latency or delays due to reading or accessing the source data.

12/20/08 - UPDATE: I was able to create an integration for AR invoices, and for over 3,000 invoices, the average speed was 14 transactions per second.

3/23/09 - UPDATE: I recently created two different SOP transaction imports.

The first was a one-time load of 13,000 invoices with an average of 2.3 lines per order. The lines were non-inventory items with totals of $0, so there was no inventory impact and no distributions. The import averaged about 1,000 transactions per minute, or 16.7 transactions per second.

The second integration was tested with 3 line items per order, as well as 3 SOP payments per order. That integration imported 500 transactions per minute, or 8.3 transactions per second. The customer needs to import up to 40,000 transactions per day, so this performance should be sufficient.

Beyond the benchmark numbers themselves, one lesson is that performance varies significantly depending on how many elements a given transaction contains.

Wednesday, December 3, 2008

Nifty Nifty Forecaster Input Options

This week finds me in Denver for some client training and our annual year-end event on Friday.

So, just a quick little post on a feature of Forecaster that I think is pretty nifty: the ability to input by all segments. Let's say that your Forecaster installation has the following segments: DEPT (3 characters) and MAIN (4 characters). DEPT is the assignable segment and MAIN is the reversible segment.

In a "typical" input, the user would go to Data>>Input, select an Input Set, and be asked to specify a DEPT. Then the multi-row input will appear with the MAIN accounts listed as rows and periods as columns.

However, as an alternative, let's say that you would like to see both DEPT and MAIN as rows, so that you can display multiple DEPT and MAIN combinations on the same multi-row input. The answer is to use the All Segments feature.

A quick cheat sheet to enable and use the functionality...

1. Setup>>Budgets>>Input Set, select Input Set, select Options tab, mark "Allow All Segments in Lines"
2. Data>>Input: Select Input Set, then for "Rows are" select All Segments
3. In the input window, simply select the MAIN and DEPT values you wish to display on each line OR use the open icon to select segments including a rollup (which will then automatically explode to details on the input window with the individual MAIN and DEPT values).

A nice alternative for administrative users, and for troubleshooting or comparisons.

Happy holidays!
Christina