Wednesday, December 24, 2008
Inline Scalar Function Performance (T-SQL geeks only)
Itzik Ben-Gan recently posted an excellent article about performance issues with inline scalar functions (functions that return only one value and are used within a larger query). He demonstrates the performance of a typical inline scalar function, and then offers a relatively simple alternative approach that dramatically improves performance.
http://www.sqlmag.com/articles/index.cfm?articleid=101104&
In his sample, he demonstrates how this change makes his 14 second query run in less than 1 second. That's a massive performance improvement in exchange for changing just a few keywords, without affecting the structure of your query.
Since I wrote an earlier post about UDFs, I thought that this would be a great follow up. To summarize his post:
1) Add the "RETURNS TABLE" clause to your UDF
2) Change your inline function call to use a SELECT statement against the function
Well worth the minimal effort, even if you aren't querying a million rows.
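Here's a minimal sketch of the pattern, with hypothetical names (dbo.LineDiscount is an invented function, and the table and column are just for illustration, not from Itzik's article):

-- Scalar UDF: evaluated once per row, which is what kills performance
CREATE FUNCTION dbo.LineDiscount (@Amount numeric(19,5))
RETURNS numeric(19,5)
AS
BEGIN
    RETURN @Amount * 0.10
END
GO

-- Inline table-valued alternative: same logic, but with RETURNS TABLE
CREATE FUNCTION dbo.LineDiscountTVF (@Amount numeric(19,5))
RETURNS TABLE
AS
RETURN (SELECT @Amount * 0.10 AS Discount)
GO

-- Old call: SELECT dbo.LineDiscount(DOCAMNT) FROM SOP30200
-- New call: a SELECT statement against the function
SELECT (SELECT Discount FROM dbo.LineDiscountTVF(DOCAMNT)) AS Discount
FROM SOP30200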
Thursday, December 18, 2008
Dynamics GP and SharePoint Resources
SharePoint Edition Comparison, to understand differences between Windows SharePoint Services (WSS) and Microsoft Office SharePoint Server (MOSS), along with other editions:
http://office.microsoft.com/en-us/sharepointtechnology/FX101758691033.aspx
For Dynamics Partners, information on the Dynamics Client for Office. The DCO is a bundled license that combines Microsoft Office SharePoint Server with other Dynamics features. Includes a FAQ, Fact Sheet, and Presentation:
https://mbs.microsoft.com/partnersource/newsevents/news/newsgeneral/microsoftdynamicsclientofficesharepoint.htm?printpage=false
High level information on the Dynamics GP and Microsoft Office SharePoint Server integration points:
http://www.microsoft.com/dynamics/gp/product/sharepointofficeintegration.mspx
For Dynamics Partners, fact sheets on specific integration points between MOSS and Dynamics GP:
- General Fact Sheet on MOSS and Dynamics GP integration points, https://mbs.microsoft.com/partnersource/sales/salestools/productfactsheets/MD_GP10_SP_Server
- For Search, https://mbs.microsoft.com/partnersource/sales/salestools/productfactsheets/SearchGP10
- For Workflow, https://mbs.microsoft.com/partnersource/sales/salestools/productfactsheets/MSDYGP10_WorkflowFS
- For Business Portal, https://mbs.microsoft.com/partnersource/sales/salestools/productfactsheets/MSDYGP10_BusnPortalFS.htm
Project Accounting Default Account Sourcing
In the core modules of Dynamics GP, the defaulting of posting accounts is fairly straightforward. The system looks first to a master record, and if no accounts are specified, it then looks to the posting account setup. So, for example, when entering a payables transaction, the system will first look to the vendor card for default posting accounts and whatever it cannot find there, it will attempt to pull from the posting accounts setup (Tools>>Setup>>Posting>>Posting Accounts).
Project accounting presents users with a more flexible defaulting model, where default locations can be specified by transaction type and by account. Additionally, segment overrides can also be specified at the contract and project levels. This creates tremendous flexibility in terms of how the system can default accounts, and where accounts need to be set up in the first place.
So, to outline the different options...
For each cost transaction type in project accounting:
- Miscellaneous Logs
- Equipment Logs
- Purchases/Materials
- Employee Expenses
- Timesheets
And then for each account type (I only list a few of the basics below for the example that follows):
- Cost of Goods Sold/Expense
- Contra Account for Cost
- Project Revenue/Sales
- Accounts Receivable
For each account type, you can specify a "source" to pull the default accounts from:
- None (do not create distributions for this account)
- Customer (use Cards>>Sales>>Customer>>Project>>Accounts)
- Contract (use Cards>>Project>>Contract>>Accounts)
- Project (use Cards>>Project>>Project>>Accounts)
- Cost Category (use Cards>>Project>>Cost Category>>Accounts)
- Trx Owner (depending on transaction type, can pull from Employee, Equipment, Miscellaneous, or Vendor)
- Specific (enter one specific account to always be used)
So, as an example, suppose a client has the following requirements:
- The expense account used on timesheets should vary by the cost category, but the project determines which department segment should be used.
- The offset to the expense (the contra account for cost) is determined by the employee's home department.
- When the labor is billed, the revenue account should vary by the cost category, but the project determines which department segment should be used.
- When the labor is billed, the AR account will be different based on the customer.
Sometimes it takes a bit of discussion to distill the requirements down to bullet points like those above. I actually start with a cost category worksheet that I have clients complete as homework, documenting the cost categories and accounts they need in project accounting, as a way to help us think through the requirements. Then, from that worksheet, we can discuss how best to set up the account sourcing. Feel free to post a comment on this blog if you are interested, and I am happy to share the worksheet I use.
So, the outcome we came to for timesheets in the example above:
- Cost of Goods Sold/Expense- Source: Cost Category
- Contra Account for Cost- Source: Trx Owner
- Project Revenue/Sales- Source: Cost Category
- Accounts Receivable- Source: Customer
Then, at the project level, we will specify a sub-account format (Cards>>Project>>Project>>Sub Account Format Expansion Arrow>>Account Segment Overrides) that includes the department segment value and we will mark to override the segment on the Cost of Goods Sold/Expense and Project Revenue/Sales account. So this setting will substitute in the department value specified on the project, while pulling the remaining account segments from the cost category.
With the sources set as noted above, the client needs only specify accounts as follows:
- On the cost category (Cards>>Project>>Cost Category>>Accounts), enter the Cost of Goods Sold/Expense and Project Revenue/Sales accounts.
- On the employee (Cards>>Payroll>>Employee>>Project>>Accounts), enter a Contra Account for Costs.
- On the customer (Cards>>Sales>>Customer>>Project>>Accounts), enter an Accounts Receivable account for timesheets.
And, more importantly, the accounts do not have to be set up repeatedly in multiple places. And, because the project determines the department segment, there is no need to set up cost categories for each department, only for each main account needed.
Although the effort involved may tempt you to simply accept the default of sourcing all accounts from the cost category, taking the time to set this up in the most efficient way possible will minimize posting mistakes and simplify the setup in the long run.
As always, please feel free to share your experiences and perspectives. Happy holidays!
Monday, December 15, 2008
More Hyper-V Resources
http://blogs.technet.com/askcore/archive/2008/09/19/migrating-to-hyper-v-from-virtual-server-or-virtual-pc-tips-and-suggestions.aspx
Which led to this tool, which is apparently a simple means for translating the VMC file to Hyper-V. Since I only have a few servers to migrate, I don't know that this tool is particularly valuable to me--it only takes a few minutes for me to create a new virtual server and then just attach the VHD. If you have a lot of servers to migrate, the article mentions that System Center Virtual Machine Manager is probably a better option.
http://blogs.technet.com/matthts/archive/2008/09/12/vmc-to-hyper-v-import-tool-available.aspx
And finally, here is a post with an amazing list of Hyper-V links and resources. I will be looking over these during the next few weeks.
http://blogs.technet.com/virtualworld/archive/2008/11/24/pfe-hyper-v-ops-day-october.aspx
Sunday, December 14, 2008
Transitioning to Hyper-V
I've finally built a new Windows 2008 Server, so I'm going to be transitioning my virtual machines over to Hyper-V, which is the replacement for Virtual Server 2005. Because of the memory requirements of virtual machines, I've installed the 64-bit (x64) version of Server 2008, which allows me to get past the 3GB / 4GB limit of 32-bit Windows. The desktop machine I used for Server 2008 only supports up to 8GB of RAM, but I figure that should allow me to pretty comfortably host up to 4 virtual servers (typically using 1.5GB of memory each)--more than I need running at any one time.
I'm still learning about Hyper-V, how it differs from Virtual Server 2005, and in which ways it is the same, so for the time being I'll offer some links to articles that I found helpful.
My focus is not just learning and understanding the Hyper-V features, but also understanding how to perform equivalent maintenance operations and processes in Hyper-V to replace the tricks I learned with Virtual Server 2005. For instance, what is the best way to back up a Hyper-V image? The files that are created by Hyper-V are slightly different from those in VS 2005, so I need to learn what to back up, when, and how (i.e., can I write a script to pause the VM and run a backup like I did with VS 2005?). Also, can I migrate an existing VS 2005 VHD file to Hyper-V (the answer is yes), and if so, what steps do I need to complete to get all of the server features working properly in Hyper-V? (Still working on this one...)
To get started, here are a few articles on Hyper-V by Janique Carbone on the VirtualizationAdmin.com web site that do a great job of explaining the basics of Hyper-V and some of the new Hyper-V features.
Hyper-V Integration Services: My interpretation is that this is effectively the replacement / upgrade for the old "Virtual Machine Additions" in Virtual Server and Virtual PC. You will definitely want to install these in your VMs (they make the mouse 'usable', just like VM Additions).
http://www.virtualizationadmin.com/articles-tutorials/microsoft-hyper-v-articles/general/windows-server-2008-hyper-v-integration-services.html
Understanding and Using Hyper-V Snapshots: I still have a lot of questions about Hyper-V snapshots, but so far, it seems like an amazing feature that works great.
http://www.virtualizationadmin.com/articles-tutorials/microsoft-hyper-v-articles/general/understanding-microsoft-windows-server-2008-hyper-v-snapshots.html
And here is a nice video she created showing how simple it is to create and use snapshots:
http://www.virtualizationadmin.com/articles-tutorials/videos/microsoft-hyper-v-articles/general/using-hyper-v-snapshots.html
What is pretty wild to me is how Hyper-V maintains 'branches' of snapshots, so it seems you can revert to a prior snapshot, then create new snapshots to form a new 'branch'. This is a pretty powerful feature, especially considering it only takes seconds to create or apply a snapshot.
But the caveat is that it can produce a lot of permutations of your virtual server, so you'll have to learn how to manage those snapshots effectively so that they can be used in a meaningful fashion and don't pile up over time to the point where they make a mess.
One small recommendation I have so far for taking snapshots is to try to log out of all sessions on the server before taking a snapshot. If you don't log out, your remote desktop connections will be saved as disconnected sessions that you'll end up having to clear if you ever revert.
Wednesday, December 10, 2008
Exceeding maximum eConnect char field lengths
A question came up recently: what is the maximum length of a Dynamics GP SOP document number? It turns out that the answer is: it depends.
If you manually enter an order into GP, yes, you are limited to 17 characters. But the eConnect documentation says that the field length is 21. And if you check the database tables, the SOPNUMBE field is a char(21). What gives?
So two clarifying questions came up:
1) If you are importing transactions using eConnect, what is the max length for a SOP Order number?
2) And, what happens if you send eConnect a document number that is greater than the max field length?
It turns out that you can send a 21 character SOP Number to eConnect. The number will be faithfully stored in the SOP tables. If you attempt to look up that SOP Order in GP, all 21 characters will display in the lookup window, but only 18 characters will show in the Sales Transaction Entry window. Not 17, not 21, but 18 characters.
More interestingly, at least to me, you can send a SOP document number of any length to eConnect. 30 characters, 50 characters, you name it. You will not receive an error, or any indication of a problem. It appears that eConnect silently truncates all char values at the max field length. This appears to apply to any char field, as I tested a 30 character batch number, which was truncated to 15. Maybe this is common knowledge, but since I've never knowingly exceeded a char field length, I really didn't know how eConnect would behave.
So the lesson is to make sure to check char field lengths before sending data to eConnect, since silently truncated values could eventually cause problems.
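One way to verify a field length is to check the table definition directly rather than trusting the documentation. A quick sketch (SQL 2005 syntax, run against a company database; I'm using the SOP work header table SOP10100 as the example):

-- Check the defined length of the SOP number field before building the import
SELECT name, max_length
FROM sys.columns
WHERE object_id = OBJECT_ID('SOP10100')
AND name = 'SOPNUMBE'
-- Should return max_length = 21 for the char(21) field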
Friday, December 5, 2008
eConnect Performance Benchmarks
When I first used eConnect, I developed a simple prototype to import a very basic payables voucher. Back then I was using GP 7.5, and developed the integration using VB 6. I recall that my prototype was able to import about 20 transactions per second. Since then, that benchmark has been the general number that I use when speaking with clients who are concerned about the performance of an eConnect integration. Having written integrations that have processed millions of transactions, I have yet to work with a client who really needs to import transactions real-time into GP at 20 per second, let alone any faster, so performance has never really been an issue.
But I recently worked on a data migration project where performance was a concern, simply because of the volume of data to be transferred. It involved migrating nearly 27,000 customers and 58,000 addresses from multiple GP companies into a single GP company. In this particular case, I suppose that DTS or SSIS might have been an option, but because of changes to various IDs and customer classes, it was a lot safer and much easier to use eConnect and get the benefits of its validation and error handling (which were put to good use). The performance concern wasn't that the migration needed to be completed in a matter of minutes--it was that simply testing the migration of that many records takes quite a long time, especially if you have to run the test multiple times.
The database server where I ran the eConnect integration had two Xeon E5345 Quad Core processors running at 2.33 GHz, 4GB of RAM, and was running 32-bit Windows 2003 Server Standard. The data source was a separate SQL server with 5 GP company databases.
For this integration and testing, I didn't perform any database or code optimization to try and speed things up (I didn't have time)--I just wanted to see the performance of a typical integration written without regard for performance.
Anyway, here are the statistics from the totals of my largest data sets.
25,444 customers (with main address) imported in 2,325 seconds = 11 customers per second average
56,520 customer address IDs imported in 1,136 seconds = 50 addresses per second average
46,459 inet info records imported in 1,019 seconds = 46 records per second average
These averages were pretty consistent regardless of the size of the source data set (21 records to 15,000 records).
You can see from the averages that there can be significant variability in speed depending on the transaction, which makes sense. These particular transactions aren't the most meaningful to me, as I would really like to see stats on SOP Orders and PM Invoices, but at least it's one data point to consider.
12/5/08 - UPDATE: One thing I forgot to mention is that while this integration was running on the server, only 1 CPU core indicated any activity, and it generally stayed under 10%. So processing power was definitely not a bottleneck for performance. And given the data sources and size, I'm inclined to completely rule out disk and network as bottlenecks. Given this, I'm not yet sure why the integration didn't run faster--my current assumption is that some elements of my integration application have a lot of latency, or there is something about the eConnect calls that produces latency or a bottleneck. One test I'm thinking of performing is creating a test integration with 'source' data values that are hard-coded, and seeing how fast that data can be inserted into GP a few hundred times. That should completely eliminate any latency or delays due to reading or accessing the source data.
12/20/08 - UPDATE: I was able to create an integration for AR invoices, and for over 3,000 invoices, the average speed was 14 transactions per second.
3/23/09 - UPDATE: I recently created two different SOP transaction imports.
The first was a one-time load of 13,000 invoices with an average of 2.3 lines per order. The lines were non-inventory items with totals of $0, so there was no inventory impact, and there were no distributions. The import averaged about 1,000 transactions per minute, or 16.7 transactions per second.
The second integration was tested with 3 line items per order, as well as 3 SOP payments per order. That integration imported 500 transactions per minute, or 8.3 transactions per second. The customer needs to import up to 40,000 transactions per day, so this performance should be sufficient.
Other than understanding these benchmarks, one lesson is that the performance varies significantly depending on how many elements there are for a given transaction.
Wednesday, December 3, 2008
Nifty Nifty Forecaster Input Options
So, just a quick little post on a feature of Forecaster that I think is pretty nifty. It is the ability to input by all segments. Let's say that your Forecaster installation has the following segments: DEPT (3 characters) and MAIN (4 characters). DEPT is the assignable segment and MAIN is the reversible segment.
In a "typical" input, the user would go to Data>>Input, select an Input Set, and be asked to specify a DEPT. Then the multi-row input will appear with the MAIN accounts listed as rows and periods as columns.
However, as an alternative, what if you would like to see both DEPT and MAIN as rows, allowing you to display multiple DEPT and MAIN accounts on the same multi-row input? The answer is to use the All Segments feature.
A quick cheat sheet to enable and use the functionality...
1. Setup>>Budgets>>Input Set, select Input Set, select Options tab, mark "Allow All Segments in Lines"
2. Data>>Input: Select Input Set, then for "Rows are" select All Segments
3. In the input window, simply select the MAIN and DEPT values you wish to display on each line OR use the open icon to select segments including a rollup (which will then automatically explode to details on the input window with the individual MAIN and DEPT values).
A nice alternative for administrative users, and for troubleshooting or comparisons.
Happy holidays!
Christina
Thursday, November 27, 2008
Problems with Modal GP AddIn Window
I recently built a custom window for Dynamics GP using a Visual Studio Tools AddIn. It worked fine and was pretty straightforward--until I tried to make it a modal window.
When I used .ShowDialog, the window would display, but it would not respond to mouse events (brief video - IE and Flash required).
Clicking on check boxes did nothing, Save and Cancel would not work, I could not move or reposition the window, and even the control box buttons in the upper right corner of the window would not respond to a mouse click. But I could still use the keyboard to tab through the window controls, and could press the ESC key to close the window.
After further testing, it turned out that the modal window worked properly the first time it was displayed. After that, the problem occurred and would not go away until GP was shut down and restarted.
After posting to Steve Gray's excellent Dynamics GP development forum on Visual Studio tools (and seeing that someone else had the exact same issue), he assured me that he uses Modal AddIn windows all of the time, and that it definitely should work. So I tried creating a new GP window form from scratch in my VS project, but had the same issue. The next test that Steve recommended was to create a new project from scratch with just a plain modal window and no additional code.
Unfortunately, before I could try that test, my virtual server bit the dust, and my last full backup of the VHD was from August (ya, I know, I know, lesson learned). So after restoring my August backup, I saw that I didn't even have the VS Tools SDK installed. So I installed it, along with SDK 10 SP3, and tried to create a new test project.
Well, the problem no longer existed--now my modal AddIn windows work fine. So unfortunately I don't have a specific resolution other than to let you know that you aren't going crazy if you run into this issue, and just to recommend installing the latest Visual Studio Tools SDK service pack and see if that does the trick. If it does not, then it would seem to be something related to the development machine that is causing the issue.
Wednesday, November 26, 2008
Checking eConnect Version Number Programmatically
Based on past experiences with eConnect version mismatches, I figured it might be interesting to add a routine to my integrations to verify that the proper version of eConnect is installed before the integration attempts to run.
So today I decided to trace the "eConnect Release Information" utility to see how it was checking the eConnect version.
Turns out that it is just calling a single stored procedure:
exec DYNAMICS..taeConnectVersionInfoDYNAMICS
This returns a result set with 3 fields, with a row for each company:
Database Name
Version
CompanyName
I have verified that this procedure is available with eConnect 9 and 10, and from what I have read, may go back to version 7 / 7.5.
One good thing about the stored procedure is that it returns a "friendly" version number, like 9.0.3.0, 10.0.2.0, etc.
The stored procedure is encrypted, and I haven't bothered to buy a decryption app, so I can't tell exactly what it is doing, but I do know that it is iterating through each company database and checking something in the database.
The reason I know this, and the unintended consequence of this design, is that if there are any company databases that are offline, or otherwise inaccessible, the version check fails completely and does not return any results. I have a client with over a dozen company databases, several of them offline, so I'm unable to use the Release Information utility to check their eConnect version. Not very graceful.
And since this is only a stored procedure and is returning a version per company database, I assume that it is not checking the versions of the eConnect .NET DLL files.
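As a side note, if the Release Information utility comes back empty, a quick way to identify the offending databases is to check their state (SQL 2005 syntax):

-- Any database that is not ONLINE will break the company-by-company version check
SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE'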
If you want to check the DLL versions, you have a few options, but none are terribly appealing for programmatic checking.
1) The Knowledge Base recommends using Control Panel -> Add/Remove -> Select eConnect -> Click on Support Information link.
2) If you are adventurous, that same Add/Remove information can be found in the registry. Though it appears the GUID is the same across eConnect 10 releases and service packs (3C90E5C5C8D49BD42A1B7F8FF2B481C4), I suppose it could change at some point.
3) You can write code to locate the eConnect DLL and use FileVersionInfo to get the version number via .NET. The challenge is searching for the file on multiple drives and directories on a server.
And there are probably a few other creative ways I don't know about, like somehow checking the version in the .NET GAC.
The annoying part of checking the file versions is that it returns the typical useless GP version numbers like 10.00.0836.00 and 10.00.1072.00. Though documentation of such numbers is available for GP service packs, I haven't found such a list for eConnect service pack version numbers. If you know which version you want, you can just check to see if it matches, but I would rather be able to have a simple dialog that says "eConnect 10 SP2 is installed, but SP3 is required".
For now I think I'll try the stored procedure approach.
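To do that programmatically, the stored procedure's result set can be captured in a temp table and the version strings compared in code. A sketch, using the three columns described above (the char widths are my guesses):

CREATE TABLE #eConnectVersion ([Database Name] char(10), [Version] char(20), [CompanyName] char(65))

INSERT INTO #eConnectVersion
EXEC DYNAMICS..taeConnectVersionInfoDYNAMICS

-- Flag any company that is not on the expected release
SELECT * FROM #eConnectVersion WHERE RTRIM([Version]) <> '10.0.2.0'

DROP TABLE #eConnectVersion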
Tuesday, November 18, 2008
Microsoft Dynamics Sure Step- Practice and Opinion
The first day of the class focuses on the soft skills that are essential in each phase of an implementation. As I am sure many of you can appreciate, there were a ton of fabulous conversations during class, on breaks, after class. I attended the first day, and really enjoyed the opportunity to hear a variety of perspectives on project management related to Dynamics implementations. I think that project managers are often a lonely bunch, and a class like this is exactly what many will need to gain the confidence and focus to take their skills and processes to the next level. Usually I am exhausted after a day in class, but I will admit that this class left me energized with new ideas that I want to share with our internal project manager. I actually sent my boss an email before lunch that day saying that our project manager MUST attend the class!
But what I really wanted to write about is the two-day class that focused on Microsoft Dynamics Sure Step. First, what is Sure Step? Sure Step is a toolkit that follows a methodology that Microsoft developed for Dynamics implementations. It contains a variety of templates focused on CRM, AX, GP, SL, and NAV implementations. Currently, there is a bit more available for CRM and AX, but more templates are forthcoming and there are already a variety of generic templates that can be adapted for each product.
To download Sure Step, you must be on a partner service plan with access to PartnerSource:
https://mbs.microsoft.com/partnersource/partneressentials/serviceplans/surestep/
One of the things that I found most beneficial about the class is that it follows a case study model. So instead of focusing on learning Sure Step in a vacuum, it presents a very practical way to understand the methodology and see how it can be applied to your organization. In the past, I know that a common issue for many partners was exactly how to begin to use Sure Step. Where to begin? Because Sure Step is scalable to the largest of implementations, partners with smaller implementations can feel overwhelmed by the formality of the phases and abundance of documentation. But all of those options and approaches are just different suggestions of how you might approach an implementation. A great takeaway from the class, which I think many of the students last week would agree with, is that Sure Step is a toolkit that can be adapted to a variety of implementation sizes by picking, choosing, and adapting it to your needs.
From the case study, students work with documents like a Fit/Gap Analysis and Solution Blueprint, Functional Requirements Document, and a Statement of Work. Students see how all of the documents tie together, and how you might take one or two or all of them and apply them practically to an implementation. We had many side discussions in class concerning how Sure Step might scale up or down: for a smaller implementation you might work with one simple document that evolves to represent the requirements while a larger implementation may call for a full cycle of deliverables with a formal approval process and tollgate reviews.
So, where to begin? Well, for those organizations lacking a formal project management approach, Sure Step can provide an understanding of the formal phases and their purposes, including the customer and partner roles who participate and the associated deliverables. For others who already have a formal methodology in place, Sure Step can serve to reinforce what you are doing and fill in any gaps. And for those that are somewhere in the middle, the templates available in Sure Step can be used standalone to begin (or further) the development of your own processes.
Sure Step provides project paths for Standard, Enterprise, Rapid, and Upgrade projects, giving you different approaches based on the scale of the project. Additionally, in the Diagnostic Phase, Sure Step provides a guidebook for a series of Decision Accelerators that can assist customers with the decision to purchase, and provide the desired due diligence for the customer to feel comfortable with the decision. Partners can pick and choose from these, using the ones that are appropriate for the case at hand and best address the customer’s concerns.
I am writing this on a plane to Portland, Oregon. With my project plan for an upgrade in my bag. A little documentation and formal process never hurts :) Which makes me think that a future blog topic might be “Why even have a methodology?” For those of you that know me personally, you know I could talk on that for days!
Please share any questions, comments, or perspectives on Sure Step. I would also love to hear of successes (or failures) deploying Sure Step in your organization.
Tuesday, November 4, 2008
SQL User Defined Functions
Have you ever written a SQL query only to come across a relatively small requirement that suddenly makes the query more complex than you want?
In the last few months, it's happened to me twice, but fortunately I was able to utilize a User Defined Function (UDF) to meet the requirement while simplifying my query.
The first project involved exporting data from GP. It had a requirement to split the GP SOP Order number into two fields--the alpha prefix in one field, and the numeric suffix in the other. Well, because there were different SOP order types involved, the alpha prefixes had different lengths, as did the numeric suffixes, and to top it off, some of the order numbers had a dash separating the prefix and suffix, while others did not. Examples:
ORD0000095
WAR-1025
SP001024
CONS-2933
So, how do you split a field value 'in half' when you don't know where one half ends and the other begins? Unfortunately, SQL 2005 still does not have native Regular Expression capabilities. There are some functions that provide some limited character searches (PATINDEX, CHARINDEX), but they don't allow you to search for a pattern or range like RegEx.
After doing some research, I found several references to using a .NET CLR assembly to make Regular Expressions available within SQL statements. While this approach was very appealing, it was more complicated than I wanted, as it would require a change to SQL Server security to allow assemblies to run, and would introduce CLR into the mix, something that is not typically used in GP environments.
Since I was responsible for assisting with deployment of the solution in a remote client environment, I needed something simple and self-contained in standard SQL--the fewer pieces, the better. So I began to consider the possibility of using a cursor to loop through each character of the order number to determine where the prefix ended and the suffix began. I'm very reluctant to use cursors in SQL unless absolutely necessary because they typically can't perform nearly as well as a set-based solution, but given my constraints, it seemed like my only option.
After checking with a few colleagues about my cursor approach, I received a recommendation to package up the prefix / suffix routine into a SQL User Defined Function. So here is where I admit that in the 10 years that I've been working with SQL Server, I have never seen the need to use a User Defined Function. Most business data that I've worked with has been pretty straightforward, with multiplication, division, and basic string functions being as complex as things would get. I had heard of User Defined Functions, but didn't realize how simple they really were to create and use--which probably contributed to me never "seeing the need" to use them!
As SQL Server Books Online (BOL) states under "User-defined Function Basics":
The benefits of using user-defined functions in SQL Server are:
--They allow modular programming
--They allow faster execution
--They can reduce network traffic
It goes on to point out that UDFs can be written in T-SQL or a .NET language.
So after reading a little more about UDFs, I built my function. At first, I was going to have the function return the index number indicating where the numeric portion of the order number started. But after realizing how convenient the function would be, I decided to write two functions: one to return the prefix, and the other to return the suffix. (Code listed below)
Once I created the functions, using them was a snap:
SELECT DYNAMICS.dbo.GetOrderPrefix(RTRIM(ORIGNUMB)) FROM SOP30200
This approach made my already complex SQL query much simpler and cleaner than if I had tried to code it all in one query.
Like most technical solutions, there are often many approaches to solving the problem. You weigh the requirements against the platform, environment, tools, performance, constraints, your knowledge and skills, budgets, and time. The lesson I'm trying to convey is not the specific solution or function that I implemented, but rather the opportunity that User Defined Functions present: the potential to reduce the complexity of your SQL queries, views, or stored procedures.
One final note. Although I chose not to use a CLR-based UDF (I would have liked to!), I would strongly recommend looking into them if it is an option in your environment. They offer the tremendous flexibility and power of the .NET CLR, and based on my research, they are extremely fast for complex computations or logic.
Here is the GetOrderPrefix prefix function:
USE [DYNAMICS]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[GetOrderPrefix]') AND type IN (N'FN', N'IF', N'TF', N'FS', N'FT'))
BEGIN
    DROP FUNCTION [dbo].[GetOrderPrefix]
END
GO

CREATE FUNCTION [dbo].[GetOrderPrefix] (@SOPNumber varchar(40))
RETURNS varchar(20)
AS
BEGIN
    DECLARE @NumStart int
    SET @NumStart = 0

    IF LEN(RTRIM(@SOPNumber)) > 0
    BEGIN
        DECLARE @Position int
        SET @Position = 0

        --If the doc number has a dash, locate the prefix relative to the dash
        IF CHARINDEX('-', @SOPNumber) > 0
        BEGIN
            --Locate the dash in the document number
            SELECT @Position = CHARINDEX('-', @SOPNumber)
        END
        ELSE
        BEGIN
            --If no dash was found, loop through the initial characters of the
            --doc number to identify the end of the alpha prefix
            SET @Position = 2
            WHILE @NumStart = 0
            BEGIN
                IF ISNUMERIC(SUBSTRING(@SOPNumber, @Position, 1)) = 1
                BEGIN
                    SET @NumStart = @Position
                END
                ELSE
                BEGIN
                    SET @Position = @Position + 1
                END
            END
        END

        --Return the alpha prefix
        RETURN SUBSTRING(@SOPNumber, 1, @Position - 1)
    END

    --If an empty string was passed in, return one back
    RETURN ''
END
GO
And GetOrderSuffix:
USE [DYNAMICS]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[GetOrderSuffix]') AND type IN (N'FN', N'IF', N'TF', N'FS', N'FT'))
BEGIN
    DROP FUNCTION [dbo].[GetOrderSuffix]
END
GO

CREATE FUNCTION [dbo].[GetOrderSuffix] (@SOPNumber varchar(40))
RETURNS varchar(20)
AS
BEGIN
    DECLARE @NumStart int
    SET @NumStart = 0

    IF LEN(RTRIM(@SOPNumber)) > 0
    BEGIN
        DECLARE @Position int
        SET @Position = 0

        --If the doc number has a dash, locate the suffix relative to the dash
        IF CHARINDEX('-', @SOPNumber) > 0
        BEGIN
            --Locate the dash in the document number, add 1 for the starting point of the numeric suffix
            SELECT @Position = CHARINDEX('-', @SOPNumber) + 1
        END
        ELSE
        BEGIN
            --If no dash was found, loop through the initial characters of the
            --doc number to identify the start of the numeric suffix
            SET @Position = 2
            WHILE @NumStart = 0
            BEGIN
                IF ISNUMERIC(SUBSTRING(@SOPNumber, @Position, 1)) = 1
                BEGIN
                    SET @NumStart = @Position
                END
                ELSE
                BEGIN
                    SET @Position = @Position + 1
                END
            END
        END

        --Return the numeric suffix
        RETURN SUBSTRING(@SOPNumber, @Position, LEN(RTRIM(@SOPNumber)) - @Position + 1)
    END

    --If an empty string was passed in, return one back
    RETURN ''
END
GO
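As a quick sanity check, you can run both functions against the sample order numbers from earlier in this post; the expected results are shown in the comments:

SELECT DYNAMICS.dbo.GetOrderPrefix('ORD0000095'), DYNAMICS.dbo.GetOrderSuffix('ORD0000095') --ORD, 0000095
SELECT DYNAMICS.dbo.GetOrderPrefix('WAR-1025'), DYNAMICS.dbo.GetOrderSuffix('WAR-1025') --WAR, 1025
SELECT DYNAMICS.dbo.GetOrderPrefix('SP001024'), DYNAMICS.dbo.GetOrderSuffix('SP001024') --SP, 001024
SELECT DYNAMICS.dbo.GetOrderPrefix('CONS-2933'), DYNAMICS.dbo.GetOrderSuffix('CONS-2933') --CONS, 2933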
Tuesday, October 21, 2008
What makes excellent training?
1. Training content must meet the needs of the students
Obvious, right? Sometimes this is out of your control, especially with non-custom classes like the partner academy. Sometimes students do not review the syllabus or enroll based on an assumption of what the class will cover. But the instructor also bears a burden to deliver the content in a relevant way, including real-life examples so that students can see how the content applies to their situations. Students can also contribute to this process, through their own experiences and ideas....which leads us in to my next point...
2. Students must learn from the instructor, but also from each other
I am only half joking on the first day of class when I emphasize the importance of students asking questions. Talking all day long can get pretty boring for me, and for the students. Only when students start to ask questions, and share experiences, does the class really begin to take off. Of course, a knowledgeable instructor is key to providing the forum and managing the exchanges; but I think the light bulbs really start to go off when students start explaining and relating concepts to each other. Here I go again, leading in to the next point...
3. Practical application of the knowledge is key (in the classroom, in discussions)
We can talk all day about what a particular field means, or how a particular process works in Dynamics GP. But it is the knowledge of how to apply those facts to real life situations that is critical. The ability to apply knowledge extends the value and life of the training beyond the classroom. How can instructors contribute to this application? Maybe by sharing their own practical experiences, by bringing more than just the factual knowledge to the classroom, and by encouraging students to share their own practical experiences. I encourage this even beyond the classroom, asking students (particularly those new to GP) to continue to share their experiences with me after the class ends. I love getting the emails from students who have successfully completed implementations, or tackled a tough business process and found a creative solution.
4. Students and instructor must be engaged
Hmmm. Where to begin with this one. As an instructor, I have witnessed a variety of types of disengagement. The "I have too much work at my real job" disengagement, the "I was forced to come to training" disengagement, or even the "I am in denial" disengagement. I can completely identify with each type, and try to be proactive in the early hours of class to engage those individuals through gentle coercion to bring them in to the discussion. On the flip side, as a student, I have seen instructors who seem to want to be somewhere else. Or who are not engaged to the degree that they are unwilling to stray far from the printed word. As you probably suspect from my earlier points, I think that the most beneficial classes are those where all parties jump in feet first and contribute fully. This means that instructors are willing to entertain new ideas and approaches, and students are focused on the content being presented. Students and instructor both think critically about the task at hand, bringing their full mind to the classroom.
As a side note, I think a subset of the last point is that the instructor must be knowledgeable about the subject at hand. A bit basic, right? But, I think that is an aspect of being engaged. A fully engaged instructor is well-prepared for the class being taught and has the practical knowledge to provide a well-rounded learning experience.
So...what do you all think? What makes excellent training? What makes an excellent student? How about an excellent instructor? Share your thoughts!
Saturday, October 11, 2008
Multi-Company SmartLists in GP - Part 2
Before we proceed, let me provide the answer to the Bonus Quiz from Part 1. So what step did I omit that would prevent you from adding a custom view to a new SmartList in SmartList Builder? Well, in SQL Server databases, there are security settings that determine which database objects users can access, such as tables, views, and stored procedures. After you create any custom object in a Dynamics GP database, you need to make sure to grant access to the object so that Dynamics GP users can access it. Fortunately this is relatively simple. For convenience, I highly recommend using a script to grant permissions, as it is simpler and much faster than trying to use SQL Server Management Studio to change permissions on a single object. It can certainly be done with Management Studio, but after waiting for several minutes as the windows refresh, you'll understand why I prefer scripts.
In this case, we created a new view called "csvwAllVendors".
(Aside #1: I borrowed this naming convention from Lorren Zemke and his colleagues at WennSoft. It means "cs" = Custom, "vw" = View. cstb = custom table, cssp = custom stored procedure. The convenience of using this naming convention consistently is that you always know that you can find your custom objects in Dynamics databases by searching for "cs".)
(Aside #2: Because this view queries data from multiple databases, in which database should it be created? My recommendation would be the Dynamics database, since all GP users have access to that database, and because it can serve as a central repository for all multi-company views.)
To grant access to this new view to all Dynamics GP users, you can run the following script:
GRANT SELECT ON DYNAMICS..csvwAllVendors TO DYNGRP
That's it! The view should now show up in the SmartList Builder.
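As a side benefit of the "cs" naming convention from Aside #1, you can generate the GRANT statements for all of your custom views in one pass and then run the output (a sketch, using the SQL 2005 sys.objects view):

SELECT 'GRANT SELECT ON ' + name + ' TO DYNGRP'
FROM sys.objects
WHERE type = 'V' AND name LIKE 'csvw%'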
Okay, so now that the quiz is settled, let's get back to our original requirements:
1. Display data from multiple GP databases in a single SmartList accessible from any GP company
2. Display the name of the database where the data is stored
3. When new GP company databases are created, automatically include data from the new databases
4. Allow some companies to be excluded from the multi-company SmartLists
In Part 1, I discussed how to fundamentally display data from multiple GP databases, and I also explained how to display the name of the company for each record.
But how can we automatically include data from a new GP company database in our query? Since the multi-company view is hard-coded with the name of each database, how can we "automatically" have new companies included in the query?
This requires a little bit of creative VB scripting. (There are certainly other approaches, but this is what came to my mind first and seemed easiest.) Instead of thinking of the view as a hard-coded query of specific company databases, let's rewrite it as a generic query for any company database:
SELECT '^' AS Company, * FROM ~..PM00200
In this case, the caret '^' serves as our placeholder for the company name, and the tilde '~' is the placeholder for our physical database name. Now imagine if we could have some way of substituting these two characters with the names of every company database.
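For example, if the database CA001 has the company name Los Angeles, the substitution would produce:

SELECT 'Los Angeles' AS Company, * FROM CA001..PM00200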
First we would have to have a list of every company database. Of course, Dynamics GP has such a list:
SELECT INTERID, CMPNYNAM FROM DYNAMICS..SY01500
Great, so we now have a list of the company databases and company names. But we still need to combine that list and our generic query so that we have a multi-company view. With a few lines of VBA, we have such a solution that will perform a search and replace on our query template:
Dim GPConn As New ADODB.Connection
Dim GPCommand As New ADODB.Command
Dim rsCompanies As New ADODB.Recordset
Dim strSQL As String
Dim strQuery As String
Dim strView As String
Dim intCompany As Integer

Set GPConn = UserInfoGet.CreateADOConnection
GPConn.CursorLocation = adUseClient
'Set DYNAMICS as the default DB
GPConn.DefaultDatabase = "DYNAMICS"
GPCommand.ActiveConnection = GPConn

'Get a list of companies
strSQL = "SELECT INTERID, CMPNYNAM FROM DYNAMICS..SY01500"
GPCommand.CommandText = strSQL
Set rsCompanies = GPCommand.Execute

intCompany = 0
strView = "CREATE VIEW csvwAllVendors AS "

'Loop through each company record and build the SQL for the view
While Not rsCompanies.EOF
    intCompany = intCompany + 1
    'Replace the ^ with the full company name
    'Replace the ~ with the database name
    strQuery = Replace(Replace("SELECT '^' AS Company, * FROM ~..PM00200", "^", Trim(rsCompanies.Fields("CMPNYNAM").Value)), "~", Trim(rsCompanies.Fields("INTERID").Value))
    If intCompany = 1 Then
        strView = strView & vbCrLf & strQuery
    Else
        strView = strView & vbCrLf & "UNION" & vbCrLf & strQuery
    End If
    rsCompanies.MoveNext
Wend

'Execute the CREATE VIEW statement
GPCommand.CommandText = strView
GPCommand.Execute

'Grant permissions to DYNGRP
strSQL = "GRANT SELECT ON csvwAllVendors TO DYNGRP"
GPCommand.CommandText = strSQL
GPCommand.Execute

MsgBox "csvwAllVendors was created!"
(NOTE: This code sample is for GP 10, as it uses the UserInfoGet object to connect to the GP database, which means that it must be run from within GP 10 as VBA. For GP 9, you will need to use the RetrieveGlobals9.dll, which gives you the freedom to run the script from GP, or as a VBS file outside of GP--although a user must be logged in to GP on the workstation when the VBS file is run.)
If you just use Modifier to create a new button on a window, like on the Company Setup window, you can paste this code into a button event, and it will create the multi-company view.
But what is missing from the script? It works the first time it is run, but what problem will you have on the second attempt? Well, on the first run, it creates the view. The second time it is run, the view will already exist, and the CREATE VIEW statement will fail with an error. So we have to enhance it to detect whether the custom view already exists. Before our code begins to build the view SQL statement, we need to add something like the following:
'*********************************************************
'Check to see if the view already exists
'SQL Server 2000 - Use older SQL 2000 sysobjects table
'GPCommand.CommandText = "SELECT COUNT(*) AS Records FROM sysobjects WHERE type = 'V' AND name = 'csvwAllVendors'"
'SQL Server 2005 - Use SQL 2005 sys.objects system view
GPCommand.CommandText = "SELECT COUNT(*) AS Records FROM sys.objects WHERE type = 'V' AND name = 'csvwAllVendors'"
Set rsViewExists = GPCommand.Execute
If rsViewExists.Fields("Records").Value = 0 Then
    strView = "CREATE VIEW "
    blnGrant = True
Else
    strView = "ALTER VIEW "
    blnGrant = False
End If
rsViewExists.Close
'*********************************************************
So now our script can create the view, or alter it if it already exists!
Okay, so in concept we now have the ability to "automatically" update the view when a new company database is added. But what about excluding certain companies from the multi-company SmartLists? How about we use one of the User Defined fields on the Company Setup window to indicate whether to include the company in the custom view? If we set User Defined 1 on the Company Setup window to have a value of "MULTICOMPANY VIEWS", we can adjust our company query to be:
SELECT INTERID, CMPNYNAM FROM DYNAMICS..SY01500 WHERE UDCOSTR1 = 'MULTICOMPANY VIEWS'
Any value in this field would work, but something obvious and explanatory helps avoid confusion. Using a UDF field makes it easy for a GP user to specify which companies are included in the views. So if we combine all of this together, we get a script like:
Option Explicit

Private Sub RefreshViews_AfterUserChanged()

    Dim GPConn As New ADODB.Connection
    Dim GPCommand As New ADODB.Command
    Dim rsCompanies As New ADODB.Recordset
    Dim rsViewExists As New ADODB.Recordset
    Dim strQuery As String
    Dim intCompany As Integer
    Dim strView As String
    Dim blnGrant As Boolean

    Set GPConn = UserInfoGet.CreateADOConnection
    GPConn.CursorLocation = adUseClient
    'Set DYNAMICS as the default DB
    GPConn.DefaultDatabase = "DYNAMICS"
    GPCommand.ActiveConnection = GPConn

    'Get a list of companies
    strQuery = "SELECT INTERID, CMPNYNAM FROM DYNAMICS..SY01500 WHERE UDCOSTR1 = 'MULTICOMPANY VIEWS'"
    GPCommand.CommandText = strQuery
    Set rsCompanies = GPCommand.Execute

    intCompany = 0

    '*********************************************************
    'Check to see if the view already exists
    'SQL Server 2000 - Use older SQL 2000 sysobjects table
    'GPCommand.CommandText = "SELECT COUNT(*) AS Records FROM sysobjects WHERE type = 'V' AND name = 'csvwAllVendors'"
    'SQL Server 2005 - Use SQL 2005 sys.objects system view
    GPCommand.CommandText = "SELECT COUNT(*) AS Records FROM sys.objects WHERE type = 'V' AND name = 'csvwAllVendors'"
    Set rsViewExists = GPCommand.Execute
    If rsViewExists.Fields("Records").Value = 0 Then
        strView = "CREATE VIEW"
        blnGrant = True
    Else
        strView = "ALTER VIEW"
        blnGrant = False
    End If
    rsViewExists.Close
    '*********************************************************

    strView = strView & " csvwAllVendors AS "

    'Loop through each company record and build the SQL for the view
    While Not rsCompanies.EOF
        intCompany = intCompany + 1
        'Replace the ^ with the full company name
        'Replace the ~ with the database name
        strQuery = Replace(Replace("SELECT '^' AS Company, * FROM ~..PM00200", "^", Trim(rsCompanies.Fields("CMPNYNAM").Value)), "~", Trim(rsCompanies.Fields("INTERID").Value))
        If intCompany = 1 Then
            strView = strView & vbCrLf & strQuery
        Else
            strView = strView & vbCrLf & "UNION" & vbCrLf & strQuery
        End If
        rsCompanies.MoveNext
    Wend

    'Execute the CREATE VIEW statement
    GPCommand.CommandText = strView
    GPCommand.Execute

    If blnGrant Then
        'Grant permissions to DYNGRP
        strQuery = "GRANT SELECT ON csvwAllVendors TO DYNGRP"
        GPCommand.CommandText = strQuery
        GPCommand.Execute
        MsgBox "csvwAllVendors was created!"
    Else
        MsgBox "csvwAllVendors was updated!"
    End If

End Sub
So we've made a lot of progress! But...thinking about the bigger picture, what's wrong with this approach? It works for our custom view for vendors, but what if the company wants to have 20 different multi-company SmartLists? We certainly don't want to put 20 buttons on the Company Setup window, and we probably don't want to hard-code all 20 queries into our code.
How could we use the design we have developed so far to accommodate an arbitrary number of custom multi-company views?
We'll discuss that in Part 3!
Any questions about what has been covered so far?
Tuesday, October 7, 2008
Multi-Company SmartLists in GP - Part 1
This series of posts will show you how to use SmartList Builder and some custom scripts to view data from multiple companies in a single SmartList. It will cover multi-database views, custom SmartLists, SmartList security, and even automatically updating the multi-company views when a new company database is added.
This solution is probably most valuable to corporations that have many GP company databases, but have a centralized accounting department that enters transactions for all companies.
Ultimately, this is a technical solution that will be fairly straightforward for individuals that are comfortable writing moderately complex SQL queries and are comfortable working with Visual Basic Script. I’ve tried to explain each step so that you don’t have to be a SQL guru or a full programmer to implement this solution, but it does discuss several concepts that will be more familiar to a person with a more technical skill set.
Scenario
Suppose Consolidated Property Corporation, a rapidly growing property management firm, has 12 different property management subsidiaries across the US. Each subsidiary has a separate company database in GP, but all accounting is processed at their corporate headquarters in Denver, Colorado. Consolidated Property expects to add 10 new subsidiaries over the next year, and will therefore be adding 10 more GP company databases. 2 of the current 12 companies are leasing firms, providing different services than the other 10 companies.
Consolidated Property Corporation has national contracts with some vendors for building maintenance services, such as commercial roofing, air conditioning repair, parking lot maintenance, window washing, and office cleaning. They also have customers that hire Consolidated to manage multiple properties in multiple states, such as national commercial property firms.
Consolidated wants to be able to view data from their 10 main property management subsidiaries in a single SmartList. They do not want to include data from their 2 leasing firms in the SmartList.
Consolidated offers a story to explain the solution they are looking for:
Susan, the AP clerk at the corporate office receives a call from Crystal Window Washers inquiring about an outstanding invoice. Because all invoices are mailed to the corporate offices, the Crystal Window Washers collections agent does not know which Consolidated property the invoice was for—he only knows the invoice number and amount. Since Susan does not know which GP company to log into to look for the invoice, she would normally have to login to each GP company to try and find the vendor and then the invoice.
Using a multi-company Vendor SmartList, Susan would be able to search for Crystal Window Washers and see that they are set up as a vendor in only 2 different GP databases—San Diego and Los Angeles. She could then use the multi-company AP Invoice SmartList to search by invoice number and see that the invoice was entered into the San Diego company. She could then log in to the San Diego company and research the transaction.
Requirements
In the scenario I’ve outlined, the custom SmartLists need to provide the following:
1. Display data from multiple GP databases in a single SmartList accessible from any GP company
2. Display the name of the database where the data is stored
3. When new GP company databases are created, automatically include data from the new databases
4. Allow some companies to be excluded from the multi-company SmartLists
Solution
This solution will require several different elements to meet the requirements. I’ll discuss the first element of the solution in this post, and will discuss the other elements in subsequent posts in this series.
Multi-company queries
The core requirement is to display data from multiple GP databases in a single SmartList. With SQL Server and Dynamics GP, this is fairly easy to do. It all rests on the "UNION" statement.
Let's start with vendors, since that is an easy query. If you were to query vendors in GP, you would launch SQL Server Management Studio, connect to the TWO database, and write a query like:
SELECT * FROM PM00200
This would list all vendors in the TWO database. But knowing that we need data from multiple companies, let’s be more explicit with our query syntax:
SELECT * FROM TWO.dbo.PM00200
Since I don’t like to type any more than I have to, this query can also be written as:
SELECT * FROM TWO..PM00200
So this gives us a list of all vendors in TWO. If we wanted all vendors from the Los Angeles database (CA001), we would write:
SELECT * FROM CA001..PM00200
And San Diego and Denver:
SELECT * FROM CA004..PM00200
SELECT * FROM CO001..PM00200
So now we have separate lists of vendors from each company:
SELECT * FROM CA001..PM00200
SELECT * FROM CA004..PM00200
SELECT * FROM CO001..PM00200
But, we need to get them to display together in a single query, so naturally, we use the UNION statement, which neatly combines all of the results from our 3 different queries into a single result set:
SELECT * FROM CA001..PM00200
UNION
SELECT * FROM CA004..PM00200
UNION
SELECT * FROM CO001..PM00200
Great! But what’s wrong with this new query that combines the list of vendors from all 3 companies? How can you tell which company any given vendor is from? To solve this, let’s add a company field:
SELECT 'Los Angeles' AS Company, * FROM CA001..PM00200
UNION
SELECT 'San Diego' AS Company, * FROM CA004..PM00200
UNION
SELECT 'Denver' AS Company, * FROM CO001..PM00200
Much better! Now we can see which database every vendor record came from.
I think you can see where this is going—just create similar queries for the 7 other GP companies, and you now have a single list of vendors for all 10 of the property management companies.
Pretty simple, right? So now that you have this query, how do you turn it into a SmartList? The easiest way that I’ve found is to turn the query into a custom “view” in SQL Server. This is quite simple:
CREATE VIEW csvwAllVendors AS
SELECT 'Los Angeles' AS Company, * FROM CA001..PM00200
UNION
SELECT 'San Diego' AS Company, * FROM CA004..PM00200
UNION
SELECT 'Denver' AS Company, * FROM CO001..PM00200
GO
Now, instead of having a large, complex looking query, we can simply query:
SELECT * FROM csvwAllVendors
Now that you have created a view, you can use SmartList Builder to create a new SmartList that points to your custom view, just as if it were a database table.
Bonus Quiz: For you technical database folks out there, what step have I omitted near the end? Hint: It will prevent you from adding the custom view to a new SmartList in SmartList Builder.
I’ll share the answer in Part 2!
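One more aside, this time on requirement 3. Rather than editing the view by hand every time a new GP company database is created, the UNION ALL text can be generated from the company master table (SY01500) in the DYNAMICS database, which stores the database ID (INTERID) and company name (CMPNYNAM) for every company in the installation. Here is a minimal sketch of the idea; the csExcludedCompanies table is a hypothetical one-column list of INTERID values to leave out (requirement 4), not a standard GP table, and this is not necessarily the approach the rest of this series will take:
DECLARE @sql VARCHAR(MAX)
-- Build one SELECT per company database, separated by UNION ALL
-- (company names containing apostrophes would need their quotes doubled)
SELECT @sql = ISNULL(@sql + ' UNION ALL ', '')
    + 'SELECT ''' + RTRIM(CMPNYNAM) + ''' AS Company, * FROM '
    + RTRIM(INTERID) + '..PM00200'
FROM DYNAMICS..SY01500
WHERE INTERID NOT IN (SELECT INTERID FROM csExcludedCompanies)
-- Rebuild the existing view with the freshly generated query
SET @sql = 'ALTER VIEW csvwAllVendors AS ' + @sql
EXEC (@sql)
Run this after each new company is created (or schedule it as a SQL job), and the view stays current without any manual edits.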
Wednesday, October 1, 2008
Project Accounting Ramblings
I do not want to come across as negative about the module, as it meets many requirements and can provide significant productivity gains. However, I think it is important to know the parameters you are working with so that they can be planned for and addressed during the design and development stages of an implementation, rather than coming to light during training or a setup session. Here are some key limitations I have found that can impact satisfaction with the module:
1. Reporting capabilities: Project comes with a variety of standard reports; however, I have found that many companies require some degree of customized reporting. I think this is because people approach project accounting, and analyze its data, differently. Also, consider that the project accounting hierarchy of Customer/Contract/Project/Cost Category must, in turn, support the reporting that is required.
2. Milestone billing: Although this is not standard functionality, milestone billing can be accomplished by scheduling the fee amounts for billing. This can be a process adjustment for users, but can work nicely in many situations.
3. General Ledger reporting: In some cases, users want GL reporting by project. Although Project does have a trial balance report, be careful about assuming that other reports combining GL activity with project information can be created easily. The links between GL and PA are a bit complicated, so plan any GL reporting carefully. If users want FRx-style reporting (and the associated flexibility) for projects, it often leads to a discussion about adding a project segment to the GL (the easiest, albeit not always the most desirable, solution).
4. Cost Category Transaction Usage: A cost category in project accounting can only be used with one transaction type. This can cause issues when budgeting projects, as you might end up with one cost category TRAVEL-EE (for travel expenses on employee expense reports) and another TRAVEL-PM (for travel expenses from outside vendors, recorded using the purchasing module). In that case, the budget for travel would have to be divided between the two cost categories. This is not ideal for users who plan on using the cost budgeting functionality, but it is something users do get used to over time. If PS Time and Expense (the Business Portal-based time and expense entry tool for employees) is not being used, the issue can be easily addressed by recording both employee expenses and outside vendor purchases through the purchasing module rather than project's employee expense entry window. However, if you are using PS Time and Expense, expenses will automatically integrate with employee expense entry, not the purchasing module.
5. Budget input and updates: Currently, budget entry is a manual process per project. There is no import tool, which is a common request, particularly when users want to interface project accounting with other project management systems. If this is a critical need, you will want to accommodate customization/development of an import tool in your implementation plan.
When these items come up in discovery, it is important that plans be made to address them. In some cases this may mean additional costs to the clients in terms of report development and customization, or it may mean a change in business process to better align with the software. In either situation, planning ahead can spare everyone frustration.
Some popular non-traditional project accounting uses include:
1. For law firms, accumulating costs on cases
2. Tracking costs and budgets for internal projects and activities (development, marketing, etc.)
3. Tracking internal construction costs for building stores, etc.
Please share your thoughts and experiences with project accounting; I would love to hear them!
Tuesday, September 16, 2008
Framework, Format, Goose!
What is an account framework?
The account framework is the largest size to which the account format may grow. The framework is defined when Dynamics GP is initially installed on the server (when the Dynamics database is created after logging in to Dynamics GP Utilities). It is important to note that this happens during installation, which sometimes takes place before discovery has been completed. Also note that the framework is a limitation for ALL companies in the installation...so all companies must be considered. The framework is not easily changed once installation has been done (more on that later). Generally, you will hear that a framework can be no larger than 66 characters and 10 segments, although there are further constraints regarding storage size that are discussed in the installation instructions for Dynamics GP.
What is an account format?
The account format is defined per company, and is the number of characters and segments to be used for account numbers. The account formats for multiple companies can be different, although each account format must fit within the account framework defined during installation.
So, take the following example:
Company A has a format of 2-3-5
Company B has a format of 4-2-3-6
In this case, the minimum acceptable account framework would be 4-3-5-6: take the longest length at each segment position across all companies (max(2,4)=4, max(3,2)=3, max(5,3)=5, and 6 for the fourth segment, which only Company B uses). Of course, we would probably want something larger than that to allow for future growth in both the number of characters and the number of segments.
So, what about changing these?
Changing the account framework requires some work: either reinstalling (if no data has been entered) or using a tool to complete the reformatting by essentially transferring all data into a new database with the new framework. Microsoft Dynamics Professional Services can handle the transfer, or you can purchase Corporate Renaissance Group's Re-Formatter tool and do it yourself. It is definitely an undertaking in either case, and not a lot of fun.
Changing the account format is a bit easier. If no data has been entered, you can simply delete any account numbers that have been added and adjust the format as you wish (assuming it falls within the framework). If data has been entered, you have a few options. At any point, you can increase the account format up to the framework. The only dilemma with this approach is that existing accounts will automatically get spaces for the new characters and segments. Most likely, users will not be happy with spaces on existing accounts, so you will have to purchase and use either the Account Modifier tool in the Microsoft Dynamics Professional Services Tools Library or the Corporate Renaissance Group Changer tool to change the existing accounts to new account numbers. Either tool is straightforward, easy to use, and economical (after brief training, most clients/end users can easily use it themselves).
But how about decreasing the account format? Well, that can be done as well, but then you are back to having to use a tool like Re-Formatter, as the standard Dynamics GP logic will not allow you to decrease the account format after accounts and activity have been added.
So, the lesson here?
When planning your implementation, make sure you consider not only current needs but future needs as well. This is particularly critical for the account framework. My personal approach is to make sure that the framework allows for at least 2 characters of growth in each segment currently used by any of the companies in the installation, plus at least 2-4 additional segments (based on the nature of the business) to allow for expansion of divisional reporting requirements in the general ledger. I also always try to make sure that at least preliminary account format discussions are completed for all companies prior to the installation being completed (this can be challenging in a time-sensitive implementation, but it can be key to success).