Friday, December 5, 2008

eConnect Performance Benchmarks

Over the last several years, I've used eConnect quite a bit because of its features, capabilities, flexibility, and performance. A few times I've written an integration to replace an Integration Manager import because of IM's relatively poor performance with large imports. I know that eConnect is much faster than Integration Manager, but it has been a long time since I benchmarked its speed.

When I first used eConnect, I developed a simple prototype to import a very basic payables voucher. Back then I was using GP 7.5, and I developed the integration using VB 6. I recall that my prototype was able to import about 20 transactions per second. Since then, that benchmark has been the general number I use when speaking with clients who are concerned about the performance of an eConnect integration. Having written integrations that have processed millions of transactions, I have yet to work with a client who really needs to import transactions into GP in real time at 20 per second, let alone any faster, so performance has never really been an issue.

But I recently worked on a data migration project where performance was a concern, simply because of the volume of data to be transferred. It involved migrating nearly 27,000 customers and 58,000 addresses from multiple GP companies into a single GP company. In this particular case, I suppose that DTS or SSIS might have been an option, but because of changes to various IDs and customer classes, it was a lot safer and much easier to use eConnect and get the benefits of its validation and error handling (which were put to good use). The performance concern wasn't that the migration needed to be completed in a matter of minutes--it was that simply testing the migration of that many records takes quite a long time, especially if you have to run the test multiple times.

The database server where I ran the eConnect integration had two Xeon E5345 Quad Core processors running at 2.33 GHz, 4GB of RAM, and was running 32-bit Windows 2003 Server Standard. The data source was a separate SQL server with 5 GP company databases.

For this integration and testing, I didn't perform any database or code optimization to try to speed things up (I didn't have time)--I just wanted to see how a typical integration, written without regard for performance, would perform.

Anyway, here are the statistics from the totals of my largest data sets.

25,444 customers (with main address) imported in 2,325 seconds = 11 customers per second average

56,520 customer address IDs imported in 1,136 seconds = 50 addresses per second average

46,459 inet info records imported in 1,019 seconds = 46 records per second average


These averages were pretty consistent regardless of the size of the source data set (from 21 records to 15,000 records).

You can see from the averages that there can be significant variability in speed depending on the transaction type, which makes sense. These particular transactions aren't the most meaningful to me, as I would really like to see stats on SOP Orders and PM Invoices, but at least it's one data point to consider.

12/5/08 - UPDATE: One thing I forgot to mention is that while this integration was running on the server, only 1 CPU core indicated any activity, and it generally stayed under 10%. So processing power was definitely not a bottleneck for performance. And given the data sources and size, I'm inclined to completely rule out disk and network as bottlenecks. Given this, I'm not yet sure why the integration didn't run faster--my current assumption is that some elements of my integration application have a lot of latency, or there is something about the eConnect calls that produces latency or a bottleneck. One test I'm thinking of performing is creating a test integration with 'source' data values that are hard-coded, and seeing how fast that data can be inserted into GP a few hundred times. That should completely eliminate any latency or delays due to reading or accessing the source data.
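Something like this rough C# sketch is what I have in mind. It assumes the eConnectMethods.CreateEntity(connectionString, xml) call from the eConnect .NET assembly and a standard taUpdateCreateCustomerRcd customer document; the server, database, and customer values are placeholders, so treat it as an outline rather than a finished utility.

    using System;
    using System.Diagnostics;
    using Microsoft.Dynamics.GP.eConnect;

    class EConnectTimingTest
    {
        static void Main()
        {
            // Placeholder connection string--point it at your GP company database.
            const string gpConnection =
                "data source=GPSQL;initial catalog=TWO;integrated security=SSPI;" +
                "persist security info=False;packet size=4096";

            const int iterations = 500;
            eConnectMethods eConnect = new eConnectMethods();
            Stopwatch timer = Stopwatch.StartNew();

            for (int i = 0; i < iterations; i++)
            {
                // Hard-coded customer document: no source queries or mapping logic,
                // so the elapsed time reflects only the eConnect call and SQL work.
                string customerXml =
                    "<eConnect><RMCustomerMasterType><taUpdateCreateCustomerRcd>" +
                    "<CUSTNMBR>ZZTIME" + i.ToString("D4") + "</CUSTNMBR>" +
                    "<CUSTNAME>Timing Test Customer</CUSTNAME>" +
                    "</taUpdateCreateCustomerRcd></RMCustomerMasterType></eConnect>";

                eConnect.CreateEntity(gpConnection, customerXml);
            }

            timer.Stop();
            Console.WriteLine("{0} inserts in {1:N0} seconds = {2:N1} per second",
                iterations, timer.Elapsed.TotalSeconds,
                iterations / timer.Elapsed.TotalSeconds);
        }
    }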

12/20/08 - UPDATE: I was able to create an integration for AR invoices, and for over 3,000 invoices, the average speed was 14 transactions per second.

3/23/09 - UPDATE: I recently created two different SOP transaction imports.

The first was a one-time load of 13,000 invoices with an average of 2.3 lines per order. The lines were non-inventory items with totals of $0, so there was no inventory impact and there were no distributions. The import averaged about 1,000 transactions per minute, or 16.7 transactions per second.

The second integration was tested with 3 line items per order, as well as 3 SOP payments per order. That integration imported 500 transactions per minute, or 8.3 transactions per second. The customer needs to import up to 40,000 transactions per day, so this performance should be sufficient.

Other than understanding these benchmarks, one lesson is that performance varies significantly depending on how many elements (lines, payments, distributions) a given transaction has.

14 comments:

  1. Steve,

    Very useful information. Thank you for sharing this.

    Victoria

  2. I have noticed that when I run SOP transactions, the first one is very slow; after the first one it's pretty quick, but then it's slow again if I let it sit for a while. I also noticed that when I tried to return an SOP number after letting it sit for a while, it throws an error; if I try again, it works. Do you have any ideas or suggestions of things I can try? Any way of keeping eConnect alive all day?

  3. @DrDave M: Yes, I have the same experience with eConnect.

    When you first submit a transaction to eConnect, it typically has a "spool up" time, where it appears to be loading or starting the import process. In my experience, that spool up time is typically 10 to 15 seconds.

    Once it is "running", subsequent transactions are processed immediately or in less than a second. After some period of time, eConnect appears to unload / shut down if it hasn't received a request, and you will experience the startup delay again with the next transaction.

    I have one client with a real-time integration where 10-15 seconds is an eternity, so I developed a solution that eliminates the eConnect delay and accomplishes another goal.

    I created a "Check Status" routine that checks the availability of the following items:

    1. External credit card processing gateway web service
    2. Dynamics GP third-party credit card processing module web service
    3. SQL Server
    4. eConnect

    The customer's web site calls the Check Status method every 30 seconds so that they can be notified immediately if there are any issues.

    For items 1 and 2, I call a test web service method with those web services. For SQL Server, I query the DYNAMICS..Activity table.

    And for eConnect, I import a test customer (ZZTESTCUSTOMER). The side benefit of this eConnect test is that an "insert entity" call is made to eConnect, which serves as a "keep alive".

    As a result, the customer never experiences the 10-15 second eConnect startup delay.

    So I would recommend adding a test eConnect import that imports the same test record every few minutes. You could package it up as a separate EXE and use Windows Task Scheduler to run it continuously throughout the day. And you could add email notification to the EXE so that if the test eConnect import fails, an email is sent to the system administrator to let them know there is a problem.
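    Here is a bare-bones sketch of that keep-alive EXE in C#. It assumes the eConnectMethods.CreateEntity call from the eConnect .NET assembly; the connection string, mail server, and email addresses are placeholders you would replace with your own.

        using System;
        using System.Net.Mail;
        using Microsoft.Dynamics.GP.eConnect;

        class EConnectKeepAlive
        {
            static void Main()
            {
                // Placeholder connection string--point it at your GP company database.
                const string gpConnection =
                    "data source=GPSQL;initial catalog=TWO;integrated security=SSPI;" +
                    "persist security info=False;packet size=4096";

                // Re-importing the same test customer just updates the record, and the
                // CreateEntity call keeps eConnect loaded so real transactions avoid
                // the 10-15 second startup delay.
                const string testCustomerXml =
                    "<eConnect><RMCustomerMasterType><taUpdateCreateCustomerRcd>" +
                    "<CUSTNMBR>ZZTESTCUSTOMER</CUSTNMBR>" +
                    "<CUSTNAME>eConnect Keep Alive Test</CUSTNAME>" +
                    "</taUpdateCreateCustomerRcd></RMCustomerMasterType></eConnect>";

                try
                {
                    new eConnectMethods().CreateEntity(gpConnection, testCustomerXml);
                }
                catch (Exception ex)
                {
                    // If the test import fails, notify the system administrator.
                    SmtpClient smtp = new SmtpClient("mailserver");
                    smtp.Send("gp@example.com", "admin@example.com",
                        "eConnect keep-alive test failed", ex.Message);
                }
            }
        }

    Schedule the EXE with Windows Task Scheduler every few minutes and you get both the keep-alive and the failure notification in one small utility.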

    Let me know if that makes sense and if you have any questions.

    Steve

  4. Hey Steve,
    I'm trying to find blogs/info on how to troubleshoot why eConnect would be slow. I'm running a GL account import, 268 accounts, nothing fancy (no scripting, nothing) via SmartConnect, and it took nearly 4 minutes. That's crazy--Integration Manager macros are probably faster than that.

    Any thoughts on what kinds of things to look at? I'm thinking the import source file not being local to where SC is (it's accessed over a network) might be one culprit, but that still shouldn't be that big a deal.

    Jen

  5. Hi Jen,

    I agree--if you are only importing standard GL accounts, I would think that they should import in a few seconds--certainly less than 1 minute.

    Unfortunately I'm not familiar with SmartConnect performance, but I'm assuming it isn't usually that slow.

    Have you tried testing the account import against Fabrikam? I would be curious if the performance is better against TWO.

    Email me if you want to have a call to look at it.

    Steve

  6. Hi Steve,
    I am working on an integration where we are migrating almost half a million SOP transactions to GP 2010. We are using eConnect 11. I was able to write a SOP document creation service, but I am concerned about the performance. It imports almost 20K transactions in the first 60 minutes, then only 6K in the next 60 minutes, and performance keeps decreasing from there. I ran SQL Profiler and see that for every transaction there is a new connection to the database. Is this the reason, or what would you suggest to make it consistent?
    NK

  7. Hi NK,

    I recommend using a query to monitor the number of active connections on your SQL Server to ensure you aren't accidentally leaving connections open (there is a quick query for this in the sketch at the end of this reply). But I don't know if that would cause significant performance degradation--that would typically just cause a connection error.

    One possible reason for decreased performance while running a large integration is insufficient SQL Server memory. During the import, there are dozens or potentially hundreds of SQL statements being executed per transaction. They will hit the company database and tempdb, indexes, you name it.

    As a test, I would recommend trying to import 10k transactions and time how long that takes. Then import another 10k and time that. Repeat that process of 10k imports and see if the performance degrades each time.

    Once you have those numbers, repeat the same test, but restart your SQL Server service after each run. See if your performance improves for imports 2, 3, 4, etc by restarting SQL after each import.

    If restarting does help, consider setting the SQL Server Maximum Server Memory value to a number that is at least 4GB less than your SQL Server machine's physical memory. e.g. if your server has 32GB of RAM, set the SQL Max Memory to 24GB.
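    In case it helps, here is a quick C# sketch of both suggestions. The server name, database, and memory value are placeholders, and the session count query assumes SQL Server 2005 or later; you could also just run the same statements from Management Studio.

        using System;
        using System.Data.SqlClient;

        class SqlHealthCheck
        {
            static void Main()
            {
                // Placeholder server name--point it at your GP SQL Server.
                const string connection =
                    "data source=GPSQL;initial catalog=master;integrated security=SSPI";

                using (SqlConnection conn = new SqlConnection(connection))
                {
                    conn.Open();

                    // How many user connections are open right now? Run this between
                    // test batches to see if the integration is leaking connections.
                    SqlCommand countCmd = new SqlCommand(
                        "SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1",
                        conn);
                    Console.WriteLine("Active user sessions: {0}", (int)countCmd.ExecuteScalar());

                    // One-time change: cap SQL Server memory (here 24576 MB = 24GB on a
                    // 32GB server--adjust for your hardware) instead of restarting SQL
                    // between imports.
                    SqlCommand memoryCmd = new SqlCommand(
                        "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                        "EXEC sp_configure 'max server memory (MB)', 24576; RECONFIGURE;",
                        conn);
                    memoryCmd.ExecuteNonQuery();
                }
            }
        }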

    Let me know if restarting SQL or adjusting the SQL memory helps.

    Steve

  8. I am curious about a company with such high transaction volume having the ability to perform routines such as check links, reconciliations, and especially year-end close. If a company is importing 40,000 to 50,000 transactions each day and relies on GP to also connect with external payment processing systems to send money - paypal & similar - AND it would be really bad for business to be down any time of day, how do you run any routines?

  9. Hi Chad,

    I had an ecommerce client with very, very high transaction volume--at least a few million transactions per year. They also used the Company Data Archive, but their company database was still huge.

    They were basically unable to run CheckLinks. They had a support case where MS recommended CheckLinks to try and resolve an issue, but after it ran continuously for 7 days with no end in sight, they had to abort it.

    The rule after that was they would never run CheckLinks. Period.

    I've since encountered customers with much higher transaction volume. One processes over 2 million transactions per month. Another company literally imports and posts batches 24 hours a day, 7 days a week, non-stop. I don't know how they do any maintenance or deal with issues that interrupt normal processing.

    I don't know what the answer is or if there is a good answer with GP.

    Steve

  10. Thank you, Steve. That is what I am afraid of, there not being a good answer.

    Our client actually ran out of Journal Entry numbers - the most you can have is 99,999,999. The task to free up more numbers is to close out the year, moving GL transactions to history. It took them 8 days to close a single year - downtime an ecommerce company does not want to endure again. Based on current volume, they will run out every 3.5 - 4 years.

    Fortunately, with 17 characters and 20 characters available for SOP and AP, we can travel to the center of the galaxy before running out of AP transaction numbers.

  11. Interesting thread of comments!

    Chad: has your client ever considered posting at a more summary level than the default from the various subledgers? Most clients never look at that option, but with that extreme volume, is posting in summary any sort of option to reduce the pure volume of GL trxs each month? That may buy them a year or two or more, if that's an acceptable option!

    I'm sure you already know this with this customer: JE numbers can be re-used as long as they are in a closed year. I did a blog post on that because I was curious what would happen when someone gets to the 99,999,999 mark!

    Jen

  12. I appreciate your thinking, Jen, but unfortunately that does not work for them. They require the Sales and AP modules to host the details for transactions. AP works as a payment engine and there are integrations with outside banks and services that facilitate sending money to customers. There are also returns and substitutions that are done on new SOP documents and unique IDs refer back to the ecommerce original and substitute transactions.

    One thought a colleague had since yesterday: would the Period Consolidation routine in the Financial module provide any benefit? I'm not familiar with the workings or results of this routine. Would running it frequently, spread out over time, and perhaps on small ranges of accounts, reduce the time it takes for Year-End Close?

  13. Just clarifying one thing in case we misunderstood each other. You said they require the Sales and AP modules to host the details. What I'm suggesting doesn't change that; it just changes the detail level of the GL result of those transactions. Example: a SOP batch of 5,000 invoices, with the posting setup for Sales Transaction Entry set to "create a journal entry per batch" (the default is per transaction), would result in 1 GL journal entry, summarized, for the entire batch. The A/R subledger would still have every detail as normal; just the GL side would show the summary. Drilling into the GL entry after posting would display a supplementary window listing all 5,000 invoices in that batch, and you can drill back to the detail in SOP just like normal. With fewer GL entries, year-end close in theory should run faster?

    I don't know much about period consolidation unfortunately so I can't comment on that part.

    Jen

  14. Thank you for that clarification; I was not on the same page as you. I will certainly verify which option they have for that setting.

    Thank you so much for the suggestion!
