Monday, November 21, 2011
The Decisions Fall 2011 Conference is in two weeks, with the Dynamics GP day on Tuesday, December 6!
Registration is free and only takes a minute. This is a virtual conference, so no travel, no hotel, no convention centers--the entire conference can be accessed using your web browser.
I'll be giving a presentation at the conference titled "Importing 10 Million Transactions Into Dynamics GP: Lessons Learned". It is based on a large project that I worked on over the last few years where I've gained a new perspective on what matters most for an environment with very high volume, complex integrations.
In addition to great presentations, there will be several vendors in the virtual conference hall presenting their enhancements for Microsoft Dynamics GP. I will be at the Envisage Software Solutions booth showing customers the very popular Post Master add-on for GP, which fully automates the batch posting process, as well as the very powerful Search Master product, which allows you to instantly find any data in any of your Dynamics GP databases.
Please take a minute to register and devote a few hours of that Tuesday to learn more about Dynamics GP!
Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles. He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
http://www.precipioservices.com
Friday, November 18, 2011
My Favorite Payroll Support Articles
It seems like the vast majority of payroll issues are related to taxability, which in turn is related to the setup of the payroll module. So I thought I would share a couple of my favorite payroll support articles related to taxability.
How to correct overwithholding of payroll taxes
http://support.microsoft.com/kb/858712
I have used this article time and time again. It can be broken down into three key steps:
- Refund the overwithheld taxes
- Correct the tax summary information for total taxes and taxable wages
- Adjust the pay code used to pay back the taxes
http://support.microsoft.com/kb/862929
This article explains how the 941 report and the Payroll Summary report each calculate taxable wages, and the subtle differences between them. The key is that the 941 report uses the current tax status of deductions to recalculate taxable wages, while the Payroll Summary relies on the taxable wages calculated at the time the payroll was posted.
To minimize tax issues in payroll in GP, you should (in my humble opinion)...
- Ensure that the payroll setup tax flags for pay codes, deductions, and benefits are set up correctly from the start (Setup-Payroll-Deduction, Benefit, Pay Code)
- Resist the urge to change tax flags at the individual employee level (Cards-Payroll-Pay Code, Deduction, Benefit), simply because it becomes more complicated in terms of the variations in taxability that could exist for a single code
- Ensure that all pay codes that are marked as SUTA taxable also have a SUTA state specified (this is a required field when manually setting up a pay code, but the behavior can vary when rolling down pay code assignments)
- Ensure that a default state tax code is specified for the employee. This is DIFFERENT from setting the employee up for a state tax (Cards-Payroll-State Tax); you need to go to Cards-Payroll-Tax and make sure a default state tax code is specified for transaction entry.
- Print your 941 after every payroll when you first go live, and validate the results.
Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a supervising consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.
Tuesday, November 15, 2011
Consultant Tools Series: Travel Power Outlets
Nearly every hotel I've stayed at in the last several years has had one limitation: not enough power outlets. There may be one outlet available on the desk, or two if I'm lucky, but sadly, that isn't enough to handle the laptop, cell phone, Bluetooth headset, iPad, and hotspot. I have often had to charge my cell phone using a bathroom power outlet and leave my laptop powered on all night so that I can charge something else via a USB port.
I was poking around the electronics section of one of those massive discount retail stores a few months ago and I came across a gadget that looked pretty interesting. It is a mini power-strip of sorts that provides three full outlets, but what is unique is that it also includes two USB charging ports.
I brought it home, and to break it in, I tried it on my overburdened kitchen counter power outlet where we have a pile of iPhones, an iPod, an iPod Touch, and a few other devices. It worked perfectly, allowing us to use two USB cables to charge phones, freeing up two outlets.
This week I had a chance to try it on the road. The desk in my hotel room has two outlets in a recessed compartment that would have made it difficult to use two of my chargers, so the power strip worked like a charm. I was able to charge my iPhone and hotspot via USB, and then plugged my laptop and Bluetooth headset into the power outlets.
Since I have a MacBook Air with only 2 USB ports, one of which is always used by my Logitech wireless mouse receiver, my one remaining USB port is pretty valuable, and can now remain available.
The only minor downside is that the power strip is a bit bulky, but given its value, I don't mind carrying it in my luggage.
Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles. He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
http://www.precipioservices.com
Thursday, November 10, 2011
One Massive Flaw in SugarSync: It is not a backup solution...
I've been a huge fan of SugarSync since I started using it several years ago. It will automatically back up files on my desktop and synchronize them with my laptop, and vice versa. It even works with my mobile phone so that I can easily access frequently used files when I'm away from a computer. It also lets me share files or entire folders with other people. It's not perfect, but it's been nearly perfect for me. But today I discovered an interesting flaw. Admittedly, this is probably an unusual situation, and I'm waiting to hear back from SugarSync Support on whether this is a bug or a highly unusual fluke, but it definitely got my attention.
So with that out of the way...
My desktop machine has been misbehaving lately, primarily in the form of Blue Screens. After further digging, it seems that there is some type of issue with one or more hard drives or the motherboard. To diagnose the issue, I unplugged all drives except the C: drive. I tried booting with just the C: drive, and although Windows will load and work for a while, I am still getting the blue screens.
Well, a funny thing happened while I was doing those tests. Well, maybe not so funny.
On my desktop, I have a dedicated C: drive with only Windows and program files. I store all of my user data and files on a D: drive. That way, if I ever need to reinstall Windows, I can just wipe the C: drive and not worry about losing any data. And naturally, I have SugarSync back up all of my files on the D: drive and synchronize them to my laptop.
It seems that when I disconnected the D: drive on my desktop, SugarSync decided that I had deleted all of my files. And I mean ALL OF MY FILES. Apparently since it could no longer see the D: drive, it sent messages up to the SugarSync servers telling the Mother Ship that ALL OF MY FILES were deleted.
And then what happened? Well, when I fired up my laptop, SugarSync on my laptop dutifully downloaded all of the synchronization commands to delete ALL OF MY FILES from my laptop.
At the time, I was looking for some files on my laptop and noticed that one directory was missing. I checked my backups on my local server to confirm the files existed, and by the time I switched back to my laptop directory listing, everything was gone. Tens of thousands of files and thousands of directories were wiped from my laptop.
I just stared at the screen in disbelief, thinking that Windows Explorer wasn't refreshing or that I was going blind. But nope, everything had been deleted. In a panic, it took me about 30 seconds to realize what had happened.
Initially, the first level SugarSync support rep quickly claimed that when SugarSync detects that a drive is missing, it will automatically "disconnect" the synchronized folders, whatever that means. But clearly that didn't occur in my case, and the SugarSync web site shows all of my files have been deleted. I am now waiting for them to review my log files and see if they can figure out what happened. The support rep foolishly closed the chat session with "...please remember that you should not disconnect a drive on which the SugarSync folders are present", which is a preposterous statement, especially since he initially claimed that SugarSync would "disconnect" the backed up folders if a drive was not found.
And thinking about the obvious real world situations, what if you want to back up data on an external USB drive? Does SugarSync suddenly delete files everywhere when you disconnect that drive? What if my D: drive had died? Will my laptop be wiped out when that happens too? It doesn't make any sense that the files should be deleted when a drive is no longer detected.
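To illustrate the distinction, here is a minimal sketch (my own illustration in Python, not SugarSync's actual code or API) of how a sync client could treat a missing drive as an unknown state rather than as a mass deletion:

import os

# Hypothetical sketch (not SugarSync's actual logic): how a sync client could
# distinguish "the drive is missing" from "the user deleted these files".
def plan_sync_actions(synced_root, previously_seen_paths):
    # If the whole root (e.g. the D: drive) is unavailable, treat it as an
    # unknown state and skip the pass instead of reporting deletions.
    if not os.path.exists(synced_root):
        print("Root %s not found; skipping sync pass." % synced_root)
        return []
    deletions = []
    for path in previously_seen_paths:
        # The root is present but this file is gone: a genuine deletion.
        if not os.path.exists(path):
            deletions.append(("delete", path))
    return deletions

# Example: if D:\Data is unplugged, nothing is propagated to other machines.
# plan_sync_actions(r"D:\Data", [r"D:\Data\notes.txt"])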
Since I rely heavily on SugarSync (or at least I used to before this happened), I apparently need to test all of these scenarios to assess the damage that may occur.
Fortunately, I am only highly annoyed by all of this, primarily because of the time it has wasted and will continue to waste until I get a resolution. I'm not going insane for a few reasons:
1. It appears that all of my files are still present on the SugarSync servers, but are marked as deleted. So I'm guessing they should be able to revive them, assuming their second-level support is more competent than their first-level reps. I could double click on the deleted files to restore them myself, but I am waiting for them to figure out what happened before I touch anything.
2. My D: drive on my desktop did not die, I just had to unplug it, so my files are still intact on that drive. But I am now very wary of plugging the drive back in, should SugarSync decide that it needs to wipe that drive as well.
3. I also use Carbonite to back up all of the files handled by SugarSync, plus many, many more, so I have another copy on Carbonite's servers. Apparently Carbonite does not have the same flaw as SugarSync and does not immediately delete my files when the drive is not connected. But Carbonite sometimes gets backlogged with my photos and other large files I back up, so files that I change regularly may be several days old on the Carbonite server.
4. Every evening a scheduled task runs on my desktop that uses RoboCopy to back up my files from my desktop to my file server (a simplified sketch of that kind of job is shown below). Unfortunately, because of the issues with my desktop, it looks like it has been a few days since that ran successfully. So I do have another copy of everything, but several files will be a few days old.
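For anyone curious, a nightly copy job along those lines can be as simple as the sketch below. The paths and flags are examples, not my exact task; note that /E copies without deleting, whereas /MIR would mirror deletions to the destination and re-create the same "sync equals delete" risk discussed above.

import subprocess
from datetime import date

# Hypothetical example of a nightly RoboCopy job; adjust the paths to your
# own drives and server shares.
SOURCE = r"D:\Data"
DEST = r"\\fileserver\backups\desktop\Data"
LOG = r"C:\Logs\robocopy_%s.log" % date.today().strftime("%Y%m%d")

# /E copies all subfolders (including empty ones) but never deletes files at
# the destination; /R and /W limit retries and wait time on locked files.
cmd = ["robocopy", SOURCE, DEST, "/E", "/R:2", "/W:5", "/LOG:%s" % LOG]
result = subprocess.run(cmd)

# RoboCopy exit codes of 8 or higher indicate errors.
if result.returncode >= 8:
    print("Backup reported errors; check %s" % LOG)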
So this has been a good lesson about an ironic downside to a seemingly fantastic backup solution. And it's been a good, albeit unwanted, test of my neurotic multi-layered backup strategy. It seems to work, but like most things, it isn't perfect. I now have some clear validation that you can't have too many backups...
UPDATE 1: I have since had several rounds of discussions with the very mediocre support at SugarSync, and they have essentially confirmed that SugarSync is designed to delete all of your files on all of your synchronized computers if a hard drive is no longer detected or a drive fails. They have effectively said that SugarSync isn't a backup solution--it's a synchronization solution, and if a non-system drive fails on any of your synchronized computers, then SugarSync is supposed to consider all of the files on that drive deleted, and that all of those files should therefore be deleted on every other computer that is synchronized. Of course this is preposterous, as I think anyone would agree there is an obvious difference between a hard drive disappearing on a system and a file being deleted. Heaven forbid if you change the drive letters on a computer--I assume that would cause SugarSync to delete everything as well.
All of the deleted files are technically available to recover via their web site, but you have to recover each sub-directory individually. There is no way to recover a directory and all of its sub-directories. For serious users with thousands or even just hundreds of directories, this is a nightmare.
UPDATE 2: Since I now know that I can't rely on SugarSync to safeguard my data, I now use Acronis TrueImage to make a full image backup of my D: drive, along with the existing full image backup of my C: drive. Because you can't have too many backups!
UPDATE 3: A reader, Loren M., saw my post and appears to have experienced the same hassle I did, so it seems that this is not a random issue. And as he points out, the irony is that a casual / non-critical SugarSync user would probably never experience this issue--only the serious power user with multiple hard drives and multiple synchronized computers would encounter this issue.
Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles. He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
http://www.precipioservices.com
Steve,
It is interesting you would note this. Unfortunately I can conclusively confirm that SugarSync has not fixed the problem since you reported it, since I fell victim to the same fate with terrifyingly similar results. I also discovered a few additional tidbits of information. To go through the reasons you mentioned point by point, and why I think people should still be worried:
1. First, take no comfort in the idea that SugarSync allows you to recover your deleted items. I also had thousands of directories and subdirectories on my system. The initial problem in trying to recover these files is that each and every one of these subdirectories must be recovered individually – there is no way to select one directory and all the subdirectories below it; each subdirectory (and underlying subdirectory) must be opened, all the files selected, and a restore started. This alone may take the better part of one's life to work through. However, even those willing to suffer through this will be disappointed with the results. I found major sections of my file structure were no longer recoverable at all – the subdirectories simply did not exist on the SugarSync server. They were just gone.
2. In my situation, the removed drive was only disconnected for an hour or so. It automatically remounted itself, but by the time it came back online, SugarSync had already decided to delete all the synchronized files from my three other computers. When the offline drive came back, it then "synchronized" with the deleted computers and also deleted everything. If the reason you or anyone is using SugarSync is to have a "backup" copy, you should stop using SugarSync right now, even if you have paid for it. Once you have SugarSync up and running, it is the opposite of a backup – loss of your document drive on any of your synchronized computers will ensure it will be deleted from each and every computer you are syncing to. This is the worst possible "fail deadly" configuration imaginable for a "backup" system, yet it is exactly how SugarSync operates, and this can be proven by repeatable testing.
3. The only possible way to save yourself from the horrors of SugarSync is to backup all of your data somewhere else. I happened to be using Crashplan, which is the only way I got my data back. If anyone out there is using SugarSync – and I can’t stress this enough – a separate backup tool must also be used which does not rely on SugarSync to protect data. Keep in mind however that if the backup tool is running in the background or on some schedule, there may well come a time when SugarSync has deleted all of the data from the drive being backed up – recovery of deleted files from the backup service is the only possible route at this point, but this too can be problematic since the backup service has no way of discerning what was intentionally deleted vs. what SugarSync deleted when it went crazy.
4. In a scenario eerily similar to Steve's, I was using Allwaysync to synchronize all my files to my file server. Unfortunately I had it set up to "synchronize" instead of copy, and since deletions were synchronized, this copy was promptly deleted as well. I have since switched to a "copy" mode. However, it is less than ideal, since it means nothing I legitimately delete will ever be deleted from the server, and I'll have to manage this additional copy manually just to make sure I don't suffer the "SugarSync Suicide" again.
My takeaway from this whole experience was that SugarSync is not a backup solution – it is a synchronization tool with a nasty penchant for destruction of file systems. To anyone using SugarSync - you need a separate backup solution (and a very good one) to protect yourself from what SugarSync will eventually do to you if your synchronized drive goes bad.
Another insidious aspect of this problem is that "casual" users of SugarSync, or those who are simply using the free few GB as a trial before purchasing more storage, are highly unlikely to ever experience a problem. However, paying customers, who are more often than not synchronizing their entire documents folders (on a separate drive), will almost certainly be hit by this problem at some point, and it will be when they are most vulnerable -- after a drive failure. Good luck, and let's hope SugarSync addresses these problems soon!
Loren
Tuesday, November 8, 2011
Dynamics GP 12 Web Client Coolness
There was a lot of talk at GPPC about the new web client for Dynamics GP 12. It really got me thinking about how we might incorporate it into our deployment strategies, and which customers (and prospects) might benefit from it. Of course, there are those that ask for it directly (I see it on requirements lists constantly), but it still leaves me wondering, practically, how the web client will be used.
I will not pretend to know how to explain the architecture and technology that go into delivering the web client, but I can appreciate the effort and complexity of what they have accomplished with it. And here are the practical tidbits I have taken away from the keynote and sessions I have attended here in Vegas:
- The web client represents a move to a 3 tier architecture-- a presentation layer, an application/logic layer, and the data layer. From a non-technical perspective, I think the key concept is that the user interface (UI) has been separated from the application/logic layer. So we can have these windows in the web client that use the same application/logic as the hard client, without the need to replicate/duplicate all of the logic contained in Dynamics GP today. Not the most technical answer, I know, but I have to distill these things down to the basics for me :)
- The web client windows will include ribbons across the top to initiate actions, and hopefully will also contain sub-windows on separate tabs. And, just like the hard client, you will be able to have multiple windows open at the same time in the web client (woohoo!).
- Windows will be dynamically generated using Silverlight, which means all of your Dexterity customizations and third-party products will be available, including macro capability. The only hitch in this right now is VBA, which will not transfer to the web client. Microsoft is working on some ideas on how to address this, due to the technical complexity.
- I asked specifically about Word Templates and emailing of documents, as these are two items that sometimes complicate terminal server deployments (since Word and Outlook have to be on the terminal server, in the same environment as GP). These are also issues that are still being worked out, and there may be some assumptions about Outlook and/or Word on client machines that are using the web client (which seems reasonable to me).
Some of the technologies involved in the web client:
- IIS
- Dexterity
- Silverlight
- Web Services
- Internet Explorer (including security)
- XML
- Visual Studio (potentially a tool for modifying the templates used to generate the web client windows)
Well, that is my totally non-technical post on the web client. I personally am very excited by the potential, and can't wait to see more.
Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a supervising consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.
GP12 Update From The Partner Connections Conference
The past three days have reminded me of just how old I am. Vegas wears me OUT! And I am not very good at gambling, either. I have been learning and mingling at the GPPC Partner Connections conference in Las Vegas since Saturday. I presented a couple of sessions on Sure Step, and have sat in on sessions covering GP12, the new web client, demo tools, and the fabulous SQL report builder. A very informative and fun few days.
I thought I would share a few of the enhancements I heard about in GP12--exciting stuff is coming!
- Business Analyzer can appear on the home page (Cool!)
- Customizable area pages (Also cool!)
- Select printer at the time of printing (oh, my, so super cool that it garnered applause...long time coming)
- Reprint PM check remittance (much-requested)
- Print SQL reports from a Dynamics GP form (I can't help but wonder if this is a precursor for SQL reports to be able to replace a Dynamics GP report similar to Word templates)
- Subledger reconciliation for Bank Rec to GL (also much-requested)
- Prepayments on purchase orders (yay!)
- Bank rec void indicator, so if you try to void a check that is reconciled you will get a message (will help so many customers avoid this issue!)
- BAI format for EFT
- Document attachment (not quite document management, but it is a cool start...with the documents stored in the SQL server)
- PO tolerance handling (I get so many questions about this, so I am very excited!)
- Historical depreciation reporting
- Mass depreciation reversal
The GP12 timeline shared at the conference:
- Feb 2012- TAP program
- Mar 2012- Convergence
- July 2012- Beta program
- Sept 2012- GP Technical Airlift
- Oct/Nov 2012- Launch activities
Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a supervising consultant with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.
Saturday, November 5, 2011
Simplifying Your Passwords
A few months ago, I came across three great items regarding passwords.
The first is an excellent comic on XKCD.com. It helps to debunk a common misconception about passwords: that passwords must be "complex" in order to be effective. Or perhaps more accurately, it reframes the concept of "complexity" with regard to passwords.
It makes the great distinction that "hard to remember" (for humans) and "hard to guess" (for computers) are two very different things, and demonstrates that it is possible to have a password that is easy for you to remember, yet very secure against brute force attacks performed by a computer.
It's an excellent explanation in comic form, and great lesson about how to think differently about passwords.
The second resource is an informative article and tool by Steve Gibson at GRC.com.
Steve's "password haystack" concept is insightful, and is very similar to the XKCD lesson. Steve's "search space" calculations are different from XKCD's entropy calculations, but Steve explains that in terms of brute force password guessing (versus attacking the underlying encryption algorithm or keys), it's the search space that matters, not entropy.
And the key lesson is that increasing the search space is MUCH easier than increasing the entropy.
He provides a nice demonstration comparing two sample passwords:
D0g.....................
PrXyc.N(n4k77#L!eVdAfp9
Which one do you think is more "secure"?
Which one is easy to remember and type?
The first password, "D0g", followed by a bunch of periods, theoretically wins on both counts. As he points out, the first one may have much less entropy, but when it comes to brute force password crackers, the only aspect of entropy that matters is making sure that you are using at least one character from each "type": uppercase letters, lowercase letters, numbers, and symbols. Once you have at least one of each of those (preferably more than just 4 characters), you can then start using padding to dramatically increase your search space.
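If you want to see the effect of length on search space, here is a quick back-of-the-envelope calculation in the spirit of Gibson's haystack page (my own rough sketch, assuming a brute-force attacker who tries every combination of the 95 printable ASCII characters, not his exact math):

ALPHABET = 95  # printable ASCII characters: upper, lower, digits, symbols

def search_space(password):
    # Total guesses needed to exhaust every string up to this length.
    return sum(ALPHABET ** k for k in range(1, len(password) + 1))

padded = "D0g....................."     # easy to remember, 24 characters
cryptic = "PrXyc.N(n4k77#L!eVdAfp9"     # hard to remember, 23 characters

for pw in (padded, cryptic):
    print("%s: length %d, search space ~%.2e" % (pw, len(pw), search_space(pw)))

# The padded password wins simply because it is one character longer:
# every additional character multiplies the search space by 95.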
The third item is a comment that a friend made when I discussed this topic with him. He works a lot with IT security, and he pointed out that "password" is semantically flawed. We should refer to them as "pass-phrases". If we can transition away from the idea of using a single "word", to phrases that can contain multiple words, it should increase the search space, and also increase the ease of memorization.
Together, I think these provide a great basis for how we should start thinking about passwords, and how we should educate users about them.
Users hate passwords like "dqkGx^D,c=41S5a", but something like "Fargo Is #1!!!" can be memorized very quickly, and can be recalled very easily.
So, having learned all of this, how do I use it?
I have been using RoboForm for securely managing all of my passwords, so in theory, I only have to remember one "master password" for RoboForm. I can then let RoboForm use high entropy, difficult to remember passwords, like "dqkGx^D,c=41S5a", for my various web site logins. But on the rare occasions when I have to manually log in to a site without RoboForm, those cryptic passwords are a hassle, so I may just convert most of my passwords to "haystack" style passwords.
Unfortunately, there are probably some applications or web sites that will make it difficult to use these passwords, such as ones that may limit password length, and others that require combinations of passwords and PINs. And there are quite a few "random password" generators that are widely used (including the one in RoboForm) that don't support this methodology, so you will need to come up with your own technique for generating the pass-phrases and using padding.
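Here is one possible technique, purely as an illustration (my own sketch; the word list and padding characters are made up, not from Gibson or XKCD), for generating a memorable, padded pass-phrase that still covers every character class:

import secrets
import string

# Hypothetical pass-phrase generator in the "haystack" spirit: a couple of
# memorable words, a digit, and simple padding. Substitute your own word list.
WORDS = ["fargo", "maple", "copper", "harbor", "sunset", "piano"]
PADDING = ".-!*"

def make_passphrase(word_count=2, pad_length=8):
    # Capitalized words cover upper and lower case, the digit and padding
    # symbol cover the other character classes, and the padding adds length.
    words = [secrets.choice(WORDS).capitalize() for _ in range(word_count)]
    digit = secrets.choice(string.digits)
    pad = secrets.choice(PADDING) * pad_length
    return "".join(words) + digit + pad

print(make_passphrase())   # e.g. "MapleHarbor7........"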
Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles. He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
http://www.precipioservices.com
Thursday, November 3, 2011
A Tale of Two BSODs: Diagnosing Windows Blue Screen of Death
Ever had a Windows machine display the Blue Screen of Death? Through amazing coincidence, blue screens showed up on both my desktop machine and a client's production Dynamics GP Terminal Server in the same week! I got fed up with the cryptic errors and finally decided to learn how to diagnose the infamous BSOD.
A few months ago I built a new desktop machine. Although there were a few quirks with 64-bit Windows 7, it seemed to work well. Until the day when I started to get the dreaded Blue Screen error.
Having dealt with blue screens occasionally over the years, my general interpretation is that once you start getting them on a machine, they don't tend to go away on their own. Sure enough, my desktop started to blue screen a few times a week.
While at my desk, I saw the blue screen occur and flash on my monitors, but my computer instantly rebooted, preventing me from seeing the message. By default, Windows 7 and Server 2008 are set to automatically restart when a "System failure" occurs. This option is set under System Properties -> Startup and Recovery Settings. The first change I made was to disable the automatic restart option so that I could know when the blue screen occurred and see the error messages.
Sure enough, the blue screens showed up again a few days later, but unfortunately, the message displayed wasn't very helpful.
Sometimes you will get lucky and see a specific driver listed, like "ETRON_USB_3", which can tell you immediately that a third party USB 3 driver is causing the problem.
But in my case, since a specific driver wasn't listed on the blue screen, just a cryptic "STOP" error, I didn't have any clues as to a possible cause. I figured that my only option would be to try and reinstall Windows, which isn't on my favorite-things-to-do list. So I put it off and just ignored the occasional crash, knowing I would eventually have to deal with it.
Then, the other evening, while connected remotely to a client's server, I was suddenly disconnected. When I was able to reconnect, I saw a message indicating that the server had experienced a blue screen and had restarted automatically.
Figuring that two systems with blue screens in the same week was too much of a coincidence, I took it as a challenge to learn how to diagnose the cause of the dreaded BSOD.
To my surprise, it turns out that it is shockingly simple to get diagnostic information about the BSOD error--once you know what tools to use and how to use them.
In the Startup and Recovery options in Windows, there is an option to "Write debugging information". In the latest versions of Windows, the default setting is to write a "Small memory dump", also known as a "minidump".
When Windows encounters a "system error", it writes certain diagnostic information to this memory dump file explaining the specific area of the operating system that caused the crash and possible causes of the problem.
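If you prefer to check these settings from a script rather than the System Properties dialog, they live in the registry under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl. Here is a small sketch; the value names are to the best of my knowledge, so verify them on your own machine before relying on this:

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    auto_reboot, _ = winreg.QueryValueEx(key, "AutoReboot")
    dump_type, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")
    minidump_dir, _ = winreg.QueryValueEx(key, "MinidumpDir")

print("Automatically restart on system failure: %s" % ("yes" if auto_reboot else "no"))
# CrashDumpEnabled: 0 = none, 1 = complete, 2 = kernel, 3 = small memory dump
print("CrashDumpEnabled: %d" % dump_type)
print("Minidump folder: %s" % minidump_dir)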
I naively thought that reading memory dumps was some type of complex process that only wizards at MS Support could perform, but to my surprise, there are several tools available to make the diagnostic process extremely easy.
I found this blog post by the famous Mark Russinovich, which got me started on the "old school" method of reading the debug files using the Microsoft WinDbg utility.
There are a few challenges with this approach. First, you have to figure out which version of WinDbg you need for your OS, and Microsoft seems to want to make it as difficult as possible to get just that one tool. You either need to download the Windows Driver Development Kit (MSDN subscription and login required), or you have to download the Windows SDK just to get one little EXE file. It's absurd. You then have to try and figure out the extremely arcane tool, since it is obviously not designed to be a polished consumer-friendly product.
Anyway, I jumped through all of these hoops, installed WinDbg, and read my minidump files. Immediately, the tool showed me the cause of the blue screens:
Probably caused by : memory_corruption
Wow. With a few clicks, I was able to determine the cause of a blue screen! It seemed like magic.
So I ran MemTest on my workstation, and sure enough, it instantly showed memory errors. Since I know just enough to be dangerous when it comes to these things, I figured that it was possible that my memory was physically fine, but that there was something else that was causing the issue.
I booted into the BIOS settings and disabled the XMP memory profile, which has the memory operate at a faster speed. Sure enough, once I saved that setting and ran MemTest again, no errors. I tried changing various settings with XMP enabled to see if I could get XMP working, but I couldn't get rid of the errors, so for now I'm running sans-XMP, which is fine for the relatively simple tasks that I perform on my desktop.
Feeling very confident after conquering my first BSOD in a matter of minutes, I then decided to diagnose the BSOD on the client's production Dynamics GP Terminal Server.
I launched WinDbg, set the debug symbol path, and loaded the minidump, and well, unfortunately the results weren't quite as simple as my workstation.
Probably caused by : Ntfs.sys ( Ntfs!NtfsDeleteFile+8d3 )
So what does this mean? My interpretation is that something about a file delete operation caused the server to crash. Later on, there is a reference to iexplore.exe, which is Internet Explorer:
PROCESS_NAME: iexplore.exe
ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.
This is where an expert would be required, since this just doesn't seem to be enough information to determine a specific cause. My only interpretation is that some aspect of Internet Explorer is somehow causing the crash.
So for now, no magic solutions like what I found with my workstation, but we at least have some information that we can use to monitor the server.
Having gone through the process, here's a summary of what I learned:
1. It seems that WinDbg comes in two flavors, listed on this MSDN Dev Center web page.
http://msdn.microsoft.com/en-us/windows/hardware/gg463009.aspx
The newer version (6.2.8102 8/23/2011) is included with the Windows Developer Preview WDK (MSDN subscriber only), which seems to work fine for Windows 7 minidump files (and perhaps Server 2008 R2). But when I tried to use this version on the client's Windows Server 2008 (not R2), it spewed a bunch of complaints about being unable to load ntoskrnl.exe.
So, for Windows 2008 and versions prior to Windows 7, there is a second version (6.12.2.633 2/1/2010), available in the Windows SDK.
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=8279
When you install the Windows SDK, you only need the Debugging Tools, and can uncheck all of the other options.
2. In order to properly read the dump files, you need to first set the symbols path. The Russinovich blog post mentions one (a), but I found a different one on a forum thread that seemed to work better for me (b).
a) srv*c:\symbols*http://msdl.microsoft.com/download/symbols
b) SRV*C:\WebSymb*http://msdl.microsoft.com/download/symbols
Version (a) worked on my Windows 7 machine with the newer WinDbg, but I had to use version (b) with the older WinDbg.
3. To perform the debugging, launch WinDbg, select File -> Symbol File Path and paste in one of the symbol paths from above. Then select File -> Open Crash Dump and select your minidump file.
4. Wait a few seconds for WinDbg to analyze the file and display results. Hopefully you see something like the following, including the helpful "Probably caused by" note:
***********************
* Bugcheck Analysis *
***********************
Use !analyze -v to get detailed debugging information.
BugCheck 24, {1904aa, c941b6a0, c941b39c, 9247f5fc}
Probably caused by : Ntfs.sys ( Ntfs!NtfsDeleteFile+8d3 )
5. You should see a link on the text "!analyze -v". If you click on that link, it will display more information that may help you further diagnose the problem. It's all pretty cryptic looking, but a technical person or developer should be able to pick out a few clues.
6. There are apparently other tools that are much easier to use than WinDbg, but may not be as comprehensive. I quickly tried one called BlueScreenView that is amazingly simple and easy to use. The only downside is that it doesn't appear to offer the "Probably caused by" note provided by WinDbg. Once you get familiar with the typical errors, you may not need that helpful message, but I still need that pointer, so for now I'll stick with WinDbg.
My experience has been that blue screen errors are pretty rare these days, but obviously they do still occur. If you are feeling adventurous, hopefully this information helps you navigate the relatively simple process of doing some initial diagnostics on your own before rebuilding a server or paying for a support case.
Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles. He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
http://www.precipioservices.com