Dynamics 365 Testing, Part 2: Common Traps And Best Practices In ERP Implementations

Introduction

Testing in a Microsoft Dynamics 365 Finance and Operations (D365 F&O) ERP implementation is not just another checkbox in the delivery plan. It is your best chance to expose flaws, assumptions, and blind spots before the system hits production. But even the most structured testing strategy will collapse if you haven’t laid the right groundwork.

Many organisations treat testing as the final hurdle, a thing you do once the “real work” is done. That’s the first mistake. Testing is not a final check. It is a stress test of your decisions, your data, your processes, and your people. If those aren’t in good shape, testing won’t save you. It will simply expose everything you’ve ignored up until that point.

In this article, we focus on four elements of testing best practices during your implementation of Microsoft Dynamics 365 F&O:

  • Prerequisites: finalised solution design, detailed business processes, and valid test data
  • Setup: clear strategy, clean environments, tracking tools, and key owners
  • Common traps: bad test cases, unrealistic timelines, lack of accountability, and dirty data sets
  • Next steps: defect triage, retesting, signing off, and managing timelines

We’ll walk through what needs to be in place before testing starts, how to set up your testing process properly, where we’ve seen projects fall flat on their face, and what key decisions matter after each test round. This is how to lay the right foundations for the benefit of your Dynamics 365 implementation project.

Wait! Before we begin…

Testing is one challenge of ERP projects, but there are others — so in our free guide, we described 10 common functional issues during a D365 F&O implementation: what they are, why they matter, what happens if they’re neglected, and how to spot red flags before they become critical issues. Get your free copy from the form below.


Short summary of key testing phases

Let’s start with a quick recap of the main testing phases in a Dynamics 365 F&O implementation, which we covered extensively in this other article. These phases are sequential for a reason: each one builds on what came before it. Skip or shortcut one, and the next one gets shakier.

  • Functional testing: During the system build, functional testing is useful to verify whether individual features do what they’re supposed to do, based on business requirements and solution design. Functional testing is the responsibility of (you guessed it) functional consultants, and it covers specific scenarios and sub-processes. This testing type is one level above validating whether individual buttons or functions work, which is part of unit testing.
  • Solution integration testing (SIT): At this stage, the focus shifts to how different modules and applications talk to each other. Can the ERP handle a sales order passed from your CRM? Does the stock reservation go all the way to your warehouse management system? This phase exposes issues with data flows, field mappings, interfaces, and process handoffs.
  • End-to-end testing (E2E): Now you’re simulating how the business actually works. Transactions go through departments, systems, and people. From purchase requisitions to product receipt. From starting a production order to dispatching it to the client. This is where your overall assumptions get tested. Can the system operate under real-world conditions, from start to finish?
  • User acceptance testing (UAT): This is the final gate. Business users validate that they have a system suitable to support their daily operations. Here the goal is not just passing test scripts, but also building people’s confidence. Can the company bet on running their operations with the new ERP?

These test phases are not optional. They are critical control points. Get sloppy in functional testing, and features will fail under real conditions. Skip integration testing and E2E, and processes will have loose ends because transactions won’t flow smoothly. Ignore concerns raised by users during UAT, and your go-live will become a firefighting exercise. Testing is cumulative. Weak foundations don’t get stronger with time.


Prerequisites for effective testing

A building needs robust foundations before it’s constructed, and so does your testing process.

Before you even think about test scripts or colourful Azure DevOps dashboards, you need to make sure you have a testable software solution. That might sound obvious, but we’ve seen too many projects rush into test execution without knowing exactly what they were testing. Prerequisites are important: identify testing requirements, define business processes, and finalise your solution design to ensure a smooth and effective testing process. Testing doesn’t start with testers. It starts with the right decisions made early. Let’s see what these are.

Stable and finalised solution design

Test cases are only as good as the software they’re testing. If your solution design, system configuration, and custom features are incomplete, incorrect, or still changing mid-test, then you’re not validating anything. You’re just burning hours. Yes, in real-world projects, some design areas remain fluid while others stabilise. That’s normal, especially when there are plenty of customisations. But you must know which parts are locked and which are not, or your testing will be useless.

To enable test coverage with confidence, your solution design should:

  • Be documented and agreed across all functional areas and design documents
  • Meet specific requirements, which are defined from clear business needs
  • Include customisations, integrations, and cover any edge scenarios

Testing a moving target is a guaranteed way to waste time. Freeze what you need. Document what you can. Flag anything still in progress.

Mapped business processes with owners

A software solution can be final and complete, but it is only meaningful if it reflects how the business actually works. Which is what testing is supposed to prove. That means your business processes must be mapped, agreed upon, and owned by someone. If users can’t explain what they do, or there’s internal disagreement about how something should work, then you’re testing a solution that may technically work while being completely useless for its purpose.

To avoid this, you need:

  • Complete business process mapping, end-to-end in all the core streams
  • Key users who are nominated process owners and have the final say
  • Visibility of exceptions, workarounds, sub-processes, and special cases

Incomplete processes without ownership are dangerous. If nobody is accountable for the big picture, your testing will win the battle and lose the war. Make sure that’s not you.

Fit-for-purpose test data

It is possible to have tests that succeed with mock-up records, but fail with real ones. Testing sales order processing with a made-up, perfect customer (let’s just select the first value in the drop-down) may pass functional testing, but it’s a lot riskier with E2E testing. Validated test data is a huge factor in the thoroughness of your testing process. That includes master data, configuration records, and transactional examples that mirror real-world scenarios.

Good test data is data that:

  • Is your actual migrated data where possible, or a reasonable simulation where it’s not
  • Covers standard cases, edge cases, exceptions, and any specific business conditions
  • Includes correct configuration parameters that match the production environment where possible

So migrate your data early. Refresh your test environments often. Keep a GOLD environment for finalised configuration and master data. Guard it like your testing efforts depend on it, because they do.
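To make “validated test data” concrete, here is a minimal sketch in Python of a pre-test readiness check that flags incomplete customer master data before a test round starts. The field names and the customers.csv export are hypothetical placeholders, not actual D365 F&O column names; adapt them to your own data entities.

```python
import csv

# Fields we assume a sales order test needs on every customer record.
# These names are illustrative, not actual D365 F&O column names.
REQUIRED_FIELDS = ["CustomerAccount", "CurrencyCode", "PaymentTerms", "SalesTaxGroup"]

def check_customer_data(path: str) -> list[str]:
    """Return a list of readiness issues found in a customer master export."""
    issues = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            missing = [field for field in REQUIRED_FIELDS if not row.get(field)]
            if missing:
                account = row.get("CustomerAccount", "<no account>")
                issues.append(f"{account}: missing {', '.join(missing)}")
    return issues

if __name__ == "__main__":
    for issue in check_customer_data("customers.csv"):
        print(issue)
```

A check like this takes minutes to run before each test cycle, and saves hours of chasing “defects” that turn out to be holes in the data.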


Setting up for testing

Preparing for a testing round is akin to setting up your gear before an excursion.

Once the prerequisites are in place, it’s time to think about how you actually run the tests themselves. Having the right setup can be the difference between controlled execution and a chaotic scramble.

Clear, organised test strategy

Testing without a strategy is just clicking around in the dark. You need a clear testing strategy that lays out what you’re testing, why you’re testing it, and how success will be measured.

Your testing strategy should include:

  • Test phases and timelines, including time for defect fixing
  • Test scope aligned to business priorities
  • Test scenarios that reflect real processes
  • Measurable and specific entry and exit criteria for each test phase
  • Where possible, test automation to streamline testing

Don’t just test for the sake of it. The goal is not 100 percent pass rates. It’s confidence that the system behaves correctly across the range of real-world use. And if it doesn’t, it’s about having enough knowledge to manage risk and make an informed decision on whether to move forward.

Environment readiness and ownership

You can’t run consistent testing in an environment that changes every other day. And you can’t troubleshoot issues in an environment that doesn’t reflect the testing conditions.

So make sure your test environments:

  • Are designated clearly (e.g. UAT vs SIT vs GOLD) and have clear owners
  • Are refreshed with the most recent data and config before testing starts
  • Grant and track the right access to the right people (no “test user 1”)

Naturally, to ensure your tests run smoothly and results are reliable, prepare your test environments to mirror the production environment as the project inches towards go-live. Again, use a GOLD environment to safeguard finalised configuration and master data.

Tracking and execution tools

Forget Excel spreadsheets and email threads. Use a proper testing tool like Microsoft Azure DevOps to manage your test cases. Where viable, also use automation tools for the testing process.

With Azure DevOps, you can:

  • Link test cases to processes, user stories, and business requirements
  • Track who executed what, when, and what the result was
  • Record defects directly from failed tests and assign them to owners
  • Develop a framework to execute automated tests later on

Proper testing tools reduce manual overhead and improve traceability. The right organisation and tools save time, reduce the chance of human error, and make each test run easy to track. The result is less noise, more visibility, faster defect triage, and overall better decisions.
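As an illustration of that traceability, here is a minimal sketch in Python that summarises pass rates per test run using the standard Azure DevOps Test Runs REST API. The organisation, project, and PAT values are placeholders you would replace with your own, and the exact fields returned can vary by API version, so treat this as a starting point rather than a finished report.

```python
import requests

# Placeholders: substitute your own organisation, project, and a personal
# access token with Test Management (read) scope.
ORG = "your-organisation"
PROJECT = "your-d365-project"
PAT = "your-personal-access-token"

def summarise_test_runs() -> None:
    """Print a one-line pass-rate summary for each test run in the project."""
    url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/test/runs"
    resp = requests.get(url, params={"api-version": "7.1"}, auth=("", PAT))
    resp.raise_for_status()
    for run in resp.json()["value"]:
        total = run.get("totalTests", 0)
        passed = run.get("passedTests", 0)
        rate = (passed / total * 100) if total else 0.0
        print(f"{run['name']} [{run['state']}]: {passed}/{total} passed ({rate:.0f}%)")

if __name__ == "__main__":
    summarise_test_runs()
```

Even a simple summary like this, run daily during a test phase, gives the project board an evidence-based view of progress instead of anecdotes.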

Who owns testing?

If your test team is just consultants and QA staff, you’re letting students grade their own homework. Business users must play a central role. Because they understand what’s acceptable and what’s not. They know the messy edge cases that never make it into specs. They’re the ones actually using the system after go-live. And they can tell when something works technically but is unusable operationally.

The testing process needs to be jointly owned:

  • Functional consultants help define scenarios and support defect resolution
  • Test leads supervise creation of scripts and coordinate execution with testers
  • Business users validate the processes, confirm outputs, and give go/no-go feedback

Naturally, the engagement of business users varies across test phases (lowest during functional testing, highest during user acceptance testing). But if business users aren’t actively involved in testing, they won’t own the system, and they won’t trust it at go-live. And frankly, you shouldn’t either.


Common testing traps

The best preparation and the fastest car won’t save you if you drive recklessly on dangerous roads.

Now, let’s take a step back. Even with a great strategy and perfect setup for your testing, things can still go wrong. Here are the traps we see most often, and how to avoid them.

Poor test case design

Too many test cases only cover happy paths. These are the simple, ideal, fairy-tale scenarios that rarely reflect how the business actually operates. This can be due to bad test design. But poor testing coverage is often a direct consequence of poor functional solution design. Nobody thought that this use case or exception might happen, so nobody wrote it down, so no test exists to verify if it can be handled.

In other situations, test cases may lack clarity, skip failure conditions, or ignore validations that span roles and systems. Effective testing means pushing the system to its edges, not just confirming that it works under perfect conditions. Invest time in defining all scenarios that reflect real tasks, including edge cases, what-ifs, and exceptions, so that you can have good test design and a complete test plan.

Unrealistic timelines and compressed cycles

Together with user training, testing is always the phase that gets squeezed when the project timeline slips. The upstream delays pile up, but the go-live date holds firm. Suddenly, what was a six-week test window becomes two, and testers are told to “do what you can with the time we have”.

This is not a testing strategy… it’s a panic move. Compressed timelines mean rushed execution, minimal retesting, and defects that never see the light of day until users trip over them in production. These shortcuts often lead to more time-consuming rework and defect resolution later, negating any perceived time savings. Testing needs air to breathe. If you don’t build in contingency for rework and allow time for meaningful validation, you’re not testing — you’re gambling.

No business accountability

If testing is driven entirely by the IT department or by the system integrator, you can expect disengagement from key users and late-stage surprises. The most common post-go-live complaint we hear when the ERP system is good but not good enough is “we didn’t realise it worked like that”. This is a symptom of passive business involvement during testing. If users had tested the solution before signing it off (assuming there was a formal sign-off), they would’ve known how it works, no?

This means that business users must validate the design of scenarios to test, be active in test execution, own the outcome of the test cases, and push for fixing any defects that were spotted. It’s their ERP. They’re the ones who need to sign off with confidence. If they aren’t embedded in the process, the sign-off means nothing, and your risk of operational disruption after go-live skyrockets.

Uncontrolled data

Testing can only be as good as the data behind it. We mentioned this already, but it’s worth stating it again since we saw it happen more than once. If your test environment lacks relevant master data or realistic configuration, you’ll spend more time diagnosing false errors than validating the solution.

Then, after go-live, transactions fail to post, reports show blank fields, and testers throw their hands up because the pass rate was high. Often, these cases aren’t solution design issues in themselves, but rather preparation failures. Make sure the data landscape is planned just as carefully as the test scenarios. The goal is a credible, production-like environment. Without that, your test results will be misleading at best, and dangerously wrong at worst.


What happens next

Completing a testing phase is like a successful rocket launch: the work is not done yet.

Ok, let’s say you did everything right. You managed to execute all your tests, with great involvement from key users and without impacting the timeline. What now? Here is what you should consider next, to make sure you’re not missing any important step.

Defect triage and ownership

Once a test phase concludes, the tough work often begins. Too many project managers treat testing like a destination. In reality, it’s a feedback loop. The outcomes of your tests (the bugs, the gaps, the user feedback) all feed into the next iteration. But this only works if you have a structured response process.

The first step is defect triage. Not all bugs are created equal. Some break the system. Others just annoy users. Others weren’t bugs in the first place, but just missing configuration or human error. Your team needs a framework to categorise issues, assign ownership, and track resolutions. Someone must own defect review meetings, log the outcomes, and ensure fixes are prioritised based on impact, not just volume.
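To show what such a framework can look like in its simplest form, here is an illustrative Python sketch that orders a defect backlog by severity first and business impact second. The severity scale, the fields, and the example defects are hypothetical; adapt the categories to whatever your project has agreed.

```python
from dataclasses import dataclass

# Illustrative severity scale; lower rank means more urgent.
SEVERITY_RANK = {"blocker": 0, "major": 1, "minor": 2, "cosmetic": 3}

@dataclass
class Defect:
    id: int
    title: str
    severity: str            # blocker | major | minor | cosmetic
    affected_processes: int  # how many business processes it blocks

def triage(defects: list[Defect]) -> list[Defect]:
    """Order defects by severity first, then by breadth of business impact."""
    return sorted(defects, key=lambda d: (SEVERITY_RANK[d.severity], -d.affected_processes))

backlog = triage([
    Defect(101, "Sales invoice fails to post", "blocker", 3),
    Defect(102, "Label typo on purchase order form", "cosmetic", 1),
    Defect(103, "Wrong tax group defaulted on intercompany orders", "major", 2),
])
for d in backlog:
    print(f"#{d.id} [{d.severity}] {d.title}")
```

The point is not the code, but the discipline: a shared, explicit ordering rule stops the loudest voice in the defect review meeting from setting the priorities.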

Retesting and regression

A fix isn’t done until it’s verified. And not just by the developer who updated the code. Testers must confirm that the issue is resolved in the test environment, using the same steps that originally triggered the fault. If relevant, key users too must be involved in the retesting process. If it’s a critical fix, regression testing may be needed to ensure the change hasn’t broken something else. This is where proper documentation, traceability, and automation really earn their keep.

Closure and exit criteria

You also need to define what “done” looks like. That means having clear exit criteria for each test phase. This could be a pass rate threshold, a list of known and accepted defects, or a formal go/no-go checklist reviewed by stakeholders. What are the success criteria? How good is good enough? How much risk can you take? What’s the plan either way? Without clarity, you risk advancing based on hope rather than evidence.
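To make exit criteria tangible, here is a minimal Python sketch that turns them into an explicit go/no-go check. The thresholds are example values only; agree your own with stakeholders before the test phase starts.

```python
# Example thresholds only; agree real exit criteria with your stakeholders.
MIN_PASS_RATE = 0.95      # at least 95% of test cases passed
MAX_OPEN_BLOCKERS = 0     # no unresolved blocking defects
MAX_OPEN_MAJORS = 3       # a small, explicitly accepted number of major defects

def phase_exit_ok(passed: int, total: int, open_blockers: int, open_majors: int) -> bool:
    """Evaluate a test phase against pre-agreed exit criteria."""
    pass_rate = passed / total if total else 0.0
    return (pass_rate >= MIN_PASS_RATE
            and open_blockers <= MAX_OPEN_BLOCKERS
            and open_majors <= MAX_OPEN_MAJORS)

# Example: 188 of 195 UAT cases passed, no blockers, two accepted majors.
print("GO" if phase_exit_ok(188, 195, 0, 2) else "NO-GO")
```

Whether you encode the criteria or just write them on a slide, the test is the same: if you can’t state the rule before the phase starts, you’re deciding on hope rather than evidence.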

Managing timelines and decisions

Testing reveals the truth. Sometimes the truth hurts. If test results show more issues than expected or expose unstable processes, you must be ready to respond. That might mean running another round of testing, adjusting the project scope, or delaying go-live. It might also mean escalating for business decisions.

When making go/no-go decisions, focus on the most critical issues to ensure that you can manage the risk (and any consequences), that your priorities are clear, and that resources are used effectively where it matters. What you must not do is pretend everything’s fine just to stay on schedule. That’s how you sleepwalk into go-live, only to wake up abruptly to a house on fire.


Conclusion

Good testing doesn’t just prove your system works. It proves that your design decisions hold up under pressure, that your business users are ready, and that the whole thing fits together in the messy, real world.

If you treat testing as a rubber stamp at the end of each project milestone, you’ll miss the opportunity not just to fix issues, but even to spot them in the first place. Bugs and defects will be built into the fabric of your ERP system, and testing will fail to be a tool for alignment, validation, and confidence.

The foundations matter. So do the traps. Invest in the prep, structure the process, and take the results seriously. It’s the only way to make sure your Dynamics 365 F&O system is delivered successfully. You don’t want to just reach go-live. You want to go live with a stable solution that supports your business without surprises, and continues to run smoothly for years to come.

Thank you for reading! We hope that this article gave you some useful knowledge about Dynamics 365 F&O implementations. The ERP evolves fast, but the implementation challenges remain the same. Request a free discovery call to find out how we can help you.

Still unconvinced? Read our articles and gain more insights from over a decade of experience in the industry.