Dynamics 365 Testing, Part 1: Key Testing Phases During Your ERP Implementation

Introduction

Testing is possibly one of the most underestimated parts of a Microsoft Dynamics 365 ERP implementation, yet it's a key element that marks the difference between a solution that works and one that does not. Despite this, too many organisations treat it as a tick-box exercise instead of a serious safeguard for their ERP investment.

In this article, we break down the major types of testing you’ll encounter during a Dynamics 365 F&O implementation:

  • Main test phases: functional testing, SIT, E2E, UAT
  • Technical testing: unit testing, performance testing, regression testing
  • Other test types: CRP, data validation, reporting, security roles, smoke testing, and mock cutovers

We focus primarily on ERP implementation testing from a functional point of view (that is, how the application impacts critical business processes), since that is where our expertise lies. For each of these testing phases, we explain:

  • What they are
  • Why they matter
  • How things can go wrong
  • What a worst-case example looks like

After this, we also briefly cover technical testing and other types of tests and validation you should be aware of. We will leave specific testing tools, along with preparation and prerequisites for testing, to future articles.

Our goal here is to walk you through each testing stage so you can understand, prioritise, and execute with clear ideas — before your go-live turns into go-wrong.

Before we begin…

Testing is one challenge of ERP projects, but there are others — so in our free guide, we described 10 common functional issues during a D365 F&O implementation: what they are, why they matter, what happens if they’re neglected, and how to spot red flags before they become critical issues. Get your free copy from the form below.


Main implementation test phases

The robustness of any Microsoft Dynamics ERP implementation is determined by its testing process. And no, we’re not talking about clicking a few buttons to see if they work and calling it a day. These testing phases are the backbone of your assurance layer. Done right, they expose flaws before they become operational disasters. Done wrong, they become the reason your ERP stops working on day one. Below, we break down the main test phases your team must treat as non-negotiable.

Functional testing

A technician inspecting a car engine
Functional testing is a check on individual components.

Functional testing is where your system capabilities get their first serious interrogation. The question is: can the application perform its required tasks, based on the configuration and business logic defined? Answering it is even more important if you have designed extensions and customisations to the standard Microsoft Dynamics 365 F&O features.

So it's not just a matter of working without bugs, but of working as intended. Consequently, functional testing should be aligned with business needs and processes to ensure that the solution meets specific criteria. Can your users post a journal under a given use case? Once done, is the posting correct?

Typically, responsibility here sits with your functional consultants and QA testers, possibly using a mix of manual and automated testing tools. Business analysts and key users should feed into the test cases to ensure they reflect reality, helping to create comprehensive test scenarios for thorough process validation. If your team is smart, they'll also adopt a test automation platform early for automating and managing test cases, laying the foundation for stable tests that can be reused and extended later. After all, manual testers can burn out when volume increases, and manual testing is prone to human error, compromising coverage and quality. But automation is only possible if your processes are clearly mapped and the design is finalised, which isn't always true.

Functional testing is also where bugs get baked in if nobody speaks up. We’ve seen consultants test happy paths to death, yet skip every edge case — only to realise post-go-live that users regularly deviate from standard textbook processes. People don’t use the ERP in a predictable, linear way. Exceptions happen, use cases and processes have variance. The job of functional testing is to predict and anticipate this as much as possible.

What can go wrong

Even seasoned teams fall into predictable traps:

  • Test cases don’t reflect real business use. If your testing scripts are written by someone who’s never processed an invoice and doesn’t understand the solution design, you’re setting yourself up for failure.
  • Tests are performed on incomplete data or incorrect configurations. While full data migration may not be possible when functional testing takes place, testing only on a restricted set of ad-hoc records can render test outcomes meaningless when real-world complexity hits.
  • Functional documentation lacks testing guidance. Many consultants skip the critical step of embedding test scenarios and expected outcomes into their functional design documents, especially for custom features. This leaves testers guessing what to validate and how success is defined, which often results in inconsistent or incomplete testing.

A logistics company built a custom feature in Microsoft Dynamics 365 F&O to handle split shipments for partial deliveries. It looked fine during design, but it was a different story when users began using it with real scenarios after go-live. Orders with multiple warehouses or last-minute changes triggered errors and generated duplicate shipment records. None of these edge cases had been tested, rendering the feature almost unusable during business operations.

Solution Integration Testing (SIT)

A mechanic checking a car's open bonnet
SIT verifies that all components work well together.

Once you’ve confirmed each brick works individually, it’s time to see whether the wall holds together. Solution Integration Testing (SIT) evaluates how well your Dynamics 365 solution components integrate across modules and external systems. Testing integrations and data flows between applications is the only way to ensure seamless operation beyond individual functionalities.

This phase is where your consultants, integration leads, and infrastructure teams all need to talk… and they rarely do unless forced. And yet, SIT is essential. It validates business-critical processes that cross application boundaries, for example between the ERP and a third-party warehouse system. You’ll want real data, realistic scenarios, and systems configured exactly like production. No shortcuts here.

If you’re lucky, your project has an automation strategy in place, and integration test runs are repeatable. Automated testing simplifies the execution of integration tests, making it easier to manage repeated test runs. Test automation also saves time, reduces errors, and ensures consistency across cycles. If not, you’ll rely on fragile manual walkthroughs and hold your breath.

What can go wrong

SIT fails in subtle but spectacular ways:

  • Data and data mappings are incorrect or incomplete. Missing fields, inconsistent formats, and simplistic or made-up records can bring down the whole chain across the process.
  • Systems are tested in isolation, not together. Even with integrations in place, outputs and inputs on the two sides are simulated and never run as full data flows. This is a dangerous shortcut that increases the risk of things breaking when all parts are used together.
  • No one owns the end-to-end picture. Each consultant owns their own area, so problems with interfaces are more likely to be pushed around and fall through the cracks.

A retail company implemented Microsoft Dynamics 365 F&O and integrated it with both a new customer web portal and a third-party warehouse management system. They tested order flows from the portal to the ERP, and separately from the ERP to the 3PL system — but not the full journey. After go-live, orders placed by customers in the web portal failed to reach the warehouse due to an internal interface format error. This caused order fulfilments to stall, forcing manual workarounds for two weeks and a drop in customer satisfaction.

End-To-End (E2E) testing

A mechanic doing a car inspection with checklist
E2E testing is your final inspection from start to finish.

Now we're simulating real life. End-to-end testing is where you validate that full business processes flow seamlessly across multiple users, departments, and systems. Organisations rely on E2E testing to ensure their processes meet operational goals and function as intended across all areas. Things don't just have to work; they have to work in concert from start to finish. This is where you run transactions from origin to completion, verify outputs, check financial postings, and ensure every handoff works. It's testing a sales order from creation, through fulfilment, to invoicing, and finally payment collection.

Business users must be front and centre here. They’re the only ones who truly understand how the process unfolds day to day, with all its glorious messiness. Consultants and test managers should enable, not dominate. The focus should not just be on expected outcomes, but on confidence: thorough E2E testing helps build trust among key users by demonstrating that the system can reliably support business operations. This confidence is key for the next phase, UAT (more on this in a second).

Done well, E2E testing is a critical stress test of your overall solution design. It’s the first time real users from different departments work together in the new system, using shared data and end-to-end workflows. This is where gaps in handoffs, misaligned configurations, and shaky process assumptions come to light. It also shows whether teams can operate in sync or if the design falls apart under cross-functional pressure. Better to catch that now than later.

What can go wrong

E2E testing is where cross-team collaboration is really put to the test, and it will fail if:

  • You test only ideal paths and simple cases. The real world doesn’t follow scripts. This is where variance and exceptions must be tested too. If your process collapses on the first deviation, it was never fit for purpose to begin with.
  • Cross-team coordination fails. Logistics doesn't know what finance is doing, and sales is running its own show. Teams don't work on the same data, so testing remains local.
  • Data doesn’t represent real operational loads. Made-up records won’t cut it anymore. You need real or representative data from start to finish.

In one implementation, E2E testing was executed in silos — sales, warehousing, and finance each tested their steps independently because some areas of the solution were still incomplete, so users worked with placeholder configurations. When those were later finalised, key process flows changed. As a consequence, the tests performed during E2E were no longer valid, and people were thrown into confusion during UAT. Test scripts had to be rewritten on the fly, eating into time meant for actual validation and impacting the project timeline.

User Acceptance Testing (UAT)

A man driving a car on the road
UAT is your road test, done by the driver.

User Acceptance Testing is where your business decides whether to bet its operations on the new system. This is where the entire solution is handed over to business key users, who have to sign it off. Can you run your company with it? UAT validates whether the configured system supports day-to-day work in a way that makes sense to the people doing the work. This testing round is essential for ensuring the quality of the final system, confirming that it meets business requirements and maintains high standards.

If business users can't complete their core tasks without confusion, workarounds, handholding by consultants, or Excel lifeboats, then you're not ready. Period. The mistake most companies make is treating UAT as a formality: failing to cover key cases with real data and with key users taking ownership, and neglecting to rectify and retest any failed test cases. Bug fixing can't be postponed any further.

Your best UAT cycles (yes, plural, as you may need more than one round) are immersive. You give users real data, define business-oriented test cases, and let them do their work. By now they should be comfortable with the new ERP and be aware of what they want to test and how, based on their actual day job.

Tracking UAT outcomes is how you judge your overall readiness. Now more than ever, failed test cases must be logged, analysed, and retested, not ignored. The results reveal whether issues are isolated or systemic, and whether the risk is manageable. This informs your go/no-go call. If users aren't confident, they won't magically be ready once the ERP is live.

What can go wrong

Ignore UAT at your peril. Here’s how it fails:

  • Users are underprepared. They've never seen the system, let alone tested it. Insufficient human resources (a gap automation tools can't fill) can make this worse, as there may not be enough support or training available for users.
  • Scripts are irrelevant. If users can’t relate to the test cases, they’ll skip steps or misinterpret the intent. They should be able to understand why and how they’re testing something.
  • Push to go-live overrides concerns. Feedback is ignored, and issues are labelled as “phase 2 enhancements”. Sign-off is forced, because sticking to the project timeline becomes more important than having a stable solution.

A manufacturing company pushed through UAT even though users weren’t ready. Key users flagged that they hadn’t completed all scenarios and weren’t confident using the new system without support. Despite this, management forced sign-off to avoid delaying the go-live. As a result, they got post-go-live chaos. Things failed so spectacularly that the IT team had to work late nights processing orders manually and firefighting issues that should have been caught weeks earlier. What should have been a transition became a crisis.


Technical testing

Functional and process testing is not all that matters. In fact, while business users and stakeholders obsess over workflows, data, and outcomes, technical testing quietly ensures that your Dynamics 365 solution doesn’t collapse under its own weight.

Unlike other test phases during the ERP implementation, technical testing lives mostly behind the curtain. It’s run by developers, technical architects, and infrastructure engineers, rather than business users or functional consultants. And while it might feel distant from day-to-day operations, ignoring this layer is like building a skyscraper without checking the foundations.

Technical tests validate the system’s stability, performance, and resilience. Effective technical testing starts during software development, relies on automation, and uses specialised testing processes and tools to verify the robustness of your entire Dynamics 365 ERP solution. Let’s see how.

Unit testing

Unit testing checks the smallest building blocks: individual functions or pieces of logic. If your developer tweaks tax calculations or writes a new method, these tests confirm that the code is actually working without technical errors. Done right, they catch bugs early, before they reach functional experts or end users.
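To make the idea concrete, here is a minimal sketch of a unit test for a tax calculation. It's written in Python for readability rather than X++ (the language D365 F&O unit tests are typically written in, using the SysTest framework), and `calculate_tax` is a made-up function, not a real Dynamics API:

```python
import unittest


def calculate_tax(net_amount: float, rate: float) -> float:
    """Hypothetical tax calculation: rate is a percentage, result rounded to 2 dp."""
    if net_amount < 0 or rate < 0:
        raise ValueError("net amount and rate must be non-negative")
    return round(net_amount * rate / 100, 2)


class TaxCalculationTest(unittest.TestCase):
    # Each test pins down one expected behaviour of the smallest unit of logic.
    def test_standard_rate(self):
        self.assertEqual(calculate_tax(100.0, 20.0), 20.0)

    def test_zero_rate(self):
        self.assertEqual(calculate_tax(100.0, 0.0), 0.0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            calculate_tax(-1.0, 20.0)


if __name__ == "__main__":
    unittest.main(exit=False)
```

If the developer later tweaks the calculation, a failing test flags the break before the change ever reaches functional testers or end users.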

Performance testing

This is your stress test. Performance testing simulates realistic transactional loads and concurrent usage to verify the system won’t buckle under pressure. Slowdowns in production after go-live are often the result of performance tests that never happened. Performance testing also verifies the power and resilience of your overall setup, including devices and networks.

Regression testing

Every change, like a new feature introduced by an upgrade or a newly developed customisation, risks breaking what used to work. Regression testing ensures stability by re-running key test cases for existing features against new builds. Automating these is non-negotiable. Without automated testing, teams resort to selective guessing.
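As an illustration of the principle (generic Python, nothing D365-specific: `discount_price` and the case table are invented for the example), a regression suite is essentially a table of known-good results that gets re-run against every new build:

```python
def discount_price(price: float, customer_group: str) -> float:
    """Hypothetical pricing rule under test."""
    rates = {"retail": 0.0, "wholesale": 0.10, "vip": 0.20}
    return round(price * (1 - rates.get(customer_group, 0.0)), 2)


# Known-good cases captured from earlier, approved test runs.
REGRESSION_CASES = [
    ((100.0, "retail"), 100.0),
    ((100.0, "wholesale"), 90.0),
    ((100.0, "vip"), 80.0),
    ((100.0, "unknown"), 100.0),  # unmapped groups get no discount
]


def run_regression():
    """Re-run every known-good case; return the list of failures (empty = stable)."""
    failures = []
    for (price, group), expected in REGRESSION_CASES:
        actual = discount_price(price, group)
        if actual != expected:
            failures.append((price, group, expected, actual))
    return failures


if __name__ == "__main__":
    assert run_regression() == [], run_regression()
```

Because the cases are data, extending coverage after each new feature is cheap, and an automated pipeline can run the whole table on every build.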


Other types of testing

Beyond the main and technical phases, there's a handful of specialised testing and validation types that may not have a dedicated item in your ERP implementation Gantt chart, but they are still important. Some of these are early-stage fit checks. Others are post-configuration sanity tests. Together, they round out a complete Dynamics 365 testing strategy.

Conference Room Pilot (CRP)

A pre-build simulation of core business scenarios, sometimes done even before software selection. It’s less about bugs and more about solution fit. If your solution doesn’t make sense at this stage, you’ve got bigger problems than testing processes.

Data validation

More of a validation than a test in itself, it checks that migrated data is accurate, complete, and usable. If customer records are broken or historical orders are missing key fields, other test phases will be impacted negatively.
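A minimal sketch of what this can look like in practice (plain Python with assumed field names; a real migration would also validate formats, duplicates, and referential integrity):

```python
# Assumed required fields for a migrated customer record; invented for the example.
REQUIRED_FIELDS = ["account", "name", "currency"]


def validate_records(records):
    """Return a list of (record_index, missing_fields) for incomplete records."""
    issues = []
    for i, rec in enumerate(records):
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            issues.append((i, missing))
    return issues


# Sample migrated data: one clean record, two with gaps.
migrated = [
    {"account": "C001", "name": "Contoso", "currency": "EUR"},
    {"account": "C002", "name": "", "currency": "EUR"},  # blank name
    {"account": "C003", "name": "Fabrikam"},             # missing currency
]

issues = validate_records(migrated)
```

Running checks like this before SIT and UAT means later test failures point at the solution, not at broken data.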

Reporting testing

Ensures business reports pull correct and relevant data and deliver the expected information; again, this is more about validation than formal testing. Naturally, it requires valid data and a complete design to generate reports correctly.

Security roles testing

Often done in an iterative way (even after go-live), it confirms that users can only access parts of the system that they’re supposed to. Incomplete or incorrect security roles can negatively impact the effectiveness of later test phases like UAT.

Smoke testing and mock cutover

Smoke testing confirms basic system stability after each environment deployment and setup. It's your first line of defence before deeper tests begin. Mock cutover, meanwhile, simulates the full go-live sequence (data loads, config import, batch jobs setup) to ensure that people won't unravel under pressure during the actual go-live.


Conclusion

Not every test type applies equally to every Dynamics 365 F&O implementation. If your organisation runs a complex third-party tech stack, SIT becomes mission-critical. If you’re rolling out heavy customisations, functional testing will demand more time and rigour. The key is not skipping or compressing the tests that matter, since in most cases each testing phase builds on the last and has additional requirements to be done correctly. Shortcutting test phases is a fast track to instability, poor adoption, and rework after go-live. Treat testing as the safety system it is — and give it the time and focus it deserves.

Thank you for reading! We hope this article gave you some useful knowledge about Dynamics 365 F&O implementations. The ERP evolves fast, but the implementation challenges remain the same. Request a free discovery call to find out how we can help you.

Still unconvinced? Read our articles and gain more insights from over a decade of experience in the industry.