From Missed Defects to Measurable Quality: How I Fixed the Reporting Gap at Exact Systems
Problem: Inspectors reported defects by email, without a standard format. Key details like order numbers or quantities were frequently missing.
Reports took time to write - and even more time for managers to retype into internal systems.
Solution: I introduced a structured reporting flow into the existing Exact People Go app - replacing scattered emails with a guided, step-by-step process.
The new flow reduced friction, ensured complete data, and gave both inspectors and managers full traceability.
MY ROLE
- Created the new reporting UX from low-fi flows to final design
- Collaborated with devs during implementation to ensure fidelity
- Ran usability tests and iterated based on real-time confusion points
94%
of reports included all required fields. Structured input ensures every report is usable - without chasing missing info.
38s
average time to submit. Streamlined flow fits real work - no delays, no context switching.
Before the app - there was just confusion
Without a real-time system, even important defect reports were easy to miss or delay.
At Exact Systems, workers are deployed across dozens of client sites to perform quality inspections. Until recently, if they found a defect, they had to send an email to their manager or coordinator.
This process was messy and unreliable. Reports were buried in inboxes. There was no standard structure, which meant key details were often missing or inconsistent.
Talking to the people stuck in the chaos
I interviewed inspectors to uncover how a broken system created confusion, delays, and distrust in the defect reporting process.
To understand what really wasn’t working, I conducted 10 in-depth interviews with quality inspectors from multiple regions at Exact Systems. Patterns quickly emerged around frustration, confusion, and distrust in the old reporting process.
1. Managers had to manually rewrite reports into internal systems.
2. Critical data (e.g. inspection time or order number) was often missing.
3. Workers had no standard form - just emails, messages, or paper.
When a defect was spotted, the real problem was just beginning
From remembering details to writing emails from scratch, the friction made errors inevitable.
Finding a defect should trigger action - but for inspectors, it triggered uncertainty. I mapped the user journey to understand what actually happened after an issue was spotted, and where the breakdowns occurred.
Key insight: They had to remember what details to include, switch to email, and manually write it out - every time. No standard, no confirmation, no feedback.
From field problems to product features
I translated user pain points into clear product requirements through structured workshops, aligning the team around what mattered most.
We ran a remote brainstorming session with inspectors, coordinators, and devs. Over 30 ideas landed on the board - from voice notes to barcode scanning. We used the MoSCoW method to cut noise and focus on features that delivered measurable impact.
A key decision: photo uploads vs. speed. “We debated whether photo uploads should be required - users wanted speed, managers wanted proof.”
In the end, we focused on structured reporting instead: choose defect type, quantity, and add comments only if needed.
First usability test: the form looked simple, but felt confusing
The first prototype included all key fields, but with no hierarchy or guidance, users didn’t know where to begin - or if they’d done it right.
We tested the first prototype with 6 inspectors across two regions. The form had all the right fields - but everything was shown at once. Users saw defect type, component number, quantity, comments, and executor in one scrollable view. It felt complete on paper, but not in practice.
What didn't work:
1. No visual hierarchy - everything had the same weight on screen.
2. No progress indicator - users didn’t know how far they were in the process.
3. Users expected a step-by-step flow, not a freeform worksheet.
Reporting flow v1
I broke the flow into steps - and clarity followed
Splitting the form into single-task screens gave users the guidance they needed - and the confidence they lacked in the first version.
The first prototype tried to do it all at once: select defect, enter quantity, write a comment, assign the executor. It looked simple in Figma. But in testing, users froze.
There was no visual hierarchy. No progress bar. Just a dense worksheet. So I did the opposite of what we thought would make it faster: I slowed it down.
Each step became a separate screen, with a clear call to action and feedback. Instead of guessing what to do next, users moved forward with confidence.
The result? Immediate clarity - and a form that finally felt usable in the field.
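The one-task-per-screen flow can be sketched as a simple step machine: each screen validates only its own input before letting the user advance, and progress is reported after every step. This is a hypothetical illustration - the field names and step order are assumed, not taken from the actual app.

```typescript
// Hypothetical sketch of the stepped report flow: one task per screen,
// validation before advancing, progress feedback after each input.
type Step = "defectType" | "quantity" | "comment" | "executor" | "done";

interface Report {
  defectType?: string;
  quantity?: number;
  comment?: string; // optional - added only if needed
  executor?: string;
}

const ORDER: Step[] = ["defectType", "quantity", "comment", "executor", "done"];

// Validate only the current step's input, then move to the next screen.
function advance(step: Step, report: Report): Step {
  switch (step) {
    case "defectType":
      if (!report.defectType) throw new Error("Select a defect type first");
      break;
    case "quantity":
      if (!report.quantity || report.quantity < 1)
        throw new Error("Quantity must be at least 1");
      break;
    case "comment":
      break; // comments are optional, so this step always passes
    case "executor":
      if (!report.executor) throw new Error("Assign an executor");
      break;
    case "done":
      return "done";
  }
  return ORDER[ORDER.indexOf(step) + 1];
}

// Progress feedback shown after each input, e.g. "Step 2 of 4".
function progress(step: Step): string {
  const i = ORDER.indexOf(step);
  return step === "done" ? "Report complete" : `Step ${i + 1} of ${ORDER.length - 1}`;
}
```

The key design choice is that a user can never be "lost": at any moment there is exactly one thing to do, one validation message if it fails, and one progress label confirming how far along they are.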
Reporting flow v2
I tested the new flow - and users just… got it
After splitting the form into steps, users completed reports nearly twice as fast - and called the experience “easy” and “clear.”
In the next round of usability tests, with 6 inspectors from different sites, something unexpected happened: no one asked for help.
Even users who had struggled with the first version completed the report in under a minute. The average completion time dropped from 74 seconds to 38, and all testers described the flow as “easy” or “clear.”
What changed?
1. Clear step-by-step guidance
2. Built-in validation at each step
3. Progress feedback after each input
Key insight: The redesign didn’t just improve usability. It built trust. Users felt like the app finally “spoke their language.”
From prototype to production - and one bug that almost broke trust
A critical issue on older Android devices caused reports to be lost offline - we fixed it fast, but it reminded us how fragile user trust can be.
We rolled out the new flow in phases - but within days, an issue we hadn’t caught in testing surfaced. On older Android devices, reports made in offline mode weren’t being saved, despite a “success” message. The problem? Some phones blocked background saving due to battery settings.

Together with devs, we:
- Switched to SQLite-based local storage
- Added pending sync labels and visible error toasts
- Updated the UI to include a local “draft saved” confirmation in offline mode, so users could feel reassured even without network access

This fix mattered: 1 in 8 inspectors worked in places with weak or no signal. Catching it early kept trust intact.
Users saw a success message - but the report silently disappeared.
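The fix boils down to an offline-first save path: persist locally first, mark the draft as pending, and only flip it to synced once the upload actually succeeds. The sketch below illustrates that idea; the real app used SQLite, so the in-memory `Map` and all names here are stand-ins, not the production code.

```typescript
// Minimal sketch of the offline-first save path. A Map stands in for
// the SQLite store; names are hypothetical.
type SyncStatus = "pendingSync" | "synced";

interface Draft {
  id: number;
  payload: string;
  status: SyncStatus;
}

class DraftStore {
  private drafts = new Map<number, Draft>();
  private nextId = 1;

  // Persist locally FIRST and return the draft so the UI can show
  // "draft saved" - never a bare success message before the write lands.
  saveDraft(payload: string): Draft {
    const draft: Draft = { id: this.nextId++, payload, status: "pendingSync" };
    this.drafts.set(draft.id, draft);
    return draft;
  }

  // When connectivity returns, flush pending drafts; a draft is only
  // marked "synced" if the upload callback reports success.
  syncAll(upload: (d: Draft) => boolean): number {
    let synced = 0;
    for (const draft of this.drafts.values()) {
      if (draft.status === "pendingSync" && upload(draft)) {
        draft.status = "synced";
        synced++;
      }
    }
    return synced;
  }

  // Drives the "pending sync" label in the UI.
  pendingCount(): number {
    return [...this.drafts.values()].filter(d => d.status === "pendingSync").length;
  }
}
```

The invariant that restored trust: the UI never claims success for anything that isn’t already on disk, and anything not yet uploaded is visibly labeled as pending.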
Was it worth it? The numbers said yes - and so did the people
Within six weeks of launch, report completion jumped from 18% to 92%, data quality improved, and trust was restored - with users and managers alike.
To measure success, we tracked usage metrics, report quality, and qualitative feedback from coordinators and managers - all within the first 6 weeks post-release.
Key results:
- 92% of defect reports now submitted via the app (up from 18%)
- Avg. completion time: 38 seconds per report
- Incomplete/missing reports down by 64%
- 5 team leads reported better traceability during client audits
And what about the remaining 8%? They mostly came from two older factory sites still using legacy workflows. We’ve already started onboarding them to the new system.
Roadmap & What I Learned
The work didn’t end with launch. Each release solved real problems - and revealed new ones. Here’s how the product evolved, and what I learned along the way.
1. Even a small UX change - like breaking a form into steps - can dramatically improve accuracy and reduce friction. Structure builds trust.
2. People don’t just want to complete tasks - they want to feel progress. UX that supports autonomy and agency drives adoption.
3. Personalization isn’t a “nice to have” - it’s essential when serving a large, diverse workforce. Relevance reduces noise and increases trust.
4. The biggest UX wins don’t always come from features. Delivering value beyond daily operations can reframe the entire product.
I didn’t just solve a reporting problem - I helped turn a workflow into a product users trust. That’s the kind of impact I want to keep having.
See my other case studies
- A cross-platform music discovery feature for Apple TV+.
- Boosted Apple Music engagement by simplifying how users explore content.