When competency records are reviewed for employees who work across multiple departments, the variation is often striking. One supervisor documented observations in a highly detailed checklist; another listed just the CLIA elements as tasks on a worksheet backed by robust quizzes; a third relied on a combined paper packet they'd been using for the last 10 years. Each reflects a genuine attempt to meet CLIA compliance expectations, and in a program without a standardized structure, each is a reasonable response to an unreasonable burden. What those records reveal, side by side, is not a people problem. It is what the absence of shared infrastructure looks like in practice.
This is what inconsistency looks like in the laboratory: the same task, the same goal, and completely different results due to the absence of standardization. In this interview, Leah Westover draws on firsthand experience to unpack how these gaps form, why they persist, and how they can contribute to inspection findings, reputational risk, and compromised patient safety.
During your time as a Medical Laboratory Scientist, what compliance challenges did you encounter with lab competency documentation — particularly around interpreting CLIA compliance requirements and ensuring that competency elements were evaluated according to regulatory expectations?
When I was working as a Medical Laboratory Scientist, and before we’d adopted MediaLab, the biggest compliance challenges around lab competency documentation weren’t about willingness or effort — they were about interpretation and consistency.
I remember one inspection prep where we pulled competency records for the same employee across multiple departments and realized that no two looked quite the same. One supervisor had documented direct observations in detail. Another relied heavily on quizzes. A third had checkboxes marked complete with minimal narrative. Everyone believed they were meeting CLIA compliance expectations — and technically, each had evaluated competency — but when we laid the records side by side, it was clear we had very different approaches to the process.
When the structure isn’t standardized, competency becomes defensible only as long as the right people are still in the room — and that’s a fragile place to be.
One common issue labs face is maintaining a consistent schedule for competency assessments. Can you share specific challenges you saw related to timing, evidence collection, or assessor consistency in routine competency programs, or something else entirely?
Timing was one of the hardest things to manage consistently. Initial, semiannual, and annual competencies all have specific timing requirements, and without a centralized system with automatic reminders, it was very easy for things to slip.
Laboratories know that competencies need to be done, but maintaining oversight of who, when, and what isn't a simple task when the day-to-day is already so busy. Documentation for some labs lives in multiple places: paper competency packets, shared spreadsheets, digital checklist templates, shared drives, emails, or file cabinets and binders.
Assessors also need to be qualified and complete assessments of their own. I see quite a bit of variation in how labs document qualification evaluations and ongoing assessments for supervisory roles, too.
On top of that, many laboratories are also responsible for POCT assessments, which adds another layer of responsibility for already stretched laboratory leaders.
The challenge isn't just completing competencies; it's proving that they were completed on time, by qualified assessors, and in alignment with regulatory expectations. Every. Single. Time.
Without strong structure and visibility, even well-run laboratories can look a little disorganized in their competency assessment programs.
Laboratory competency isn’t just about technical tasks — problem solving is critical. What were some examples where competency assessments didn’t fully capture analytical judgment or troubleshooting skills, and what impact did that have on quality?
I think there are often gaps here. Many competency assessments focus on whether someone can follow a procedure step-by-step, but they don't always capture how that person responds when something goes wrong.
For example, a technologist might demonstrate perfect technique running an assay, but we didn’t always assess how they interpreted unexpected results, recognized QC trends, or knew when to escalate an issue. Those judgment calls are where quality and patient safety really live.
When problem-solving isn’t intentionally built into competency, you end up validating task completion rather than decision-making. That can create a false sense of security when everything looks compliant on paper, but you haven’t fully assessed readiness for real-world scenarios.
Medical lab scientists must remain current across multiple disciplines and protocols. What challenges did you see in keeping staff up-to-date and ensuring training translated into real-world competence?
Honestly, the hardest part for many laboratories is just sheer volume. Competency in the laboratory isn’t one thing — it’s dozens of assessments, across multiple benches, instruments, methods, and shifts. CLIA requires six specific competency elements (or more in states like New York), and each one needs to be evaluated, documented, and supported with evidence. Multiply that by every test system and every employee, and it adds up very quickly.
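Just as a rough, hypothetical illustration, not numbers from any particular lab: six elements across 20 test systems and 30 employees works out to 6 × 20 × 30 = 3,600 individual evaluations per assessment cycle, and that's before you count POCT or assessor qualifications.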
For supervisors and leads, competency often becomes a second full-time job layered on top of patient testing, staffing challenges, and daily operations. Before there was software for this, you were not just assessing skills; you were also chasing signatures, tracking due dates, gathering evidence manually, and checking folders and files for completeness.
The reality is that most laboratories tend to struggle because the process is genuinely complex and time-consuming, and it’s very easy to focus on completing everything on time rather than stepping back to ask whether the training truly translated into confidence and judgment at the bench.
What made it especially challenging was knowing that even when the work was done, it still had to be defensible. You could spend hours evaluating someone, but if the documentation wasn’t clear, consistent, and tied to the CLIA elements, it could still feel fragile during an inspection.
That weight is something anyone who's worked in a lab understands deeply: the number of competencies, the regulatory expectations, the strict timing, and the constant pressure to prove it all.
From your perspective, where do most labs tend to get “stuck” regarding competency management, and how can they overcome that issue?
The biggest gap I see is that some laboratories are still forced to treat competency “as they always have” because their systems don’t support anything deeper. When everything is manual or fragmented, or you’re using just a 6-point CLIA checklist, or your departments have differing documentation and competency styles, survival mode takes over and progress toward something more sustainable is difficult to prioritize.
Sustainable competency programs come from structure, visibility, and consistency. When competency, training, and documentation are connected and standardized, labs can shift from asking, “Did we check the box?” to “Do we actually trust this process?”
My advice to laboratory leaders is to focus on standardization and repeatability. Build systems that make the right thing easier to do than the wrong thing — especially in your competency program. Encourage technologists to stay aware of their own competency timelines and the evidence they’ll need, rather than placing the entire burden on laboratory leadership. When ownership is shared and expectations are clear, competency management starts to become part of the daily rhythm of the laboratory. That’s when compliance feels more manageable, inspections feel less destabilizing, and competency assessment quality becomes something the whole team participates in — even when timelines are tight and priorities compete.
From Completion to Confidence
What Leah describes is a divide that plays out across competency programs every day: a “completion” approach versus a “confidence” approach to compliance management.
Laboratories operating in completion mode check the box — often in isolation, often in ways that differ from department to department. Documentation lives in different places. Processes are executed differently. When records are viewed together, the inconsistency is visible. Not because teams aren’t working hard, but because the structure that would produce consistent results was never in place.
Confidence is something else entirely: standardized workflows, shared visibility, and a single source of truth that ensures the same task is performed and documented the same way each time — as a matter of how the laboratory operates, not how it prepares for a specific event.
The inconsistency that results from siloed, fragmented systems is a real risk to quality, patient safety, and a laboratory’s reputation. If these challenges sound familiar, MediaLab by Vastian was built specifically for this work. Request a demo to see what a more connected competency program looks like in practice.