Real-world evidence (RWE) is increasingly referenced in regulatory conversations, particularly following the FDA’s 2023 guidance. Roche’s recent FDA clearance for its cobas® SARS-CoV-2 Qualitative test (510(k) K240867) has been characterized as a regulatory breakthrough for RWE. But the public documentation raises a deeper concern: if the FDA diverged from its own real-world data guidance, why wasn’t that made transparent?

The public FDA documentation provides limited detail on how the submission’s real-world data (RWD) aligns with the criteria outlined in the FDA’s 2023 RWE guidance. The agency’s decision cited two datasets: a longitudinal self-testing study (Test Us at Home, or TUAH) and an occupational testing program conducted by the National Football League (NFL), which the FDA characterized as RWD. In this article, we analyze the available evidence supporting the clearance and ask whether the FDA’s own criteria for real-world data were upheld in this decision.

The FDA’s 2023 guidance on RWE promised clarity: RWD should come from routine clinical care, not from tightly controlled studies or employer-driven testing protocols. Yet just months later, the agency granted clearance to Roche’s cobas® SARS-CoV-2 test, calling structured surveillance data from the NFL “real-world.” How can sponsors plan RWE strategies when the FDA’s own definition is being rewritten by precedent instead of public guidance? Why is a structured employer testing protocol (the NFL dataset) being called RWD? Did the FDA quietly change its own definition?

In this article, we examine the evidence cited in the FDA’s clearance (510(k) K240867) and explore how it aligns—or doesn’t—with the agency’s own definition of real-world data.

The TUAH Study: A Structured Longitudinal Design

According to the FDA decision summary, the TUAH (Test Us at Home) study was a longitudinal clinical study conducted between October 2021 and April 2022. It enrolled asymptomatic participants who self-administered anterior nasal swabs every 48 hours over a 15-day period. The study was submitted as part of a standard 510(k) premarket notification, with Roche seeking to expand the intended use of its cobas SARS-CoV-2 Qualitative test to include asymptomatic individuals. This pathway focuses on demonstrating substantial equivalence to a previously cleared device.

A total of 38,192 samples were included in the performance analysis. To evaluate test performance, Roche applied a comparator algorithm: two consecutive molecular test results over a 48-hour window were used to determine the reference result for each sample. This method allowed classification of test outcomes without relying on clinical diagnosis alone.
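The decision summary does not spell out how the two consecutive comparator results were combined, or how discordant pairs were handled. One plausible reading of the rule, purely illustrative and not drawn from the FDA document (the `INDETERMINATE` handling is our assumption):

```python
def reference_result(first: str, second: str) -> str:
    """Illustrative comparator rule: classify a sample from two
    consecutive molecular test results taken within a 48-hour window.

    NOTE: This is a hypothetical reconstruction. The FDA summary does not
    state how discordant pairs were resolved; here they are flagged as
    indeterminate rather than counted toward PPA/NPA.
    """
    if first == second:
        # Concordant pair: both "positive" or both "negative" sets the reference.
        return first
    return "indeterminate"  # assumption: discordant pairs excluded or adjudicated

print(reference_result("positive", "positive"))  # positive
print(reference_result("positive", "negative"))  # indeterminate
```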

The study reported the following results:

  • Positive Percent Agreement (PPA): 94.3% (315/334), 95% CI: 91.4%–96.8%
  • Negative Percent Agreement (NPA): 99.2% (37,586/37,858), 95% CI: 99.2%–99.4%

These metrics demonstrate strong analytical performance under the study’s structured conditions, which included predefined testing intervals, participant instructions, and algorithm-driven outcome determination. 

The NFL COVID-19 Surveillance Dataset

According to the FDA’s decision summary for 510(k) K240867, the clinical performance of the cobas® SARS-CoV-2 Qualitative test in asymptomatic individuals was evaluated using data from the NFL COVID-19 Surveillance Program.

The decision summary states:

“The clinical performance of the cobas SARS-CoV-2 Qualitative with asymptomatic subjects was assessed using real-world data collected from the 2020 National Football League (NFL) COVID-19 Surveillance Program where samples were collected and tested between August 2020–January 2021 as part of an Occupational Testing protocol.”

“Anterior nasal swab samples were prospectively collected on a near-daily basis from NFL players and staff.”

A total of 1,776 samples were included in the analysis. These were evaluated using a comparator algorithm based on molecular test results and/or clinical adjudication from the testing program. The reported performance metrics were:

  • Positive Percent Agreement (PPA): 100.0% (11/11), 95% CI: 74.1%–100%
  • Negative Percent Agreement (NPA): 99.8% (1762/1765), 95% CI: 99.5%–99.9%
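The decision summary does not state which interval method was used, but the NFL bounds are numerically consistent with the Wilson score interval for a binomial proportion. A minimal sketch of that calculation (our reconstruction, not the FDA’s stated method):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.959964) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion.

    Assumes a simple binomial model for percent agreement; the FDA summary
    does not confirm which CI method was applied.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# PPA 11/11 -> lower bound ~0.741 (74.1%), matching the reported 74.1%–100%
# NPA 1762/1765 -> ~0.995 to ~0.999, matching the reported 99.5%–99.9%
```

Reproducing the published bounds this way is a useful sanity check when reading small-denominator results like 11/11, where the point estimate (100%) says far less than the interval does.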

This degree of structure would typically require justification under the FDA’s own guidance for real-world data. The FDA offered no such justification in the public record.

Dissecting the FDA’s Real-World Evidence Guidance

The FDA’s August 2023 guidance on RWE draws a clear distinction between RWD and data collected in the context of a clinical trial. RWD is defined as “data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources,” such as EHRs, claims data, registries, and data from wearables. This type of data originates from care delivered at the discretion of providers, not assigned by research protocols. Crucially, the guidance emphasizes that RWD reflects routine clinical care, and should not be confused with structured data generated through protocol-driven studies or clinical investigations.

Below, we pair direct quotes from the FDA guidance with our interpretation of what each means:

FDA guidance statement: “FDA defines real-world data (RWD) as data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources.”
Our interpretation: The agency clarifies that RWD comes from situations where a provider makes decisions based on individual patient needs in real clinical settings, not from assigning participants to fixed groups or interventions. This draws a key distinction between RWD and clinical trial data.

FDA guidance statement: “Non-interventional studies analyze data reflecting the use of a marketed drug administered in routine medical practice, according to a medical provider’s clinical judgment and based on patient characteristics, rather than assignment of a participant to a study arm according to a research protocol.”
Our interpretation: The FDA draws a bright line between routine clinical practice and protocol-driven research. RWD should come from care delivered at the provider’s discretion, not from interventions or schedules imposed by a study. The agency also expects sponsors to critically assess whether their data source truly qualifies as “real-world,” including whether the dataset reflects unstructured, real-life healthcare delivery and whether it is robust enough to answer the research question.

The classification of a dataset as RWD requires more than a label; its structure and origin must reflect routine clinical care as outlined in the guidance.

These passages emphasize that the FDA has delineated what constitutes RWD. Structured, protocol-driven studies—particularly those involving scheduled procedures or interventions (as is the case in the NFL dataset)—require significant justification to be considered RWD, demonstrating that the structured elements do not compromise the data’s reflection of routine clinical care, including justification of how such data remains unbiased and fit-for-purpose. The distinction between routine care and research protocols is central to determining whether submitted data meets the FDA’s definition of real-world data.

Is The NFL Dataset Considered Real-World Data?

In its 510(k) decision summary, the FDA classified data from the NFL COVID-19 Surveillance Program as real-world data. The agency stated:

“The clinical performance of the cobas SARS-CoV-2 Qualitative with asymptomatic subjects was assessed using real-world data (RWD) from the National Football League (NFL) COVID-19 Surveillance Program…”

However, based on the FDA’s own 2023 guidance, this classification raises questions. The guidance defines RWD as:

“data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources.”

According to the FDA’s own words, real-world data should reflect everyday clinical decisions made by doctors—not testing regimens assigned by an employer or research team. Yet that’s exactly what happened in this case. In a peer-reviewed study analyzing data from the NFL COVID-19 Surveillance Program—likely the same dataset referenced in the FDA’s decision summary—the data were collected under a structured occupational testing protocol, which involved:

  • Daily and later seven-day-per-week testing of players and staff, irrespective of symptoms or clinical suspicion
  • Pre-scheduled nasal swab collection conducted at fixed intervals with no deviation based on patient presentation
  • Uniform testing protocols enforced across all 32 NFL clubs using centralized lab vendors and tightly defined workflows
  • A non-clinical setting, driven by organizational risk mitigation—not by medical providers tailoring care to patients


These elements suggest the NFL dataset is more akin to structured surveillance than the kind of provider-directed, routine care from various sources that the FDA defines as real-world data. If the agency is expanding that definition to include structured surveillance programs, it would be helpful for sponsors to understand how and why. No evidence in the publication suggests that medical professionals independently ordered or modified testing based on symptoms, risk factors, or medical judgment, as would be expected in true real-world healthcare delivery.

Public Positioning vs. Regulatory Pathway

Roche’s description of this clearance as a “breakthrough” and “huge achievement” reflects the FDA’s own language—labeling the underlying datasets as “real-world data.” This investigation focuses not on Roche, but on how the FDA’s recent decision challenges its own guidance—and the implications that has for industry-wide trust.

The FDA’s decision summary for 510(k) K240867 states:

“The clinical performance of the cobas SARS-CoV-2 Qualitative with asymptomatic subjects was assessed using real-world data (RWD) from the National Football League (NFL) COVID-19 Surveillance Program and a prospective clinical study called Test Us at Home (TUAH).”


So why is the FDA calling these datasets RWD? That’s the question. And it’s one the agency hasn’t publicly answered. Both datasets — the NFL dataset and the TUAH study — involved structured procedures that may not clearly align with how the agency defines RWD in its 2023 guidance, which emphasizes that RWD should be drawn from care delivered according to provider discretion, not study protocol or organization-driven instructions.

Thus, the FDA’s own labeling of these datasets as RWD appears misaligned with the agency’s guidance. The FDA decision summary referred to structured study datasets as ‘real-world data’ without explaining how these datasets met the agency’s own RWD criteria that emphasize routine care, provider discretion, and minimal protocol-driven intervention.

The FDA has not, in the public record, provided additional context or clarification as to how these datasets were evaluated under the agency’s RWD criteria in its guidance document on RWE. In a statement to us, an FDA spokesperson said:

The FDA will consider the use of Real-World Evidence (RWE) to support regulatory decision-making for medical devices when it concludes that the RWD used to generate the RWE are of sufficient quality to inform or support a particular regulatory decision.

Roche offered to provide additional background in an off-the-record discussion. The Clinical Trial Vanguard later followed up for on-the-record comment, but none was available at the time of publication.

Implications for Future RWE Submissions

This apparent inconsistency highlights the need for greater clarity in how the FDA interprets and applies the term “real-world data.” According to the FDA’s own 2023 guidance, RWD refers to data collected during routine clinical practice without protocol-driven interventions. However, in this instance, the agency classified what appears to be structured study datasets—collected under defined research protocols—as RWD. The public documentation does not clarify how these datasets align with the FDA’s stated criteria, and this absence of detailed public explanation introduces uncertainty for other sponsors and stakeholders seeking to develop RWE strategies in accordance with agency expectations.

Darshan Kulkarni, Kulkarni Law Firm

Darshan Kulkarni adds that while guidances are non-binding, they still matter.

If guidance reflects the agency’s current thinking, then this decision leaves us wondering — what exactly is the FDA thinking now? This isn’t just about one clearance—it’s about clarity. Sponsors, researchers, and patients rely on the FDA’s definitions to build trust and deliver innovation. If the rules are changing, it would behoove the agency to say so.

This clarity is particularly important for stakeholders developing RWE strategies grounded in real-world, unstructured healthcare environments. To ensure regulatory predictability and public trust, future submissions that rely on RWE should be accompanied by clear, transparent explanations demonstrating how the datasets align with the FDA’s definitions and expectations for RWD.

Former FDA official Jonathan Helfgott offered additional insight on how the agency views guidance documents in regulatory decision-making:

Jonathan Helfgott, Former Associate Director for Risk Science at FDA


Jonathan Helfgott notes that FDA decisions rely on a patchwork of guidances—not just one.

By definition FDA final guidances are never enforceable and simply reflect the Agency’s current thinking on any given topic. Typically, it’s a patchwork of combining the applicability across various FDA guidances, policies, and applicable provisions at the product-level that are utilized to support an FDA decision. It’s almost never just a single standalone guidance document, recognizing some topics (i.e.-RWD/E) may be more critical than other (i.e.-user labeling). FDA evaluates the ‘totality of evidence’ presented by the sponsor when making any pre-market decision, which [is] inclusive of all pre-clinical, clinical, post-market, and virtual data. Focusing on one aspect of the data set included does not always tell the whole story of the entirety of meeting the valid scientific evidence threshold as enumerated under 21 CFR Part 860.

Helfgott’s point reinforces what’s at stake: if FDA guidances aren’t enforceable, the only thing holding the system together is consistency in FDA’s actions. When the agency signals one thing in public guidance and does another in practice, it doesn’t just confuse—it recalibrates the entire playing field. Sponsors don’t need enforcement. They need reliability—so they can make decisions with eyes open, not in the dark.

When Guidance Doesn’t Guide, What Comes Next?

Even with input from legal and regulatory experts, one question remains: what does this mean for the future of FDA guidance and RWE strategy? If the agency is broadening its definition of real-world data to include structured employer-driven testing protocols, it should do so transparently and with public stakeholder input—not silently through selective labeling.

The FDA’s own 2023 guidance on real-world evidence draws a clear line: real-world data should reflect routine clinical care, not protocol-driven testing imposed by employers or research frameworks. Yet in this clearance, the agency labeled tightly structured datasets as RWD—with no public explanation of how they met the criteria.

The question isn’t whether Roche’s data were high quality. It’s whether the FDA is following its own guidances—or quietly rewriting them.

If the agency has shifted its standards for what qualifies as RWD, the public deserves to understand why. Other sponsors, researchers, and health systems are watching—and building regulatory strategies based on the framework the FDA laid out just months ago. When the foundation shifts without notice, it leaves those stakeholders exposed—and the credibility of RWE itself in question.

Without clarity, the regulatory landscape risks becoming unpredictable—where the term ‘real-world evidence’ shifts at will, undermining both innovation and trust. That’s not scientific progress. It’s regulatory improvisation.

The Clinical Trial Vanguard has initiated a Freedom of Information Act (FOIA) request on this case.


Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe holds decades of experience in the clinical trials industry. Moe also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.