AI vs. Humans: Inside the JAMA Study That Proves It’s Time to Ditch Manual Prescreening

human shaking hand of robot illustration

If you’ve ever spent an afternoon manually combing through EHR notes to determine whether a patient qualifies for a clinical trial, you probably don’t need another reminder of just how time-consuming and often mind-numbing that process can be. From buried lab results to borderline exclusion criteria hidden deep in a physician’s narrative, manual chart review has long been a necessary evil in the world of clinical research.

But what if it didn’t have to be?

In a groundbreaking new study published in JAMA (the Journal of the American Medical Association), researchers pitted humans against AI in a randomized, blinded trial to see who could screen patients more efficiently and more accurately. RECTIFIER, a tool built natively on Large Language Models (LLMs), didn’t just hold its own; it outperformed trained study staff on every key metric, from speed to accuracy to enrollment rates.

Why does this matter to us? Because RECTIFIER is built on the same LLM-powered technical foundation as Trially AI. And this study? It’s precisely the kind of real-world validation our team has been excited about for years! In fact, the JAMA results closely mirror what our clients are already experiencing with Trially, like: 

  • 2x increase in patient enrollment

  • 73% reduction in screen fails

  • 91% reduction in manual chart review time

Let's examine why these findings matter for sponsors, sites, and the patients awaiting innovative treatments.

The Screening Struggle is Real

Patient recruitment has always been one of the toughest parts of running a clinical trial, and the numbers don’t lie. Nearly 7 out of 10 sites fail to meet their enrollment targets, and 86% of trials face delays, costing sponsors more than $600,000 per day in lost time and opportunity.

One of the biggest bottlenecks being…you guessed it…manual chart reviews.

Despite billions being invested in streamlining clinical research, the fundamental process of matching the right patients to the right trials remains stubbornly inefficient. 

Why patient identification is still one of the biggest bottlenecks

At the heart of the issue is the data itself. While structured EHR fields capture basic info like age, labs, and diagnoses, they only tell part of the story. The real clinical nuance, like the kind that determines trial eligibility, is usually buried in free-text physician notes, scanned PDFs, and unstructured reports. 

Research coordinators often spend an average of 2+ hours a day manually sifting through these records—not just for patient identification but for documenting adverse events, verifying eligibility, tracking follow-ups, and so much more. Simply put, when it comes to matching patients to trials, the manual review of unstructured data remains one of the most burdensome and error-prone parts of the process. Critical details are easy to miss, eligibility is often unclear, and the potential for screening failures or delays in enrollment remains high.

What’s more, the increasing specificity of trial protocols only compounds the challenge. Many studies now include 15 or more inclusion and exclusion criteria, each one a potential point of failure if overlooked. At the same time, the number of trial endpoints has surged, adding further complexity. Between 2004 and 2012, the average number of endpoints per protocol nearly doubled from 8 to 14. By 2021, Phase III trials averaged 25.8 endpoints (a 37% increase since 2015)!

This growing complexity places a heavy burden on research teams, who are having to manually sift through more data than ever to find eligible patients. Multiply that across thousands of records, and it’s no wonder sites struggle to meet enrollment goals, sponsors face costly delays, and patients wait longer for access to potentially life-changing treatments.

Inside the Study: How JAMA Put AI to the Test

To evaluate whether AI could actually outperform humans in clinical trial pre-screening, researchers at Mass General Brigham launched a randomized, blinded clinical trial, “Manual vs AI-Assisted Clinical Trial Screening Using Large Language Models,” nested within an ongoing heart failure study.

Their tool of choice? RECTIFIER. 

RECTIFIER stands for Retrieval-Augmented Generation Enabled Clinical Trial Infrastructure for Inclusion Exclusion Review. While the name’s certainly a mouthful, the idea is simple: Use AI to rapidly and accurately determine whether patients meet trial criteria by reading free-text clinical notes the same way a human would.
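To make that idea concrete, here is a minimal, runnable sketch of criterion-by-criterion screening over a free-text note. Everything here is illustrative: the criteria, the sample note, and especially `ask_llm`, which stands in for a real LLM API call and fakes a deterministic answer with keyword matching so the sketch runs offline. It is not the actual RECTIFIER prompt design or model.

```python
# Illustrative sketch of LLM-based eligibility screening in the spirit of
# RECTIFIER. All names and criteria below are hypothetical examples.

CRITERIA = [
    "Documented diagnosis of symptomatic heart failure",
    "No hospitalization for heart failure in the past 30 days",
]

def ask_llm(question: str, note: str) -> bool:
    """Stand-in for a real LLM call. A production system would send the
    criterion plus the note (or retrieved snippets of it) to a model;
    here we fake a deterministic answer via keyword matching so the
    sketch is runnable offline."""
    q, n = question.lower(), note.lower()
    if "diagnosis" in q:
        return "heart failure" in n
    if "hospitalization" in q:
        return "hospitalized" not in n
    return False

def screen_patient(note: str, criteria=CRITERIA) -> dict:
    """Ask one question per criterion against the free-text note and
    mark the patient eligible only if every criterion passes."""
    answers = {c: ask_llm(c, note) for c in criteria}
    return {"answers": answers, "eligible": all(answers.values())}

note = "72M with chronic heart failure (NYHA II), stable, managed as outpatient."
print(screen_patient(note)["eligible"])  # → True (this toy note passes both stub checks)
```

The key design idea this mirrors is decomposition: rather than asking one model call "is this patient eligible?", each criterion becomes its own yes/no question, which makes failures auditable per criterion.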

From May to September 2024, the team ran structured EHR queries to identify over 193,000 patients who met basic inclusion criteria. After filtering out those who didn’t qualify, 4,476 patients were randomized into two groups:

  • One screened manually by trained study staff

  • The other screened using LLM-powered review via the RECTIFIER tool

Both groups were given the same eligibility criteria. Both had the same pool of patients. The only difference? One used traditional manual chart review. The other used AI.
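The first stage above, running structured EHR queries to whittle 193,000+ patients down to a candidate pool, can be sketched as a simple filter over coded fields. All field names, codes, and thresholds below are hypothetical illustrations, not the actual Mass General Brigham schema or the trial's criteria.

```python
# Illustrative sketch of the structured pre-filter stage: narrow a large
# patient table to candidates using basic coded fields, leaving anything
# subtler to note-level review. Field names and thresholds are hypothetical.

patients = [
    {"id": 1, "age": 71, "dx_codes": ["I50.9"], "ef_percent": 38},
    {"id": 2, "age": 45, "dx_codes": ["E11.9"], "ef_percent": 60},
    {"id": 3, "age": 66, "dx_codes": ["I50.9"], "ef_percent": 55},
]

def prefilter(rows, min_age=18, dx="I50.9", max_ef=50):
    """Keep patients with a heart-failure diagnosis code and a reduced
    ejection fraction; nuanced criteria are deferred to chart review."""
    return [r for r in rows
            if r["age"] >= min_age
            and dx in r["dx_codes"]
            and r["ef_percent"] <= max_ef]

print([r["id"] for r in prefilter(patients)])  # → [1]
```

The point of this stage is cheap recall: structured fields can rule patients in or out in bulk, so the expensive review (human or LLM) only runs on the survivors.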

The goal was to compare not only how fast each method could determine eligibility but also how many patients actually went on to enroll in the trial. And that’s where things got interesting. 

The Outcome? AI Delivered Measurable Gains

The results are clear, and for anyone still skeptical about AI in clinical research, they’re hard to ignore. 

Patients screened with the AI-assisted method reached eligibility determination 78% faster than those screened manually. In practical terms, that meant moving from a slow crawl to a confident sprint when it came to identifying qualified participants.

But speed wasn’t the only win. The eligibility rate in the AI group was 20.4% (458 out of 2,242 patients), compared to just 12.7% (284 out of 2,234 patients) in the manual group. That’s a 60% improvement in yield, achieved without any change in inclusion or exclusion criteria. 

Most notably, the AI-assisted method led to nearly double the number of enrolled patients:

  • 35 enrollments in the AI group

  • 19 enrollments in the manual group

What’s even more exciting? Trially outperforms RECTIFIER in real-world settings, delivering a full 2x increase in enrollment for its users. 

5 Reasons This Study Validates Trially’s Approach

The JAMA study on RECTIFIER provides compelling evidence for the effectiveness of AI-assisted prescreening, and it's particularly exciting for us because Trially is built on the same technological foundation. Here's why these results further validate our approach:

  1. Speed: Achieve your enrollment goals

Just like RECTIFIER, our platform parses complex inclusion and exclusion criteria in under 5 minutes, automatically identifying eligible patients from both structured and unstructured EHR data. That means faster eligibility determinations, faster site activation, and ultimately, faster access to treatment.

But we’re not just fast…we’re 4x faster than other solutions, thanks to our seamless API integrations. We've helped sites achieve a 91% reduction in chart review time and 10x more high-quality candidates by rapidly parsing trial protocols and mapping them to real-world patient data in real time.

  2. Accuracy: Say goodbye to screening fails

Missing a critical detail in a patient’s chart can mean wasted outreach, costly delays, and failed screens. With Trially, those misses are minimized. Our LLM-powered platform reduces screen failure rates by up to 73%, from 52% down to just 14%, by analyzing entire patient records, not just what fits into structured fields.

When it comes to determining patient eligibility, AI-assisted methods significantly outperform humans:

  • 20.4% eligibility rate in the AI group

  • 12.7% eligibility rate in the manual group

The result? Higher-quality matches that multiply enrollment outcomes, even for the most complex trials.

  3. Scalability: Multi-protocol matching across sites

While RECTIFIER showed impressive results in a single heart failure trial, Trially is already going bigger. Our tech is built to support multiple sites, protocols, and therapeutic areas with equal precision.

Whether you’re running one trial or twenty, Trially adapts in real time, assessing protocol feasibility, aligning trial fit to your site population, and helping you identify new BD opportunities based on your patient data. This scalability is critical for sponsors, CROs, and site networks managing multiple studies across many sites. 

  4. Efficiency: Centralize recruitment and slash chart review time

Research coordinators are drowning in documentation. Trially cuts through the noise by automating chart review, reducing manual review time by up to 91%, from an average of 36 hours per month to just 2.5 hours.

That means more time spent enrolling patients and less time spent buried in PDFs. It also means lower burnout, smoother workflows, and better performance across your entire site team.

  5. Patient Enrollment: More qualified patients faster

Perhaps the most important validation from the JAMA study is the 84% increase in actual patient enrollment (35 vs 19 patients). With Trially, eligible patients don’t sit in limbo. Our system can pre-screen patients before their next visit, enabling faster outreach and streamlined enrollment.

By surfacing qualified candidates in real time, our clients are doubling their monthly enrollment rates, even for complex or "white whale" trials with stringent criteria. For patients awaiting breakthrough therapies, this acceleration can make a life-changing difference.

The RECTIFIER study provides scientific validation for what we've already seen in practice. AI-assisted screening isn’t just faster and more accurate. It’s smarter, and it's a more scalable way to run trials. So, what should that mean for research teams, sponsors, and CROs looking to stay ahead?

Let’s break it down.

What Sponsors, Sites, and CROs Should Take Away From This

The results of the JAMA study provide a clear indication that it’s time to rethink how clinical trials are designed, staffed, and executed. Whether you’re a sponsor managing a global portfolio or a research coordinator screening patients on-site, AI-assisted prescreening is a shift worth paying attention to. 

For sponsors and CROs

AI tools like Trially make it possible to standardize recruitment operations across all sites, no matter the trial phase or protocol complexity. Instead of relying on inconsistent manual review processes, you can now:

  • Multiply enrollment rates across multiple trials and sites

  • Standardize screening processes to reduce variability and site-to-site inconsistency

  • Assess protocol feasibility against real-time EHR data to select the right sites from the start

  • Shorten enrollment timelines and reduce screen failures, cutting costly delays

  • Maximize ROI by hitting recruitment milestones faster and more predictably

For research sites and coordinators

Manual chart review eats up time and staff energy. With Trially, sites can:

  • Automate chart review, saving up to 91% of coordinator hours

  • Reduce screen failure rates by up to 73% with more precise matching

  • Increase monthly enrollment rates, even for complex or “white whale” trials

  • Respond to feasibility surveys faster with data-driven confidence

  • Boost site performance metrics and improve your competitive edge for future studies

For hospitals and academic centers

For integrated healthcare systems that support research, the benefits extend beyond individual trials. With AI-assisted screening, your teams can:

  • Harness 100% of your EHR data, including free text physician notes, in real time

  • Quickly identify eligible patients for both sponsored and investigator-initiated trials

  • Support protocol amendments without restarting screening from scratch

  • Accelerate research timelines while contributing to high-impact, real-world evidence generation

No matter your role in the clinical trial ecosystem, AI-assisted patient screening isn't just a nice-to-have tool—it's becoming an essential component of successful clinical research operations today. As trials become increasingly complex and targeted, the gap between AI-powered screening and manual methods will only widen further. 

Final Take: Clinical Research Has Officially Entered Its AI Era

As the evidence throughout this article shows, the JAMA study featuring RECTIFIER represents more than just another incremental improvement in clinical trial operations. It marks a definitive turning point for the clinical trial landscape. With robust scientific evidence now confirming what early adopters have already experienced, we can confidently say that clinical research has officially entered its ‘AI era.’

Whether you’re navigating complex protocols, working with lean site teams, or managing multi-site portfolios, it's evident that traditional screening methods can’t keep up with today’s demands. 

So, if you’re still relying on spreadsheets, PDFs, and hours of manual review, it might be time to ask yourself: What’s that delay really costing you? And what is your time really worth? 

The question is no longer whether AI will transform clinical trial recruitment but how quickly organizations will adapt to this new reality.

Schedule a demo to learn more.

Frequently Asked Questions

How is Trially different from RECTIFIER?

What kind of AI technology does Trially use?

Does Trially integrate with our existing EHR or CRM?

What types of trials can Trially support?

What kind of results can we expect using Trially?

Can Trially help with feasibility and site selection?

Is Trially compliant with healthcare security and privacy regulations?


©

All rights reserved.

All information presented is for illustrative purposes only and does not represent actual data. Trially's product is fully compliant with HIPAA, SOC 2, FDA 21 CFR Part 11 and ISO 27001 regulations, ensuring the highest level of data security, safety and privacy.
