At Veeva’s 2025 R&D & Quality Summit, site leaders described how brittle identity systems, sprawling vendor networks, and poorly designed feasibility processes continue to slow enrollment and frustrate staff. Across small independents, children’s hospitals, and academic research centers, the message was the same: tools meant to simplify are instead introducing new bottlenecks — and every lost day means patients wait longer for access to studies.
SSO: A Fix That Fizzled
Single sign-on (SSO) was pitched as a convenience: one login, one password, seamless access. In practice, panelists indicated it has become a single point of failure.
Alisha Garibaldi, CEO of Skylight Health Research, indicated that one of her sub-investigators had been locked out of a trial since June because of SSO issues. She emphasized that identical systems, such as EDC platforms, often require different access routes depending on the sponsor, multiplying complexity for coordinators managing multiple studies.
From a coordinator’s perspective, Theresa Oswald, Director of Research Operations and Conduct at Stanley Manne Children’s Research Institute, said that staff resorted to handwritten password notebooks to cope with constant resets and variations. She explained that these failures routinely delay randomization and patient enrollment in the clinic — precisely the moments when time matters most.
Instead of reducing friction, SSO has concentrated risk. One login problem can now shut down an entire portfolio of studies at a site.
Vendor Sprawl and Duplicated Work
The conversation quickly moved to the explosion of vendors in trial operations. Rather than making processes more efficient, the sheer number of platforms has left sites without clear maps of who does what.
Garibaldi indicated that her site often receives login emails without any sponsor or CRO context — leaving her team unsure whether a system is relevant or even safe to access. She said a simple one-pager listing each system, its function, and who needs access would prevent days of confusion, but that such documentation is rare.

Christina Brennan, SVP of Clinical Research at Northwell, noted that even after site initiation visits, sites often discover new vendors weeks later, long after patient screening should have begun. That gap, she suggested, undermines trust in the process and erodes startup timelines.
At UCLA, Bishoy Anastasi, Senior Director of Research Finance and Strategy, described how his team addresses duplication by holding quarterly “safe space” meetings with sponsors. He said these sessions identify shared pain points, assign single points of contact, and in some cases link faster activation to performance-based fees. By aligning expectations upfront, both parties reduce the number of redundant requests that drain coordinators’ bandwidth.
For smaller sites, the cost of duplication is even sharper. Garibaldi said that feasibility surveys often force her to re-enter the same address or institutional details dozens of times, sometimes in Word documents without fillable fields. She noted that even platforms designed to centralize site profiles end up creating duplicate, outdated entries across studies.
Across the board, the problem is the same: too many vendors, too little clarity, and too much rework.
Feasibility: Data Without Context
If vendor sprawl creates noise, feasibility surveys amplify it. Panelists agreed that questionnaires rarely capture the real capacity of a site and often force investigators into guesswork.
Anastasi indicated that investigators at UCLA often struggle to remember how they answered the same feasibility questions months earlier, creating inconsistencies that cast doubt on accuracy. He argued that much of this information already exists in EHR systems, but sites are asked to recreate it manually each cycle.
Brennan added that many digital forms cannot be saved or shared across departments, forcing coordinators to complete them in a single sitting without input from pharmacy, radiology, or other groups. This not only wastes time, but also risks errors when staff attempt to answer questions outside their expertise.
Garibaldi highlighted a different issue: as a standalone research site without an EHR, her team is routinely asked hospital-style care questions like “How many patients do you treat with osteoarthritis per week?” She indicated that answering truthfully with a zero disqualifies her site, even though it is fully capable of running the trial through partnerships and referrals.
The result is a process that measures the wrong things. Instead of identifying which sites are best positioned to run a study, feasibility surveys often exclude strong candidates and overestimate less-prepared ones.
Protocol Design and Study Startup: Amendments on Day One
Oswald shifted the discussion to protocol design, emphasizing how often sites are asked to start activation with incomplete documents. She indicated that even small changes — like adding de-identification requirements for imaging — ripple through pharmacy, radiology, budgets, and consent forms, creating weeks of rework.
She pointed out that coordinators and investigators are rarely consulted early enough to flag these issues. By the time they see the protocol, it is essentially fixed, and operational impracticalities only surface after first patient in. Brennan agreed, noting that investigator and coordinator meetings used to be forums for shaping protocols but now occur too late, when amendments are the only option.
The pattern is predictable: sites are pushed to “start fast” with incomplete packets, and the missing pieces come back later as amendments, retraining, and budget revisions that delay activation. What looks like speed at the sponsor level translates to wasted time on the ground.
The Human Stakes
Anastasi closed with a reminder that cut through the operational details. He indicated that every day a study sits “almost open,” a patient with no other options is told to wait. In pediatric oncology, he said, families walk in daily — and if coverage analysis or missing manuals delay activation, the trial does not exist for them.
The point landed: startup isn’t just a regulatory checklist. It is the gateway to access. And when operational inefficiencies persist — whether SSO lockouts, vendor confusion, or unusable feasibility surveys — they don’t just frustrate coordinators. They keep patients from potentially life-saving options.
The Takeaway
The panel didn’t call for more tools. They called for:
- Identity systems that don’t strand investigators.
- Vendor maps that sites can actually use.
- Feasibility processes that reuse known data instead of forcing duplicative inputs.
- Protocols tested for operational feasibility before they are locked.
Each is a small, concrete shift, but together they decide whether trials start smoothly or stall. And for the families waiting in clinics this week, that difference isn’t theoretical. It’s access.
This article is sponsored by Veeva Systems.
Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. He has decades of experience in the clinical trials industry and also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.

