
What Makes a Trial Good for a Patient?

Adam Blum

Sep 22, 2025

Traditional trial matchers often ask clinicians to choose the “best trial” for a patient. In practice, that choice blends subjective impressions of eligibility, patient fit, and potential benefit. It is not transparent, reproducible, or verifiable.

CancerBot takes a different approach. Eligibility is handled objectively through structured trial attributes matched against a patient’s health record. Once that binary filter is applied, we still need a way to compare the eligible trials. Here, CancerBot introduces trial goodness scoring, a transparent framework that rates eligible trials across four patient-relevant dimensions: Burden, Risk, Benefit, and Distance.

These four rubrics are scored on a consistent 0–20 scale. By default, they are weighted equally in a multi-criteria decision analysis (MCDA), but patients (or navigators working with them) can adjust the weights. For example, some patients prioritize benefit above all else, while others care most about finding a nearby trial.

Patient Burden

Let’s look first at patient burden scoring. Our goal is an open, verifiable score that produces reproducible results. That rules out proprietary methods such as Medidata’s Patient Burden Index (which is also not oncology-specific). Instead, we use the Getz et al. method (see the references below) and normalize the resulting scores to a 0–20 scale.
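As a minimal sketch of that normalization step (assuming we have a raw Getz-style participation-burden score plus the minimum and maximum raw scores observed across the trial corpus; the function and variable names are illustrative, not CancerBot’s internal API):

```python
def normalize_burden(raw_score: float, corpus_min: float, corpus_max: float) -> float:
    """Min-max normalize a raw Getz-style participation-burden score onto 0-20.

    corpus_min/corpus_max are assumed to be the lowest and highest raw burden
    scores seen across the trial corpus (an illustrative choice of bounds).
    """
    if corpus_max <= corpus_min:
        raise ValueError("corpus_max must exceed corpus_min")
    clipped = min(max(raw_score, corpus_min), corpus_max)  # guard against out-of-range inputs
    return 20.0 * (clipped - corpus_min) / (corpus_max - corpus_min)
```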

Benefit

We compute the trial’s ESMO-MCBS grade (1–5) and then use the hazard ratio to map it onto the 0–20 scale.
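One possible shape of that mapping is sketched below; the grade-to-score breakpoints and the hazard-ratio adjustment are illustrative assumptions, not the published CancerBot formula.

```python
def benefit_score(esmo_mcbs_grade: int, hazard_ratio: float | None = None) -> float:
    """Map an ESMO-MCBS grade (1-5) onto a 0-20 benefit score.

    The base mapping (grade * 4) and the hazard-ratio nudge are illustrative
    assumptions; swap in the actual CancerBot mapping where it differs.
    """
    if not 1 <= esmo_mcbs_grade <= 5:
        raise ValueError("ESMO-MCBS grade must be between 1 and 5")
    score = esmo_mcbs_grade * 4.0  # grade 1 -> 4, ..., grade 5 -> 20
    if hazard_ratio is not None:
        # Stronger effects (lower hazard ratios) push the score up within its band.
        score += max(-2.0, min(2.0, (0.7 - hazard_ratio) * 5.0))
    return max(0.0, min(20.0, score))
```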

Risk

We need a metric that does not rely on patient data to assess the intrinsic risk of the trial. This is important to allow decision-making without full patient info, as we often have only partially completed patient records. Once a trial is selected we can attempt to use probabilities of adverse events for that patient to refine the risk assessment.

To compute this metric, we sum the 0–4 scores from the five categories below (a scoring sketch follows the rubric):

1) Intrinsic agent/class hazard flags

Use drug class, label/IB warnings, and mandated safety programs.

  • 0 — Oral/non-vesicant agent, no boxed warnings, no REMS, no known life-threatening class toxicities.

  • 1 — Known manageable risks (e.g., mild IRRs), standard caution statements; no special program.

  • 2 — Specific serious hazards called out (e.g., QT prolongation, hepatotoxicity, pneumonitis risks) requiring monitoring or dose-interruption rules but no boxed warning/REMS.

  • 3 — Boxed warning or REMS or class with recognized life-threatening syndromes (e.g., CRS/ICANS for T-cell engagers/CAR-T; anthracycline cardiotoxicity) where the protocol includes detailed mitigation algorithms.

  • 4 — Multiple boxed warnings or REMS and high-hazard modality (e.g., cell therapy with step-up dosing/strict algorithms), or explicit fatal risk language with strict controls.

2) Administration/setting risk

What setting and precautions are mandated?

  • 0 — Self-administered oral; no premedication/observation requirements.

  • 1 — Outpatient IV/SC; optional simple premeds (e.g., acetaminophen/antihistamine).

  • 2 — Outpatient with required premeds and prolonged observation (e.g., ≥2h first dose) or irritant/vesicant requiring central access practice.

  • 3 — Inpatient for first administration or protocol-mandated extended observation (e.g., ≥6h) or telemetry/cardiac monitoring during dosing.

  • 4 — Inpatient administration required each cycle or ICU-capable setting mandated.

3) Safety monitoring & prophylaxis intensity (type, not count)

Score the types of risk controls required by protocol (don’t count visit frequency — that’s burden).
Consider: primary G-CSF, antiviral/antifungal/PCP prophylaxis, anticoagulation, mandated ECG for QT risk, echo/MUGA schedule, ophthalmology exams, therapeutic drug monitoring, pregnancy prevention program, on-site toci/steroid algorithms, telemetry.

  • 0 — None beyond routine labs.

  • 1 — One low-intensity measure (e.g., ECG each cycle or antiviral prophylaxis).

  • 2 — Any two measures or one medium-intensity (e.g., echo schedule or TDM).

  • 3 — Three measures or any high-intensity control (e.g., telemetry or detailed cytokine-management kit).

  • 4 — Four or more distinct measures or any REMS-driven monitoring bundle.

4) Procedural/invasiveness risk (mandatory procedures)

Add and cap at 4.

  • +2 each: Tumor biopsy (baseline or any mandated repeat).

  • +1 each: Bone marrow biopsy, lumbar puncture, central line placement, procedures under general anesthesia.

  • +1: Very dense PK sampling (≥12 venipunctures/month) as specified.

  • +1: Inpatient stay mandated for procedure-related monitoring.

5) Standard-of-care (SOC) disruption risk

How much does the trial force interruption of effective therapy or limit rescue?

  • 0 — Add-on to SOC or no meaningful interruption; rescue permitted.

  • 1 — Short washout (≤4 weeks) or temporary SOC hold; early rescue allowed.

  • 2 — Washout 4–8 weeks or SOC interruption with crossover/rescue available later.

  • 3 — Must stop effective SOC to enroll; placebo periods without guaranteed early rescue.

  • 4 — Requires stopping life-prolonging SOC with no rescue or long washout (>8 weeks), or prohibits essential supportive meds.
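A minimal sketch of how the five category scores combine into the 0–20 risk subscore (the dataclass and field names are illustrative; the procedural category is assumed to arrive pre-capped at 4 as the rubric specifies):

```python
from dataclasses import dataclass

@dataclass
class RiskRubric:
    """The five risk categories above, each graded 0-4 per the rubric."""
    agent_hazard: int      # 1) intrinsic agent/class hazard flags
    administration: int    # 2) administration/setting risk
    monitoring: int        # 3) safety monitoring & prophylaxis intensity
    procedural: int        # 4) procedural/invasiveness risk (already capped at 4)
    soc_disruption: int    # 5) standard-of-care disruption risk

    def total(self) -> int:
        """Sum the five 0-4 categories into the 0-20 risk subscore."""
        parts = (self.agent_hazard, self.administration, self.monitoring,
                 self.procedural, self.soc_disruption)
        if any(not 0 <= p <= 4 for p in parts):
            raise ValueError("each category score must be between 0 and 4")
        return sum(parts)

# e.g., a hypothetical high-hazard cell-therapy protocol:
# RiskRubric(4, 3, 3, 3, 3).total() -> 16
```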

Distance

With the three categories above (burden, risk, and benefit) we have a solid basis for multi-criteria decision analysis. We add a fourth, Distance, even though distance could be considered an aspect of patient burden. This reflects discussions with patient navigators and patients: distance is often the primary criterion, and patients are far more sensitive to it than to other aspects of patient burden. The tiers below map travel distance (or time) to a score; a scoring sketch follows them.

1. Local / Same City (1–5 points)

  • 1 = ≤5 miles / ≤15 min

  • 2 = 6–15 miles / 16–30 min

  • 3 = 16–30 miles / 31–45 min

  • 4 = 31–50 miles / 46–60 min

  • 5 = 51–75 miles / 61–90 min

2. Regional Travel (6–10 points)

  • 6 = 76–100 miles / 1.5–2 hr

  • 7 = 101–150 miles / 2–3 hr

  • 8 = 151–200 miles / 3–4 hr

  • 9 = 201–250 miles / 4–5 hr

  • 10 = 251–300 miles / 5–6 hr

3. Long-Distance Domestic (11–15 points)

  • 11 = 301–400 miles / 6–7 hr

  • 12 = 401–500 miles / 7–8 hr

  • 13 = 501–750 miles / 8–10 hr

  • 14 = 751–1,000 miles / 10–12 hr

  • 15 = 1,001–1,500 miles / 12–15 hr

4. International / Extreme Travel (16–20 points)

  • 16 = 1,501–2,000 miles / 15–20 hr

  • 17 = 2,001–3,000 miles / 20–24 hr (short-haul international flight)

  • 18 = 3,001–5,000 miles / intercontinental flight, ~1 day

  • 19 = 5,001–7,000 miles / intercontinental flight, 1–2 days with stopovers

  • 20 = ≥7,000 miles / >2 days of travel, multiple international connections
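A minimal sketch of the distance tiers as a lookup (tier boundaries follow the table above, using one-way mileage; the helper name is illustrative, and travel time could be substituted where it better reflects access):

```python
def distance_score(one_way_miles: float) -> int:
    """Map one-way travel distance in miles onto the 1-20 distance score,
    following the tier boundaries listed above."""
    tiers = [  # (upper bound in miles, score), checked in order
        (5, 1), (15, 2), (30, 3), (50, 4), (75, 5),                # local / same city
        (100, 6), (150, 7), (200, 8), (250, 9), (300, 10),         # regional travel
        (400, 11), (500, 12), (750, 13), (1000, 14), (1500, 15),   # long-distance domestic
        (2000, 16), (3000, 17), (5000, 18), (7000, 19),            # international / extreme
    ]
    for upper_bound, score in tiers:
        if one_way_miles <= upper_bound:
            return score
    return 20  # >7,000 miles: multi-day travel with multiple connections
```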

Putting It Together

CancerBot calculates a goodness score by combining the four rubric subscores: each subscore is normalized to 0–1 by dividing by 20 (with the "bad" factors first inverted by subtracting them from 20), multiplied by its patient-selected weight, and summed. We then multiply the resulting 0–1 score by 100 so that it sits on a 0–100 scale for patient understandability:

Goodness = 100 × [ w_burden × (20 − Burden)/20 + w_risk × (20 − Risk)/20 + w_benefit × (Benefit/20) + w_distance × (20 − Distance)/20 ]

where the weight coefficients add up to 1.0 (0.25 each by default). Note that since the Burden, Risk, and Distance subscores measure bad things, we use (20 − subscore) for them; Benefit is used directly.

The result is a transparent, patient-centered ranking among objectively eligible trials.

  • For researchers, this provides a reproducible, oncology-specific MCDA framework that can benchmark protocols or compare competing trial designs.

  • For navigators and patients, this supports conversations about trade-offs:

  • “If travel is impossible, we can weight Distance higher.”

  • “If you want maximum chance of benefit, we’ll weight Benefit higher.”

Worked Example

  • Trial A: Local (Distance = 2), oral targeted therapy (Risk = 6), monthly visits (Burden = 4), moderate efficacy signal (Benefit = 8).

Using the above formula with equal weights, the score is (0.20 + 0.175 + 0.225 + 0.10) × 100 = 70 (the burden, risk, distance, and benefit terms, respectively).

  • Trial B: CAR-T at a referral center 250 miles away (Distance = 9), high toxicity (Risk = 16), inpatient stay (Burden = 12), but strong efficacy and durability (Benefit = 18).

For this trial, the formula gives (0.10 + 0.05 + 0.1375 + 0.225) × 100 = 51.25.
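As a minimal sketch, the full calculation can be reproduced in a few lines (the function and argument names are illustrative); with the default equal weights it returns the two scores computed by hand above.

```python
def goodness_score(burden: float, risk: float, benefit: float, distance: float,
                   weights: dict[str, float] | None = None) -> float:
    """Weighted MCDA goodness score on a 0-100 scale.

    Burden, Risk, and Distance are "bad" subscores and are inverted (20 - x);
    Benefit is used directly. Weights must sum to 1.0; equal by default.
    """
    w = weights or {"burden": 0.25, "risk": 0.25, "benefit": 0.25, "distance": 0.25}
    if abs(sum(w.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    total = (w["burden"] * (20 - burden) / 20
             + w["risk"] * (20 - risk) / 20
             + w["benefit"] * benefit / 20
             + w["distance"] * (20 - distance) / 20)
    return 100 * total

print(goodness_score(burden=4, risk=6, benefit=8, distance=2))    # Trial A -> 70.0
print(goodness_score(burden=12, risk=16, benefit=18, distance=9)) # Trial B -> 51.25
```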

These two scores assume equal weights for each factor. But a patient valuing proximity would prefer Trial A, while a patient valuing maximum benefit regardless of distance would likely prefer Trial B. The rubric makes these trade-offs explicit.

Conclusion

By separating eligibility (binary, objective) from goodness (multi-dimensional, weighted by patient values), CancerBot makes trial selection more transparent for researchers and more empowering for patients.

The four rubrics — Burden, Risk, Benefit, Distance — are oncology-specific, reproducible, and open. They allow both rigorous protocol benchmarking and patient-centered decision support, bridging the gap between clinical research design and real-world patient navigation.

References

  1. Cameron D. Assessing Participation Burden in Clinical Trials. Appl Clin Trials. 2020.

  2. Getz KA, et al. Quantifying protocol complexity and its impact on clinical trial performance. Ther Innov Regul Sci. 2015.

  3. Getz K, et al. Assessing Patient Participation Burden Based on Protocol Design Characteristics. Ther Innov Regul Sci. 2020.

  4. Smith Z, et al. Enhancing the Measure of Participation Burden… Ther Innov Regul Sci. 2021.

  5. Medidata/Acorn AI. Patient Burden Index White Paper. 2019.

  6. ICH E8(R1): General Considerations for Clinical Studies. 2019.

  7. Chan A-W, et al. SPIRIT 2013 Statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013.

  8. De Cock E, et al. Time savings with subcutaneous vs intravenous oncology therapies. Adv Ther. 2016.

  9. Peppercorn J, et al. Ethical considerations for research biopsies in oncology clinical trials. JCO. 2017.

  10. OHRP/FDA. Guidance on blood draws in research. 2005.

  11. van Luijn HEM, et al. Assessment of the risk-benefit ratio of phase II cancer clinical trials by IRB members. Ann Oncol. 2002.

  12. VCCC Alliance. Trial Risk Assessment and Management Framework. 2020.

  13. Ulrich CM, et al. Cancer clinical trial participants’ assessment of risk and benefit. J Empir Res Hum Res Ethics. 2016.

  14. FDA. Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics. 2023.

  15. Eisenhauer EA, et al. RECIST 1.1: updated guidelines for response evaluation. Eur J Cancer. 2009.

  16. Bruner DW, et al. Predictors of geographic access to clinical trials using GIS. JCO. 2015.

  17. Lamont EB, et al. Distance traveled to referral center and its impact on survival in cancer clinical trials. JNCI. 2003.

  18. Mseke EP, et al. Travel time vs distance as a measure of healthcare access. J Transp Health. 2024.

  19. Thokala P, Devlin N, Marsh K, et al. Multiple criteria decision analysis for health care decision making — an introduction: report 1 of the ISPOR MCDA Emerging Good Practices Task Force. Value Health. 2016;19(1):1–13.

  20. ESMO-MCBS Guidelines. https://www.esmo.org/guidelines/esmo-mcbs

  21. ASCO Value Framework. https://www.asco.org/news-initiatives/current-initiatives/cancer-care-initiatives/value-cancer-care

About CancerBot

Turning frustration into innovation

After being diagnosed with follicular lymphoma, AI tech entrepreneur Adam Blum assumed he could easily find cutting-edge treatment options. Instead, he faced resistance from doctors and an exhausting search process. Determined to fix this, he built CancerBot—an AI-powered tool that makes clinical trials more accessible, helping patients find potential life-saving treatments faster.

Start your search for clinical trials now

New treatment options could be just a click away. Start a chat with CancerBot today and get matched with clinical trials tailored to you—quickly, easily, and at no cost.
