Could you start by outlining for our readers your career background and what core problem you’re solving at 4G Clinical?
Sure, I studied biology in college and then, after school, went into the military, where I became an Army Ranger. While I was serving, I got sick and was diagnosed with multiple sclerosis at 23. That ended my military career.
I came back to the US and worked in management consulting at EDS, Arthur Andersen, and Oracle. I left Oracle in 2002 and started a management consulting firm on the West Coast serving multiple industries and eventually sold that business.
In 2015, I started 4G Clinical with a good friend of mine, Ed Tourtellotte. Ed is a pretty well-known technologist in this space. He built the first parameter-based Interactive Response Technology (IRT) system in the world, called Impala, for Pfizer in 2000. He later built another system called Trident, which he sold to BioClinica.
We’d stayed in touch over the years, and in the summer of 2015, we had both exited our businesses. We had some capital, some credibility, and Ed had a very good idea about how to innovate in this space; specifically, about using natural language processing to remove a pretty hefty step in the study launch process, when it comes to Randomization and Trial Supply Management (RTSM), or IRT.
But more importantly, we were motivated by our own experiences with disease. I’ve been living with MS for some time, and Ed lost his wife to triple negative breast cancer, with four kids at home. Neither of us was going to go back and get a PhD in biochemistry and discover a molecule, but we wanted to do something that could help.
That’s what we’re doing. From my point of view, technology matters, and it’s helped us take a lot of share in this space. But the entire business is really focused on purpose. Everyone’s touched by disease. A couple of months ago, we launched our 1000th clinical trial. The impact is incredible. That alignment with purpose, and being able to see the impact you’re having, is super motivating.
Diving into some of the technical details, could you tell us about RTSM and 4C Supply, and what they offer the industry?
Sure, the basic thing to understand is that an IRT or RTSM system is something you cannot run a phase two or phase three trial without. At the end of the day, our core responsibility is to randomize patients and move drug around in the clinical supply chain, getting drug to sites at the right time.
We’re responsible for randomizing patients, stocking sites, instructing clinicians on dosing, and securing the blind. Whenever anybody starts at 4G, they get a computer, some swag and all that stuff. But the most important thing is a T-shirt. On the back it says, “We will never mis-randomize, stock out, miss dose, or compromise the blind.” All four of those put trial results at risk, and two of them can hurt people.
That’s the baseline of what the product does. Obviously, there are other vendors who do this, as well. Where we’ve differentiated ourselves is that everything is built on a modern stack; it’s all AWS cloud-deployed. The most important innovation has been using natural language processing to read a specification and automate system configuration.
Historically, a vendor would get a clinical protocol; read it; get additional information from the sponsor; build a system specification; send that to the client for approval; and then go and build or configure the system.
What we do instead is take the protocol and supply information, build the specification in a very specific way, and then use a natural language interpreter to automate system configuration, removing manual build steps while preserving controlled review, validation, and approval.
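As an illustration of the general idea only, a specification written in a constrained, structured style can be parsed directly into machine-readable configuration. The mini-grammar and field names below are hypothetical, not 4G's actual format or interpreter:

```python
# Illustrative sketch of spec-driven configuration: parse a structured
# "key: value" specification into a config dict. The grammar and field
# names here are invented for illustration, not 4G Clinical's format.

def parse_spec(spec_text: str) -> dict:
    """Parse 'key: value' lines into a configuration dict."""
    config = {}
    for line in spec_text.strip().splitlines():
        key, _, value = line.partition(":")  # split at the first colon only
        key = key.strip().lower().replace(" ", "_")
        value = value.strip()
        # Comma-separated values become lists; everything else stays a string
        if "," in value:
            config[key] = [v.strip() for v in value.split(",")]
        else:
            config[key] = value
    return config

spec = """
Treatment arms: Active, Placebo
Allocation ratio: 1:1
Stratification factors: site, age group
"""

print(parse_spec(spec))
```

The point of constraining the spec format is exactly what the interview describes: once the specification is written "in a very specific way," configuration can be generated mechanically while the signed-off document remains the controlled source of truth.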
When we first went to market, we pushed for speed, because no one is faster at this than we are. But speed isn’t always that important. It matters if you don’t plan well or if you’re making changes right up against FPI (first patient in), but not always.
What really matters is that, because we can build so quickly, we can show the system to the sponsor before they approve the requirements. That creates an iterative build process where they can see everything, concretely, early on. That’s been the thing that’s allowed us to take a lot of share in this market, and, hopefully, have a lot of impact.
How have your technological capabilities been advancing and scaling over the past year, with the development of AI?
We have some AI in the core product that’s being released soon. The way I’ve looked at AI is probably how most people have. First, you figure out how to apply it to drive internal, operational efficiency. Second, you use it to build things you can give to customers, like training materials, or that kind of thing. And third, you embed it in the core product.
In a space like this, with very important regulation and with our system being a critical business system, the industry is very risk averse. So, aggressively adopting AI on core transactions isn’t going to happen immediately.
What I say is that we have been, and are being, moderately aggressive about AI, with a very heavy focus on applied AI. That doesn’t mean AI for AI’s sake but applying it to real business problems. We’re definitely embracing the technology, internally, and trying to figure out how best to apply it in the core product.
What are some of the main hurdles that you have faced, with regard to validation and regulatory processes, when you’re dealing with Natural Language Processing (NLP)-based software that’s working on clinical trials?
For the NLP-based piece, there haven’t been meaningful regulatory hurdles, because it’s really just changing an existing process by removing the configuration or development step. Everything is still controlled by a specification that the customer signs off on, and they’re still testing all of it.
Where there is regulatory or policy tension is around using public large language models with sensitive patient data; that’s not something that’s well received by anybody. Patient data remains protected within validated, controlled environments, and we do not use public large language models on sensitive clinical or patient information. There’s still mistrust of guidance that’s been generated by AI, but we’re working through it, and there’s still a lot of human checking before anything actually gets done.
When you’re trying to think about scaling and working with your clients, how are you balancing organic growth inside the company versus strategically partnering externally?
We’ve maintained our focus, which isn’t very common. The business has been around for 10 years, and we’ve done what we said we were going to do the whole time, which was to build this specific capability, along with 4C, which is a forecasting tool.
Rather than building adjacencies in the eClinical space, we partner on every study. We integrate with multiple parties. We integrate every study with an electronic data capture (EDC), and with a shipping partner. In most cases, especially with larger pharma, we integrate with a lot of their internal systems.
Not to be too flippant about it, it really is a best-of-breed kind of play, where we integrate with other strong players.
Where does the data for 4C come from, and do you handle fulfillment?
4C, itself, is a planning tool. The data comes from the planners: they look at the protocol and the supply network and set it up.
The way 4C works is that it takes all of those inputs and runs them through a very sophisticated algorithm to produce an optimized supply plan. Our core developers are in Belgium, and they’re all computer science and applied math people.
If you think about it, trial design complexity drives supply complexity. If you have multiple titration schemes and different paths patients can go down, that creates a huge amount of complexity. Some of the visualizations really show how complex this gets.
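As a rough sketch of why those paths multiply (the decision points and counts below are made up for illustration), the number of distinct routes a patient can take grows as the product of the choices at each branch:

```python
from math import prod

# Hypothetical decision points in a trial design; each value is the
# number of options a patient could take at that branch.
decision_points = {
    "starting_dose": 3,       # e.g. low / medium / high
    "titration_response": 2,  # e.g. up-titrate / hold
    "maintenance_arm": 2,     # e.g. continue / switch
}

# Distinct patient paths = product of the branch counts
paths = prod(decision_points.values())
print(paths)  # 12 distinct paths from just three small branches
```

Even a handful of titration branches multiplies quickly, which is why supply planning for such designs needs algorithmic forecasting rather than spreadsheets.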
What’s your approach to pricing your software?
In general, we aim to align pricing with the value delivered, though that’s not always possible. At the study level, pricing is really a function of the design complexity of the study and the support burden once the study is live.
Design features drive pricing, and then factors that affect the support burden, like the number of patients, number of sites, and number of dosing visits, feed into how we price.
At the enterprise level, there are different pricing models, typically based on volume and, sometimes, on therapeutic area.
When you’re talking about pricing based on value, if you had to highlight to potential clients some of the main ROI and metrics that they should be watching, what would be the most relevant?
It’s funny, value shifts. When we first started, a lot of the value was around startup time and speed. Now, what we see more, from a value perspective, is supply chain efficiency, such as waste reduction. That can be difficult to measure, but it’s one clear area of value.
The other area of value, which we haven’t really talked much about, is flexibility. What we’ve seen, especially over the last 12 to 18 months, is a dramatic increase in protocol amendments mid-study. So, there, pricing for value means if a sponsor wants to shift significantly and do it quickly, we’re going to dedicate the resources and make sure the product is flexible enough to support that. Those are definitely areas of value.
Could you give us an idea of the revenue split between RTSM and 4C?
4C is still very small; today, the vast majority of revenue comes from RTSM, with 4C representing a smaller but growing portion of the business. If you think about the total addressable market for forecasting systems, that’s also relatively small, maybe $100 to $120 million.
It’s a decision support system. It’s not a mandatory transactional system.
What stops the commoditization of the randomization algorithm itself?
The randomization part is relatively commoditized. We have a sub-product called UltimaRand, which is a randomization and kit list generator. There are different randomization approaches, and whatever the sponsor or biostatistician selects, our tool will generate that list.
So, I don’t think of that portion of RTSM as highly differentiated, because the responsibility for making sure it’s statistically viable really sits with biostats.
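For readers unfamiliar with randomization list generation, here is a minimal, generic sketch of permuted-block randomization in Python. This is a textbook technique chosen for illustration, not a description of UltimaRand's implementation; the biostatistician would select the actual method and parameters:

```python
import random

def permuted_block_list(arms, ratio, n_blocks, seed=None):
    """Generate a randomization list using permuted blocks.

    Each block contains the arms repeated per the allocation ratio and is
    shuffled independently, so allocation stays balanced at every block
    boundary while remaining unpredictable within a block.
    """
    rng = random.Random(seed)  # seeded RNG for a reproducible list
    block = [arm for arm, count in zip(arms, ratio) for _ in range(count)]
    schedule = []
    for _ in range(n_blocks):
        shuffled = block[:]
        rng.shuffle(shuffled)
        schedule.extend(shuffled)
    return schedule

# 2:1 active:placebo, 4 blocks of 3 -> 12 assignments, always 8 active / 4 placebo
schedule = permuted_block_list(["Active", "Placebo"], [2, 1], n_blocks=4, seed=42)
print(schedule.count("Active"), schedule.count("Placebo"))  # 8 4
```

The logic itself is simple, which is the point being made: the differentiation in RTSM lies in everything wrapped around the list, not in generating it.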
RTSM, as a whole, is different. In the eClinical space, there are certain things you simply must have. You need an RTSM. You need an EDC. You need a way to submit your results, an Electronic Trial Master File (ETMF). You usually have a Clinical Trial Management System (CTMS) upfront, as well.
RTSM is the most critical business system in that chain because of the level of transactional control and data integrity required throughout execution.
In terms of RTSM supporting sponsors, are there any therapeutic areas or types of trial design that you think will have the greatest opportunities in the coming years?
It’s interesting. We cut our teeth in early-stage oncology. That’s where you go when you’re a startup in this space and you have a critical business system. You don’t go to large pharma right away, as they’ll laugh you out of the door.
So, we worked on some of the most complex studies you can imagine in early-stage oncology. That’s perceived well because you’re dealing with very complex phase one designs and, yes, you’re compensated for that. But when you get to phase three, at least for RTSM, the design is locked, so the build itself is easier.
The issue is that the risk is much higher. If you screw something up in a phase three study, the implications are dire. Phase one isn’t great either, but it’s not anywhere near the same level.
So, we’ve focused on building all the capabilities needed to support any kind of trial from a therapeutic perspective, and then layering in enterprise capabilities that can support large portfolios of later-stage studies.
If you ask me which studies are most difficult, metabolic studies typically aren’t that hard. Oncology, yes, as there’s a tremendous amount of creativity in the study designs and a lot of amendments. In certain programs and portfolios, sponsors will amend a study twice a year, and that’s generally in oncology.
You also have therapeutic areas, like ophthalmology, where you wouldn’t expect the designs to be complex, but some of those are the most difficult studies we’ve ever seen.
As the industry pushes for growth, there’s also increasing focus on sustainability and ESG. What role can optimizing clinical trials play, and how does 4G approach that?
I think the simple way to answer it is that in the core RTSM, we have resupply algorithms that are constantly resupplying sites. Depending on what a sponsor wants to prioritize, whether it is sustainability, CO2 emissions, cost, or waste, they can adjust those objectives and change their supply settings accordingly. That’s one way.
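As a simplified sketch of what objective-weighted resupply could look like, the snippet below scores candidate shipment options against sponsor-chosen weights. The fields, weights, and scoring function are hypothetical, invented to illustrate the idea of adjustable objectives, not 4G's actual resupply algorithm:

```python
# Illustrative only: rank candidate resupply shipments by a weighted
# penalty score reflecting sponsor priorities (cost, emissions, waste).

def score_shipment(option, weights):
    """Lower score is better: a weighted sum of penalty terms."""
    return (weights["cost"] * option["cost"]
            + weights["emissions"] * option["co2_kg"]
            + weights["waste"] * option["expected_waste_kits"])

options = [
    {"name": "air_express", "cost": 900, "co2_kg": 120, "expected_waste_kits": 1},
    {"name": "road_consolidated", "cost": 300, "co2_kg": 35, "expected_waste_kits": 3},
]

# A sustainability-leaning sponsor profile: emissions and waste weigh heavily
weights = {"cost": 0.2, "emissions": 1.0, "waste": 10.0}

best = min(options, key=lambda o: score_shipment(o, weights))
print(best["name"])  # road_consolidated
```

Shifting the weights toward cost or speed would pick a different option, which mirrors the idea of sponsors adjusting supply settings to match their priorities.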
The other way is through 4C, our forecasting tool. 4C is more of an enterprise system used to forecast supply across an entire portfolio of studies, or across a compound, to inform manufacturing, procurement, and budgeting.
That tool has more capabilities aligned with ESG priorities, like optimizing shipping lanes, emissions and that kind of thing. We’re not quite as mature as we’d like to be in that area yet, but we’re getting there. There are certainly some large pharma companies that are pushing that agenda.
Could you give us an idea of market share and global presence?
In terms of our staff, around 400 people as of late 2025, about 170 of them are outside the US, mostly in Europe. We also have roughly 25 to 30 people in Japan, servicing the Asian market.
From a revenue perspective, it’s probably about 60% US, with the rest coming from outside the US. We’ve been in that kind of global posture for quite some time. As far as market share goes, industry estimates typically place the RTSM market at around $1.2 to $1.3 billion in total, and by those estimates we remain under 10 percent of overall share.
In this journey of scaling from a startup to a multinational company, what’s the secret sauce to leadership that inspires people to stick with you across that journey?
I think we’ve stuck to the same ambition and the same sense of purpose from the beginning. There hasn’t really been a shift.
And while we’re an American-based company, we’re not an American company with outposts. We’re truly global. The first person we hired was in Belgium, so from the very beginning we were a global company.
I think that diversity, combined with a focus on purpose, has been the secret sauce. My leadership style is to be very aggressive but focused, and we talk about impact all the time.
At every all-hands meeting, and it may be painful for the staff, I end by reading every indication we’ve ever worked on.