Video: Clinical and Financial Feasibility Forecasting in One: PSI VISIONAL™ | Duration: 1636s | Summary: Clinical and Financial Feasibility Forecasting in One: PSI VISIONAL™ | Chapters: Webinar Introduction (5.28s), AI-Powered Trial Planning (101.63s), Assessing Enrollment Risks (677.61s), Cost Prediction Solutions (831.22s), Q&A and Conclusion (1246.785s)
Transcript for "Clinical and Financial Feasibility Forecasting in One: PSI VISIONAL™":
Good morning, everyone, and welcome. I'm so glad you could join us for our webinar, Clinical and Financial Feasibility Forecasting in One: PSI Visional. My name is Ashley Collins, marketing specialist here at PSI, and I have the pleasure of introducing Emily McIntyre, director of feasibility at PSI, and Alesh Mishlaiva, head of proposals and market growth here at PSI. We will have a brief Q&A at the end of the session, so please send any questions you have to the Q&A box on the right, and we'll be sure to answer them at the end. Without further ado, I'll turn it over to Emily.

Thank you, Ashley. All of our sponsors face similar challenges when they start planning a clinical trial, and we know these complexities all too well. They come down to time, cost, and risk: how efficiently can we deliver within a specific budget, and how resilient is that plan? We've developed a platform that answers these questions and shows the impact of shifting things around. For example, if we speed things up, what will that do to cash flow within year one? Visional has been built with AI capabilities, essentially marrying feasibility data and budget data to create quick, accurate project plans with budgets that are available to our sponsors. Of course, predicting accurately requires a lot of data, so we've combined our rich internal data, which covers half a million sites, with institutional benchmarks, global reach, and budget insights from recent trials. This integration gives us a unique ability to forecast realistically, benchmark effectively against our sponsors' key milestones and how we will deliver on them, and provide evidence-based site selection, budgeting, and trial planning all in one.
To put it in really simple terms, we're providing solutions to three key problems, so let's step through them. The first: too much data for a human brain. We know the data available to us isn't the challenge; it's how we collate and translate that data into effective objectives and how we deliver on what the data is telling us. We can frame this as a complementary partnership: humans bring the judgment, the expertise, and the context, while AI provides speed, scale, and accuracy in digesting these massive datasets. This really reinforces the need for, and the value of, AI-powered feasibility. What we can do with Visional is model hundreds of scenarios accurately in a short period of time, utilizing AI to turn that complexity into value. All of this data allows us to run and compare hundreds of data-driven trial scenarios, each reflecting different constraints, site allocations, geographies, and budget assumptions. So instead of relying on single-point forecasts and best guesses, we're showing our sponsors multiple paths forward with a clear view of the trade-offs in time, cost, and risk.

Here's how it works, based on a recent example with a phase 2b GI study. First, we define constraints with our sponsor teams: all of the key objectives and considerations, such as sample size and the number of patients to screen and enroll. We can then refine the number of patients to be enrolled by a certain date, the number of regions or countries we would like to have, and potentially a certain number of patients coming from various regions, and then look at how we integrate additional complexities.
For example, a percentage of patients coming from a specific region, emerging countries for treatment-naive cohorts, or a limited number of sites per country. Imagine modeling an enrollment scenario with ulcerative colitis patients: we need to screen over 500 to enroll about 350, with a forty percent screen-failure rate; we want at least 40 sites in the US; and we want to limit the number of countries to 14. This is the example we've used to showcase our country selection criteria. We take input from our highly trained and experienced feasibility directors, who are medically trained and work in-country to pull forward local country intelligence. When we're evaluating countries, every protocol is different, and it's key to identify the elements that will drive optimal selection of where we place the trial. We don't need a hundred different criteria; we select from the list of the most impactful ones. These may include medical and operational aspects, with subcategories representing the complexity of all of the trials we work on day in and day out. Mandatory categories certainly include number of sites, regulatory requirements, and enrollment potential. In the end, all criteria are weighted according to importance. Looking at country ranking, all of these main categories and subcategories carry certain weights depending on how important they are for a specific protocol. Visional automatically ranks countries based on the data and the assigned weights; here we've showcased an analysis of 45 countries. The values you can see are generated automatically by AI, based on similarity search, in about five to eight minutes, and our feasibility directors work in step with the data, ensuring that countries with similar experience are included.
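As a rough illustration of the weighted, criteria-based ranking just described, here is a minimal Python sketch. The criteria names, weights, and country scores are hypothetical, chosen only to show the mechanics; they are not Visional's actual data or method:

```python
# Illustrative sketch of weighted country ranking; all criteria, weights,
# and scores below are hypothetical, not VISIONAL's actual data.

CRITERIA_WEIGHTS = {              # per-protocol weights, summing to 1.0
    "enrollment_potential": 0.40,
    "site_availability":    0.35,
    "regulatory_timeline":  0.25,
}

# Scores normalized to 0-1 per criterion (higher is better).
country_scores = {
    "US":     {"enrollment_potential": 0.9, "site_availability": 0.95, "regulatory_timeline": 0.7},
    "Poland": {"enrollment_potential": 0.8, "site_availability": 0.60, "regulatory_timeline": 0.9},
    "Japan":  {"enrollment_potential": 0.5, "site_availability": 0.70, "regulatory_timeline": 0.4},
}

def rank_countries(scores, weights):
    """Return (country, weighted score) pairs sorted best-first."""
    totals = {
        country: sum(weights[criterion] * value for criterion, value in crit.items())
        for country, crit in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for country, score in rank_countries(country_scores, CRITERIA_WEIGHTS):
    print(f"{country}: {score:.3f}")
```

Changing the weights per protocol, as described above, reorders the ranking without touching the underlying country data.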
Visional pulls site performance from internal and external data sources, including country-specific enrollment data, and incorporates data from external databases as well. The architecture of Visional is incredibly flexible, so we are consistently adding new data sources and new ways to look at, translate, and apply the data effectively. As we do this, it's critical that we also pull the relevant enrollment data on a global and company level, looking at what has been done historically within the company and within the industry. A global view gives us the context: what is the industry-average enrollment rate for this type of trial? Visional aggregates enrollment performance data globally to show these averages and medians. This sets the baseline for benchmarking, so you understand how your trial's enrollment plan compares to what's been done in the past and what the global norms truly are. It helps our sponsors see whether their assumptions are aggressive, conservative, or somewhere in the middle, and whether they're optimally aligned with what has been experienced across this space. Then we move to country-level enrollment, which is essentially the decision-making level. Our data scientists have spent an immense amount of time analyzing and normalizing data, and we've incorporated it all into a blended approach: historical country-level rates for the study where there is enough relevant data, and a therapeutically aligned, normalized performance approach when there isn't data yet. So we're able to make these projections in a data-backed and validated way. We also incorporate information and data from our site identification projects where relevant.
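The blended approach described here — a country's own historical rates when there is enough relevant data, a therapeutically aligned benchmark when there isn't — can be sketched in a few lines. The threshold, rates, and benchmark value below are illustrative assumptions, not the platform's actual logic:

```python
from statistics import median

def blended_enrollment_rate(country_rates, ta_benchmark, min_studies=5):
    """Sketch of a blended rate: use a country's own historical median
    (patients/site/month) when enough relevant studies exist; otherwise
    fall back to a therapeutically aligned, normalized benchmark.
    min_studies and all rates here are illustrative assumptions."""
    if len(country_rates) >= min_studies:
        return median(country_rates)
    return ta_benchmark

# Enough history: the country's own median wins.
rich = blended_enrollment_rate([0.8, 1.1, 0.9, 1.0, 1.2], ta_benchmark=0.7)
# Sparse history: fall back to the therapeutic-area benchmark.
sparse = blended_enrollment_rate([1.4], ta_benchmark=0.7)
print(rich, sparse)
```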
All of the rates are adjusted to account for startup, which varies across different studies, as we make these projections. Once all constraints have been defined and the benchmarking is accurate, we move into the modeling phase. The scenarios look at what the data is telling us and then build in the more subjective, human-led decisions and considerations that need to be taken into account. This is how we define the probability for our enrollment scenario, looking at enrollment duration and what will be required to hit our sponsor's objectives. As we put these scenarios together, all within about one to two minutes, we can showcase multiple options that integrate trial experience, performance, enrollment data, and external benchmarking to achieve a holistic, evidence-based view. Beyond the data, we factor in operational realities, regulatory complexities, and site capacity thresholds. What's key here is our ability to show what's feasible and what is not, so we can have conversations on both ends of that coin, and we can present this to our sponsors at a rapid pace that helps move decision-making along.

The second problem is the difficulty of assessing risk. Certainty is a huge challenge: everyone wants a sure thing, and we have to provide evidence that shows how a plan is achievable and what level of risk is palatable, both from the sponsor's perspective and from what our data is telling us. Can we guarantee the enrollment model that we're providing to our sponsor teams? The solution we've built in is that we have worked out the key parameters that drive probability.
These are the available pool of sites that meet the key protocol criteria, the number of sites, the number of patients required, and what's been done historically. All of this is included in the probability projections that we model. Across all of the ranges we showcase, we identify those that are feasible, so our sponsors can see what is really going to be bulletproof as a model, or how risky we want to go, and what the budget impacts are that align with that. We can have these bookended conversations with our sponsors during the planning stages, so we collectively understand how we can make changes and how we can make the model more flexible if needed, working in real time on elements at the site level that can change as we activate a study and move through enrollment. Utilizing this probability is really key for our sponsors and has been incredibly successful in helping us project very realistic and feasible specifications as we start up a study. Here's an example of different models with different probabilities: the same study can be planned for 85 to 130 sites depending on risk tolerance. With our track record in the industry, we always recommend working within the moderate probability for initial planning, utilizing the median historical values, but then discussing what the higher-risk and lower-risk scenarios look like. And here's another neat way of comparing the data, looking at how risk tolerance and probability levels impact the overall enrollment model.

Thanks, Emily. This brings us to the third problem that we're solving with this platform: the budget. We all know that traditionally, a full study budget always comes last.
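The probability-driven planning just described — the same target reached with 85 or 130 sites at different confidence levels — can be illustrated with a toy Monte Carlo simulation. The rates, site counts, and the normal approximation to total enrollment are all assumptions for this sketch, not PSI's actual model:

```python
import math
import random

def enrollment_success_probability(n_sites, rate, months, target,
                                   n_sims=20_000, seed=42):
    """Monte Carlo estimate of P(total enrollment >= target).

    Total enrollment is approximated as normal with mean
    n_sites * rate * months and Poisson-style variance — a modeling
    assumption for this sketch, not the platform's actual method.
    """
    rng = random.Random(seed)
    mu = n_sites * rate * months          # expected total patients
    sigma = math.sqrt(mu)                 # Poisson-style spread
    hits = sum(1 for _ in range(n_sims) if rng.gauss(mu, sigma) >= target)
    return hits / n_sims

# Hypothetical plan: enroll 350 patients in 14 months at ~0.3 patients/site/month.
p_130 = enrollment_success_probability(130, 0.3, 14, 350)  # conservative site count
p_85 = enrollment_success_probability(85, 0.3, 14, 350)    # leaner, riskier plan
print(f"130 sites: {p_130:.0%}, 85 sites: {p_85:.0%}")
```

The leaner plan is cheaper but carries a materially lower chance of hitting the target on time, which is exactly the trade-off the bookended conversations above are about.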
Once all the countries have been selected, all the scenarios have been modeled, and everything has been approved by feasibility and operations, the plan goes for budgeting, and this is often too late and a reason for a lot of remodeling or additional delays. The question we asked was: how do we make cost part of the decision-making process? We opted to use machine learning for cost predictions as part of modeling. Essentially, in the two minutes when we model our scenarios, the system analyzes different combinations of countries, timelines, and sites to find a balance between the most cost-efficient scenario and the most feasible approach. On this slide, we show a radiopharm study from earlier this year. You can see that you can run the same 230-patient study for 10,000,000 or 27,000,000 depending on your timelines, objectives, and country requirements. In this scenario, we modeled options for seven months, for eighteen months, and for sixteen months, with Western Europe, without Western Europe, and with Asia-Pacific, and the sponsor went for the most cost-efficient solution, labeled in green for clarity. This model was very different from the original constraints they set. When cost predictions are part of the modeling, you can actually match the plan to the available budget, which eliminates any last-minute surprises.

We also incorporated cash flow. We automated cash flow projections for all the models, based on feedback from sponsors, who sometimes need to know how much they will spend per quarter or per year. Each model can generate this kind of cash flow for the entire study duration. It distributes the budget based on the milestones per month, per quarter, and per year, includes inflation, and even shows the value by service, as you can see here.
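As a rough illustration of a milestone-based cash flow with inflation, here is a simplified Python sketch. The quarterly weights, the 3% inflation rate, and the 10,000,000 budget are hypothetical; the actual distribution rules are the platform's own:

```python
def cash_flow(total_budget, period_weights, annual_inflation=0.03,
              periods_per_year=4):
    """Spread a study budget over periods (quarters here) according to
    milestone-driven weights, compounding annual inflation onto later years.
    A simplified sketch; the weights and inflation rule are illustrative."""
    assert abs(sum(period_weights) - 1.0) < 1e-9, "weights must sum to 1"
    flows = []
    for i, weight in enumerate(period_weights):
        years_elapsed = i // periods_per_year          # full years since study start
        inflated = (1 + annual_inflation) ** years_elapsed
        flows.append(total_budget * weight * inflated)
    return flows

# Hypothetical 8-quarter study: spend peaks mid-study around enrollment.
weights = [0.05, 0.10, 0.20, 0.20, 0.20, 0.15, 0.05, 0.05]
quarterly = cash_flow(10_000_000, weights)
print(f"Year 1: {sum(quarterly[:4]):,.0f}  Year 2: {sum(quarterly[4:]):,.0f}")
```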
These cash flow projections are available and can be exported for each model. So if you model several scenarios within minutes, you also get cash flows for all of those scenarios, which is a very nice feature; I really enjoy this one. And this is just another example of how you can visualize the data for your cost-benefit analysis. What we usually do here is look at the ranges and try to find solutions in between. In this example, you can see that you can run a study in fifteen point six months, but once you increase the timeline to between eighteen point six and twenty-three months, the cost difference is marginal. So in this case we recommended eighteen point six months, which came with a reasonable budget and was fast enough for the client.

Now, let's shift gears and talk about the use cases. The most frequent use case is study-level modeling. This approach compares different study plans to select the optimal one for an individual study. In this example, our IBD sponsor came to us with their own assumptions: they wanted to run the study at 100 sites in twenty-four months, but we managed to find solutions within this range. We found a solution with thirteen point five months and 100 sites, and one with twenty-four months and 60 sites, and then we ended up with a more balanced scenario of sixteen months and 80 sites. You can see we even increased the enrollment rate by using the top-performing countries and top-performing sites. This scenario was obviously less expensive and more efficient than the original plan. Another use case is portfolio-level modeling, an approach that helps us model a diverse portfolio.
For example, here we have an oncology portfolio with six compounds. We modeled all of them with a conservative approach, at moderate-to-high probability, and we could estimate the low and high costs for each study and each year; you can even do it by compound or by program. All of this is doable within the system, and, of course, inflation is incorporated in the modeling.

Now, there's a lot of unnecessary anxiety around AI taking over our jobs, but what we've noticed since we implemented the system is that we completely replaced manual data analysis and pivoted our teams to more strategic work. We redefined their job descriptions, adding UAT, modeling development, data review, and data validation. Essentially, AI and machine learning do the dirty work, and our highly experienced, medically trained feasibility and study strategists have the time to focus on what actually matters: they review the ready-to-use models and add the knowledge and expertise that hasn't been encoded yet, something that AI would miss. It's very important that humans are always in the loop.

I also wanted to mention the complex foundation of the system. We call it Cynetic, and it's a semantic knowledge platform that encodes our knowledge and connects all the dots, literally, as in this graphic, behind the scenes. It started as site and project insights and has now morphed into something much, much bigger; it's our brain. Compared to many traditional knowledge platforms, it has a more advanced framework that allows for more intelligent, accurate data retrieval. Emily mentioned the numbers, but at this stage the database contains project data for 3,000,000 sites, 500 unique institutions, and 330,000 medical professionals across 1,000,000 roles, across all projects and site data in the system.
As we speak, our teams are testing AI agents for site identification and selection as part of this platform as well, and we're very excited about where it will take us. The objective is very clear: we want to speed up processes to bring drugs to market faster, and agentic AI has the potential to give us a site list in just one click, making site selection 50% faster. We're also looking at eliminating non-enrolling sites with this system. But that's a different webinar, and we'll talk about it separately. Thanks, everyone. I think we have a few minutes for questions, right, Ashley?

We do, and we've got some questions in our chat. The first question is: can the country weights for medical, operational, and feasibility criteria be modified for a given study, or are they fixed at 25%?

They are customized for every protocol. It's all protocol-dependent, and we approach the weighting as such, so it's all flexible.

Wonderful. Next question: what optimal ratio applies to the historic data to account for changes over time and project into the future?

Changes over time. The datasets we analyze aren't simply the past six months or the past five years; we're looking at protocol criteria that align with the protocol we are working through. We line up and analyze as closely as we can in terms of the patient population and all of the different designs within the protocol. The data can go back ten years, seven years, whatever the case may be in terms of relevancy. Can you repeat the first part of the question? I want to make sure I answer it directly.

Yes: what is the optimal ratio as it applies to historic data that accounts for changes over time?

It really depends on the protocol and all of the specifications, the sponsor's parameters, or corporate milestones, so we really can't give a specific ratio that is ideal.
It's really customized; we have to look into everything that is driving the outcome the sponsor is hoping to achieve. We'd be happy to model with anyone asking that question and provide some of those parameters and show what that looks like.

Wonderful. Next question: for feasibility assessments, is past performance a good indicator of future success? And if not, how do the algorithms adjust for this?

I would say sometimes. In this space, with all of the therapies and designs we're seeing take shape across different therapeutic areas and specific indications, things move so quickly that we have to adapt to that speed. So yes, past performance is an indicator. Is it the ideal and only indicator of future performance? Absolutely not. We have to understand what's happened, and then how all of the changes in the landscape, the competition, the overload at sites, whatever is relevant to that specific patient population, plus the specialists and multidisciplinary teams, if applicable, play out in today's settings, inclusive of approved therapies. Does a site or a country have access? What are their reimbursement plans? So it is a predictor, but it is not the only one. We need to utilize all of the data we have at hand and weight it according to the indication and the setting we're exploring.

Can I just add to this? We launched the platform two or three years ago, and we're already starting to get real data back against the projections we made then, and we see that the studies are within the ranges. We work with ranges for startup, for enrollment, and for site count, and we can see that studies are completing within those ranges, which is amazing.

Wonderful. We did have a few questions about the data sources: where does the data come from to feed the system?

Sure. As I mentioned, we have our own historical data; all of the databases we have partnerships with, available publicly or through subscription; metadata; all of the data we bring in throughout the life of our projects; and all of the data we can incorporate from the industry, for example GlobalData. We're utilizing all of that information and building on those sources every single day; we're integrating more.

And for budgets, we use our own historical data; I think we have around 200 or 300 budgets in the system.

Great. I think we have time for one more question: how accurate are these projections?

Well, as Alla mentioned, they're very accurate. Now that we're seeing studies from start to finish, with all of our lessons learned from projects we've modeled within the system and the results at the end of those studies, they are all within the feasible ranges we predicted through Visional. So, very accurate.

And the budget is also accurate, as it applies the latest budget data to each model. We use machine learning for predictions, but once the model is there, it uses the latest rates, the latest budget assumptions, and the latest investigator fees, so we can basically add it to the contract based on this model.

Well, that's great. I certainly learned a lot on this webinar, and I hope everybody in attendance learned a lot as well. Finally, there will be a short survey at the end of this webinar to give us some additional feedback on how helpful you found it and to help inform our future webinars. Thank you so much for your attendance, and I hope everyone has a wonderful rest of the day. Thank you. Bye bye.