Virtual Immunohistochemistry. How Owkin uses artificial intelligence to generate IHC stains without antibodies w/ Victor Dillard

If you are working with immunohistochemistry (IHC), you know how challenging it can be to optimize every step of the process to obtain a high-quality stain. It often takes testing different antibodies, antibody concentrations, antigen retrieval methods, and incubation times.

What if there was a way to produce an IHC stain virtually, without antibodies or even the need to step into the lab?

Today’s guest is Victor Dillard, commercial operations director at Owkin.
Owkin is a company leveraging artificial intelligence and machine learning for medical image analysis, and its offering includes virtual immunohistochemistry staining. We talk about how it was developed, how it works, and how it can be deployed at interested institutions.

To learn more about Owkin visit https://owkin.com/

This episode’s resources:
 Deep learning-based classification of mesothelioma improves prediction of patient outcome

Transcript

Aleksandra Zuraw: [00:00:19] Welcome to the Digital Pathology Podcast. Today my guest is Victor Dillard. He’s the commercial operations director at Owkin. Owkin is a company that is active in different areas, but one of those areas is digital pathology, and that’s what we’re going to focus on. So welcome, Victor, to the podcast. Thank you so much for joining me.

How are you?

Victor Dillard: [00:00:48] It’s a pleasure. Yeah, I’m doing very well. Thanks very much.

Aleks: [00:00:52] Nice. Tell the listeners who you are, and tell them a little bit more about Owkin.

Victor: [00:01:00] Yeah, with pleasure. My background is actually in chemical engineering, and I started my career in entrepreneurship, working for Flagship Ventures as a fellow in Boston, after which I founded my own company developing software solutions for CRISPR gene editing, all based on artificial intelligence. I grew that company and exited in 2019, and I learned a ton about how AI could be useful in fundamental and preclinical research. All the work that we did was focused on R&D and everything leading up to IND filings.

[00:01:36] And so, having spent the first part of my career in preclinical R&D, I really wanted to broaden my experience and get some exposure to the clinical development side. I joined Owkin in January 2020 after meeting a lot of different AI teams in the space, and after being introduced to Owkin I was really impressed by the high caliber of the people, the vision, and the quality of the science.

[00:01:59] And the tech stack which enables all of this, including the privacy-preserving federated learning that I’m happy to talk about. So I joined Owkin as commercial operations director. What that means is I support our commercial and sales activities with processes and systems to structure them, plus all the reporting that goes around that. A lot of my job is also to look at the technologies that come out of our lab and shape them into commercial offerings. That means talking to early adopter customers and setting up pilot projects.

[00:02:29] And gathering that early feedback that helps inform the development, before we really formalize a pricing sheet and a method of deployment. A lot of my work has focused on doing that.

Aleks: [00:02:41] Okay. So what are the areas Owkin is active in?

Victor: [00:02:45] It’s not only pathology. Anything that touches AI and clinical development falls under our mission umbrella. The way that we’re structured is we have Owkin Lab, which is our core R&D engine, and that’s split into four different data modalities. The first is histology, so a lot of digital pathology work goes on within that team. Then we also work on radiology data, clinical data, and genomics. In each of those teams, the data scientists are paired up with medical experts: pathologists in the case of the pathology team, or actually practicing radiologists who can really bring that expertise into the team.

[00:03:31] And then the kinds of projects that we try to work on are either projects that fall purely within one of those lab teams or, more excitingly, projects that can leverage multiple lab teams: multimodal or transversal projects where there are multiple types of clinical data.

Aleks: [00:03:49] And was it set up that way from the beginning, or did you move into pathology as a company at some point? How was that set up? And why is pathology there, given there is more to medicine and to clinical development than just those four categories you mentioned? Why did pathology earn its place?

Victor: [00:04:13] I think pathology earned its place there because it’s a really key part of a lot of modern medicine.

[00:04:19] And with the advent of digital pathology, it lends itself really nicely to AI, right? It’s a perfect time to get into AI in pathology, because we now have a much broader and growing adoption of digital pathology, so we have the images, and AI technology has been developed for image recognition and image-to-image translation.

[00:04:42] There’s a ton of technological development that allows us to actually look at pathology and say, okay, now we can do really amazing things: automate and simplify the pathologist’s workflow, accelerate it, or even discover new biomarkers beyond what we can see with the naked eye. It really opens up a myriad of different opportunities.

[00:05:00] And so when our two co-founders, Thomas Clozel and Gilles Wainrib, came together, they were looking at where AI fits within transforming medicine today. I think pathology was definitely top of the list, because we have the images and we have the algorithms to work on them. The same goes for radiology, and of course structured clinical data is always the starting point. And then genomics, now also being a digital science if you will, lends itself very nicely to AI.

Aleks: [00:05:27] And how is the pathology team structured? I’ll tell you why I’m asking: I want to ask how you work with pathologists. Do you have people on staff, or do you have collaborations? What’s the role of pathologists in this pathology offering?

Victor: [00:05:43] Yep, both actually. We have pathologists who are collaborators. I’ll tell you a little bit more about how Owkin operates, and then it’ll contextualize it. Owkin sets up partnerships with top-tier academic institutes and key opinion leaders to build machine learning algorithms in collaboration with those KOLs. The KOLs obviously provide the expertise, and they usually also provide a dataset that comes from their research institution. Then Owkin brings the machine learning, and together we build these fantastic models. That’s how the virtual staining that we’ll talk about later came about. Some of these KOLs then have an extended relationship with Owkin and also advise on specific projects beyond what we worked on with them directly. So we have expert pathologists in our network, either as part of our larger KOL network that we can go and talk to for advice, or as part-time consultants to Owkin.

And then within the pathology team or the radiology team, we also have full-time experts. So in radiology we have a full-time radiologist, and in pathology I think we have a part-time pathologist. That’s the way we operate, really trying to bring the two minds together, because a data scientist needs an expert to work alongside them, and vice versa.

Aleks: [00:07:03] So you mentioned the virtual staining. Is this the product or service that you’re offering at the moment?

Victor: [00:07:12] Yeah, that’s part of my job, that definition. It’s an algorithm, and right now it’s available on a licensing basis.

[00:07:19] So it’s a product that you can license, and I can tell you a little bit more about it. It’s a very new kind of product. The idea is that we actually think about it as a technology platform, because what we started with are two markers that we think prove the concept, and we can expand on this. It starts from an H&E whole slide image.

[00:07:42] And without having to do any real staining, it will create a digital immunostained whole slide image just from the H&E slide. We launched this with two IHC markers: CD3 for staining lymphocytes and AE1/AE3 for staining epithelial cells. From the end user’s perspective, the way that it works is pretty easy.

[00:08:04] We either have it on our server as an algorithm or within our software called Studio, which has a user interface. The platform ingests an H&E slide in .svs or .tif format, for example, and just from that the algorithm will generate a new image as a file on the server, with the IHC marker selected.

[00:08:26] So medical researchers and pathologists can create thousands of digitized immunostained slides just from the H&E images, at the click of a button. On a normal computer with a single GPU, the whole process takes on average 10 minutes to generate a whole slide, so not just a tile or a section but the whole image. And if you have more advanced computing capability and some of the fancy new servers, this can take a couple of seconds per slide. Yeah.
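
To make that inference flow concrete, here is a minimal sketch of what tile-based virtual staining could look like in Python. It assumes a trained Pix2Pix-style generator `G` and uses the openslide library for slide I/O; this illustrates the general approach, not Owkin’s actual code.

```python
# Minimal sketch of tile-based virtual staining inference.
# Assumes a trained image-to-image generator G (e.g. a Pix2Pix-style
# U-Net) that maps H&E tiles to IHC tiles; not Owkin's actual code.
import numpy as np
import torch
import openslide

TILE = 512  # tile size in pixels, matching the training setup described

def virtual_stain(wsi_path: str, G: torch.nn.Module, device: str = "cuda") -> np.ndarray:
    slide = openslide.OpenSlide(wsi_path)      # reads .svs, .tif, etc.
    w, h = slide.dimensions
    # Stitched output; for a real WSI this array is very large, so a
    # production version would write tiles to disk instead.
    out = np.zeros((h, w, 3), dtype=np.uint8)
    G.eval().to(device)
    with torch.no_grad():
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                tile = slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB")
                t = torch.from_numpy(np.asarray(tile)).permute(2, 0, 1).float()
                t = (t / 127.5 - 1.0).unsqueeze(0).to(device)   # scale to [-1, 1]
                fake_ihc = G(t).squeeze(0).cpu()                # generated IHC tile
                img = ((fake_ihc + 1.0) * 127.5).clamp(0, 255).byte()
                out[y:y + TILE, x:x + TILE] = img.permute(1, 2, 0).numpy()
    return out  # the virtually stained image, ready to save or view
```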

Aleks: [00:08:56] So basically, instead of doing IHC, buying antibodies, and validating the assay, you would take the algorithm, use it on an H&E whole slide image, and validate this process instead of doing the wet work with IHC.

Victor: [00:09:17] Yep, that’s the idea. At the moment the output is just the image, which you could then load into a cell counting or cell detection model or software that you already have. So yeah, the output is a file that you can download and browse, ideally just the same as the image you would have scanned if you had done the staining in the wet lab with an antibody marker that you had bought.

Aleks: [00:09:39] And how did you come up with this idea?

Victor: [00:09:42] Yeah. As with many of the innovative models we’ve created at Owkin, it started from a strong research collaboration, which I mentioned earlier, with a pathologist from our Owkin Loop, which is this network of partner hospitals that we have.

[00:09:55] And one day they were effectively short on time and budget to carry out some physical CD3 staining. So we simply asked the question: what if we could generate the IHC image digitally, purely from the H&E slide? Is that even possible? Basically, that’s how the whole idea kicked off. After that there were many hurdles to overcome, but that was what incepted the idea.

Aleks: [00:10:17] Okay. And can you tell us a little bit about the development process? How did you transition from wet to virtual?

Victor: [00:10:28] Yep. Actually, the first hurdle is to get the data. With any machine learning project, the data is key.

[00:10:36] And so we had to generate two real images of the same slide: one stained with H&E, and the other with, in this case, CD3, which was the marker we started with. So not consecutive cuts; the idea was to generate one whole slide stained with both markers, one after the other, so effectively a double staining, right?

[00:10:56] So take the slide, stain it, then wash it, and restain it with the other marker. Because if we did consecutive cuts, we’d have three or four micrometers of difference, which means that the exact cells in the whole slide image would change, and we would not be able to really train our model.

[00:11:13] Also because, and I’ll get to that in a second, it’s a model that does image translation, so we needed to have the two perfect images. So we had to create this double-staining protocol to generate two images that we could overlay. The second hurdle is then the alignment of the two images so that they match perfectly, cell to cell.

[00:11:33] It’s not a trivial task. Imagine aligning two satellite images of a city taken at different times and in different weather conditions: there are maybe some minor changes that are difficult to pick up and align. So we wanted to make sure that the whole image was aligned overall, but also at a patch level, to have perfect alignment.
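
One common way to do the coarse part of such an alignment is phase correlation on downsampled images; the sketch below illustrates that idea with OpenCV. This is a generic technique, not necessarily the method Owkin used, and in practice a patch-level refinement step would follow.

```python
# Sketch of coarse rigid alignment of two scans via phase correlation.
# Generic illustration; assumes equal-sized, downsampled, single-channel
# images of the H&E scan and the restained IHC scan.
import cv2
import numpy as np

def coarse_align(he_gray: np.ndarray, ihc_gray: np.ndarray) -> np.ndarray:
    """Estimate the global (dx, dy) shift between the two scans and
    resample the IHC image onto the H&E frame."""
    (dx, dy), _response = cv2.phaseCorrelate(
        np.float32(he_gray), np.float32(ihc_gray)
    )
    # Translation that undoes the measured shift (verify the sign
    # convention on your own data before trusting it).
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    h, w = he_gray.shape
    return cv2.warpAffine(np.float32(ihc_gray), m, (w, h))
```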

[00:11:53] So that was the second hurdle that we had to overcome, and that’s where a lot of the data science work went: really making sure that the co-registration was spot on. Then once we had that double-stained, aligned data, we used machine learning generative adversarial networks, or GANs, to train our virtual staining model.

[00:12:15] Specifically, we used one called Pix2Pix, which is typically used, for example, to add color to old black-and-white photos. GANs are a pretty recent development in machine learning, I would say. They’re composed of two networks that compete against each other: the first one is called the generator, and the second one is called the discriminator.

[00:12:35] The generator, as the name suggests, creates these new CD3 images. It then gives them to the discriminator, which decides whether the generated image is real or fake by comparing it to the original, really stained CD3 image, and informs the generator whether it basically passed or failed the test. Then the generator iterates.

[00:12:58] And so round and round they go, until the discriminator can no longer tell a fake image from a real one, because the generator has become so good at creating these images that it can effectively trick the discriminator, if you will. To take the analogy of a painting: if you wanted to copy a Picasso painting, you’d be the generator.

[00:13:17] And I, the Picasso expert, would be the discriminator, telling you whether your painting is a real Picasso or not. After thousands of cycles, you get so good at replicating Picasso that I can’t tell the difference. So that was the use of GANs. I think another interesting aspect is that these image generation algorithms like Pix2Pix usually work with images about a thousand pixels wide, so we had to adapt Pix2Pix by dividing the whole slide image into tiles of 512 by 512 pixels. That was actually really helpful, because it gave us about 25,000 tiles to train the algorithm from just one image. So from a data perspective, we get really rich data from a whole slide image.
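
For the machine-learning readers, here is a minimal sketch of one Pix2Pix-style training step: an adversarial loss plus an L1 term that keeps the generated stain close to the real one. The generator `G`, discriminator `D`, optimizers, and aligned tile tensors are assumed to be defined elsewhere; this is the standard published Pix2Pix recipe, not Owkin’s internal code.

```python
# One Pix2Pix-style training step: conditional GAN loss + L1 loss.
# G and D are assumed PyTorch modules; D outputs raw logits and sees
# the (H&E, CD3) pair concatenated along the channel axis.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, he, real_cd3, l1_weight=100.0):
    # --- Discriminator: real pairs should score 1, generated pairs 0. ---
    fake_cd3 = G(he)
    d_real = D(torch.cat([he, real_cd3], dim=1))
    d_fake = D(torch.cat([he, fake_cd3.detach()], dim=1))
    d_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Generator: fool D, and stay close to the real stain (L1). ---
    d_fake = D(torch.cat([he, fake_cd3], dim=1))
    g_loss = (
        F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
        + l1_weight * F.l1_loss(fake_cd3, real_cd3)
    )
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```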

[00:14:04] We could then also be a bit creative and add boosting features to train the algorithm more effectively, by, let’s say, spiking in a variety of images or adding some color augmentations, so that the algorithm gets trained with a bit of noise and isn’t just seeing perfect slides. We cut the slides into these tiles, and then it randomly picks the tiles to learn from.
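
A small sketch of the kind of random tile sampling and color augmentation described here, using torchvision; the jitter strengths are illustrative values, not the ones Owkin used.

```python
# Random tile sampling plus color augmentation so the model tolerates
# staining and scanner variability; parameter values are illustrative.
import random
from torchvision import transforms

color_jitter = transforms.ColorJitter(
    brightness=0.1, contrast=0.1, saturation=0.1, hue=0.02
)

def sample_training_pair(tile_pairs):
    """Pick a random aligned (H&E, CD3) tile pair and jitter the H&E
    colors; the target CD3 tile is left untouched."""
    he_tile, cd3_tile = random.choice(tile_pairs)
    return color_jitter(he_tile), cd3_tile
```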

[00:14:32] Yeah, so that’s the process of creation.

Aleks: [00:14:34] Let me just sum up, and correct me if I’m wrong in summing this up. So it’s two networks working at the same time: one is generating images out of nowhere, so to say, and then improving the generation based on the feedback of the other network, the discriminator.

[00:14:57] And then it gets more feedback and it gets trained. Whereas the discriminator has labeled data, because it has the original IHC image. So it’s supervised feedback to an unsupervised image generation.

Victor: [00:15:16] Yeah, effectively. That’s how I would put it. Yep.

Aleks: [00:15:23] Okay. Okay, great. So who is this for?

Victor: [00:15:29] Again, you want…

Aleks: [00:15:31] Who do you want it to be for? Because what stage of the project are you at now?

Victor: [00:15:36] Yeah, that’s a useful point to start on. Can you buy it off the shelf? You can, but let me maybe give a bit more information around its performance, and that will help people situate what stage it’s at.

[00:15:49] So we have trained three different markers to date. Two of them, the CD3 and the epithelial markers, are at an advanced stage, and we also have a third which is just being finalized at the moment. When we finished training those models, we assessed the performance by comparing the virtually stained image back to the ground truth.

[00:16:13] And we look at the correlation between both images. In the case of CD3, we looked at lymphocyte density, for example. When we take the correlation coefficient for the CD3 marker, we get about 96% correlation between the two images, and in the case of the epithelial one, it was 97%.

[00:16:31] So we think that’s pretty good for first-generation technology. The other test that we did is we gave the virtually stained image to our consultant pathologists, and they were not able to tell the two images apart. Tile by tile, at random, they couldn’t really differentiate; for some tiles they could, but on the whole it was quite a difficult task.

[00:16:51] We then also looked at a cellular level: when we run, let’s say, a positive cell detection algorithm on both images and compare the results, we obtained a precision of 72% and a recall of 63% for CD3. We think we can still improve on those results, which are important metrics as well, with additional data.

[00:17:10] As well as with better cell detection algorithms, because they’re not perfect either, as we know. And we continue to invite academic collaborators. But based on those metrics, we think that the algorithm is ready for folks who work in computational pathology or digital pathology teams within pharmaceutical and biotech companies,

[00:17:31] and the clinical teams, for example, who may want to run some proof-of-concept studies, or who want to explore retrospective cohorts of H&E slides for which they may not have the CD3 stain or the epithelial stain, to run some analysis and see whether or not there’s a correlation of interest there.

[00:17:49] So we think that for some of these markers, the technology is at a stage where, from an R&D perspective, they’re ready to go, and you can come in, license them, and use them on your own images. But obviously, from a clinical perspective, we’re still a long way from validating the algorithm.

[00:18:05] And we’d be required to do a really rigorous study for that.
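
The two kinds of checks Victor mentions, image-level density correlation and cell-level precision/recall, could be sketched as follows. This assumes per-tile positive-cell counts and matched detections have already been produced by a cell detection tool; it is an illustration, not Owkin’s evaluation code.

```python
# Sketch of the two evaluation ideas mentioned above. Assumes a cell
# detection step has already produced per-tile positive-cell counts
# and matched real-vs-virtual detections.
import numpy as np
from scipy.stats import pearsonr

def density_correlation(real_counts, virtual_counts):
    """Pearson correlation of per-tile lymphocyte counts between the
    real and the virtually stained image (e.g. r ~ 0.96 for CD3)."""
    r, _p = pearsonr(np.asarray(real_counts), np.asarray(virtual_counts))
    return r

def precision_recall(n_matched, n_detected_virtual, n_detected_real):
    """Cell-level precision and recall of detections on the virtual
    stain, taking detections on the real stain as ground truth."""
    precision = n_matched / n_detected_virtual
    recall = n_matched / n_detected_real
    return precision, recall
```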

Aleks: [00:18:08] So if there’s anyone listening who would be interested, they should contact you.

Victor: [00:18:14] Yeah, absolutely. We’d be delighted to speak to them.

Aleks: [00:18:17] Anyone interested in taking this to the next level or applying it. And I will link the contact information in the show notes as well.

[00:18:27] So apart from this virtual staining, are you working on anything else in digital pathology, or do you have your eyes set on something as the next project once this is totally mature?

Victor: [00:18:40] Oh, I wish life was as easy for us as one project after the other. Owkin is now a hundred-person company, so we’ve got a lot of projects happening in parallel, lots of really exciting things. In pathology especially, it’s hard to be objective, I think, because I love all the pathology projects that we’ve got. So I’ll pick out a few, but maybe I’ll just focus on the track record that we have.

[00:19:01] I think one piece of exciting work we did was in mesothelioma, which was published in Nature Medicine just last year, where we showed that by combining digital pathology with artificial intelligence, we could accurately predict prognosis for mesothelioma patients. But not just that: we were also able to discover new biomarkers that predict the response to treatments for mesothelioma.

[00:19:25] So we think that’s a really great case study for the capabilities of our pathology team, especially when we then also bring in the clinical data team and the two teams work together on these projects. On the other side of what we’re working on, we also recently published a model called HE2RNA.

[00:19:46] What HE2RNA does, and it’s another great invention from our pathology team, is basically predict RNA-seq signatures from H&E slides. The whole sector of genomic prediction from images is really just starting to open up, I think, and we can be really pioneering in this field, so we’re excited about this.

[00:20:05] So you can think about an RNA-seq profile, like HRD for example, that we could predict just starting from an H&E image, and that can then be used to inform, let’s say, patient identification or treatment decisions. So that’s another area that we’re innovating in, and we’re excited about some of the work that we’re doing there.

Aleks: [00:20:24] I definitely want to link those publications as well in the show notes.

Victor: [00:20:30] Yeah. And there’s a whole page on our website with all the publications, but I’ll send you the HE2RNA one; it’s very cool stuff. It’s a publication that talks about the HE2RNA model as a whole, and then we’re excited to come out with some really specific examples of how it works on particular signatures.

Aleks: [00:20:51] So you said that for the virtual staining you’re using GANs, the generative adversarial networks, and they’re pretty new. Before, it was mostly convolutional neural networks.

[00:21:04] It still is, for supervised learning. Where does this information come from? In terms of innovation, who comes up with the idea to use a particular AI method?

[00:21:18] And then the other question: what about the pathology part of the innovation? But that’s more my part of the story, so that’s why I’m starting with…

Victor: [00:21:27] Yeah, I think I’ll answer both at once, because at Owkin one of the real fundamental beliefs is that innovation is collaborative, and really groundbreaking research is collaborative. This is why a lot of our work is done in partnership with key opinion leaders who bring that expertise from the hospital, from the pathology lab, from the radiology lab. And then Owkin brings the machine learning expertise within the individual disciplines, as well as a really good and deep understanding of the clinical landscape around it.

[00:22:02] So we’re able to have a common language to speak with the KOLs and the experts. It usually starts with a very close partnership with our academic partners, as well as with the lab I mentioned, our R&D engine, which is a world-class data science team.

[00:22:16] One of our co-founders was a professor in machine learning. These are the experts, they’re at the forefront of machine learning techniques, and this is what they do: they go to machine learning conferences and read machine learning papers all the time. That’s their expertise and their job, and they’re very good at it. So they have that information, but your question is really about how that innovation happens. For me, it happens when the collaborations come together and the experts challenge each other and ask questions that trigger things, that make the connections in your brain that allow you to say, actually, yeah, we should try this. It doesn’t happen in a silo.

Aleks: [00:22:50] I think it’s a very important point that you have to have an understanding of both disciplines. On each side you’re an expert in just one, but if you don’t have enough understanding of the other side, you don’t know which methods you can apply.

[00:23:06] So if you don’t have enough understanding of how IHC works in the lab, you will not be able to apply the appropriate machine learning method to generate it digitally or virtually. That’s also my mission with the blog and the podcast: to bring those two worlds together.

Victor: [00:23:29] Yeah. I think one of the things with machine learning, for people who aren’t in the space, is that it’s a pretty black-box approach, right? People just say, hey, okay, I’ve got this algorithm, it takes this stuff, and then it outputs this magical result, and we don’t really go into detail about how it does that. A key part of our mission here at Owkin is to make machine learning understandable.

[00:23:52] And we believe in a clear-box approach, where when we get a result, we can go back and understand how that result was generated. It doesn’t necessarily lend itself easily to every type of machine learning. But if you’re going to develop an algorithm that’s going to be used by scientists or ultimately by clinicians, you need to be able to explain it. You need to be able to open it up, build that trust, and build the understanding that there is a logic to how the result is generated. Oftentimes a really interesting part of the whole conversation is how it got to that result; that’s often more important than the result itself.

Aleks: [00:24:26] Definitely. I think that also increases the credibility of the method, if you can see what’s actually happening. And now it’s also required by the regulators; at least in the European Union, the data protection regulation requires that if this is ever going to be applied to decisions about humans, it has to be explainable.

[00:24:51] And I have talked about that, I think in another episode: there is this whole new emerging area of explainable AI that’s growing on top of the machine learning movement. If you can call machine learning a scientific discipline, there is now a sub-discipline that tries to explain it.

[00:25:12] So it’s great that you’re embracing this. You’ll have to, because you’re located in Europe, right? Where are you, and where do you work?

Victor: [00:25:21] Owkin is now a global company, but our R&D center is in Paris; that’s really where the majority of our team is located, and also in the beautiful city of Nantes in the west of France.

[00:25:33] But our corporate headquarters are in New York, and I personally work from the London office. We just finished our Series A, which brings the total Series A round to 70 million. So it’s really a great time to join Owkin; we’re in full expansion and will be in Switzerland soon.

[00:25:48] It’s hard when people ask me where we’re located, because the trend now is that we can work remotely; that’s the day and age we live in. So that’s not a problem, and Owkin is really focused on hiring the right talent for the job.

[00:26:00] And if the talent is in San Francisco, or Berlin, or Madrid, then we will adapt to people’s situations. I think that’s a key part of the mission of growing the team.

Aleks: [00:26:10] So your R&D lab is not a wet lab; it’s a machine learning and computer lab.

Victor: [00:26:17] Yeah, exactly. That’s based in Paris, and any wet lab activity is usually done by the academic collaborators.

Aleks: [00:26:24] By the partners, okay. So what was the most difficult part of this virtual staining development? Was there something that you did not expect, a major hurdle? You mentioned some of them, like the aligning of the slides.

Victor: [00:26:40] It depends on who you speak to within the company, because of course the data scientists working directly on this might say the alignment part was particularly tricky. But looking at the process as a whole, one lesson you always take away is that you want to engage not just the expert pathologists early on, but also your customers, as quickly and as early in the process as possible.

[00:27:01] And they will give you ideas about, let’s say, which markers are more interesting to them. We chose CD3 because we had the data there, and as an AI company it’s always a balance: if I’ve got the data, I can start and do the proof of concept, or do I try to find the perfect data and delay?

[00:27:17] So here we chose CD3 to validate the concept. But I think the learning is always around how quickly you can engage your customers and get that feedback about which markers are interesting for them. Do they prefer fluorescent markers, for example? Should we adapt our machine learning to work with fluoro and multiplex systems? That kind of information about where we should take this once we’ve got the proof of concept is always a key learning, along with how quickly you can close that feedback loop.

Aleks: [00:27:42] Yes. It’s also crucial that you don’t get stuck on your idea, because if nobody uses it, then you had a nice idea and that’s it; you cannot really build a business around it.

Victor: [00:27:58] It’s not always super easy, because sometimes people don’t necessarily see the application right away. And maybe with virtual staining, once we get five or ten markers available, suddenly you’ve got a whole multiplexing capability that you can run straight on your computer, which you couldn’t do before. And maybe some people will come and say, actually, this triggers a really great idea for me: I think we could go and look at this whole panel of things that would have taken me a ton of time to do manually.

[00:28:27] Again, it’s part of that iterative cycle of innovating. You’ve got to keep in touch with the market and your customers, but also continue pushing on the vision and taking risks. Right now we continue to take the risk, because we believe this is a great platform and it needs to keep progressing.

[00:28:42] And even if not every marker works, we learn a lot about machine learning at the same time, and a lot about digital pathology workflows and particular markers. So it’s certainly not knowledge that’s wasted or lost.

Aleks: [00:28:55] I think there’s definitely a lot of application for this.

[00:28:58] I think one hurdle will come from the skepticism of the future users, and here is where explainable AI comes into play, because IHC as such is a complex method; there are a lot of variables that influence the binding of the antibodies. I think it’s going to be a significant challenge to convince people that it can actually be done digitally: that such a complex procedure, the interaction between the molecules in the tissue, can just be translated digitally from the information in the image. But it helps if you have this validated sufficiently for different markers, and you can point out what the driving factor is in deciding that these cells are positive. And it doesn’t have to be something that we see with our eyes, because if it were, we would probably have already seen it.

Victor: [00:30:00] Yeah, that’s the debate internally. I think there will certainly be markers for which the model and the platform will not perform with sufficient accuracy to be deployed.

Aleks: [00:30:11] That is fine. It doesn’t have to be everything.

Victor: [00:30:14] Exactly, yeah. And so we hope that at least for the markers that we can visually distinguish on an H&E slide, we can knock it out of the park and have a high-performing algorithm.

[00:30:25] And then the other side of it, which we think is exciting, is that maybe there are morphologies or distinctions that we’re not aware of right now, that are too small for the naked eye to distinguish, but that the algorithm will be able to detect. Maybe it doesn’t yield super high performance, but, linking back to explainable AI, if we can go back and understand how it tried to differentiate and do this marking, maybe we’ll pick up some new things. I think that’s part of why we want to test a range of markers, from what people might think are the easy ones all the way to what will certainly be some of the more challenging and interesting ones to go after.

Aleks: [00:31:04] Yeah. I am excited. I keep my fingers crossed for this to succeed and to get to the next level. And is there anything else you want to tell the listeners?

Victor: [00:31:16] First of all, thanks for listening to me talk about Owkin and virtual staining. And if you have any interest in the technology or, of course, in learning how it was developed, Aleks, I’ll share the link with you so you can share it with your listeners.

[00:31:29] There’s a blog post where our data scientist Olivier, who did the great work here, talks about his journey in developing the model. And of course, if you want to trial it, get in touch with us; we’d be more than happy to show you how it works, and also to try it on some H&E slides of your own.

[00:31:45] So we’re open. Yeah.

Aleks: [00:31:47] So, one last question. You said it can either be deployed as an algorithm to run on your own infrastructure, or used within your software platform.

Victor: [00:31:58] Yeah. As for the platform on which you can run this, there are really two ways. The first is, if you have a server in your institution, we can deploy the algorithm there, and you can basically send an image via an API, an application programming interface. The algorithm will run directly on your server and generate the image, and then you can just go and find the file directly there. So no user interface; it’s much more of a command-line approach. In parallel, Owkin is also developing a software platform called Studio. Studio is software that we developed for medical researchers who want to train new machine learning models.

[00:32:38] It works with pathology images, and just recently we also added radiology as a functionality. So you can build your cohort within Studio, assemble your pathology images, for example, and then train a new machine learning model, or apply one of our existing models, like a prognostic model or recognition of a certain gene expression.

[00:32:58] And you can also do virtual staining. Soon to be released on Studio: you get onto the platform, select the virtual staining, upload your H&E slides, and you get the virtually stained slide back, to download or just to visualize and browse directly in Studio.

[00:33:15] So that’s much more of a user interface; you can click around, and if you’re not a software person comfortable with the command line, that would be the solution for you.
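
For the first, server-side route Victor describes, a client call could look something like the sketch below. The endpoint URL, field names, and response shape are purely illustrative assumptions, not Owkin’s actual API.

```python
# Hypothetical client call for the server/API deployment route.
# Endpoint, fields, and response are illustrative, not Owkin's real API.
import requests

with open("sample.svs", "rb") as f:
    resp = requests.post(
        "https://your-institution-server.example/virtual-stain",  # hypothetical endpoint
        files={"slide": f},
        data={"marker": "CD3"},
    )
resp.raise_for_status()
print(resp.json())  # e.g. the server path of the generated IHC image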

Aleks: [00:33:24] Okay, thank you so much for joining me today. Thank you for taking your time and have a great day.

Victor: [00:33:31] Thanks, Aleks. Yes, all the best. Bye-bye.
