Year 2: High-throughput truthing of microscope slides to validate artificial intelligence algorithms analyzing digital scans of pathology slides: data (images + annotations) as an FDA-qualified medical device development tool (MDDT).
- Here is an overview presentation given at the Pathology Informatics Summit.
- “A Collaborative Project to Produce Pathologist Annotations to Evaluate Viewers and Algorithms.”
- 20190508-HTToverviewGallasAtPIsummit-v4.pdf (2 MB, uploaded by Brandon D. Gallas 5 months 4 weeks ago)
- Here is an executive summary (four slides) of the project with two new exciting deliverables.
- 20190402-HTTexecSummaryPublic.pdf (193 KB, uploaded by Brandon D. Gallas 7 months 2 weeks ago)
- Here is a project overview presentation given Nov.–Dec. 2018 to FDA/CDRH/OSEL management, the www.TILsinbreastcancer.org working group, project collaborators, and others.
- 20190402-HTToverviewPublic.pdf (348 KB, uploaded by Brandon D. Gallas 7 months 2 weeks ago)
- Here is a link to the original proposal for internal funding.
- Link to full proposal submitted 10/19/2018. Funding awarded in March 2019.
- Link to list of collaborators
- Link to updates
Pitch: We are launching a project to crowdsource pathologists and collect data (images + pathologist annotations) that can be qualified through the FDA/CDRH medical device development tool (MDDT) program. The MDDT-qualified data would be available to any algorithm developer to validate their algorithm’s performance in a submission to the FDA/CDRH.
Notice that the year 2 title changed to emphasize “data (images + annotations) as an FDA-qualified medical device development tool (MDDT).” If we can “qualify” a data set via the FDA/CDRH MDDT program, it will be available to developers to use as their pivotal validation data in a submission to the FDA. That is the primary aim of year 2. Leading up to the year 2 submission, we are recruiting partners to help; check out the letters of support in the submission! We plan to organize data-collection events at the large meetings that pathologists attend and at dedicated workshops at collaborating sites.
This project is generally open to new participants.
We will use the eeDAPstudies NCIPhub group to coordinate communications. If you are a member, you will receive communications about this project in addition to communications about the eeDAP MDDT. If you are not a member, sign up, or check for updates here and in the blog. Updates will also be provided to the WSI working group on a less frequent basis.
Plain language summary
Artificial intelligence (AI) promises to reduce pathologists’ burden of searching for and evaluating cells and features on slides; let the computer do it. The regulatory question is then, “How well can the computer algorithms do the tasks?” The most practical ground truth for evaluating algorithm performance is pathologists’ assessments of the whole slide images (WSI). The problem is that clinicians make mistakes and don’t always agree. Furthermore, scanners have limited spatial and color resolution and currently produce a 2D slice of a 3D specimen. In this work, we plan to conduct high-throughput truthing studies to qualify data (images and annotations) as a medical device development tool. The data can then be used by any algorithm developer as the validation set for an FDA submission.
We have developed a hardware and software evaluation environment for digital and analog pathology (eeDAP). eeDAP allows us to automatically present pre-specified regions of interest, or individual cells and features, on a microscope for pathologist evaluation. This allows us to compare location-specific computer algorithm results to microscope-based pathologist evaluations. Last year, we installed eeDAP on a 14-head microscope and completed a data-collection session, collecting evaluations from 12 pathologists simultaneously in a single visit to Memorial Sloan Kettering (MSK). We loaned the eeDAP system to MSK, and they proceeded to conduct a study comparing mitotic figure (MF) counting on a microscope and on four different WSI scanners. We did the primary analysis of the results using statistical methods we developed that account for reader and case variability, and we are coauthors on the paper under review at Diagnostic Pathology.
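To give a feel for the reader-variability issue mentioned above, here is a minimal, illustrative sketch (not the project’s actual statistical method, and the labels are hypothetical): it summarizes how often pathologists agree on per-cell labels via mean pairwise percent agreement, and forms a naive majority-vote reference label per cell.

```python
from collections import Counter
from itertools import combinations


def pairwise_agreement(labels_by_reader):
    """Mean fraction of cells on which a pair of readers agree.

    labels_by_reader: list of equal-length label lists, one per reader.
    """
    scores = []
    for a, b in combinations(labels_by_reader, 2):
        scores.append(sum(x == y for x, y in zip(a, b)) / len(a))
    return sum(scores) / len(scores)


def majority_label(labels_by_reader):
    """Per-cell majority vote across readers (a naive 'truth' proxy)."""
    n_cells = len(labels_by_reader[0])
    return [
        Counter(reader[i] for reader in labels_by_reader).most_common(1)[0][0]
        for i in range(n_cells)
    ]


# Hypothetical labels for 6 cells from 3 readers (1 = mitotic figure, 0 = not)
readers = [
    [1, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
]
print(pairwise_agreement(readers))  # mean pairwise agreement, about 0.78
print(majority_label(readers))      # [1, 0, 1, 1, 0, 0]
```

A real analysis would go further, e.g. chance-corrected agreement and variance components for readers and cases, which is the kind of reader-and-case variability the project’s methods address.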
Since our last submission, we have identified two more partners whose challenges are still in the development stage, giving us a better opportunity to obtain the glass slides and shape the challenges. The first group is led by DIDSR colleagues; they are designing a challenge for the SPIE Medical Imaging community. The second group is organized at www.tilsinbreastcancer.org. This highly motivated group has more than 140 pathologists, and its chair, Roberto Salgado, wants to leverage the group to create an MDDT dataset. He believes that a regulatory-grade, FDA-qualified dataset for the community to use in submissions to the FDA would encourage algorithm developers to focus on the detection of lymphocytes for cancer prognosis. We just piloted a lymphocyte evaluation study at the annual meeting of the American Society for Clinical Pathology and hope to collect pivotal data in partnership with that and other society conferences that draw thousands of pathologists, enabling high-throughput truthing. We also have several new partners offering support with hosting data-collection events, providing digital data infrastructure and technology, and providing/recruiting pathologists.