About the project

Project phases


The key research issue of the first phase of the project will be the development of a reference imaging dataset that comprises mpMRI scans of the prostate gland supplemented with full medical descriptions and a set of annotations. The final retrospective dataset is planned to contain between 400 and 600 cases. This will be achieved in collaboration with the Lower Silesian Oncology Center in Wroclaw, Poland. At this stage, a team of radiologists, clinicians, and researchers will specify the criteria for including patients in the database, in line with the basic objectives of the project. The experts will identify study groups of patients with clinically significant and insignificant lesions in the prostate gland, the expected number of subjects, and other criteria that characterise the distribution of patients across the groups, including their ages, serum PSA levels, rectal examination results, and the presence of lesions in specific zones of the prostate.

Patients (a retrospective group) will be selected from among those who underwent prostate biopsies at the Lower Silesian Oncology Center between 2017 and 2021 and for whom mpMRI scans are available. Following analysis of their medical documentation, an optimal group of patients will be selected according to the assumptions for each study group. Data and scans will be verified to confirm that they are accurate, complete, and of high quality. On the basis of historical records and under the guidance of clinical experts, detailed medical descriptions of the cases stored in the database will be prepared; they will include information on suspicion of cancer and preliminary diagnoses (historical PI-RADS scale assessments), diagnoses (histopathological type of tumour and histological risk factors for recurrence and dissemination, TNM classification, degree of malignancy, and number of foci in the gland), prognoses (risk group according to the European Association of Urology and life expectancy according to the World Health Organization), comorbidities, medications, and the effects and consequences of therapies. Available historical clinical data (including historical PSA values, previous prostate procedures, and risk factors) will also be collected.

The image data will be fully anonymised and merged with data stored in other forms to prepare complete, consistent cases for the database. Data will be uploaded to the database regularly, where it will be supplemented with annotations. During the annotation process, prostate and lesion contours will be marked on the corresponding sequences and layers of each mpMRI scan. The contours will be drawn using the external MD.ai platform, which enables the creation of high-quality training and validation datasets with labels. Each lesion will also be described in a structured manner using a standardised lexicon of terms consistent with the PI-RADS terminology. The structured annotations will rely on a structural report template on the eRADS platform, which is being developed by OPI PIB. Annotations will be added independently by three radiologists who are experienced in describing prostate mpMRI scans and evaluating lesions on the PI-RADS scale.

The first phase will also focus on the selection of machine learning models that are suitable for various forms of computer analysis of mpMRI scans of the prostate gland. Research that aims to review the literature and determine the state of the art will be conducted. Machine learning methods and architectures will be adapted to various usage circumstances. To properly evaluate the solutions, experts must define the evaluation criteria, select appropriate comparative measures and validation rules, and establish solution acceptance levels.

During the first phase of the project, a concept of how the structural reporting system should operate will be developed, and assumptions for the proof-of-concept solution will be proposed. Functional and nonfunctional assumptions concerning the system’s operation must also be designed, including those that pertain to the adaptation of the system to handle all usage scenarios. The first phase includes the development of an initial version of the system with basic functionalities (integration with the imaging database and the delivery of structural reports), which does not integrate with imaging analysis algorithms.


The main research challenge of the second phase will be to train the predeveloped models designed for various usage scenarios using the entire database and properly prepared training and testing sets. The models will be optimised; if their performance proves unsatisfactory, alternative solutions will be sought. Efforts will focus on the fundamental development of algorithms used to analyse mpMRI prostate images. Such algorithms, by design, serve various applications and assist radiologists in their work. They can be described as follows:

1. Automatic segmentation of the prostate gland

The prostate volume is most commonly calculated according to the ellipsoid formula (the prostate’s maximum anteroposterior dimension × the maximum right-to-left dimension × the maximum superoinferior dimension × π/6). This approach is applied because manual segmentation of the prostate gland is extremely time-consuming. It is, however, subject to approximation errors, which propagate into the estimation of PSA density, an important prognostic marker for prostate cancer. Automatic segmentation of the prostate gland allows prostate volume and PSA density to be estimated rapidly and accurately. Scan areas where the prostate has been segmented automatically can also provide input for subsequent algorithms.
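As a rough illustration, the ellipsoid estimate and the derived PSA density can be computed in a few lines of Python; the measurements and PSA value below are invented for the example:

```python
import math

def ellipsoid_volume_ml(ap_cm, rl_cm, si_cm):
    # Ellipsoid formula: AP x RL x SI x pi/6 (1 cm^3 equals 1 ml).
    return ap_cm * rl_cm * si_cm * math.pi / 6

def psa_density(psa_ng_per_ml, volume_ml):
    # PSA density (ng/ml^2): serum PSA divided by prostate volume.
    return psa_ng_per_ml / volume_ml

# Hypothetical maximum diameters (cm) and serum PSA value (ng/ml):
vol = ellipsoid_volume_ml(4.0, 5.0, 4.5)
print(f"volume = {vol:.1f} ml, PSA density = {psa_density(6.2, vol):.3f}")
```

An automatic segmentation mask would replace the three hand-measured diameters with a voxel count, removing the ellipsoid approximation altogether.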

2. Automatic lesion detection

Lesions on mpMRI scans are difficult to detect, even for experienced specialists. A variety of methods are used: identification of a potential location (lesion focus), provision of information on the spatial probability of lesions (heatmaps that depict the probability distribution), and full segmentation of lesion regions (segmentation masks that provide information on the extent and volume of lesions).
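The three output forms can be related to one another on a toy probability map. The sketch below does not reflect the project’s actual models; it only shows how a focus point, a heatmap, and a segmentation mask derive from the same per-voxel probabilities:

```python
import numpy as np

# Toy per-pixel lesion probability map for one scan slice; a trained
# detector (hypothetical here) would produce such a map per voxel.
heat = np.zeros((8, 8))
heat[2:5, 3:6] = 0.7
heat[3, 4] = 0.95

# Output form 1 -- point detection: the most probable lesion focus.
focus = np.unravel_index(np.argmax(heat), heat.shape)

# Output form 2 -- the heatmap itself is the spatial probability
# distribution that can be overlaid on the scan.

# Output form 3 -- full segmentation: threshold into a binary lesion mask,
# from which the extent and volume of the lesion can be read off.
mask = heat >= 0.5
volume_voxels = int(mask.sum())
print(focus, volume_voxels)
```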

3. Automatic lesion recognition

Recognition algorithms can classify lesions as either clinically significant or insignificant. Alternatively, PI-RADS scale values may be assigned automatically; this approach aligns better with real-life radiologic procedures and methodologies. In the case of automatic lesion recognition, the explainability of the decisions made by the algorithms becomes particularly relevant. Lesions can also be recognised through multiclass algorithm outputs, in which each output is associated with the recognition of a specific lesion feature. The PI-RADS score results from specific lesion features in various mpMRI sequences, supplemented with information on the location of lesions. The results of feature recognition can be translated into a final assessment of lesions (for example, by using a set of decision rules), which can lead to more reliable determination of their clinical relevance, or to increased confidence in earlier, automatic assignments of PI-RADS values to previously detected lesions.
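A decision-rule layer of this kind can be sketched as toy Python rules that map a few recognised features to a category. This is a drastically simplified illustration, not the full PI-RADS v2.1 assessment logic, and the feature inputs are invented:

```python
# Highly simplified, illustrative decision rules that translate recognised
# lesion features into a PI-RADS category.
def toy_pirads(zone, dwi_score, t2w_score, dce_positive):
    # DWI is the dominant sequence in the peripheral zone and T2W in the
    # transition zone; a positive DCE finding can upgrade a borderline
    # peripheral-zone lesion.
    dominant = dwi_score if zone == "peripheral" else t2w_score
    if zone == "peripheral" and dominant == 3 and dce_positive:
        return 4
    return dominant

def clinically_significant(pirads):
    # A common working threshold: PI-RADS >= 4 treated as significant.
    return pirads >= 4

print(toy_pirads("peripheral", 3, 2, True), clinically_significant(4))
```

Because each rule is explicit, the path from recognised features to the final score can be reported back to the radiologist, which is one way to address the explainability requirement mentioned above.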

4. Automatic lesion progression determination

In the case of mpMRI diagnostics, a tumour’s progression is assessed through biopsy and histopathological evaluation using the Gleason score. Modern radiomics opens new perspectives for stratifying tumour progression based directly on imaging data. Radiomics involves the extraction of a large number of quantitative features from radiographic images. These radiomic features have the potential to uncover tumoral patterns that cannot be observed by the naked eye. Coupled with the power of machine learning, radiomics enables the analysis of large sets of images and the construction and extraction of radiomic signatures. Mapping the distribution of such signatures in images enables the creation of maps of areas with different characteristics or various degrees of lesion progression.
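A minimal sketch of first-order radiomic feature extraction is shown below, using synthetic image data and an invented lesion mask; real pipelines (for example, dedicated radiomics libraries) compute hundreds of shape, intensity, and texture features:

```python
import numpy as np

def first_order_features(image, mask):
    # A handful of first-order radiomic features over a segmented region
    # of interest; purely illustrative, not the project's feature set.
    roi = image[mask]
    hist, _ = np.histogram(roi, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "range": float(roi.max() - roi.min()),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }

rng = np.random.default_rng(0)
image = rng.normal(100.0, 15.0, size=(32, 32))  # synthetic slice
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                         # hypothetical lesion mask
features = first_order_features(image, mask)
```

Computing such feature vectors over a sliding window, rather than a single region, is what yields the signature maps described above.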

5. Automatic search for similar cases and progression monitoring

Traditionally, content-based image retrieval is used to search image databases for visually similar objects. Through image analysis and application of machine learning methods, specific lesion features can be recognised automatically and used as elements that define the profiles of cases characterised by specific visual features. Given that certain combinations of features translate into different PI-RADS scores, the use of such profiles to search for similar cases should lead to the selection of cases with similar tumour progression. When examining the same patients at different times, such profiles can be used to assess the degree of lesions’ progression.

For each algorithm, individual learning and testing datasets will be prepared using a stratification technique. It is worth noting that the dataset will comprise cases with varying attributes, such as ages, PI-RADS scores, and the locations of prostate lesions. Random division of the data into learning and test sets may lead to the accumulation of similar cases within a single set, for example, cases characterised by similar locations of prostate lesions. To fully control the creation of the learning and testing sets, stratification algorithms will be developed; they will select the optimal division of data by set and ensure that the sets are balanced and representative.
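A single-attribute version of such a stratified split can be sketched as follows; the project’s algorithms would balance several attributes simultaneously, whereas this sketch stratifies by one invented attribute only:

```python
import random
from collections import defaultdict

def stratified_split(cases, key, test_frac=0.25, seed=42):
    # Group cases by stratum (e.g. PI-RADS score or lesion zone) and draw
    # the test fraction from every stratum, so that both sets remain
    # balanced and representative.
    by_stratum = defaultdict(list)
    for case in cases:
        by_stratum[key(case)].append(case)
    rng = random.Random(seed)
    learn, test = [], []
    for stratum in sorted(by_stratum):
        group = list(by_stratum[stratum])
        rng.shuffle(group)
        n_test = max(1, int(len(group) * test_frac))
        test.extend(group[:n_test])
        learn.extend(group[n_test:])
    return learn, test

# Synthetic cases with one stratification attribute (a PI-RADS score):
cases = [{"id": i, "pirads": 2 + i % 4} for i in range(40)]
learn, test = stratified_split(cases, key=lambda c: c["pirads"])
```

With a purely random split, a rare stratum could end up entirely in one set; stratifying per group guarantees each score is represented proportionally in both sets.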

In the second phase of the project, the reference imaging dataset, which comprises mpMRI scans of the prostate gland, will be extended. The goal is to obtain new cases from other sources (other oncology centres) to ensure the generalisation of the image analysis models. We plan to acquire information on 150–300 new cases, preferably from two different centres. The data will be supplemented with annotations according to the same methodology used for the primary database. The contours will be drawn using the external MD.ai platform, which enables the creation of high-quality training and validation datasets with labels. Each lesion will also be described in a structured manner by using a standardised lexicon of terms consistent with the PI-RADS terminology. Structured annotations will rely on a structural report template available on the eRADS platform, which is being developed by OPI PIB. Annotations will be added independently by three radiologists who are experienced in describing prostate mpMRI scans and evaluating lesions on the PI-RADS scale.

The second phase will also see continued development of the structural reporting system, primarily through integration with models that enable analysis of mpMRI scans.


The third phase will focus on the final optimisation of the system: additional functionalities will be included and the final version to be implemented will be developed. The system will be supplemented with updated image recognition models (with potentially improved generalisation capabilities). The examination report generation functionality will also be enhanced: the structural description will be complemented with various support results, and those results will be explained in as much detail as possible. An e-learning platform will be implemented. Logged-in users will gain access to the cases stored in the database. Cases used for learning and training purposes will form separate groups. Users will also be able to track their progress and compare their results with those of other users.

The e-learning platform will offer various educational scenarios.

Scenario 1: A collection of cases (the use of the database of cases)

A classic scenario that involves viewing selected cases stored in the database. Image data is viewed in the DICOM viewer, which is integrated with the system and the platform. Users have access to classic case descriptions that contain information on diagnoses (diagnosed foci supplemented with PI-RADS scores, and biopsy results) and therapies undergone.

Scenario 2: A collection of cases supplemented with system structural reports and AI algorithm-supported results

This scenario is similar to the one outlined above; it differs, however, in that when cases from the database are viewed, they are supplemented not only with classic descriptions, but also with full reports generated by the structural reporting system. The reports are integrated with the results of the mpMRI scan analysis algorithms. The reports describe all foci in a structured manner using a standardised lexicon of terms consistent with the PI-RADS terminology. They also contain maps of potential lesion locations, automatically suggested PI-RADS scores, and evaluations of the clinical relevance of detected lesions. All of these elements are the result of the mpMRI scan analysis algorithms that are integrated in the system. Radiologists can compare the results of the reporting system (AI-enhanced structured reports) with historical data and with their own assumptions pertaining to particular cases.

Scenario 3: Interactive work in the system without AI support

In this scenario, instead of relying on the collection of cases, radiologists focus on their own examination of cases. After studying a case, radiologists must describe their examination in the structural reporting system (mark and label the lesions detected, specify their features, and determine the PI-RADS scores). The results of the radiologists’ work are then validated against the reference data available in the system. Radiologists can compare their reports (which are automatically generated by the system based on the data provided and the lesion features marked in the report form) with reference reports to identify differences and discrepancies. In this scenario, no AI-supported results are available. Reports are generated based on the structured report forms, which are standardised according to PI-RADS.

Scenario 4: Interactive work in the system with AI support

This scenario is similar to Scenario 3. The main difference lies in the preparation of structural report forms. Radiologists are provided with additional suggestions that are generated by mpMRI scan analysis algorithms (maps of potential lesion locations, automatically suggested PI-RADS scores, and evaluations of the clinical relevance of detected lesions). They can compare their reports (which are automatically generated by the system based on the data provided and the lesion features marked in the report form, and on the results of the automatic image analysis) with the reference report, and evaluate to what extent AI helped them to make their decisions.

Scenario 5: Similar case search

Based on a similar functionality available in the structural reporting system, the e-learning platform will allow users to search for similar cases. The platform enables logged-in users to connect to the system. The integration of the e-learning platform with the structural reporting system is key: it will enable all system functionalities to be used in each learning scenario (but not in the diagnosis scenario).

The third phase will also see a series of preimplementation works. The system, ideally, will be classified as a medical device, be assigned to a specific class, and be labelled with the CE marking. Tests of the final version will also be conducted. The final version of the system will be submitted for testing by the medical community. During the system verification procedure, the group of cases to be verified will be selected. The target group will comprise approximately 200 patients. The reporting system will be verified by analysing system indications and reporting discrepancies. The experts will compare patients’ historical data (descriptions of mpMRI scans, estimated prostate volumes, numbers of described foci in the glands, foci PI-RADS scores, and biopsy and TNM classification results) with the information available in the preliminary versions of system reports, which will be generated using the algorithms developed to analyse mpMRI scans. The historical data will be compared with the results of automatic prostate segmentation (automatically estimated volume), automatic lesion recognition algorithms, and automatic PI-RADS classification. The experts will record discrepancies between the historical data and the data from the structural reporting system. They will verify whether automatically recognised lesion features (such data may not be available in the patients’ histories) conform with the indications of the system and identify the features affected by the discrepancies. Preimplementation works will also incorporate UX tests, which will be conducted by the Laboratory of Interactive Technologies at OPI PIB. After the tests are completed, five external radiologists will be hired to conduct additional UX tests and to participate in an expert panel to evaluate the usability of the application and to provide recommendations for the development of individual user preferences in the system.
The UX tests will help optimise the system interface and identify the needs of end users (radiologists), which may translate into further improvement of work ergonomics, particularly in the effective use of AI assistance. AI support is a novel idea with which some radiologists may be unfamiliar. The way in which the suggestions generated by the algorithms are incorporated into the system interface and in which data flows through the system will require empirical verification and optimisation. Radiologists participating in the UX tests will also gain open access to the platform if they complete an evaluation survey. They will be given the opportunity to test the platform in a near-target environment.