It is well known that radiologists interpreting screening mammograms occasionally miss early signs of breast cancer; for that reason, two radiologists examine every image. Scientists have developed Computer Aided Detection (CAD) systems that indicate probable locations of abnormalities to radiologists, who then decide whether they are actionable. Because of the complexity and variability of the images, however, these systems are imperfect: they sometimes miss cancers and sometimes prompt normal regions of the images. There are also different kinds of cancer, and the algorithms that detect them have different error rates. If there are too many prompts, radiologists can become distracted or begin to ignore them, but with more prompts the chance of prompting abnormal regions increases. We have evaluated CAD clinically and found that current CAD systems increase the cancer detection rate for some readers, but they also increase the proportion of women who are recalled for further investigation despite not having cancer.
In this project we are trying to find the best balance between on-target prompts and prompts marking normal features. Because this involves testing many different combinations of prompt performance, we cannot do it using radiologists, and have instead opted for a 'citizen science' approach, framing our experiment as a game. Our targets are bats, which we insert digitally into images of flocks of birds.
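The trade-off described above can be illustrated with a toy simulation. The sketch below is not the project's actual model; the function name, parameters, and probabilities are all illustrative assumptions. It treats each image as possibly containing a target, prompts the target with some true-positive rate, prompts a normal region with some false-positive rate, and assumes a reader detects a prompted target more often than an unprompted one.

```python
import random

def simulate_reader(n_images, target_rate, prompt_tpr, prompt_fpr,
                    base_hit, prompted_hit, seed=0):
    """Toy model (illustrative, not the study's design): a reader finds
    an inserted target with probability `base_hit` unaided, or
    `prompted_hit` when the system has prompted it. `prompt_tpr` is the
    chance a real target is prompted; `prompt_fpr` is the chance a
    normal region is prompted instead."""
    rng = random.Random(seed)
    hits = targets = false_prompts = 0
    for _ in range(n_images):
        if rng.random() < target_rate:  # this image contains a target
            targets += 1
            prompted = rng.random() < prompt_tpr
            p_detect = prompted_hit if prompted else base_hit
            if rng.random() < p_detect:
                hits += 1
        if rng.random() < prompt_fpr:  # spurious prompt on a normal region
            false_prompts += 1
    return hits / max(targets, 1), false_prompts / n_images
```

Running this over a grid of (`prompt_tpr`, `prompt_fpr`) pairs gives, for each hypothetical prompt mix, an estimated reader hit rate alongside the rate of spurious prompts, which is essentially what the game lets us measure with human players instead of simulated ones.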
To play, you need a valid account and to be logged in. Play here.