Abstract

Purpose: A recently proposed model observer mimics the foveated nature of the human visual system by processing the entire image with varying spatial detail, executing eye movements, and scrolling through slices. The model can predict how human search performance changes with signal type and modality (2D versus 3D), yet its implementation is computationally expensive and time-consuming. Here, we evaluate various image quality metrics using extensions of the classic index of detectability expression and assess foveated model observers for search tasks.

Approach: We evaluated foveated extensions of a channelized Hotelling model and a nonprewhitening matched filter model with an eye filter. The proposed methods involve calculating a model index of detectability (d′) at each retinal eccentricity and combining these with a weighting function into a single detectability metric. We assessed versions of the weighting function that differ in the measurements of human search behavior they require (no measurements, eye movement patterns, image size, and median search times).

Results: The index of detectability across eccentricities weighted by observers' eye movement patterns best predicted human 2D versus 3D search performance for both a small microcalcification-like signal and a larger mass-like signal. The metric with a weighting function based on median search times was the second-best predictor of human results.

Conclusions: The findings provide a set of model observer tools to evaluate image quality in the early stages of imaging system evaluation or design without implementing the more computationally complex foveated search model.
Journal of Medical Imaging – SPIE
Published: Jul 1, 2021
Keywords: model observers; psychophysics; visual search; 3D image modalities
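The combination step described in the Approach can be sketched in a few lines: a d′ value is computed at each retinal eccentricity and collapsed into one figure of merit via a normalized weighting function. This is a minimal illustrative sketch, not the paper's implementation; the eccentricity values, d′ values, and fixation-based weights below are hypothetical, and the linear weighted-sum combination rule is an assumption.

```python
import numpy as np

def combined_dprime(dprimes, weights):
    """Collapse per-eccentricity d' values into a single detectability
    metric using a normalized weighting function (assumed linear rule)."""
    d = np.asarray(dprimes, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return float(np.dot(w, d))

# Hypothetical example: d' typically falls off with eccentricity; the
# weights here mimic the proportion of fixations landing at each
# eccentricity (illustrative numbers only, not measured data).
dprime_by_ecc = [2.0, 1.5, 0.8, 0.3]   # d' at, e.g., 0, 2, 5, 10 deg
fixation_props = [0.5, 0.3, 0.15, 0.05]

print(combined_dprime(dprime_by_ecc, fixation_props))
```

Swapping in a different weighting function (uniform weights for the no-measurement case, or weights derived from image size or median search times) changes only the `weights` argument, which is what makes the metric cheap to recompute relative to running the full foveated search model.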