
IIRRT, Date: 2016/10/08, Location: Dublin

Publication date: 2016-10-08

Authors:

Decoster, Robin; Toomey, Rachel; Mol, Harrie; Bulter, Marie-Louise

Abstract:

Background. Radiographers evaluate the clinical acceptability of a radiograph before submitting it for review by a radiologist. Whether a radiograph is deemed acceptable for diagnosis depends on the individual judgement of the radiographer. According to the literature, a radiographer's cognitive definition of image quality is formed soon after training; these findings are rooted in qualitative research. This study takes a more quantitative approach to exploring how novice and experienced radiographers differ in their evaluation of image quality.

Methods. Twelve radiographers, equally divided into a novice group (mean experience 1.5±0.5 y) and an experienced group (mean experience 9.7±3.83 y), evaluated the visibility of five anatomical structures in 22 lateral cervical spine and 22 AP chest radiographs on a secondary-class display. All observers assessed both datasets using a visual grading characteristics (VGC) approach with a scale from 1 to 5. Both groups also rated the clinical acceptability of each radiograph using RadLex. The time to perform the task was recorded.

Results. No significant (p>0.05) difference was found between the groups for any of the five structures in either type of radiograph. The experienced radiographers performed the task slightly but significantly slower than the novice radiographers (by 5.7 s, p=0.03). Furthermore, the novice radiographers rejected 55.3% of the radiographs, compared with 37.9% for the experienced radiographers.

Conclusion. The similar evaluation of the anatomical structures indicates a similar cognitive definition of image quality between novice and experienced radiographers and supports the literature. In the overall RadLex assessment of image quality, the novice radiographers appear to be more stringent. The difference in time may be associated with computer literacy.