That's a good point. It would have been more interesting if the view changed simply when he made a decision (a diagnosis), even if it was the wrong one, so he would actually need to know his stuff. It would still be the uber-convenient x-ray vision for spotting abnormalities, but with no exact precognition beyond that; it would just work like ctrl-clicking items in Windows to select several at once, in case someone had multiple problems. It's dumb that the skill already knows the disease but he still has to name it. It's like training software for med students.