Researchers who evaluated medical information online regarding 2 common ophthalmic procedures found that the information on both is “of variable and often poor quality” when accessed through a search engine. The study says that even ophthalmologists’ own websites can contain inaccuracies.
While there is a demand for patient-centered health websites, studies indicate that patients reading them may self-misdiagnose. The researchers sought to study the available online information regarding peripheral iridotomy and trabeculectomy.
Evaluators conducted advanced searches on May 15, 2019, in Vancouver, Canada, and independently used 2 quality assessment instruments to assess the first 50 functional websites addressing each procedure.
They used a quality assessment tool described in the Journal of the American Medical Association (JAMA) and the DISCERN instrument. The DISCERN tool, described in the Journal of Epidemiology and Community Health, incorporates a series of 15 questions addressing specific areas regarding the quality of health information, as well as 1 question regarding the overall quality of the website. Using the JAMA tool, evaluators assigned each website either a 0 (absent) or 1 (present) based on clear display of authorship, attribution (references for claims), disclosure (of any conflicts of interest), and last-updated date. With DISCERN, they assessed websites on Likert scales of 1 (low-quality) to 5 (high-quality).
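The scoring mechanics described above can be sketched in a few lines. This is a minimal illustration, not the study's actual analysis code; the function names and the dictionary of criteria are hypothetical, and only the scoring rules (one point per JAMA benchmark present, 16 DISCERN questions rated 1 to 5) come from the instruments themselves.

```python
# Hypothetical sketch of how the two instruments tally a website's score.
# The four JAMA benchmarks are each marked absent (0) or present (1).
JAMA_CRITERIA = ("authorship", "attribution", "disclosure", "last_updated")

def jama_score(present: dict) -> int:
    """Sum of 0/1 marks across the four JAMA benchmarks (maximum 4)."""
    return sum(1 for c in JAMA_CRITERIA if present.get(c, False))

def discern_total(ratings: list) -> int:
    """Total of the 16 DISCERN Likert ratings, each 1 (low) to 5 (high)."""
    assert len(ratings) == 16 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

# A site with authorship, disclosure, and a last-updated date but no
# references would score 3 of 4 on the JAMA benchmarks:
score = jama_score({"authorship": True, "attribution": False,
                    "disclosure": True, "last_updated": True})
```

Under this scheme DISCERN totals can range from 16 to 80, which is consistent with the score ranges the study reports below.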
Eleven websites on peripheral iridotomy received a perfect score of 4 on the JAMA scale, while 14 received 3 points, 7 received 2 points, 4 received 1 point, and 14 received 0 points. “Disclosure” was fulfilled in 18 websites and “attribution” in 20; “authorship” and “last-updated date” were each noted in 33 websites.
Seventeen websites on trabeculectomy received a score of 4, 9 received 3 points, 3 received 2 points, 4 received 1 point, and 17 received 0 points. “Attribution” was fulfilled in 21 of these websites, “disclosure” in 24, “authorship” in 29, and “last-updated date” in 31.
Ophthalmologists’ clinical websites had the lowest mean scores and journals’ websites the highest (peripheral iridotomy: 0.22 vs 3.6; trabeculectomy: 0 vs 3.7, respectively).
Level of agreement for individual questions ranged from κ = 0.874 for “disclosure” for peripheral iridotomy to κ = 1.00 for “attribution” for both procedures and for “last-updated date” for iridotomy.
DISCERN scores for peripheral iridotomy ranged from 25 to 67, with an average score of 44. Mean scores for individual questions ranged from 1.3 (support for shared decision-making) to 4.7 (whether websites achieved aims).
For trabeculectomy, scores ranged from 23 to 64, with an average score of 43.7. Mean scores on individual questions ranged from 1.2 (regarding the consequences of forgoing treatment) to 4.5 (whether websites achieved their aims).
Level of agreement for questions ranged from κ = 0.550 for question 2 for trabeculectomy to κ = 0.884 for question 16 for iridotomy. Intraclass correlation coefficients ranged from 0.980 (P < .001) for DISCERN trabeculectomy to 0.987 (P < .01) for JAMA trabeculectomy.
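The κ values above are Cohen's kappa, which measures agreement between the two independent evaluators after discounting agreement expected by chance. A minimal sketch of the standard formula, κ = (p₀ − pₑ)/(1 − pₑ), for two raters is shown below; the function name and sample ratings are illustrative, not the study's data.

```python
def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters scoring the same items.

    p0 = observed proportion of agreement; pe = agreement expected
    by chance from each rater's marginal category frequencies.
    """
    n = len(rater_a)
    p0 = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
             for c in categories)
    if pe == 1:  # degenerate case: chance agreement is already perfect
        return 1.0
    return (p0 - pe) / (1 - pe)

# Illustrative 0/1 marks (e.g., "attribution" present or absent) on
# four hypothetical websites; the raters disagree on one item.
k = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # → 0.5
```

A κ of 1.00, as reported for “attribution,” means the evaluators agreed on every website beyond what chance would predict.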
The study authors suggested that doctors apply these rating instruments when recommending websites to their patients for information on diagnoses and treatments.
Limitations of the study included that the search was specific to surgical procedures related to glaucoma and that the researchers did not know which search terms patients actually use to access ophthalmology information. The tools are also not designed to account for information presented in videos.
Khan AM, Khan HM, Huang P, et al. Assessing the quality of online information on glaucoma procedures. Can J Ophthalmol. Published March 13, 2021. doi:10.1016/j.jcjo.2021.02.010