Development and Implementation of a Refraction Competency Assessment for Optometry Students in Mozambique

Kajal Shah, PhD, Kovin Naidoo, PhD, OD, Luigi Bilotto, OD, and James Loughman, PhD


Introduction

The Mozambique Eyecare Project is a higher education partnership between the Dublin Institute of Technology (DIT), the Brien Holden Vision Institute (BHVI), the University of Ulster and Universidade Lúrio (UniLúrio), Nampula, for the development, implementation and evaluation of a model of optometry training at UniLúrio in Mozambique. The four-year optometry program was based on a curriculum developed by BHVI, with competencies drawn from the global competency-based model of the World Council of Optometry (WCO) and the Association of Regulatory Boards of Optometry (ARBO). The model allows for objective comparisons of scope of practice between countries. It provides a vertical career ladder for individuals seeking to expand their scope of clinical practice and includes four categories of clinical care, each of which requires the competencies of the previous category: optical technology, visual function, ocular diagnostic and ocular therapeutic.1 The minimum required for individuals to call themselves an optometrist is demonstrated competence in dispensing, refracting, prescribing and the detection of disease/abnormality.1 In Mozambique the exact scope of optometric practice is not defined, but the curriculum enables at least the ocular diagnostic category to be met.

Competencies are seen as a framework for entry-level abilities in the profession of optometry in most countries. Students have to show by some means of assessment (a specific examination or some form of continuing assessment program) that they are competent in the areas listed.2 The definition of competence provided by the General Optical Council (GOC) in the United Kingdom (UK) is: “Competence has been defined as the ability to perform the responsibilities required of professionals to the standards necessary for safe and effective practice. A competency will be a combination of the specification and application of a knowledge or skill within the occupation, to the appropriate standard.”3

Literature on methods of assessing clinical competency has existed in medicine for many years; however, little published research exists for optometry.4-7 In the UK, the GOC describes the required competencies in detail, but it does not specify the method of assessment. This is left to the respective training institutions and professional organizations responsible for assessment and certification.4

An ideal assessment tool would have to be reliable and valid.8 Reliability is a measure of the reproducibility or consistency of a test, and is affected by many factors such as examiner judgments (inter-rater, examiner experience), inter-case (student) reliability, inconsistency of patient performance, and reliability of rating scales.6 Validity refers to the ability of the assessment to measure what it is supposed to measure. No valid assessment methods that measure all facets of clinical competence have been designed.6 Other factors, including the feasibility of running and resourcing the examination, are also important in a developing country context.8

Miller's pyramid conceptualizes the essential facets of assessment of clinical competence.7 Its four levels are 'knows' (basic facts), 'knows how' (applied knowledge), 'shows how' (demonstrated ability) and 'does' (performance in practice). The two base levels are assessed with written tests of clinical knowledge such as multiple-choice questions, short-answer questions, essays and oral examinations. These remain popular in the training of optometry students in the UK and Europe and in the entry-level examinations to the profession in the United States (US).9,10 Direct observation of students in clinics, the use of standardized patients (SPs) and objective structured clinical examinations (OSCEs) are commonly used to test the 'shows how' component.10-12 The final assessment of pre-registration optometry students in the UK is an OSCE, wherein students rotate through a series of stations to demonstrate clinical skills applied in a range of contexts.11 However, little literature exists on assessment of exit-level competencies from an optometry program, which is the context in Mozambique, as opposed to entry level into the profession, even though assessment strategies can be similar.

Uncorrected refractive error has been identified as a major cause of visual impairment in Mozambique.13 The only providers of refraction services within the national health system in Mozambique are ophthalmic technicians, and a previous assessment of their refraction skills showed they needed upskilling to become competent at refraction.14 We did not set out to assess dispensing and contact lens fitting, both of which are part of the competency skill set of an optometrist,1 for several reasons. For dispensing, the spectacle supply system at the university had not been established when the students graduated; therefore, their exposure to dispensing was restricted. Once graduates start working within the national health system, their access to contact lenses is limited outside the larger central and provincial hospitals. Hence, refractive error measurement was deemed the most important current responsibility of the Mozambican optometrist. For this paper, refractive error management includes the clinical judgement related to the patient's age, symptoms, accuracy of the subjective or objective refractive result, binocular vision status and disease.15 Moreover, graduates receive little or no supervision once they leave the program. In the absence of alternative refractive care provision, emphasis had to be placed on ensuring they were competent in their refraction routine.

The aim of this study was two-fold: 1) to report on the development of a process for assessing refractive error management competence that is practical to implement and keeps staffing and resourcing costs at sustainable levels within the context of limited academic resources, and 2) to understand the effectiveness of implementation of the process in the context of a low resource environment, in terms of its reliability and validity.

Competence Assessment Development and Implementation

This article describes two components: 1) the development of the competency assessment methods and process, and 2) the implementation of the assessment process. The evaluative elements of this work were conducted according to the tenets of the Declaration of Helsinki and approved by the research ethics committee at the Dublin Institute of Technology.

Assessment Development: Methods

Information was gathered from a literature review of assessment methods in medicine6-8,16,17 and high-stakes optometry exams,9-11,18 the latter being the only literature available for optometry.

A focus group discussion was conducted with two lecturers from UniLúrio responsible for the clinics of the first cohort of students and three of the program developers, selected on the basis of their clinical and academic expertise. The investigator, acting as facilitator of the focus group, asked them to read and sign a consent form. The members of the group, two each from South Africa and Colombia and one from Canada, had an average of 16 years of clinical experience and an average of nine years of teaching experience in international undergraduate optometry education, particularly in curriculum design, teaching, and developing and conducting assessments.

The investigator informed the participants about the objective of the focus group. The primary intention was to develop the assessment methods for evaluating the competencies of the optometry students, concentrating on refraction, before they graduated. Qualitative data on assessment methods and their evaluation, feasible given the challenges facing a new program in a low academic resource context, were captured using a grounded theory approach.19 The participants were asked how best to evaluate the optometry students' refraction competencies against the standard necessary for entry into the profession in Mozambique. The discussion was recorded by the investigator, then read, coded, categorized and analyzed thematically. To improve the credibility of the data, member checking was used:20 the data were presented to the focus group members to confirm the credibility of the themes and whether the overall account was realistic and accurate.

Assessment Development: Results

The key themes extracted from the focus group, which informed the development of the assessment methods included: 1) exclusion of OSCEs, 2) practice assessment by direct observation, 3) theory exams, and 4) qualitative observations of the competency assessment process.

Exclusion of OSCEs

The existing literature on different assessment procedures suitable for use in medicine6-8,16,17 and optometry9-11,18 was discussed. The two most commonly cited methods of assessment of clinical competencies, identified from the literature review, are the direct observation of students performing these clinical skills and objective structured clinical examinations (OSCEs). However, there is little published literature on the use of OSCEs in Africa. In a review of the economic feasibility of OSCEs in undergraduate medical studies, only 17 of the 1,075 publications were from Africa.21 A study comparing six assessment methods for their ability to assess medical students’ performance and their ease of adoption with regard to cost, suitability and safety in South Africa revealed OSCEs to be the most costly.22 Hence, OSCEs and the use of standardized patients were ruled out due to lack of academic resources and examiners.

Recommendations were made on the most suitable methods for competency assessment in Mozambique, taking into account that integrating disease and binocular status with the refractive result is necessary for prescribing a refractive correction.15 The competencies would be assessed as follows.

Practical assessment by direct observation

This had to be constructed to maximize validity and reliability against the time and cost of running and resourcing the exams. Students undertook an eye examination of two real patients, a presbyope and a pre-presbyope, under observation of two examiners for each patient. Clinical performance was assessed for communication, history and symptoms, vision and visual acuity (with pinhole if necessary), pupil distance, assessment of pupil responses, cover test, ocular motility, near point of convergence, externals, retinoscopy, best sphere, cross cylindrical refraction, binocular balance and near vision, final prescription, ophthalmoscopy, advice, recording, management and time-keeping. (Appendix A)

The WCO global competency model would be used as the framework for the assessment, with the assessment method mapped to the elements of competencies and performance criteria and the level of difficulty expected to be mastered by the student specified, to enhance content validity.

Direct ophthalmoscopy and an external exam using a slit lamp were also included because the presence or absence of pathology would indicate the expected level of best-corrected visual acuity and help in the management of the patient. A pass-fail cut-off score of 75%, as stipulated by the university and backed by the literature, was retained by the participants of the focus group discussion.10 The skills were weighted according to their importance for safe, effective clinical practice, based on the literature and the clinical assessment experience of the focus group participants.10 The weightings and number of checklist items for every skill are shown in Table 1; the weights sum to an overall score of 100%.

The time allowed was 50 minutes. If the examiner considered that the examination was difficult (due to a complex refraction, low vision, pathology, patient being illiterate or unable to communicate in Portuguese), an additional 15 minutes could be allowed. Examiners were to consider the difficulty of the patient in the marking of the student.

Theory exams

To cover the background knowledge required for the competent practice of refraction, two theory exams would be set: short-answer questions and a structured oral exam. Both exams would be double-marked using checklists. The overall pass mark was set at 50%, as stipulated by the university, backed by literature,5 and agreed upon by the focus group, with each section contributing equal weight.

  1. Short-answer questions (SAQ) (one hour): This consisted of six case slides. Five of the patient cases had a color photograph of an ocular condition, and one comprised a binocular vision scenario in which the patient history and clinical data were presented. The student was examined on recognition (signs and symptoms), judgment (differential diagnosis and extra tests necessary), refraction management and decision-making skills (e.g., referral, low vision appliances) for the five cases with a photograph, and on a diagnosis and treatment plan for the binocular vision case. The cases were standardized in terms of content (the elements of competencies and performance criteria assessed) and difficulty for both cohorts, taking into account the depth of coverage of a topic expected in the students' answers and the amount of time required to answer a question to the appropriate standard. Model answers were prepared ranking the importance of the different components using guidance from best-practice tools in optometry, and graded using a checklist.10,23
  2. Structured oral exam (half hour): This consisted of an oral exam of three case studies from the students' portfolios: one low vision, one binocular vision and one pathology patient, and the management of their refractive error. A checklist with a set of questions was used to elicit the students' knowledge and rationale in the management of the topic under examination, as well as their ability to communicate this knowledge. The checklist included the competencies to be assessed and was adapted from checklists used in optometry registration exams in the UK.23

For both theory exams, each question/case was first marked independently out of 10 by two examiners and then averaged to give a final score. Students who passed both the theory and the clinical exam were deemed competent to refract.
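To make the marking arithmetic concrete, the following is a minimal sketch of the scoring and pass/fail logic described above. The skill names, weights and marks are hypothetical placeholders (the actual weightings are those in Table 1); only the cut-offs (75% clinical, 50% theory) and the averaging of two examiners' marks come from the text.

```python
# Sketch of the scoring and competence decision (illustrative only).
CLINICAL_PASS = 75.0  # percent, stipulated by the university
THEORY_PASS = 50.0    # percent

def clinical_score(marks: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted clinical score out of 100. marks are fractions 0..1 per skill;
    weights are percentages that sum to 100 (as in Table 1)."""
    return sum(weights[skill] * marks[skill] for skill in weights)

def theory_score(examiner_a: list[float], examiner_b: list[float]) -> float:
    """Each question/case is marked out of 10 by two examiners independently,
    then averaged; returns the overall percentage."""
    averaged = [(a + b) / 2 for a, b in zip(examiner_a, examiner_b)]
    return 100 * sum(averaged) / (10 * len(averaged))

def competent_to_refract(clinical_pct: float, theory_pct: float) -> bool:
    """Students who pass both the clinical and the theory exam are deemed
    competent to refract."""
    return clinical_pct >= CLINICAL_PASS and theory_pct >= THEORY_PASS
```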

Qualitative observations of the competency assessment process

Qualitative observations of the competency assessment process were made by the examiners. These were used to provide information regarding the results to the university and the faculty, helping to identify factors affecting student performance that the quantitative assessment results alone would not reveal. The resulting feedback would enable faculty to understand the results of the students' clinical assessments, learn from them and improve teaching as a consequence.

Overall, the methodology should be appropriate to provide an assessment of optometry students' refraction knowledge, skills, behaviors, attitudes and values, undertaken in the clinical context of a complete eye examination. This would be a low-stakes assessment, with the students' performance not affecting their overall university end-of-year result. Before the clinical assessments were carried out, all the students had a portfolio that documented their refraction competencies, including retinoscopy, sphero-cylindrical refraction and binocular balance tests. The students were eligible for the final examination when they had: a) been signed off on the relevant competencies in their portfolio, and b) successfully completed multiple-choice questions in the five courses (clinical optometry, low vision, binocular vision, optometry and clinical medicine, and occupational optometry) in their seventh (penultimate) semester.

Assessment Implementation: Methods

Subjects

All 15 students (nine from the first intake, cohort A in 2012, and six from the second, cohort B in 2013) who had progressed to the final semester in their fourth year were invited to participate in the study. The students read and signed a consent form for their inclusion in the study, and confidentiality of the results was maintained throughout.

Equipment

The research equipment used in the study comprised:

  • visual acuity chart (3-meter phoropter chart with duochrome and cross-cylinder targets)
  • streak retinoscope
  • trial lens set and frames / phoropter
  • cross cylinders ±0.25 D and ±0.50 D
  • ±0.25 DS and ±0.50 DS flippers
  • torchlight
  • cover stick
  • slit lamp
  • ophthalmoscope

Data analysis

Data were entered into an SPSS database (version 21) and analyzed for inter-rater agreement. Agreement between the two examiners' ratings of each student was analyzed with Cohen's kappa statistic. Descriptive statistics were produced for the clinical competency assessments, and the difference in performance between the first and second cohorts was analyzed using a Mann-Whitney U test. A significance value of p < 0.05 was adopted throughout the analysis.
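As an illustration, the same analyses could be reproduced outside SPSS roughly as follows. This is a minimal sketch: the CSV file and column names are hypothetical, and only the choice of statistics (Cohen's kappa for inter-rater agreement, Mann-Whitney U for the cohort comparison) comes from the text.

```python
# Sketch of the analyses named above, using standard Python libraries.
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("competency_scores.csv")  # hypothetical data export

# Inter-rater agreement between the two examiners' ratings per student/skill
kappa = cohen_kappa_score(df["examiner_1_rating"], df["examiner_2_rating"])

# Inter-cohort difference in performance (non-parametric)
cohort_a = df.loc[df["cohort"] == "A", "total_score"]
cohort_b = df.loc[df["cohort"] == "B", "total_score"]
u_stat, p_value = mannwhitneyu(cohort_a, cohort_b, alternative="two-sided")

print(f"kappa = {kappa:.2f}, U = {u_stat:.1f}, p = {p_value:.3f}")
```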

Refractive error analysis

Based on the literature on the repeatability and reproducibility of refractive values, a variance of ±0.75 D in sphere and cylinder was set as the limit of acceptability for retinoscopy and subjective refraction.24
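This acceptability criterion reduces to a simple tolerance check, sketched below with hypothetical values; the function name and example numbers are illustrative only.

```python
# Sketch of the ±0.75 D acceptability criterion for sphere and cylinder.
TOLERANCE = 0.75  # diopters, applied to both sphere and cylinder

def within_tolerance(student: tuple[float, float],
                     reference: tuple[float, float]) -> bool:
    """student/reference are (sphere, cylinder) results in diopters."""
    d_sph = abs(student[0] - reference[0])
    d_cyl = abs(student[1] - reference[1])
    return d_sph <= TOLERANCE and d_cyl <= TOLERANCE

# Example: a student's retinoscopy vs. a reference subjective result.
# Differences are 0.50 D (sphere) and 0.25 D (cylinder), so this passes.
print(within_tolerance((-1.50, -0.50), (-2.00, -0.75)))  # True
```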

Examiners

The selection criteria for the external examiner were clinical and academic optometry experience, ability to communicate in Portuguese, familiarity with the health context and availability for placement in Mozambique. The researcher, with 14 years of clinical and public health experience in optometry and four years of experience in the training and evaluation of pre-registration optometry students in the UK, met the criteria to carry out the evaluations.

Four of the UniLúrio lecturers, two for each cohort, were recruited as internal optometrist examiners. Two were from Colombia and two from Spain. Two had completed their post-graduate studies, one in Spain and one in the UK. The internal examiners had an average of 10 years of clinical experience and four years of teaching experience.

All examiners had knowledge of the methods used and were provided training by the program developers on the use of the standardized checklists along with the performance criteria and competency standards necessary for the students to exhibit entry-level competency in refraction on graduation. Two of the internal examiners assessed the practical competency, and two assessed the theoretical exam consisting of the SAQs and the oral exam (one for each cohort), along with the external examiner.

Assessment Implementation: Results

Clinical competency assessment

Thirty patients were examined (mean age 37.6 years; standard deviation 18.4 years; age range 7 to 72 years; 16 male [53%] and 14 female [47%]) by nine students from the first cohort and six students from the second cohort.

Fourteen patients had low refractive error (sphere within ±0.75 D) and seven had best-corrected decimal visual acuity <0.4. Refraction results from the two graders were averaged. The inter-rater kappa value was >0.6 for all skills, indicating good agreement between the two raters.25 The only significant inter-cohort differences were in binocular balance and near visual acuity. Table 1 summarizes the mean marks with the standard deviation for both cohorts for every technique, the inter-cohort difference and the total number of students passing every skill.

Theory exam

Table 2 shows the number of students passing the two sections of the theory exam. The inter-rater kappa for the theory exam was >0.6, indicating good agreement.

Qualitative observations of clinical assessment

The examiners noted certain factors in play during the assessments. Eleven students did not carry out binocular balance tests. For both retinoscopy and subjective refraction, instructions lacked clarity and poor fixation targets were presented. The students did not detect a retinoscopy reflex in any of the patients with high myopia, and they could not control the subjective responses of patients whose response pattern was poor. They spent too much time on history and symptoms, leaving less time for refraction and other tests. Overall, 10 students did not relate patient symptoms to management.

Discussion

The aim of this study was to evaluate the design of a competency assessment process and gain an understanding of its effectiveness for assessing clinical competency in refraction. Before the clinical assessments were carried out, all the students had a portfolio and had been 'checked off' for all the refraction competencies. However, the results of these assessments suggest that the portfolio served to record that procedures had been performed and to audit skills acquisition, rather than to check quality or proficiency.

Overall, only four students passed the clinical competency assessment. As this was a low-stakes assessment there could have been a lack of motivation to perform well on the part of the students. The qualitative observations identified some of the factors that led to the students failing. These were communicated to the lecturers in a feedback session. This input to the faculty, isolated in a developing country context, has enabled them to learn how to refine student training.

There are several factors that need to be considered in assessing the implications of this study: the lack of standardization of patients; the methodology of direct observation of real clinic patients; the use of SAQs and an oral exam; the increasing importance of using OSCEs; the setting of competency standards and the training and recruitment of examiners. These are all discussed below.

Seven students saw patients with severe, untreated pathology and complex refractive errors. The mix of patients being tested and the complexity of skills being assessed can reduce reliability. SPs are people trained to simulate real patients according to defined criteria, providing students with consistent and equivalent assessment experiences.26,27 Overall, the high costs of the training and expertise needed to ensure reproducibility and consistency of scenarios could not be justified in the context of student assessment in Mozambique.6 The recommendation is to integrate a degree of standardization into future student assessment, with a faculty focus group discussion proposed to set the criteria for standardization. The criteria could include patient age, range of refractive error (if complex, then every student should get a complex case), best-corrected visual acuity, past experience of optometric examination, absence or presence of pathology, and ability to communicate in Portuguese. This would allow faculty to select patients who meet defined criteria for competency assessments without increasing costs, and help ensure that assessment marks correlate well with students' performance over their entire program.

The methodology of direct observation of real clinic patients is increasingly challenged on the grounds of authenticity and unreliability due to examiner and patient variance.6 Inter-rater reliability measures the consistency of rating of performance by different examiners.6 The use of two trained raters, for every practical and theoretical exam, with good inter-rater agreement (kappa greater than 0.6) helped to increase consistency.25 Providing the examiners with a standardized checklist increased the reliability of direct observation, and this has been shown to be as reliable as an OSCE.26 A ‘Hawthorne’ effect occurs when a student or practitioner behaves differently because they are being observed. This effect can have a positive impact on student performance;12 however, the effect is inevitable with any methodology involving direct observation.12

Students were familiar with the test formats employed for the theory exams. SAQs were designed to assess problem-solving and data-interpretation skills when faced with common clinical management problems. The oral exam was based on the students’ case records, and examined the knowledge, values and attitudes that informed the students’ management of the patients. The issue of reliability and validity in this study was addressed by using two trained raters with good inter-rater agreement and checklists for both exams. The exams were mapped to the elements of competencies and performance criteria and the level of difficulty expected to be mastered by the student specified.

As a potential solution to the concerns of reliability and validity of the other assessment methods, the OSCE has gained increasing importance in the assessment of clinical competency in medicine and optometry in the UK and US.10,11,28,29 Wide sampling of cases and structured assessment improve reliability, but the OSCE is expensive and labor-intensive.4,6 In Mozambique, due to the lack of SPs and expertise among the faculty to implement and grade OSCEs, they were not considered a feasible assessment method for a new program in a low resource environment. In addition, students were not familiar with the format of OSCEs. Direct expenses of an OSCE include the cost of training standardized patients, examiners, support staff, development of scoring tools and venue costs dependent on the number of stations. However, these costs can be reduced by the use of volunteer faculty, volunteer patients and students as raters.21 Further research is required on the cost of implementing the OSCE (materials, examiners and patients or patient simulators) and the reliability and validity offered compared with the other methods, specifically in a low resource environment.

In this study, the setting of competency standards was stipulated by the university, backed by a literature review and agreed upon by the focus group (75% clinical and 50% theory).5,10 Absolute standards that are criterion referenced are most appropriate for tests of competence.30 In this case, the exams for the two cohorts were not identical as they contained different patients and cases. Hence, percentage scores did not reflect the same level of knowledge. In the long run, a more systematic, transparent approach to standard-setting and pass-fail criteria, supported by a body of published research, needs to be adopted. This involves evaluating the content and difficulty of the examination.30 Standards should be consistent with the purpose of the test and based on expert judgement informed by data about examinee performance.30

The examiners were all experienced and competent optometrists. The use of multiple examiners has been shown to enhance reliability.6 The examiners were all given explicit criteria and training in the use of checklists, performance criteria and competency standards based on good practice.5 The ideal proposed for an exit assessment is a group of external assessors, accredited for suitability by a professional body of optometrists and trained at the required level, with experience in competency teaching and assessment.31 They should all be competent in the area they are to assess and familiar with the competency standards. The selection of examiners in Mozambique will evolve over time as more students graduate, a professional body is formed and accreditation to become an assessor is offered.

There are certain limitations to this study of assessment methodology. Our sample of 15 students was small but represented 100% of the final-year optometry students. The study concentrated only on refraction because the spectacle supply system at the university had not been established and access to contact lenses is limited. Intraocular pressures were not assessed because the assessment concentrated on refractive error management competence. However, this assessment methodology could be expanded to include additional elements in a more comprehensive 'suitability to practice' exit competency assessment.

Conclusion

As optometry continues to move towards competency-based curricula, educators require appropriate tools to support the assessment of competencies. The use of existing checklists and rating scales helped to identify areas of competence deficit. Overall, the methodology of direct observation, SAQs and a structured oral exam showed good inter-rater reliability with the use of these standardized checklists. The main recommendations are the provision of clear guidelines to faculty for the standardization of patients during exams, so that the assessment is reliable and repeatable, and increased assessor training. More data on the use of OSCEs and on standard-setting to ensure case specificity and increase validity are required before this methodology can be adapted for use in optometry schools with similar academic resource limitations.

References

  1. Global Competency Model – World Council of Optometry [Internet]. 2005 [cited 2014 Feb 7]. Available from: www.worldoptometry.org.
  2. Kiely P, Horton P, Chakman J. Competency standards for entry-level to the profession of optometry 1997. Clin Exp Optom. 1998;81(5):210–221.
  3. General Optical Council. The General Optical Council Stage 2 Core Competencies for Optometry. Available at: https://www.optical.org/en/Education/core-competencies–core-curricula/.
  4. Siderov J, Hughes JA. Development of robust methods of assessment of clinical competency in ophthalmic dispensing–results of a pilot trial. Health Soc Care Educ. 2013;2(1):30–36.
  5. Kiely PM, Horton P, Chakman J. The development of competency-based assessment for the profession of optometry. Clin Exp Optom. 1995;78(6):206–218.
  6. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. The Lancet. 2001 Mar 24;357(9260):945–9.
  7. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990 Sep;65(9 Suppl):S63–7.
  8. Wass V, McGibbon D, Van der Vleuten C. Composite undergraduate clinical examinations: how should the components be combined to maximize reliability? Med Educ. 2001 Apr 22;35(4):326–30.
  9. European Diploma in Optometry; Candidate Guidelines. European Council of Optometry and Optics [Internet]. [cited 2012 Jun 4]. Available from: http://www.ecoo.info.
  10. National Board of Examiners in Optometry: Exam information [Internet]. [cited 2014 Nov 14]. Available from: http://www.optometry.org/part_matrix.cfm.
  11. Pre-registration scheme [Internet]. [cited 2015 Mar 8]. Available from: http://www.college-optometrists.org/en/qualifying-as-an-optometrist/pre-registration-scheme/.
  12. Shah R, Edgar D, Evans BJ. Measuring clinical practice. Ophthalmic Physiol Opt. 2007;27(2):113–125.
  13. Loughman J, Nxele L, Faria C, Thompson SJ. Rapid assessment of refractive error, presbyopia and visual impairment and associated quality of life in Nampula, Mozambique. J Vis Impair Blind. 2014;in press.
  14. Shah K, Naidoo K, Chagunda M, Loughman J. Evaluations of refraction competencies of ophthalmic technicians in Mozambique. J Optom. 2016 Jul-Sep;9(3):148–57.
  15. Hrynchak PK, Mittelstaedt AM, Harris J, Machan C, Irving E. Modifications made to the refractive result when prescribing spectacles. Optom Vis Sci. 2012 Feb;89(2):155–60.
  16. Epstein RM. Assessment in medical education. N Engl J Med. 2007 Jan 25;356(4):387–96.
  17. Van Der Vleuten CP. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ. 1996;1(1):41–67.
  18. OCANZ: Candidate Guide [Internet]. [cited 2014 Nov 20]. Available from: http://www.ocanz.org/candidate-guide.
  19. Patton MQ. Qualitative research. Thousand Oaks, California: Sage Publications; 2005.
  20. Creswell JW, Miller DL. Determining validity in qualitative inquiry. Theory Pract. 2000;39(3):124–130.
  21. Patrício MF, Julião M, Fareleira F, Carneiro AV. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med Teach. 2013 Jun 1;35(6):503–14.
  22. Walubo A, Burch V. A model for selecting assessment methods for evaluating medical students in African medical schools. Acad Med. 2003 Sep;78(9):899–906.
  23. The College of Optometrists: Examiner and Assessor Training Workbook [Internet]. 2012. Available from: http://www.college-optometrists.org/.
  24. MacKenzie GE. Reproducibility of sphero-cylindrical prescriptions. Ophthalmic Physiol Opt. 2008 Mar;28(2):143–50.
  25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977 Mar;33(1):159–174.
  26. Wass V, Jones R, Van der Vleuten C. Standardized or real patients to test clinical competence? The long case revisited. Med Educ. 2001;35(4):321–325.
  27. Barrows HS. An overview of the uses of standardized patients for teaching and evaluating clinical skills. AAMC. Acad Med. 1993;68(6):443–51.
  28. Newble D. Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ. 2004;38(2):199–203.
  29. Swanson DB, van der Vleuten CPM. Assessment of clinical skills with standardized patients: state of the art revisited. Teach Learn Med. 2013;25(supp1):S17–25.
  30. Norcini JJ. Setting standards on educational tests. Med Educ. 2003 May 1;37(5):464–9.
  31. Toohey S, Ryan G, Hughes C. Assessing the practicum. Assess Eval High Educ. 1996;21(3):215–27.

Dr. Shah [kajshah@aol.com] is a Research Optometrist at Dublin Institute of Technology in Ireland. Her research has focused on evaluation of competence and developing competency frameworks for optometrists and mid-level eyecare personnel in Mozambique.

Dr. Loughman is a Professor of Optometry at Dublin Institute of Technology in Ireland. He has a specific academic and research interest in preventive eye health interventions for the most common causes of blindness and visual impairment.

Dr. Bilotto is the Human Resource Development Global Director for the Brien Holden Vision Institute in Durban, South Africa. His responsibilities include setting up sustainable and high-quality optometry training programs in the developing world.

Dr. Naidoo is the CEO of the Brien Holden Vision Institute and the Africa Vision Research Institute and an Associate Professor of Optometry at the University of KwaZulu Natal in Durban, South Africa.

1. Background

Uncorrected refractive errors, such as myopia, hyperopia and astigmatism, contribute substantially to the prevalence of visual impairment and blindness, as recently reviewed by Naidoo and colleagues [1]. Since the prevalence of refractive errors is known to be increasing [2], their assessment will become one of the major tasks in the public health sector worldwide. Digitalization is already affecting our lives and its influence will grow; the aim is to develop smart products able to assess refractive errors in order to provide adequate correction for people living in developing as well as industrialized countries. Several "smart" solutions that assess refractive errors objectively and subjectively are already available. For example, the company EyeNetra (EyeNetra Inc., Somerville, MA, USA) developed a smartphone-based refraction tool for mobile measurements of refractive errors [3,4,5] that uses a pinhole optic to display a stripe pattern on the participant's retina, the subject's task being to align a red and a green stimulus [3]. Another smartphone-based autorefractor is the SVOne (Smart Vision Labs, New York, NY, USA), in which a portable Hartmann-Shack wavefront aberrometer is attached to a smartphone [6]. In contrast, Opternative (Opternative Inc., Chicago, IL, USA) is an online solution that aims to measure the refraction of the eye in a self-directed way, using computer-based responses to presented stimuli (https://www.opternative.com/) [7]. Thus far, the performance of these products is limited: with Opternative, the range of refractive errors that can be measured is between 0 D and −4 D [7], while EyeNetra claims an extended measurable range (−12.5 D to +5.5 D) [8].

While the assessment of refractive errors is the core competence of eye care professionals (ECPs), all methods used have to agree with each other and have to be reproducible as well as repeatable. Published data on the variability of refraction, whether assessed subjectively or objectively, come mainly from studies of the variability and repeatability of autorefractors. Most of these studies [9,10,11] used two repeated measurements of subjective refraction per subject; only Rosenfield and Chiu, 1995 [12] assessed subjective refraction with five repeated measurements. The reported 95% limit of agreement for the subjective measurement of the spherical equivalent refractive error (SE) was ±0.29 D, suggesting that subjective refraction is accurate to about a quarter diopter [12]. In contrast, Zadnik, Mutti and Adams, 1992 [10] reported a 95% limit of agreement for the SE of ±0.63 D for cycloplegic and ±0.72 D for non-cycloplegic refraction.

Studies of the agreement between the aforementioned "smart" products, such as EyeNetra or SVOne, and traditional methods showed good agreement between traditional and smartphone-based technologies. Clinical results in 27 subjects (54 eyes) with refractive errors ranging from 0 D to −6 D for the EyeNetra (Netra G) device showed that the difference between the EyeNetra and subjective refraction for the SE was 0.31 ± 0.37 D [4]. Later, it was shown that two different versions of the EyeNetra device overestimated myopia (spherical error) by 0.48 ± 0.66 D (Netra G #243) in 24 subjects and by 0.64 ± 0.71 D (Netra G #244, with a smaller pupillary distance) in 19 subjects [3].

In 50 visually normal, young subjects with an average spherical equivalent refractive error of −2.87 D, the 95% limit of agreement for the assessment of the power vector components (J0 and J45) was highest for the SVOne device when compared with retinoscopy and two autorefractors (Topcon KR-1W and Righton Retinomax-3) under cycloplegic and non-cycloplegic conditions [6]. For the SE, no significant differences were found between the different methods and devices, and the 95% limit of agreement for the SVOne was comparable to retinoscopy under non-cycloplegic conditions [6]. Clinical data for 60 eyes from 30 subjects (aged between 18 and 40 years, with spherical refractive errors of up to −4 D and astigmatism of up to −2 D) for the Opternative software are available on their webpage [7] but have not yet been published. In summary, they report a spherical equivalent difference of 0.25 D in 70% of the eyes and a spherical equivalent difference of 0.50 D or less in 90% of the eyes. Nevertheless, in their review, Goss and Grosvenor, 1996 [13] concluded that the assessment of refractive errors using subjective methods, such as a trial frame or a phoropter, is far better than any other method, and that the agreement of the measurement of the SE (either intra-examiner or inter-examiner) was within ±0.25 D in 80% of the measurements, while the agreement was within ±0.50 D for 95% of the measurements for the SE, the sphere and the cylinder power.

The purpose of the current study was to investigate the inter-device agreement and mean differences between a newly developed digital phoropter and the two standard methods (trial frame and manual phoropter) currently used to assess the refractive error of the eye.

2. Methods

2.1. Subjects

The inclusion criteria for participation were a refractive error of less than ±8.0 D of spherical ametropia, astigmatism of up to −4.0 D and best-corrected visual acuity of 0.0 logMAR or better. Subjects with known ocular diseases were not allowed to participate in the study. Two examiners (examiner 1: author AO; examiner 2: author AL; both certified optometrists) measured the refractive errors in two independent studies, using the same devices/methods but different subjects. Examiner 1 measured refractive errors in 36 subjects, aged 23–47 years (mean: 36.4 ± 7.4 years), with a mean spherical refractive error (S) of −0.71 ± 1.62 D (range: +2 D to −5.75 D) and a mean astigmatic refractive error of −0.59 ± 0.45 D (range: 0 D to −2 D). The study group for examiner 2 included 38 subjects, aged 22–47 years (mean: 36.7 ± 7.1 years), with a mean S of −0.83 ± 1.39 D (range: +1.75 D to −3.5 D) and an average astigmatic error of −0.67 ± 0.49 D (range: 0 D to −2 D). All subjects were naïve to the purpose of the experiment. The study was approved by the Ethics Commission of the Medical Faculty of the University of Tuebingen. The research followed the tenets of the Declaration of Helsinki, and informed consent was obtained from all subjects after explanation of the nature and possible consequences of the study.

2.2. Equipment

To assess the refractive errors of an individual's eye objectively, a wavefront-based autorefractor was used (ZEISS i.Profiler plus, Carl Zeiss Vision GmbH, Aalen, Germany). Subjective refraction was assessed using a Subjective Refraction Unit (SRU), a trial frame (UB4, Oculus, Wetzlar, Germany) in combination with trial lenses (BK1, Oculus, Wetzlar, Germany), and a manual phoropter (American Optical Phoropter M/N 11320, American Optics, Buffalo, NY, USA). The SRU comprises a digital phoropter (ZEISS Visuphor 500, Carl Zeiss Vision GmbH, Aalen, Germany) and a screen to display optotypes (ZEISS Visuscreen 500). The digital phoropter covers spherical refractive errors from −19 D to +16.75 D and astigmatic errors from 0 D to ±8.75 D. A tablet PC (iPad 3, Apple, Cupertino, CA, USA) running an application called i.Com mobile (Carl Zeiss Vision GmbH) controlled the devices of the SRU. All optotypes (Sloan letters) used to measure the refractive errors subjectively were displayed on a digital visual acuity chart (ZEISS Visuscreen 500, Carl Zeiss Vision GmbH) at a distance of 6 m with a minimum luminance of 250 cd/m2.

2.3. Experimental Procedures

Objective measurements of refractive errors were obtained three times for each eye prior to the subjective measurements, using the wavefront-based autorefractor (ZEISS i.Profiler plus), and the most positive reading served as the starting value for the subjective refraction (sphere, cylinder, axis). Both examiners measured the refractive errors under monocular as well as binocular conditions. The procedure was as follows. Objective refraction was measured prior to the subjective refraction by a technician in order to test whether the subject met the exclusion and inclusion criteria, and both examiners were masked to the results of the autorefraction measurement. Next, the examiners conducted the subjective refraction, starting with either the manual phoropter or the digital phoropter (the order was randomized), since it was possible to mask the "workflow" (the power of the lenses used and the results) from the examiner. With both the manual and the digital phoropter, examiners were masked to the results of the subjective refraction. The trial frame refraction was performed at the end of the study. The SRU provides a preconfigured refraction workflow that guides the eye care professional through the subjective refraction procedure. This workflow contains the following steps to determine the monocular refractive error, starting with the right eye: (a) determination of the best sphere; (b) determination of a cylindrical error using a Jackson cross cylinder (if a cylindrical error was measured using the objective method, this step is skipped); (c) determination of the axis of the existing cylindrical refractive error; (d) determination of the power of the existing cylindrical refractive error; (e) fine adjustment of the sphere (monocular). The same workflow is used to assess the subjective refraction of the left eye after measurements of the right eye are finished. After assessment of the monocular prescription, the following binocular tests are done: (a) a polarized duochrome test to achieve binocular balance, and (b) polarized optotypes for the assessment of the best sphere under binocular conditions. Lighting conditions during the experiments followed the international standards DIN EN ISO 8596 and 8597, which define an ambient luminance of 80–320 cd/m2.

2.4. Analysis

Both examiners used three methods of subjective refraction to assess the refractive errors in two separate study cohorts. Examiner 1 measured refractive errors in a group of 36 subjects, while examiner 2 assessed the refractive errors in a group of 38 subjects. Data on the refractive errors of the left eyes after the monocular refraction were used to assess the inter-device agreement between the different methods. Refractive errors were analyzed using the power vector components introduced by Thibos, Wheeler and Horner, 1997 [14]: the spherical equivalent refractive error (SE), J0 and J45. The time needed for each measurement of refraction, including the tests under binocular conditions, was saved directly by the i.Com software. The durations for each method were analyzed in order to assess differences between the three methods.
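For reference, the standard conversion from a sphero-cylindrical prescription (sphere S, cylinder C, axis α) to these power vector components is SE = S + C/2, J0 = −(C/2)·cos 2α and J45 = −(C/2)·sin 2α [14]. A minimal sketch of this conversion follows; the function name and example values are illustrative, not taken from the study.

```python
# Sphero-cylinder to power vector conversion (Thibos et al., 1997 [14]).
import math

def power_vector(sphere: float, cylinder: float, axis_deg: float):
    """Return (SE, J0, J45) in diopters for a minus-cylinder prescription."""
    se = sphere + cylinder / 2.0                                  # SE (M)
    j0 = -(cylinder / 2.0) * math.cos(math.radians(2.0 * axis_deg))
    j45 = -(cylinder / 2.0) * math.sin(math.radians(2.0 * axis_deg))
    return se, j0, j45

# Example: -1.00 DS / -0.50 DC x 90 -> SE = -1.25 D, J0 = -0.25 D, J45 ~ 0 D
print(power_vector(-1.00, -0.50, 90.0))
```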

2.5. Statistics

Statistical analyses were performed with the statistics software packages JMP 11.1.1 (SAS Institute, Cary, NC, USA) and IBM SPSS Statistics 22 (IBM, Armonk, NY, USA). JMP was used to investigate the normality of the data using the Shapiro-Wilk test. An ANOVA was performed to analyze differences between all three methods when inter-device agreement was assessed. SPSS was used to calculate intraclass correlation coefficients [15], and Bland-Altman plots [16] were used to investigate the inter-device agreement and the differences when refraction was assessed using the three different methods.

3. Results

3.1. Descriptive Statistics

Table 1 gives an overview of the mean refractive data (± standard deviation) of the left eye for the monocular correction of SE, J0 and J45, when refraction was assessed with each of the methods, separated for the two examiners. Standard errors were calculated as the standard deviation divided by the square root of the sample size. The mean values and standard deviations of all three power vector components of refraction were similar across all three methods.

3.2. Bland-Altman Analysis for Inter-Device Agreement

Bland-Altman analysis for inter-device agreement represents the level of agreement between several measurements of the refractive error of one subject refracted by the same examiner under the same conditions but with different methods. Results for examiners 1 and 2 are summarized in Figure 1 for the measurement of the SE for each pairwise comparison of the three methods, using Bland-Altman plots and 95% limits of agreement (LoA), calculated as 1.96 multiplied by the standard deviation of the differences [15]. Comparisons of trial frame vs. digital phoropter are shown in (a) and (d), manual phoropter vs. digital phoropter in (b) and (e), and manual phoropter vs. trial frame in (c) and (f).
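As a worked illustration of this calculation, the following minimal sketch computes the bias and 95% LoA for one pairwise comparison. The paired values are hypothetical and do not reproduce the study data.

```python
# Sketch of the Bland-Altman bias and 95% limits-of-agreement calculation.
import numpy as np

def limits_of_agreement(method_a: np.ndarray, method_b: np.ndarray):
    """Return (bias, lower LoA, upper LoA) per Bland & Altman [16]."""
    diff = method_a - method_b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # 1.96 x SD of the differences
    return bias, bias - half_width, bias + half_width

# Example: SE (D) with trial frame vs. digital phoropter (hypothetical pairs)
trial = np.array([-1.25, -0.50, 0.25, -2.00, -3.25])
digital = np.array([-1.50, -0.75, 0.25, -2.25, -3.25])
print(limits_of_agreement(trial, digital))
```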

For examiner 1, agreement between measurements of the spherical equivalent of the refractive error and both power vector components J0 and J45 (see Table 2) was similar for all subjective methods used. A statistically significant difference was found in the measurement of the SE between the trial frame and the digital phoropter, with the trial frame showing more positive readings. The 95% LoA for the measurement of the SE was smallest between trial frame and digital phoropter for examiner 1 (±0.56 D), followed by manual phoropter vs. trial frame (±0.59 D) and manual phoropter vs. digital phoropter (±0.65 D). The 95% CIs of the lower and upper 95% LoA for the SE for examiner 1 were ±0.17 D (trial frame vs. digital phoropter, Figure 1a), ±0.19 D (manual phoropter vs. digital phoropter, Figure 1b) and ±0.18 D (manual phoropter vs. trial frame, Figure 1c).

In the case of examiner 2, measurement of the spherical equivalent refractive error was more positive using the trial frame when compared with either the digital (mean difference = 0.19 D, Figure 1d) or the manual phoropter (mean difference = 0.12 D, Figure 1f). When comparing the manual and the digital phoropter, the manual phoropter showed more positive measurements for the spherical equivalent (Figure 1e). The 95% LoA for the assessment of the SE was smallest for the comparison of manual vs. digital phoropter (±0.45 D), followed by manual phoropter vs. trial frame (±0.49 D) and trial frame vs. digital phoropter (±0.56 D). The calculated 95% CIs of the upper and lower limits of the 95% LoA for the measured SE were ±0.18 D (trial frame vs. digital phoropter, Figure 1d), ±0.13 D (manual phoropter vs. digital phoropter, Figure 1e) and ±0.10 D (manual phoropter vs. trial frame, Figure 1f). For both examiners, no influence of the refractive error of the subject's eye on the difference between the methods was observed. An ANOVA comparing the three methods for SE, J0 and J45 showed no significant differences for examiner 1 (SE: p = 0.13, J0: p = 0.58, J45: p = 0.96, two-way ANOVA) or for examiner 2 (SE: p = 0.88, J0: p = 0.95, J45: p = 1, two-way ANOVA).

Calculations of the 95% LoA were also performed for both power vector components J0 and J45 to evaluate differences between the three subjective methods; the data are summarized in Table 2 for both examiners. Additionally, the 95% CIs for the upper and lower limits of the 95% LoA [15] were calculated and are presented in the same table.

3.3. Intraclass Correlation Analysis

In addition to the use of Bland-Altman plots and the calculation of the 95% LoA, intraclass correlation coefficients (ICC) [17] were used to analyze the inter-device agreement. A two-way random, absolute-agreement calculation, ICC(2,k), was performed, and the ICCs were calculated for each examiner separately, with pairwise correlations of each device, for the three power vector components of refraction (SE, J0 and J45). The results are presented in Table 3.
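For readers without SPSS, an ICC(2,k) of this kind can be computed directly from the two-way ANOVA mean squares (Shrout and Fleiss convention). The sketch below is an illustration with hypothetical data, not the study's own analysis.

```python
# Sketch of a two-way random, absolute-agreement ICC(2,k) from mean squares.
import numpy as np

def icc_2k(ratings: np.ndarray) -> float:
    """ratings: n subjects (rows) x k methods (columns), one value per cell."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # methods
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-methods mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (msc - mse) / n)

# Example: SE (D) for 4 subjects measured with 3 methods (hypothetical)
se = np.array([[-1.25, -1.50, -1.25],
               [ 0.25,  0.00,  0.25],
               [-3.00, -3.25, -3.00],
               [-0.75, -0.75, -1.00]])
print(f"ICC(2,k) = {icc_2k(se):.3f}")
```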

3.4. Time to Assess Subjective Refraction under Binocular Conditions

The time needed to assess the subjective refraction under binocular conditions was saved automatically by the i.Com software. The analysis was conducted for each measurement with all three methods and for each examiner (36 measurements for examiner 1 and 38 for examiner 2). Figure 2a shows the time for each subject, together with the associated mean values ±1 standard deviation (seconds), for examiner 1, while Figure 2b shows the results for examiner 2.

For both examiners, subjective refraction with the digital phoropter was significantly faster compared to the assessment when the trial frame and the manual phoropter were used.

4. Discussion

Many studies [8,9,10,11,12] on the reproducibility, repeatability and level of agreement between different methods of measuring the subjective refraction of the eye have been conducted, and they have produced different estimates for the various methods.

4.1. Bland-Altman Analysis for Inter-Device Agreement

For the subjective measurement of sphero-cylindrical refractive errors by the same examiner, previously reported 95% limits of agreement range from ±0.94 D for cycloplegic subjective refraction to ±0.63 D for the non-cycloplegic assessment of refractive errors [10]. In the case of retinoscopy, the 95% limit of agreement was reported to be ±0.95 D for cycloplegic and ±0.78 D for non-cycloplegic retinoscopy, for repeated measures of refractive errors by the same examiner [10]. In the current study, we compared the 95% limits of agreement when refractive errors were assessed by the same examiner but with three different subjective methods for non-cycloplegic refraction, in 36 and 38 participants for examiner 1 and examiner 2, respectively. Similar 95% limits of agreement were observed when comparing the non-cycloplegic measurements of the spherical equivalent refractive error across the three methods for examiner 1 (95% LoA trial frame vs. digital phoropter: ±0.56 D; manual phoropter vs. digital phoropter: ±0.65 D; manual phoropter vs. trial frame: ±0.59 D) and examiner 2 (95% LoA trial frame vs. digital phoropter: ±0.60 D; manual phoropter vs. digital phoropter: ±0.45 D; manual phoropter vs. trial frame: ±0.49 D). Rosenfield and Chiu, 1995 [12
