Microperimetry is a fundus-controlled perimetry technique that measures the sensitivity of the retina to light stimuli at different locations of the visual field.1 It is a useful tool for assessing the functional outcomes of gene therapy for inherited retinal disease (IRD), a group of disorders that cause progressive vision loss due to mutations in genes involved in photoreceptor function or survival, including retinitis pigmentosa and Leber congenital amaurosis.2,3 The ideal outcome measure would identify changes in visual function due to novel therapeutic interventions, such as gene therapy, without being influenced by natural patient variability.4
However, evaluating the efficacy of gene therapy for IRD poses several challenges, one of which is the issue of multiplicity. Multiplicity refers to the increased risk of observing a false positive (type I error) when multiple tests or comparisons are made.5 Previously, the US Food and Drug Administration (FDA) recommended that, for clinical trials using standard automated perimetry, such as those in glaucoma studies, a between-group difference of at least 7.0 decibels (dB) in mean sensitivity change over the entire field between the treated and untreated cohorts be considered clinically significant.6
This guideline reflected the FDA’s traditional reliance on functional measures, particularly visual field testing, as primary end points in glaucoma clinical trials. However, as discussed at the 2010 NEI/FDA Glaucoma Clinical Trial Design and Endpoints Symposium, the FDA has shown openness to considering new end points, including structural measures, provided they demonstrate a strong correlation with clinically relevant functional outcomes and are validated by the research community.6 More recently, the FDA has also indicated that a positive outcome should be based on a mean improvement of at least 7 dB from baseline in at least 5 prespecified loci within the central 30 degrees of the visual field, and that this improvement should be sustained over time (FDA Clinical IR for IND 17634, October 30, 2020). However, as demonstrated in the XIRIUS phase II/III study of Cotoretigene Toliparvovec, achieving this outcome can be challenging.7
The study failed to meet its primary end point, with no significant difference between the treatment and control groups in the percentage of participants meeting the responder criteria.7 This outcome highlights the difficulties inherent in prespecifying loci, such as the variability in patient responses and the potential for high false positive rates. Moreover, selecting specific loci before treatment might overlook broader improvements in retinal sensitivity outside the prespecified loci, thereby underestimating the therapeutic effect. These challenges underscore the need for alternative approaches that balance multiplicity control with the ability to detect genuine treatment effects across the entire visual field.
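To illustrate the scale of the multiplicity problem when each locus is analyzed separately, consider a hypothetical grid of 68 test loci (the grid size and the 0.05 per-locus significance level are assumptions chosen for illustration only): if every locus were tested independently at alpha = 0.05, the probability of at least one false-positive locus under the null hypothesis would be 1 - (1 - 0.05)^68, or roughly 0.97. A minimal sketch of this calculation:

```python
# Illustrative familywise error rate (FWER) when each locus is tested separately.
# The 68-locus grid and the 0.05 per-locus alpha are assumptions for illustration only.
alpha_per_locus = 0.05
n_loci = 68  # hypothetical microperimetry grid size

# Probability of at least one false positive across independent per-locus tests
fwer = 1 - (1 - alpha_per_locus) ** n_loci
print(f"Familywise error rate across {n_loci} loci: {fwer:.2f}")  # approximately 0.97
```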
Herein, we propose an alternative approach that does not require prespecifying loci but instead uses a statistical method that adjusts for multiplicity. We demonstrate that this method can reduce the false positive rate and increase the power to detect a true treatment effect. We also discuss the clinical relevance and implications of this method for patients with IRDs who undergo gene therapy.
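As a generic illustration of this idea (a sketch only, not the specific method developed in this article), per-locus P values for change from baseline could be corrected with a standard multiplicity adjustment such as Holm or Benjamini-Hochberg; the simulated data, cohort size, grid size, and choice of a paired t test below are all hypothetical.

```python
# Hypothetical sketch: test change from baseline at every locus, then adjust for multiplicity.
# The simulated data, cohort size, grid size, and paired t test are illustrative assumptions,
# not the method proposed in this article.
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_patients, n_loci = 12, 68                                        # hypothetical cohort and grid size
baseline = rng.normal(10, 3, size=(n_patients, n_loci))            # simulated baseline sensitivities (dB)
followup = baseline + rng.normal(1, 2, size=(n_patients, n_loci))  # simulated post-treatment sensitivities

# Per-locus paired test of change from baseline (illustrative choice of test)
pvals = np.array([ttest_rel(followup[:, j], baseline[:, j]).pvalue for j in range(n_loci)])

# Standard adjustments: Holm (familywise error) and Benjamini-Hochberg (false discovery rate)
reject_holm, _, _, _ = multipletests(pvals, alpha=0.05, method="holm")
reject_bh, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print("Loci significant after Holm adjustment:", int(reject_holm.sum()))
print("Loci significant after Benjamini-Hochberg adjustment:", int(reject_bh.sum()))
```

Because such an adjustment is applied across every tested locus, an improvement anywhere in the field can contribute to the analysis without having to prespecify a small subset of loci in advance.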