Retina  |   November 2023
Development of an Automated Electroretinography Analysis Approach
Author Affiliations & Notes
  • Andrew J. Feola
    Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Medical Center, Atlanta, GA, USA
    Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta, GA, USA
    Department of Ophthalmology, Emory Eye Center, Emory University School of Medicine, Atlanta, GA, USA
  • Rachael S. Allen
    Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Medical Center, Atlanta, GA, USA
  • Kyle C. Chesler
    Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Medical Center, Atlanta, GA, USA
  • Machelle T. Pardue
    Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Medical Center, Atlanta, GA, USA
    Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta, GA, USA
    Department of Ophthalmology, Emory Eye Center, Emory University School of Medicine, Atlanta, GA, USA
  • Correspondence: Andrew J. Feola, Department of Ophthalmology, Emory University, B2503 Clinic B Building, 1365B Clifton Road NE, Atlanta, GA 30322, USA. e-mail: [email protected] 
Translational Vision Science & Technology November 2023, Vol.12, 14. doi:https://doi.org/10.1167/tvst.12.11.14
Abstract

Purpose: Electroretinography (ERG) is used to assess retinal function in ophthalmology clinics and animal models of ocular disease; however, analyzing ERG waveforms can be a time-intensive process with interobserver variability. We developed ERGAssist, an automated approach, to perform non-subjective and repeatable feature identification (“marking”) of the ERG waveform.

Methods: The automated approach denoised the recorded waveforms and then located the b-wave after applying a lowpass filter. If an a-wave was present, the lowpass-filtered waveform was also used to help locate the a-wave, which was considered the initial large negative response after the flash stimulus. Oscillatory potentials (OPs) were found by applying a bandpass filter to the denoised waveform. We used two cohorts. The Coherence cohort consisted of ERGs with eight dark-adapted and three light-adapted stimuli (−6 to 1.5 log cd·s/m2) in Brown Norway rats. The Verification cohort consisted of control and diabetic (DM) Long Evans rats, in which we examined retinal function using a five-step dark-adapted protocol (−3 to 1.9 log cd·s/m2).

Results: ERGAssist showed a strong correlation with manual markings of ERG features in our Coherence dataset, including the amplitudes (a-wave: r2 = 0.99; b-wave: r2 = 0.99; OP: r2 = 0.92) and implicit times (a-wave: r2 = 0.96; b-wave: r2 = 0.90; OP: r2 = 0.96). In the Verification cohort, both approaches detected differences between control and DM animals and found longer OP implicit times (P < 0.0001) in DM animals.

Conclusions: These results provide verification of ERGAssist to identify features of the full-field ERG.

Translational Relevance: This ERG analysis approach can increase the rigor of basic science studies designed to investigate retinal function using full-field ERG. To aid the community, we have developed an open-source graphical user interface (GUI) implementing the methods presented.

Introduction
The electroretinogram (ERG) is a useful tool for the objective assessment of retinal function. A main advantage of the ERG is that it is a non-invasive assessment that can be used in both preclinical and clinical environments. In brief, an electrode is placed on the cornea or skin near the eye to record the electrical response of the retina when exposed to a light stimulus. There are several forms of ERGs based on the type of stimulus, including full-field, pattern, and multifocal.1–6 Here, we focus on the full-field ERG, which is frequently used in preclinical and clinical studies. This ERG signal results from the sum of the response of the entire retina to a uniform flash of light at a specific stimulus intensity. This response has been extensively characterized and is known to originate from specific cell populations within the retina depending on the flash stimulus and the light- or dark-adaptation conditions of the eyes.1,4,5,7–10 
One of the components of the ERG waveform is the scotopic threshold response (STR), which occurs under dark-adapted conditions near scotopic ERG thresholds and partially originates from retinal ganglion cells (Fig. 1).1 The STR consists of a positive component (pSTR) and a negative component (nSTR) at roughly 120 ms and 220 ms, respectively.1 Under both dark- and light-adapted conditions, with increasing flash-stimulus intensity the ERG produces a positive component (b-wave) representing the response of the ON bipolar cells (Fig. 1).11 The ERG signal also has an initial negative response (a-wave) that originates from the photoreceptors (Fig. 1).12 There is an additional slower positive response (c-wave) that corresponds with the function and integration of the retinal pigment epithelium and inner retina.10,13 Embedded along the ascending portion of the b-wave are oscillatory potentials (OPs), which are believed to represent the function of amacrine cells and retinal ganglion cells within the inner retina.1,14,15 Finally, flicker stimuli isolate cone pathway function by stimulating at a specific frequency (Hz) in the presence of a background light (Fig. 1).16 
Figure 1.
 
Representation of various rat full-field ERG responses. (A) The scotopic threshold response (STR) where the blue circles represent the positive and negative threshold responses (pSTR and nSTR). (B) The light-adapted response to a flash stimulus, where the blue circle represents the b-wave. (C) An example of a light-adapted flicker response, where the blue circles represent the part of the flicker response measured, and the amplitude is measured as the difference in voltage between the two circles. (D) Representative response to a single flash stimulus recorded over a long duration (4000 ms), where the blue circle identifies the c-wave. (E) The dark-adapted response (an expanded visualization of the red box in D), where the blue circles identify the a-wave and b-wave. (F) The OP response generated by passing a dark-adapted waveform (D) through a bandpass filter, with the blue circles representing OP1 through OP4.
Given the ability of the ERG to specifically reflect the function of multiple cell populations within the retina, it is useful to extract features from these ERG signals to better understand disease pathophysiology, progression, and the effectiveness of interventions.1,17–20 Several studies have used various time- and frequency-domain analyses to assess the ERG waveform,21 which often require sophisticated approaches or tools to assess each waveform. Others have taken a direct approach, labeling the amplitudes and implicit times (ITs) of the individual components such as the a-wave, b-wave, and OPs.17,18 However, it is extremely time consuming to label each feature given that many preclinical studies involve between five and 12 steps of increasing flash stimuli, several groups, multiple animals or participants per group, and repeated measurements over multiple time points. This quickly generates a dataset that is cumbersome and difficult to analyze. Finally, markings of these individual components can vary slightly among users, which may lead to variability in results and complicate interpretation. 
Here, we describe an automated approach, ERGAssist, implemented in MATLAB (MathWorks, Natick, MA), to mark the features of the full-field ERG waveform. To validate ERGAssist, we compared automated to manual markings of the a-wave, b-wave, and OPs. We then evaluated the precision of ERGAssist in measuring ERG waves in a disease model of diabetes mellitus (DM), in which OP delays have been shown to occur.17,22,23 
Methods
ERGAssist Overview
For preprocessing, the raw waveforms (in microvolts [µV] vs. milliseconds [ms]) were processed through a wavelet filter that performed one-dimensional denoising.24,25 Afterward, a “baseline” signal of this waveform was determined prior to the flash stimulus (calculated over 10% to 20% of the pre-stimulus window). We also calculated the signal variation as the 95% confidence interval of the voltage in this region, which we later used as a threshold to ensure a response after a flash stimulus. This signal variation was defined as the noise threshold that a recorded eye voltage had to surpass after a flash stimulus to be considered a response. We refer to this processed waveform simply as the “waveform”; however, it should be noted that this waveform was denoised and offset to zero volts compared to the raw waveform (Fig. 2). To remove high-frequency features (e.g., electrical noise or OPs), the waveform was then passed through a lowpass fifth-order Butterworth filter at 60 Hz, referred to as the lowpass filter signal. The maximum amplitude of the lowpass filter signal was identified. We labeled this peak amplitude as the b-wave and the corresponding time of this peak as the b-wave implicit time (IT), which has also been referred to as peak time or time to peak.4,5,16 We required that the b-wave amplitude be at least twice the signal variation. Once the b-wave had been labeled, we moved on to identify the a-wave (Fig. 3). In brief, we performed two local minimum searches to identify the a-wave. We first limited the search for a minimum to between the initiation of the flash stimulus (i.e., 0 ms) and the IT of the identified b-wave using the lowpass filter signal. Next, we performed a second local minimum search on the waveform within 5 ms of the first local minimum. This local minimum on the waveform was identified as the a-wave amplitude and its corresponding IT. 
Again, the a-wave amplitude had to be over twice the signal variation to be considered a real response. 
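The baseline, noise-threshold, and b-wave steps described above can be sketched in Python. Note that the published implementation is in MATLAB; this is an illustrative SciPy re-implementation, and the function name `mark_b_wave`, its defaults, and the use of 1.96 × SD as the 95% confidence half-width are our own assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mark_b_wave(waveform_uv, fs_hz, stim_idx, lowpass_hz=60.0):
    """Locate the b-wave peak on a lowpass-filtered ERG waveform.

    waveform_uv : 1-D array of voltages (µV), already denoised.
    stim_idx    : sample index of the flash stimulus.
    Returns (amplitude_uv, implicit_time_ms), or None if the peak
    does not exceed twice the pre-stimulus signal variation.
    """
    # Baseline and signal variation from a pre-stimulus window
    # (10% to 20% of the pre-stimulus samples, as in the text).
    pre = waveform_uv[: max(stim_idx, 1)]
    lo, hi = int(0.10 * len(pre)), max(int(0.20 * len(pre)), 2)
    baseline = pre[lo:hi].mean()
    variation = 1.96 * pre[lo:hi].std()  # ~95% CI half-width (assumed)
    wave = waveform_uv - baseline        # offset so the baseline is 0 µV

    # Fifth-order Butterworth lowpass at 60 Hz removes the OPs and
    # high-frequency noise before peak picking.
    b, a = butter(5, lowpass_hz, btype="low", fs=fs_hz)
    smooth = filtfilt(b, a, wave)

    post = smooth[stim_idx:]
    peak = int(np.argmax(post))
    amplitude = float(post[peak])
    if amplitude < 2 * variation:
        return None  # no response above the noise threshold
    return amplitude, 1000.0 * peak / fs_hz
```

The a-wave search would then run on the same filtered trace, restricted to the window between the flash and the b-wave IT, followed by a refinement step on the unfiltered waveform within 5 ms of that minimum.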
Figure 2.
 
Overview of processing to identify the b-wave and OPs. The raw waveform (gray) is processed and offset to a baseline of zero (black). Afterward, the waveform is passed through a lowpass filter (dashed line) to identify the location of the b-wave. The waveform (black) is then passed through a bandpass filter to identify the amplitude and ITs of the OPs.
Figure 3.
 
Feature identification for the a-wave. The raw waveform (gray) is denoised using wavelets, and its baseline signal is set to zero (solid black line). Afterward, the wave is sent through a lowpass filter to identify the b-wave amplitude. The a-wave is identified using a combination of the waveform (solid black line) and lowpass filter (dashed line). After the a-wave has been identified, the waveform is sent through a bandpass filter. The OPs are identified between the a- and b-wave. After OP1 is successfully marked, the next sequential OPs are labeled.
We then isolated the OPs by passing the waveform through a bandpass fifth-order Butterworth filter (60–235 Hz).26 If an a-wave was identified, we used this feature to inform the region of the OPs. Here, we defined OPs as occurring after the a-wave. We started at the a-wave IT to identify the initial OP and searched for the first minimum on the filtered OP signal. This minimum is typically referred to as the trough of an OP.22 The location of the peak following this trough is considered the IT of the OP, and the amplitude of the OP is the voltage difference between the trough and peak. We then labeled OP2–OP4 as the amplitudes of the sequential trough-to-peak pairs, with the IT of each OP corresponding to the time of its peak. If no a-wave was present, we calculated the second derivative of the lowpass filter signal and identified an inflection point of the waveform (i.e., the location of a change in curvature). This corresponded to the leading edge of the b-wave. From the leading edge of the b-wave, we again identified the initial OP as the first trough-to-peak pair, and the remaining OPs (up to OP4) were sequentially labeled. 
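The trough-to-peak OP labeling can be sketched as follows; again this is a hypothetical Python analogue of the MATLAB routine, with `mark_ops` and its defaults chosen for illustration only.

```python
import numpy as np
from scipy.signal import argrelextrema, butter, filtfilt

def mark_ops(waveform_uv, fs_hz, band=(60.0, 235.0), start_idx=0, n_ops=4):
    """Label oscillatory potentials as sequential trough-to-peak pairs.

    waveform_uv : denoised, baseline-corrected ERG trace (µV).
    start_idx   : sample index at which to start the search
                  (e.g., the a-wave IT, or the b-wave leading edge).
    Returns a list of (amplitude_uv, implicit_time_ms) tuples, where the
    amplitude is the trough-to-peak voltage difference and the IT is the
    time of each peak.
    """
    # Fifth-order Butterworth bandpass (60-235 Hz) isolates the OPs.
    b, a = butter(5, band, btype="band", fs=fs_hz)
    op_signal = filtfilt(b, a, waveform_uv)

    seg = op_signal[start_idx:]
    troughs = argrelextrema(seg, np.less)[0]
    peaks = argrelextrema(seg, np.greater)[0]

    ops = []
    for trough in troughs:
        nxt = peaks[peaks > trough]          # peak following this trough
        if len(nxt) == 0 or len(ops) >= n_ops:
            break
        peak = int(nxt[0])
        amplitude = float(seg[peak] - seg[trough])
        it_ms = 1000.0 * (start_idx + peak) / fs_hz
        ops.append((amplitude, it_ms))
    return ops
```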
Finally, we used this approach to automatically detect the amplitude and IT of the electrical response induced by exposure to a light flicker. In this analysis, the waveform, after being denoised and offset to zero volts, was passed through a lowpass fifth-order Butterworth filter at 60 Hz, referred to as the lowpass filter signal (the frequency of the lowpass filter is tunable if needed for other species). The flicker frequency for rodents is roughly 6 Hz, which is well below the cutoff of the lowpass filter, so no loss of signal is expected. The maxima of the lowpass filter signal were identified. We identified the second peak as the flicker response and the corresponding time of this peak as the flicker IT. After locating this peak, we found the preceding trough. The flicker amplitude was calculated from this trough to the second peak. A general overview of the signal processing steps taken to process a waveform can be found in Figure 4. 
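The flicker measurement described above can likewise be sketched in Python; `mark_flicker` is a hypothetical illustration of the second-peak/preceding-trough rule, not the authors' implementation.

```python
import numpy as np
from scipy.signal import argrelextrema, butter, filtfilt

def mark_flicker(waveform_uv, fs_hz, lowpass_hz=60.0):
    """Measure the flicker response from the second lowpass-filtered peak.

    Returns (amplitude_uv, implicit_time_ms): the amplitude is the voltage
    from the trough preceding the second peak up to that peak; the IT is
    the time of the second peak.
    """
    # Lowpass at 60 Hz; a ~6 Hz rodent flicker response passes unchanged.
    b, a = butter(5, lowpass_hz, btype="low", fs=fs_hz)
    smooth = filtfilt(b, a, waveform_uv)

    peaks = argrelextrema(smooth, np.greater)[0]
    troughs = argrelextrema(smooth, np.less)[0]
    second_peak = int(peaks[1])                      # second response peak
    prev_trough = int(troughs[troughs < second_peak][-1])
    amplitude = float(smooth[second_peak] - smooth[prev_trough])
    return amplitude, 1000.0 * second_peak / fs_hz
```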
Figure 4.
 
Overview of the signal flow for manual (gray shaded region) and automated marking (blue shaded region). In brief, the rat eye is used to generate a raw ERG waveform. For manual markings, this raw waveform is marked for the a-wave, b-wave, and flicker locations. The waveform is then passed through a bandpass filter, and the trough and peaks of OP1 to OP4 are manually selected. For the automated process, we perform signal preprocessing to denoise the raw waveform using wavelets, a threshold noise level is determined, and the waveform is then offset to ensure the baseline (pre-flash region) is at 0 µV. This results in the processed waveform, simply referred to as the waveform. Afterward, the first automated step to identify waveform features includes passing the waveform through a lowpass filter and then identifying the a-wave, b-wave, or flicker ITs. Excluding flicker waveforms, when this process is complete the waveform is processed to detect the OPs. If an a-wave was detected, the waveform is passed through a bandpass filter, and using the a-wave and b-wave information the program automatically labels OP1 to OP4. If no a-wave is detected in step 1, the waveform is again passed through a lowpass filter. We then estimated the leading edge of the b-wave by finding the inflection point of the lowpass filter wave. Afterward, the waveform is passed through the bandpass filter and, using the information from the b-wave and inflection point, the program automatically labels OP1 to OP4.
Data Collection
All animals were housed in the animal facility at the Atlanta Veterans Affairs Healthcare System (Decatur, GA) under a 12:12-hour (light:dark) cycle with food and water ad libitum. All procedures were approved by the Institutional Animal Care and Use Committee of the Atlanta Veterans Affairs Healthcare System and performed in full accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. For all ERG procedures, animals were anesthetized using a mixture of ketamine (60 mg/kg) and xylazine (7.5 mg/kg). Subsequently, the pupils were dilated (1% tropicamide), and the corneal surface was anesthetized (0.5% tetracaine HCl). After each procedure, the animals received yohimbine (2.1 mg/kg) or atipamezole (2.1 mg/kg) to reverse the effects of the anesthesia.27 Animals were left in a recovery cage on a heating pad until they were fully ambulatory and then returned to their home cage. 
Coherence Dataset
Female Brown Norway rats (3–4 months old; n = 8) were obtained from Charles River Laboratories (Wilmington, MA). After 1 week to allow the animals to acclimate, we performed a weekly ERG for 4 weeks. For each ERG session, animals were dark adapted overnight. We measured the response to the flash stimuli using custom-made gold loop electrodes placed on the cornea, referenced to platinum needle electrodes in the cheek, with a platinum needle electrode in the tail serving as the ground.28,29 Each animal was presented with five steps of increasing flash stimuli ranging between −2.9 and 1.9 log cd·s/m2 under dark conditions (UTAS BigShot; LKC Technologies, Gaithersburg, MD). Animals were then light adapted for 10 minutes and exposed to three flash stimuli between 0.48 and 1.4 log cd·s/m2. All waveforms were independently marked by a previously trained individual who labeled the amplitudes and ITs of the a-wave, b-wave, and OPs. These measurements were made in the ERG system software (EM 8.1.2, 2008; LKC Technologies), and the OPs were also filtered using the same software (75–500 Hz). All waveforms were recorded with a sampling frequency of 2000 Hz. The amplitudes and ITs of individual OP1 through OP4 were determined. To compare manual markings with automated markings made using ERGAssist, we performed a simple linear regression for each measurement, calculating the slope and the goodness of fit as the square of the Pearson correlation coefficient (r2). Finally, we calculated intraclass correlation coefficients (ICCs) using a two-way mixed model for consistency. ICCs allowed us to determine the consistency between the markings made manually and by ERGAssist. We performed all statistics in Prism 8.0 (GraphPad Software, San Diego, CA). All statistical tests were two tailed with a significance level of 0.05. 
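The regression and consistency statistics were computed in Prism; an equivalent calculation can be sketched in Python. The ICC shown is the standard two-way mixed, single-measures, consistency form (often written ICC(3,1)); the function names are our own.

```python
import numpy as np

def r_squared(x, y):
    """Square of the Pearson correlation between paired measurements."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def icc_consistency(data):
    """ICC(3,1): two-way mixed model, single measures, consistency.

    data : (n_subjects, k_raters) array; here k = 2 columns holding the
    manual and automated markings of the same waveforms.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)            # per-waveform means
    col_means = data.mean(axis=0)            # per-method means

    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between methods
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because the consistency form ignores the between-method (column) variance, a fixed offset between manual and automated markings still yields an ICC of 1, which matches how a systematic b-wave timing offset can coexist with high consistency.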
Verification Dataset
Male Long Evans rats (n = 10) were obtained from Charles River Laboratories. The animals, a subset of a dataset used and described elsewhere,22 were divided between control and DM cohorts. DM was induced with a single intravenous injection of streptozotocin (STZ, 100 mg/kg; Sigma-Aldrich, St. Louis, MO) dissolved in citrate buffer (pH 4.0); control rats were injected with vehicle alone. Diabetes was defined as two successive daily blood glucose levels higher than 250 mg/dL (FreeStyle handheld blood glucose meter from tail-prick blood; Abbott Laboratories, Chicago, IL), which occurred routinely 2 to 3 days after STZ injection. For the present study, ERGs were assessed 4 weeks after inducing DM. Electrode placement for ERG recordings was as described above, with a gold loop corneal electrode and needle electrodes in the cheek and tail. The flash stimuli were presented in order of increasing luminance (UTAS BigShot; LKC Technologies), consisting of a five-step protocol (−3.4, −1.5, −0.6, 1.5, and 1.9 log cd·s/m2) sampled at 2000 Hz. Afterward, animals were light adapted for 10 minutes and exposed to a one-step light-adapted flicker (6 Hz) at 2 log cd·s/m2 to isolate cone pathway function.30,31 As previously described,17,18 amplitudes and ITs were measured for the a- and b-waves and OPs (OP1–OP4). Here, we focused on the IT of OP2, as it has been reported to change in DM.17,22 For the light-adapted flicker response, we assessed the amplitude as measured from the second trough of the signal after the flash onset to the peak. We then performed a three-way repeated-measures ANOVA with flash stimulus (−3.4, −1.5, −0.6, 1.5, and 1.9 log cd·s/m2), analysis method (automated vs. manual), and group (control vs. DM) as factors to determine significant differences in a-wave and b-wave amplitudes and ITs. We performed a similar three-way repeated-measures ANOVA with flash stimulus, analysis method, and group as factors to assess changes in the IT of OP2. 
Again, we focused on IT, as it has previously been shown to be delayed in animal models of DM.17,22 Finally, we used a two-way ANOVA to compare the differences in flicker amplitude between analysis method (automated vs. manual) and group (control vs. DM). Again, we performed all statistical analyses in GraphPad Prism, and all tests were two tailed with a significance level of 0.05. 
Results
Coherence Dataset
For the repeated longitudinal measurements of the Brown Norway rats under dark-adapted steps, we observed ICCs between 0.96 and 1.0 for the amplitudes and between 0.97 and 1.0 for the ITs between the automated and manual marking approaches (Table). For the light-adapted b-wave response, the ICCs between the automated and manual marking approaches were 1.0 for both the amplitude and the IT (Table). These results show a high consistency between manual and automated measurements. These agreements are visualized in the linear regressions (Fig. 5) and the Bland–Altman plots (Fig. 6). The automated markings of ERGAssist strongly correlated with the manual markings (r2 between 0.87 and 1.0) and typically fell along the unity line. This is reflected in the bias of the Bland–Altman plots, as their lines of agreement overlapped with zero. The only exceptions were the OP amplitudes. Although there was a strong correlation between manual and automated OP amplitudes (r2 = 0.87) and low bias (−4.12%), there was more variation at low amplitudes. Possible underlying reasons for the differences between automated and manual markings (y-axis of the Bland–Altman plot) are discussed below and are important to consider when examining OP amplitudes. 
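The Bland–Altman quantities reported here (bias and 95% limits of agreement on percent differences) can be computed as in the following sketch. We assume the percent difference for each waveform is taken relative to the mean of the two markings, which is the conventional choice; the function name is our own.

```python
import numpy as np

def bland_altman_pct(manual, automated):
    """Bias and 95% limits of agreement on percent differences.

    Each waveform's percent difference is (automated - manual) relative
    to the mean of the pair; the limits of agreement are bias +/- 1.96 SD.
    Returns (bias, (lower_limit, upper_limit)), all in percent.
    """
    manual = np.asarray(manual, dtype=float)
    automated = np.asarray(automated, dtype=float)
    mean_pair = (manual + automated) / 2.0
    pct_diff = 100.0 * (automated - manual) / mean_pair
    bias = pct_diff.mean()
    sd = pct_diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```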
Table.
 
ICCs for Consistency Between the Automated and Manual Markings of the ERG Features in Our Coherence Dataset
Figure 5.
 
Automated marking strongly correlated with manual markings. Dark-adapted responses (top panel) of the b-wave, a-wave, and OPs. The bottom panel highlights the b-wave of the light-adapted response. The left column shows the amplitudes, and the right column displays the ITs of those responses. The circles represent individual measurements. The solid blue line represents the unity line, and the solid black line represents the least squares fit of the data (the dashed lines represent the 95% confidence interval).
Figure 6.
 
Agreement between automatic and manual approaches. Bland–Altman plots display the mean percent difference in the measurement (solid pink line) and the 95% limits of agreement (dashed pink lines). Dark-adapted responses (top panel) of the b-wave, a-wave, and OPs and the light-adapted response b-wave (bottom panel). The amplitudes (left column) and the ITs (right column) of each response are shown.
Verification Dataset
In our Verification dataset using ERG data from DM and control rats, we found a similarly strong correlation between automated and manual markings for each of the ERG components (Supplementary Fig. S1). Between the automated and manual approaches (Fig. 7), there were no significant differences in b-wave amplitude (P = 1.0), b-wave IT (P = 0.55), a-wave amplitude (P = 0.88), or a-wave IT (P = 0.19). Additionally, both approaches found a significant decrease in b-wave amplitude (P < 0.0001) and a delay in the b-wave IT (P < 0.001) in the DM animals compared to control animals (Fig. 7). We also found a delay in the a-wave IT (P < 0.001) of DM animals compared to control animals but no changes in amplitude of the a-wave (P = 0.42). We found a significant difference in OP2 IT between the automated and manual approaches (P = 0.01). Nevertheless, we still detected a delay in the IT of OP2 in DM animals compared to control animals using the automated (P = 0.0018) or manual (P = 0.0017) approach. Finally, we examined the cone-isolated flicker response (Fig. 7). Here, we did not find differences in the flicker amplitude between automated and manual analysis (P = 0.87), and both approaches found a decrease in the amplitude in DM animals (P = 0.04). 
Figure 7.
 
DM-affected ERG responses were observed by the automated and manual approaches. The top panel displays the amplitude or IT of the control and DM animals in the Verification dataset as a function of flash stimuli for the b-wave and a-wave. The solid lines represent the automated approach, and the dashed lines represent the manually marked values. Please note that, to highlight the manual points, we used enlarged circles. The bottom panel displays the OPs as a function of flash stimuli and the amplitude of the light-adapted flicker response. The flicker response is recorded at a single flash stimulus and shown as box plots.
Discussion
ERGs are a valuable tool to assess overall retinal function and the impact of various diseases on specific types of retinal cells. The need to extract features from ERG waveforms is not new; however, typically these manual approaches are time consuming and limited by the software available. We developed a graphical user interface (GUI) we have named ERGAssist that allows for automated analysis of key ERG features, including the amplitude and IT of the b-wave, a-wave, and OPs. These results provide an initial verification of our automated ERG analysis. 
Our Coherence test (ICCs) was designed to assess the consistency between ERGAssist markings and manual markings of key features of the ERG waveform. In this dataset, we determined that there was a high ICC and correlation between the automated and manual markings under both scotopic and photopic conditions. We found a slight offset in the b-wave ITs between the automated and manual markings. Although these are highly correlated, the offset in timing is not wholly unexpected: in manual marking, the individuals were trained to mark the leading edge of the b-wave on the raw waveform, whereas the automated approach identified the peak of the lowpass-filtered waveform. The largest variability appeared in the amplitude of the OPs (Figs. 5, 6). Although there was a high correlation, the Bland–Altman plot revealed variability in the amplitude between the automated and manual approaches. This result is not unexpected for several reasons. First, our raw waveforms were denoised, which impacted the amplitudes of the bandpass-filtered extracted OP signal. Second, there may be differences in the bandpass filter. We used a Butterworth bandpass filter, whereas OPs generated for manual marking passed through a finite impulse response bandpass filter. We also used different cutoffs for our bandpass filters; our automated approach used 60 to 235 Hz, and the manual marking bandpass filter used 75 to 500 Hz. The use of waveforms with different stages of denoising, bandpass filter types, and bandpass filter cutoffs would directly impact the amplitude of the OPs. Previous studies have shown that bandpass filter type alone influences the amplitude of the OPs;32 thus, it is important to define the parameters of the filter used when reporting the amplitude of the OPs. This also does not account for the difference in passband attenuation of the bandpass filters, which is difficult to replicate without knowledge of how the integrated filter of the LKC Technologies system was designed. 
This makes a direct comparison between filters challenging. We note that, in a disease model expected to affect OP amplitude, it is necessary to ensure that ERGAssist reproduces the trends found with manual marking, because the absolute amplitudes will differ. Importantly, our results showed that denoising and filter choice had little impact on the ITs of the overall ERG response (a-wave, b-wave, and OPs). 
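To illustrate how the bandpass stage isolates the OPs, the following Python sketch applies a zero-phase Butterworth bandpass with the 60 to 235 Hz cutoffs described above to a synthetic waveform. This is not the ERGAssist implementation (which is written in MATLAB); the sampling rate, filter order, and signal shapes are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_ops(waveform, fs, low_hz=60.0, high_hz=235.0, order=4):
    """Isolate OP-band activity with a zero-phase Butterworth bandpass.

    fs is the sampling rate in Hz. The 60-235 Hz passband mirrors the
    automated approach described above; swapping in 75-500 Hz would mimic
    the manual-marking pipeline. Zero-phase filtering (filtfilt) is one way
    to avoid adding delay to the OP implicit times.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="bandpass")
    return filtfilt(b, a, np.asarray(waveform))

# Synthetic example: a slow b-wave-like component plus a 120 Hz oscillation.
fs = 2000.0                                    # assumed sampling rate
t = np.arange(0, 0.25, 1.0 / fs)
slow = 200.0 * np.sin(2 * np.pi * 4 * t)       # slow component (b-wave-like)
ops = 20.0 * np.sin(2 * np.pi * 120 * t)       # OP-band oscillation
filtered = extract_ops(slow + ops, fs)
# The slow component is strongly attenuated; the 120 Hz component survives.
```

Because the passband edges attenuate differently for a Butterworth versus an FIR design, the recovered OP amplitude depends on the filter chosen, which is the comparison caveat raised above.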
In the Verification dataset, which compared control (CTRL) and DM animals, our automated ERG marking approach found changes analogous to those found by manual marking. In summary, every significant difference between CTRL and DM animals found by manually marking the ERGs was also identified in the automated markings. The correlation between the automated and manual approaches in the Verification dataset was similar to that in the Coherence dataset. Importantly, we found a consistent delay in OP2 in both manual and automated markings of DM animals compared to control animals. Confirming this in our automated approach was vital, as a delay in the OPs has been previously observed in animal models of DM.17,18,20,22 Given the known differences in OP amplitude between the manual and automated methods, testing for significant amplitude differences between the two marking approaches is less meaningful. Therefore, we limited our present analysis and observed a significant delay in the ITs of the OPs in DM animals. 
A major advantage of this approach is the ability to perform automated analysis, which can improve consistency and vastly reduce the time required to assess each ERG. For example, for the Verification dataset alone (DM and control animals), which consisted of 60 waveforms, ERGAssist completed the ERG markings in 45 seconds, a 40-fold reduction in analysis time. Because the approach is not limited by the number of waveforms, ERGAssist can greatly reduce the analysis burden of large ERG datasets. 
Another advantage of this approach is that it is device independent, allowing ERG data from multiple devices to be assessed. To date, ERGAssist can import Excel or CSV files containing waveform data, as well as exported Celeris (.txt; Diagnosys, Lowell, MA), exported UTAS BigShot (.csv; LKC Technologies), exported UTAS SunBurst (.csv; LKC Technologies), and RETeval (.rffx; LKC Technologies) datafiles. Because ERGAssist assesses each waveform in the same way, it could improve the consistency of ERG markings. ERGAssist can also be further developed to assess other parameters, including power analysis of the OPs, the c-wave, the STR, and the photopic negative response. Next steps include expanding ERGAssist with additional analyses such as PII and PIII modeling, root mean square (RMS) analysis, and pattern electroretinography (PERG). 
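A device-independent import layer of the kind described above can be sketched as a simple dispatch on file extension. The following Python sketch is illustrative only; the actual column layouts of Celeris, BigShot, SunBurst, and RETeval exports differ from the hypothetical two-column (time, voltage) format assumed here.

```python
import csv
from pathlib import Path

def _is_numeric(fields):
    """Return True if every field parses as a float."""
    try:
        [float(x) for x in fields]
        return True
    except ValueError:
        return False

def load_waveform(path):
    """Minimal device-independent loader sketch (file layouts are assumed).

    Dispatches on extension the way a tool like ERGAssist might: generic
    CSV exports and device-specific text exports all reduce to a list of
    (time, voltage) samples.
    """
    path = Path(path)
    suffix = path.suffix.lower()
    if suffix == ".csv":          # e.g., a generic two-column CSV export
        with path.open(newline="") as f:
            rows = [r for r in csv.reader(f) if r]
        data = rows[1:] if not _is_numeric(rows[0]) else rows  # skip header
        return [(float(t), float(v)) for t, v in data]
    if suffix == ".txt":          # e.g., a whitespace-delimited text export
        pairs = []
        for line in path.read_text().splitlines():
            parts = line.split()
            if len(parts) >= 2 and _is_numeric(parts[:2]):
                pairs.append((float(parts[0]), float(parts[1])))
        return pairs
    raise ValueError(f"Unsupported file format: {suffix}")
```

Once every format reduces to the same (time, voltage) representation, the downstream marking logic does not need to know which device produced the recording.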
Conclusions
Overall, we have developed and verified an automated approach to identify and label features of the ERG waveform, including the a-wave, b-wave, and OPs. To improve usability and access, we have packaged ERGAssist into a GUI using MathWorks MATLAB, shown in Supplementary Figure S2. The GUI form of ERGAssist is available at https://github.com/afeola2/ERGAssist. The ERGAssist GUI allows users to upload ERG data and perform automatic marking of the full-field ERG waveform. We have also included example files in different formats (e.g., Excel, Celeris, and BigShot) to demonstrate how data should be formatted for import and assessment with ERGAssist. Although ERGAssist has been shown to be robust and able to quickly analyze multiple waveforms, some aspects of marking ERGs remain challenging. After the initial development and local implementation of our approach, we found that ERGs collected by different individuals, laboratories, or species (e.g., mouse or rat) may present multiple apparent a-waves and variable noise or OP thresholding. This was another motivation for developing a GUI. While automatically marking waveforms, ERGAssist flags waveforms that have multiple potential identifying features. Flagged waveforms are highlighted in the GUI so that the user can inspect each one and manually adjust the markings if required. Thus, our GUI combines the automated detection described here with an interface for visual inspection of the data for quality control, allowing manual corrections when necessary. 
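The flagging behavior described above can be illustrated with a simple prominence-based check: if more than one trough reaches a prominence comparable to the deepest one, the putative a-wave is ambiguous and the waveform is flagged for review. This Python sketch is not the ERGAssist criterion; the prominence threshold and signal shapes are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def flag_ambiguous_a_wave(waveform, prominence_frac=0.5):
    """Flag a waveform whose a-wave trough is ambiguous.

    The a-wave is taken as the most prominent trough; if any other trough
    reaches at least prominence_frac of that prominence, the waveform is
    flagged so a user can inspect and correct the marking in a GUI.
    (Illustrative threshold, not the ERGAssist rule.)
    """
    troughs, props = find_peaks(-np.asarray(waveform), prominence=1e-9)
    if troughs.size == 0:
        return False, None               # no trough detected at all
    prom = props["prominences"]
    best = int(np.argmax(prom))
    competitors = int(np.sum(prom >= prominence_frac * prom[best])) - 1
    return competitors > 0, int(troughs[best])

# Synthetic examples: one clear trough versus two troughs of similar depth.
t = np.linspace(0, 1, 500)
clean = -np.exp(-((t - 0.3) ** 2) / 0.001)             # one clear trough
ambiguous = clean - np.exp(-((t - 0.6) ** 2) / 0.001)  # two similar troughs
```

A waveform like `ambiguous` would be highlighted in the GUI for manual inspection, while `clean` would pass through the automated marking untouched.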
Acknowledgments
The authors acknowledge the Research to Prevent Blindness Challenge grant awarded to the Department of Ophthalmology at Emory School of Medicine and a core grant from the National Institutes of Health (P30EY006360). Figures 1 and 4 and the Supplemental Figures were partially created with BioRender.com. 
Supported by Department of Veterans Affairs Rehabilitation Research & Development Service Career Development Awards to AJF (RX002342) and RSA (RX002928); Research Career Scientist Award to MTP (RX003134); and Research to Prevent Blindness. 
Disclosure: A.J. Feola, (P); R.S. Allen, None; K.C. Chesler, None; M.T. Pardue, None 
References
Bui BV, Fortune B. Ganglion cell contributions to the rat full-field electroretinogram. J Physiol. 2004; 555: 153–173. [CrossRef] [PubMed]
Porciatti V . The mouse pattern electroretinogram. Doc Ophthalmol. 2007; 115: 145–153. [CrossRef] [PubMed]
Porciatti V, Saleh M, Nagaraju M. The pattern electroretinogram as a tool to monitor progressive retinal ganglion cell dysfunction in the DBA/2J mouse model of glaucoma. Invest Ophthalmol Vis Sci. 2007; 48: 745–751. [CrossRef] [PubMed]
Robson AG, Frishman LJ, Grigg J, et al. ISCEV standard for full-field clinical electroretinography (2022 update). Doc Ophthalmol. 2022; 144: 165–177. [CrossRef] [PubMed]
Robson AG, Nilsson J, Li S, et al. ISCEV guide to visual electrodiagnostic procedures. Doc Ophthalmol. 2018; 136: 1–26. [CrossRef] [PubMed]
Hood DC, Frishman LJ, Saszik S, Viswanathan S. Retinal origins of the primate multifocal ERG: implications for the human response. Invest Ophthalmol Vis Sci. 2002; 43: 1673–1685. [PubMed]
Brown KT . The electroretinogram: its components and their origins. UCLA Forum Med Sci. 1969; 8: 319–378. [PubMed]
Sieving PA, Murayama K, Naarendorp F. Push-pull model of the primate photopic electroretinogram: a role for hyperpolarizing neurons in shaping the b-wave. Vis Neurosci. 1994; 11: 519–532. [CrossRef] [PubMed]
Steinberg RH, Schmidt R, Brown KT. Intracellular responses to light from cat pigment epithelium: origin of the electroretinogram c-wave. Nature. 1970; 227: 728–730. [CrossRef] [PubMed]
Perlman I . The electroretinogram: ERG. In: Kolb H, Nelson R, Fernandez E, Jones B, eds. Webvision: The Organization of the Retina and Visual System. Salt Lake City, UT: University of Utah Health Sciences Center; 1995.
Stockton RA, Slaughter MM. B-wave of the electroretinogram. A reflection of ON bipolar cell activity. J Gen Physiol. 1989; 93: 101–122. [CrossRef] [PubMed]
Breton ME, Schueller AW, Lamb TD, Pugh EN, Jr. Analysis of ERG a-wave amplification and kinetics in terms of the G-protein cascade of phototransduction. Invest Ophthalmol Vis Sci. 1994; 35: 295–309. [PubMed]
Nilsson SE, Wrigstad A. Electrophysiology in some animal and human hereditary diseases involving the retinal pigment epithelium. Eye (Lond). 1997; 11(Pt 5): 698–706. [PubMed]
Rangaswamy NV, Zhou W, Harwerth RS, Frishman LJ. Effect of experimental glaucoma in primates on oscillatory potentials of the slow-sequence mfERG. Invest Ophthalmol Vis Sci. 2006; 47: 753–767. [CrossRef] [PubMed]
Wachtmeister L, Dowling JE. The oscillatory potentials of the mudpuppy retina. Invest Ophthalmol Vis Sci. 1978; 17: 1176–1188. [PubMed]
Asanad S, Karanjia R. Multifocal electroretinogram. In: StatPearls [Internet]. Treasure Island, FL: StatPearls Publishing; 2023.
Aung MH, Kim MK, Olson DE, Thule PM, Pardue MT. Early visual deficits in streptozotocin-induced diabetic Long Evans rats. Invest Ophthalmol Vis Sci. 2013; 54: 1370–1377. [CrossRef] [PubMed]
Pardue MT, Barnes CS, Kim MK, et al. Rodent hyperglycemia-induced inner retinal deficits are mirrored in human diabetes. Transl Vis Sci Technol. 2014; 3: 6. [CrossRef] [PubMed]
Allen RS, Douglass A, Vo H, Feola AJ. Ovariectomy worsens visual function after mild optic nerve crush in rodents. Exp Eye Res. 2020; 202: 108333. [CrossRef] [PubMed]
Motz CT, Chesler KC, Allen RS, et al. Novel detection and restorative levodopa treatment for preclinical diabetic retinopathy. Diabetes. 2020; 69: 1518–1527. [CrossRef] [PubMed]
Behbahani S, Ahmadieh H, Rahan S. Feature extraction methods for electroretinogram signal analysis: a review. IEEE Access. 2021; 9: 116879–116897. [CrossRef]
Chesler K, Motz C, Vo H, et al. Initiation of L-DOPA treatment after detection of diabetes-induced retinal dysfunction reverses retinopathy and provides neuroprotection in rats. Transl Vis Sci Technol. 2021; 10: 8. [CrossRef] [PubMed]
Allen RS, Feola A, Motz CT, et al. Retinal deficits precede cognitive and motor deficits in a rat model of type II diabetes. Invest Ophthalmol Vis Sci. 2019; 60: 123–133. [CrossRef] [PubMed]
Antoniadis A, Oppenheim G, eds. Wavelets and Statistics. New York: Springer-Verlag; 1995.
Donoho DL . De-noising by soft-thresholding. IEEE Trans Inform Theory. 1995; 41: 613–627.
Kapousta-Bruneau NV . Effects of sodium pentobarbital on the components of electroretinogram in the isolated rat retina. Vision Res. 1999; 39: 3498–3512. [CrossRef] [PubMed]
Turner PV, Albassam MA. Susceptibility of rats to corneal lesions after injectable anesthesia. Comp Med. 2005; 55: 175–182. [PubMed]
Mocko JA, Kim M, Faulkner AE, Cao Y, Ciavetta VT, Pardue MT. Effects of subretinal electrical stimulation in mer-KO mice. Invest Ophthalmol Vis Sci. 2011; 52: 4223–4230. [CrossRef] [PubMed]
Hannon BG, Feola AJ, Gerberich BG, et al. Using retinal function to define ischemic exclusion criteria for animal models of glaucoma. Exp Eye Res. 2021; 202: 108354. [CrossRef] [PubMed]
Allen RS, Khayat CT, Feola AJ, et al. Diabetic rats with high levels of endogenous dopamine do not show retinal vascular pathology. Front Neurosci. 2023; 17: 1125784. [CrossRef] [PubMed]
Aung MH, Park HN, Han MK, et al. Dopamine deficiency contributes to early visual dysfunction in a rodent model of type 1 diabetes. J Neurosci. 2014; 34: 726–736. [CrossRef] [PubMed]
Gauthier M, Gauvin M, Lina JM, Lachapelle P. The effects of bandpass filtering on the oscillatory potentials of the electroretinogram. Doc Ophthalmol. 2019; 138: 247–254. [CrossRef] [PubMed]
Figure 1.
 
Representation of various rat full-field ERG responses. (A) The scotopic threshold response (STR) where the blue circles represent the positive and negative threshold responses (pSTR and nSTR). (B) The light-adapted response to a flash stimulus, where the blue circle represents the b-wave. (C) An example of a light-adapted flicker response, where the blue circles represent the part of the flicker response measured, and the amplitude is measured as the difference in voltage between the two circles. (D) Representative response to a single flash stimulus recorded over a long duration (4000 ms), where the blue circle identifies the c-wave. (E) The dark-adapted response (an expanded visualization of the red box in D), where the blue circles identify the a-wave and b-wave. (F) The OP response generated by passing a dark-adapted waveform (D) through a bandpass filter, with the blue circles representing OP1 through OP4.
Figure 2.
 
Overview of processing to identify the b-wave and OPs. The raw waveform (gray) is processed and offset to a baseline of zero (black). Afterward, the waveform is passed through a lowpass filter (dashed line) to identify the location of the b-wave. The waveform (black) is then passed through a bandpass filter to identify the amplitude and ITs of the OPs.
Figure 3.
 
Feature identification for the a-wave. The raw waveform (gray) is denoised using wavelets, and its baseline signal is set to zero (solid black line). Afterward, the wave is sent through a lowpass filter to identify the b-wave amplitude. The a-wave is identified using a combination of the waveform (solid black line) and lowpass filter (dashed line). After the a-wave has been identified, the waveform is sent through a bandpass filter. The OPs are identified between the a- and b-wave. After OP1 is successfully marked, the next sequential OPs are labeled.
Figure 4.
 
Overview of the signal flow for manual (gray shaded region) and automated marking (blue shaded region). In brief, the rat eye is used to generate a raw ERG waveform. For manual markings, this raw waveform is marked for the a-wave, b-wave, and flicker locations. The waveform is then passed through a bandpass filter, and the trough and peaks of OP1 to OP4 are manually selected. For the automated process, we perform signal preprocessing to denoise the raw waveform using wavelets, a threshold noise level is determined, and the waveform is then offset to ensure the baseline (pre-flash region) is at 0 µV. This results in the processed waveform, simply referred to as the waveform. Afterward, the first automated step to identify waveform features includes passing the waveform through a lowpass filter and then identifying the a-wave, b-wave, or flicker ITs. Excluding flicker waveforms, when this process is complete the waveform is processed to detect the OPs. If an a-wave was detected, the waveform is passed through a bandpass filter, and using the a-wave and b-wave information the program automatically labels OP1 to OP4. If no a-wave is detected in step 1, the waveform is again passed through a lowpass filter. We then estimated the leading edge of the b-wave by finding the inflection point of the lowpass filter wave. Afterward, the waveform is passed through the bandpass filter and, using the information from the b-wave and inflection point, the program automatically labels OP1 to OP4.
Figure 5.
 
Automated marking strongly correlated with manual markings. Dark-adapted responses (top panel) of the b-wave, a-wave, and OPs. The bottom panel highlights the b-wave of the light-adapted response. The left column shows the amplitudes and the right column displays the ITs of those responses. The circles represent individual measurements. The solid blue line represents the unity line and the solid black line represents the least squares fit of the data (the dashed lines represent the 95% confidence interval).
Figure 6.
 
Agreement between automatic and manual approaches. Bland–Altman plots display the mean percent difference in the measurement (solid pink line) and the 95% limits of agreement (dashed pink lines). Dark-adapted responses (top panel) of the b-wave, a-wave, and OPs and the light-adapted response b-wave (bottom panel). The amplitudes (left column) and the ITs (right column) of each response are shown.
Figure 7.
 
DM-affected ERG responses were observed by the automated and manual approaches. The top panel displays the amplitude or IT of the control and DM animals in the Verification dataset as a function of flash stimuli for the b-wave and a-wave. The solid lines represent the automated approach, and the dashed lines represent the manually marked values. Please note that, to highlight the manual points, we used enlarged circles. The bottom panel displays the OPs as a function of flash stimuli and the amplitude of the light-adapted flicker response. The flicker response is recorded at a single flash stimulus and shown as box plots.
Table.
 
ICCs for Consistency Between the Automated and Manual Markings of the ERG Features in Our Coherence Dataset