Although deep learning models have attracted significant interest over the past few years, their black-box nature limits interpretation, particularly in healthcare applications. Indeed, interpretability remains one of the major technical obstacles to the implementation of deep learning. We used two approaches to illuminate this black box and enhance the interpretability of the proposed AI models. First, principal component analysis (PCA) revealed that, because of the high overlap between samples from different classes, a model without deep feature extraction could not reach the high accuracy we obtained. Supplemental PCA also revealed that a single dense layer was sufficient to achieve high accuracy in recognizing different features within the dataset. Second, and more important clinically, Grad-CAMs showed that the deep convolutional layers extract hidden retinal features that strongly drive the model's predictions. More specifically, the retinal regions most important for deep learning identification and classification of EAU were the blood vessels, the optic disc, and the retinal periphery, all of which correspond to the regions used by human experts to identify and classify EAU. Thus, rather than representing a black box, deep learning results may reveal important links underlying disease, suggesting further potential for clinical relevance.
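To make the PCA argument concrete, the following is a minimal sketch of such an analysis, not the exact pipeline used here: flattened images are projected onto their first two principal components and plotted by class, so that overlap between classes in a linear projection becomes visible. The random arrays standing in for fundus images and labels are placeholders.

```python
# Illustrative PCA sketch: visualize class overlap in a linear projection.
# `images` and `labels` below are synthetic placeholders, not the EAU dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.normal(size=(200, 64, 64))       # stand-in for fundus images
labels = rng.integers(0, 2, size=200)         # stand-in for EAU vs. control

X = images.reshape(len(images), -1)           # flatten each image to a vector
pcs = PCA(n_components=2).fit_transform(X)    # first two principal components

for cls, name in [(0, "control"), (1, "EAU")]:
    mask = labels == cls
    plt.scatter(pcs[mask, 0], pcs[mask, 1], s=10, alpha=0.6, label=name)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.legend()
plt.title("PCA projection: class overlap motivates deep feature extraction")
plt.show()
```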
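Similarly, the Grad-CAM analysis can be sketched as follows, under the assumption of a TensorFlow/Keras CNN (the framework and the convolutional layer name passed in are assumptions, not necessarily the implementation used in this study): the gradient of the class score with respect to a convolutional layer's feature maps is spatially averaged to weight those maps, and the positive part of the weighted sum gives the heatmap.

```python
# Hedged Grad-CAM sketch for a Keras CNN (assumed framework; the actual
# architecture and layer names in the study may differ).
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Return a normalized Grad-CAM heatmap for one image of shape (h, w, c)."""
    # Model exposing both the chosen conv layer's activations and the prediction.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_maps)             # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # average gradients over space
    cam = tf.reduce_sum(conv_maps[0] * weights[0], axis=-1)   # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                     # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()        # scale heatmap to [0, 1]
```

Upsampling the resulting heatmap to the input resolution and overlaying it on the fundus image is what highlights the regions, such as vessels, the optic disc, and the retinal periphery, that drive each prediction.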