Time to complete each task was measured as the total time the AAII took to capture, process, and announce the information, plus the time the participant took to interpret and report the information provided by the AAII. The overall Friedman analyses on the timing data for tasks in the text category showed significant time differences across AAII conditions for the invoice (χ² = 34.3, P < 0.001), handwriting (χ² = 75.2, P < 0.001), medicine bottle (χ² = 34.0, P < 0.001), banknote (χ² = 45.3, P < 0.001), and street sign (χ² = 43.1, P < 0.001) tasks. Timing was not measured for the article task because the articles differed in text length. In the text in columns category, neither the table of contents task nor the TV guide task showed significant differences across testing conditions. For tasks in the searching and identifying category, the following tasks showed significant differences across testing conditions: color matching (χ² = 60.1, P < 0.001), face matching (χ² = 31.4, P < 0.001), landscape (χ² = 83.4, P < 0.001), and room identification (χ² = 9.3, P < 0.001). The overall Friedman test outcomes for timing were not significant for the barcode and find a person tasks.
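To make the analysis procedure concrete, the following is a minimal sketch of an overall Friedman test on per-participant completion times for a single task. The completion times, condition labels, and use of SciPy are illustrative assumptions; the analysis software and dataset are not specified in this excerpt.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical completion times (seconds) for one task; rows are participants,
# columns are the baseline condition and the four AAII conditions.
conditions = ["Baseline", "OrCam", "Envision", "Seeing AI", "Lookout"]
times = np.array([
    [95.0, 60.2, 48.5, 44.1, 47.3],
    [110.4, 72.8, 55.0, 50.6, 52.9],
    [88.7, 58.1, 46.9, 42.3, 45.0],
    [102.2, 65.5, 51.2, 47.8, 49.6],
])

# The Friedman test compares the five related samples (one column per condition).
chi2, p = friedmanchisquare(*times.T)
print(f"Friedman chi-squared = {chi2:.1f}, P = {p:.4g}")
```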
Post hoc Wilcoxon signed-rank analyses for the tasks with significant main effects are shown in Table 5. For the invoice and medicine bottle tasks, timing was significantly faster than baseline with all AAIIs. Timing for the handwriting task was significantly faster than baseline with Envision, Seeing AI, and Lookout, but not with OrCam. For the banknote task, only Seeing AI and Lookout were significantly faster than baseline. All AAIIs except OrCam allowed significantly faster-than-baseline performance on the street sign task. For tasks in the searching and identifying category, using OrCam and Lookout resulted in performance equivalent to or significantly slower than baseline, whereas using Seeing AI allowed equivalent or significantly faster times for the landscape and room tasks.
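As a companion sketch, the post hoc comparisons can be expressed as paired Wilcoxon signed-rank tests of each AAII condition against baseline. The data layout repeats the hypothetical example above, and the Bonferroni adjustment is an assumption for illustration; the correction procedure used in the study is not stated in this excerpt.

```python
import numpy as np
from scipy.stats import wilcoxon

# Same hypothetical layout: rows are participants, columns are the baseline
# and four AAII conditions for a single task (completion times in seconds).
conditions = ["Baseline", "OrCam", "Envision", "Seeing AI", "Lookout"]
times = np.array([
    [95.0, 60.2, 48.5, 44.1, 47.3],
    [110.4, 72.8, 55.0, 50.6, 52.9],
    [88.7, 58.1, 46.9, 42.3, 45.0],
    [102.2, 65.5, 51.2, 47.8, 49.6],
])

baseline = times[:, 0]
n_comparisons = len(conditions) - 1
for j, name in enumerate(conditions[1:], start=1):
    stat, p = wilcoxon(times[:, j], baseline)  # paired comparison vs. baseline
    p_adj = min(p * n_comparisons, 1.0)        # Bonferroni adjustment (assumed)
    print(f"{name}: W = {stat:.1f}, adjusted P = {p_adj:.3f}")
```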