The goal of this project was to create an AI-powered journal for people with visual impairments.

At-a-glance

People with visual impairments (VI) have a strong desire to journal, but current tools lack the accessibility features that would make journaling easier and more productive. Most VI users rely on built-in apps like Notes for journaling, as using third-party apps can be tedious. The process of creating this AI-powered journal involved studying current research, conducting a competitor analysis, and then running research interviews with VI individuals and surveys with sighted users. Once we understood our user groups and the necessary features, we began creating wireframes; alongside this, we performed card sorts to establish a strong information architecture. Finally, once the prototypes were ready, we conducted usability tests. We found that very few products focus on the needs of the visually impaired, likely because of the difficulty of conducting usability tests with VI individuals, especially when prototypes are created in typical design software like Figma.

Focus areas
  • Literature review, competitor analysis, user interviews
  • Information architecture, wireframing, prototyping, content creation, usability testing
01 LITERATURE REVIEW

Our initial focus was to create an AI journal with better features and performance than existing products. During the competitor analysis, we discovered that none of the existing products included accessibility features. This led us to reorient and focus the product on the journaling needs of the visually impaired. As accessibility features are typically an added layer to a base product, we also considered sighted users as a secondary audience.

Although we found limited research papers focusing on technology created specifically for VI users, several papers helped us understand the requirements and guidelines for optimum accessibility for visually impaired users. We also learned about conducting research with VI users, market gaps, and the need for more inclusive mobile applications. Additionally, some papers concluded that machine learning for human support could be highly beneficial, provided ethical considerations and the use of bots in customer service are addressed.

02 COMPETITOR ANALYSIS

  • We adapted the evaluation framework of Sharma et al. (2022), “Evaluation of mHealth Apps for Diverse, Low-Income Patient Populations: Framework Development and Application Study,” to suit our evaluation purposes for the competitor analysis.
  • We assessed 10 mobile apps and websites.

Competitor Analysis

03 INTERVIEW PROTOCOL
I led the user interviews, as I was the only team member with prior experience conducting research with VI individuals.

  • Collaborated with Second Sense, a Chicago-based non-profit organization, to recruit participants.
  • Interviewed five users.
  • Participants had to be visually impaired (either partially or fully) and have a desire to journal.

For the interview protocol, we began with open-ended questions to understand their journaling habits and current methods of journaling. We then focused on their motivations, reasons, expectations, and goals from journaling. After learning about their current tools, we asked about their likes, dislikes, and desired features. We also inquired about their knowledge of AI and their thoughts on various AI-driven features we planned to incorporate. Given that privacy is important for a digital journal, we also explored their comfort with using the tool in public spaces.

These questions gave us a deep understanding of what journaling looks like for people with VI, the process, challenges they face, and the technological improvements needed.

View interview protocol: Report

04 SURVEY & CARD SORT

  • Since sighted users were the secondary target audience, we decided to gather their insights via a survey.
  • Conducting a card sort with VI individuals posed challenges, so we conducted it with sighted users. Participants completed an online survey, followed by a hybrid card sort.
  • The card sort involved 32 cards and 8 categories, focused on where users would expect to find various features, methods for making a journal entry, enabling AI and accessibility features, and changing other settings.
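Hybrid card-sort results like these are typically summarized as a card-by-category placement matrix plus a per-card agreement score, which shows where participants expected each feature to live. A minimal sketch of that analysis, assuming results are recorded as participant → {card: category}; the card and category names below are illustrative, not our actual 32 cards and 8 categories:

```python
from collections import defaultdict

# Each sort result: participant -> {card: category}.
sorts = {
    "P1": {"Voice entry": "New Entry", "Font size": "Accessibility", "Export": "Settings"},
    "P2": {"Voice entry": "New Entry", "Font size": "Settings", "Export": "Settings"},
    "P3": {"Voice entry": "New Entry", "Font size": "Accessibility", "Export": "Settings"},
}

def placement_matrix(sorts):
    """Count how often each card was placed in each category."""
    counts = defaultdict(lambda: defaultdict(int))
    for placements in sorts.values():
        for card, category in placements.items():
            counts[card][category] += 1
    return counts

def agreement(sorts):
    """For each card, the share of participants who agreed on its most common category."""
    n = len(sorts)
    return {card: max(cats.values()) / n for card, cats in placement_matrix(sorts).items()}

print(agreement(sorts))
# → {'Voice entry': 1.0, 'Font size': 0.6666666666666666, 'Export': 1.0}
```

Cards with low agreement (like “Font size” above) signal labels or groupings that the information architecture should revisit.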

05 DATA ANALYSIS

Each interview was transcribed and coded inductively using Atlas.ti. We then assembled the codes into an affinity diagram in Figma, which surfaced four themes: (1) features liked in their current tool, (2) challenges faced with the current tool, (3) features desired, and (4) motivations for journaling.

Features liked in their current tool

Challenges faced with the current tool

Features desired

Motivations to journal

Survey results:
Journaling Preferences and Practices:
1. Formats: 50% of participants prefer physical journaling, 31.3% use a mix of both digital and physical methods, and 18.8% journal digitally.
2. Digital Tools Used: Among digital users, common tools include Google Docs, iPhone's Notes app, and Word Docs. Some also use unique methods like emailing themselves.
3. Frequency: Most participants journal a few days a month (56.3%), followed by 2-6 days a week (31.3%).
Likes About Journaling:
1. Participants appreciate journaling for its therapeutic effects, such as mental wellness, emotional expression, and reflection.
2. Physical journaling is valued for the tactile experience and creativity it allows.
3. Digital journaling is appreciated for its convenience, security, and the ability to integrate multimedia elements.
Features Desired in a Digital Journal App:
1. Top Requested Features: Privacy settings (56.3%), voice-to-text entry (56.3%), customizable themes and layouts (50%), daily reflection prompts (50%), and exporting & saving entries (62.5%).
2. Why These Features: Privacy is paramount. Participants desire the flexibility to express creativity, the convenience of voice-to-text, and the ability to reflect daily and save entries securely.
Expectations from Journaling:
1. Reflecting on thoughts and emotions (81.3%) and documenting memorable moments (75%) are the primary expectations.
2. Participants also seek organization, clarity, self-expression, stress relief, and personal goal tracking through journaling.
AI and Privacy Concerns:
1. Familiarity with AI as an assistive tool is moderate, with participants leaning towards "just starting to learn" about it.
2. Privacy and data security concerns are significant, with many participants being very or extremely concerned.
3. Desired security measures include encryption, password protection, two-step authentication, and clear privacy policies ensuring data is not misused.
Suggestions for App Designers:
1. Simplicity: Emphasize a distraction-free, simple interface akin to writing on a physical journal.
2. Security: Implement robust privacy features, including encryption and two-step authentication.
3. Creativity and Flexibility: Allow for customizable themes and creative expression within the journal.
4. Community Feature: Explore the potential of an in-app community for users seeking accountability or connection.
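Percentages like those above come from tallying multi-select survey responses, where shares need not sum to 100% because each participant can pick several options. A minimal sketch of that tally; the responses and feature names below are illustrative, not our actual survey data:

```python
from collections import Counter

# Hypothetical multi-select answers to "Which features would you want?"
responses = [
    ["Privacy settings", "Voice-to-text"],
    ["Voice-to-text", "Custom themes", "Export entries"],
    ["Privacy settings", "Export entries"],
    ["Daily prompts", "Export entries"],
]

def feature_shares(responses):
    """Percentage of participants who selected each feature."""
    n = len(responses)
    counts = Counter(feature for answer in responses for feature in answer)
    return {feature: round(100 * count / n, 1) for feature, count in counts.items()}

print(feature_shares(responses))
# → {'Privacy settings': 50.0, 'Voice-to-text': 50.0, 'Custom themes': 25.0,
#    'Export entries': 75.0, 'Daily prompts': 25.0}
```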

From the analysis of the surveys and interviews, we derived two personas:

06 PROTOTYPES

We began by sketching our screen designs using paper and pencil. This allowed us to map out the features each screen would include, determine the number of steps required for users to complete tasks, and establish task flows.

Low-fidelity prototype

After deciding on the contents of each screen, we created our high-fidelity prototypes.

Prototype: View

Visual Flow: View

07 USABILITY TESTING

Usability testing was the most challenging part of this project, primarily because our prototypes were created in Figma, which is not fully compatible with screen readers such as VoiceOver and JAWS. We also discovered a lack of resources (software, tools, articles, or guides) detailing how to conduct usability tests with people with disabilities.

Ultimately, we devised a solution: for each screen, we recorded the text and potential actions users might take. However, finding this solution took significant time and effort, and we had to pivot from our original plan. Instead of conducting multiple rounds of prototype testing with five users per round, we conducted one round of testing with both visually impaired and non-visually impaired users. Despite this late-stage pivot, we believe the findings were valuable in guiding the final design.
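The per-screen workaround can be thought of as a simple verbal script: for each screen, the text the moderator reads aloud and the actions a participant can name, so a VI participant can navigate the prototype without a screen reader. A minimal sketch of that structure; the screen names, spoken text, and actions are hypothetical, not our actual prototype content:

```python
from dataclasses import dataclass, field

@dataclass
class Screen:
    """One prototype screen, described verbally for moderated testing."""
    name: str
    spoken_text: str  # what the moderator reads aloud
    actions: dict = field(default_factory=dict)  # spoken action -> next screen

# Illustrative excerpt of an onboarding flow.
screens = {
    "welcome": Screen(
        "welcome",
        "Welcome to the journal. Two options: Sign up, or Log in.",
        {"sign up": "create_account", "log in": "login"},
    ),
    "create_account": Screen(
        "create_account",
        "Create your account. Fields: name, email, password. One button: Continue.",
        {"continue": "accessibility_prompt"},
    ),
    "accessibility_prompt": Screen(
        "accessibility_prompt",
        "Do you wish to turn on accessibility features? Options: Yes, No.",
        {"yes": "home", "no": "home"},
    ),
}

def walkthrough(screens, start, choices):
    """Replay a participant's spoken choices and return the screens visited."""
    path, current = [start], start
    for choice in choices:
        current = screens[current].actions[choice.lower()]
        path.append(current)
    return path

print(walkthrough(screens, "welcome", ["sign up", "continue", "yes"]))
# → ['welcome', 'create_account', 'accessibility_prompt', 'home']
```

Recording the flow this way also makes it easy to count the steps a participant took against the intended task flow.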

The usability test tasks were as follows:

TASK 1: ONBOARD AS A NEW USER
Scenario: You have downloaded a new journaling app on your phone. Show me how you would sign up and begin using the app. Please think aloud as you go through the process.
Post-task questions:
1. Overall, how would you rate your experience completing this task, with 1 being difficult and 5 being easy?
2. Why did you rate it that number? What aspects of the task were difficult or easy?
3. Are there aspects of this task that you would improve upon or change?

TASK 2: ADD A NEW JOURNAL ENTRY
Scenario: Imagine that you would like to add a new journal entry. Show me the steps you would take to do so. Again, please think aloud as you go through the process.
Post-task questions:
1. Overall, how would you rate your experience completing this task, with 1 being difficult and 5 being easy?
2. Why did you rate it that number? What aspects of the task were difficult or easy?
3. Are there aspects of this task that you would improve upon or change?

08 RESULTS FROM USABILITY TESTING

Task 1: ONBOARDING AS A NEW USER

  • Both participants rated their level of satisfaction as a 2 out of 5
  • Reasons: One participant mentioned feeling confused, while another described it as “straightforward but blunt.”
  • Both were dissatisfied with the use of the word “deputy” to describe the AI assistant, as it felt like someone was “dictating orders.”
  • The wording used to distinguish between visually impaired and non-visually impaired users was also confusing.

Task 2: ADDING A NEW JOURNAL ENTRY

  • Both participants rated their level of satisfaction as a 3 out of 5
  • Reasons: There was confusion due to the absence of a save/submit button after completing a journal entry. It wasn’t clear whether the app was auto-saving.
  • Participants again expressed dissatisfaction with the term “deputy” for the AI assistant.
  • There was additional confusion surrounding the location and sharing features.

Changes Made Based on Usability Testing Findings:

  • Replaced the term “Deputy” with “Assistant.”
  • Added a Save/Submit button for Task 2.
  • Made the tone of the content friendlier, changing “Visually impaired” and “20/20 Vision” to “Do you wish to turn on accessibility features?”

09 CONCLUSION & FUTURE WORK

Reflecting on this project, I believe we successfully achieved our primary goals. However, we faced unexpected challenges, particularly during usability testing, which required a late pivot. This limited our ability to fully validate the designs. Although our exploratory interviews and competitive analysis were executed efficiently, more detailed feedback on the designs could have helped us better align the interface with user needs. Moreover, additional research is needed to improve usability testing methodologies for people with disabilities.

Next steps:

  • Conduct a user research study involving a more diverse range of participants across various age groups and vision levels.
  • Perform usability testing with different users to gather more feedback, aiming to make the app user-friendly and accessible for people with varying levels of technical knowledge.

10 MY LEARNINGS

- Moving forward, I feel confident about conducting user research with people with visual impairments.

- Always allocate buffer time for potential roadblocks in the project timeline to ensure the quality of research is maintained, even when pivots are necessary.

- While creating accessible products is crucial, there is also a need to design products specifically for people with disabilities.