ELAI is a mobile application: a personal English-speaking coach that helps English language learners practice their speaking skills through a wide range of fun, realistic activities and receive fast, actionable AI feedback on fluency, vocabulary, and more.

Project overview

ELAI is a project in the early phase of ETS Foundry. It started in 2019, and the team spent over six months building the ELAI 1.0 prototype. The data we gathered from that process was used to inform our organization’s strategic planning. The project was paused after the first prototype. In August 2020, ETS asked our team to resume the development and design work on ELAI. I was the lead UX designer and researcher for the iteration of ELAI 1.0. The new version, which we called ELAI 2.0, was launched in December 2020.

Context

At the project kickoff, we learned that our company’s NLP (natural language processing) Lab houses AI capabilities that can process human speech and produce reports on spoken English. ELAI is the first “experiment” we developed to test whether these AI capabilities could be translated into a user-friendly learning product.

My Role

As UX lead, I led the UX research and UX design work to iterate ELAI 1.0 into ELAI 2.0 and launch it. I was in charge of design strategy, interaction design, visual design, and usability testing.

  • Design & Research Strategy

I led the UX research activities, including drafting research plans, composing research protocols, and strategically executing discovery research and usability testing. I analyzed the findings and provided actionable user-engagement insights to inform decision making.

  • From Prototype to Product

I led this project from design to launch, set up weekly sync meetings with engineers, and conducted design quality-assurance sessions with the engineering team. I also led team testing and an accessibility review session prior to launch.

Design Goal of ELAI 1.0

The design goal of ELAI 1.0 was to test whether the speech rater and other AI capabilities could be translated into actionable feedback in a user-friendly way. We also wanted to know: would users benefit from this feedback? Would it improve their English speaking skills effectively?

With that goal in mind, the focus of designing ELAI 1.0 was how to prioritize and select the appropriate feedback items for each speech. ELAI 1.0 succeeded here: the pilot study showed that the actionable feedback translated from the AI capabilities was very helpful to end users.

Design process of ELAI 1.0

ELAI 1.0 started with an idea: “What else can we do with this AI-powered, automated grading system?” Our Foundry team invested heavily in discovery research to identify the target user group and validate the problem statement.

Core loop of ELAI 1.0

Our team decided that the following AI-based learning loop should be the core of our MVP. It also serves as a showcase of the heart of the ELAI 1.0 prototype.

ELAI 1.0 Core Learning Loop

ELAI 1.0 prototype

User Feedback

ELAI is a successful “experiment.” In the third round of usability testing, we received a lot of positive feedback:

“I like the prompts, most of them are similar to TOEFL style. It helps me to build confidence for the TOEFL speaking test.”

“I almost use it every day when I was waiting for the school bus, it just takes 5 minutes to complete a practice.”

“The feedback is very helpful and very easy to understand, I often review it several times to check if I make similar mistakes again.”

“I like using it on my mobile phone so that I am able to use it anywhere. The transcript feature is also very helpful to check if I mispronounced words.”

At the same time, however, some users reported being confused in certain situations, and when I reviewed the research findings I also found problems we needed to solve. I analyzed the data and grouped the findings into three main categories, and the ELAI 2.0 redesign work began.

The Problems

1. Users are confused when they encounter unexpected situations.

The initial goal of ELAI 1.0 was to test the AI capabilities and apply them in foundational research. ELAI 1.0 was not built for launch, so some details and edge-case experiences were not considered thoroughly in the early design phase. Users complained that they were confused when they submitted a speech but didn’t receive feedback. Sometimes they didn’t know whether the recording had been submitted successfully, and they didn’t understand why feedback never appeared after they recorded.

2. Users make mistakes in the recording process.

I reviewed the back-end recording data and found unfinished speeches and blank recordings. We also found speeches that were extremely short, after which the same user submitted a new attempt at the same prompt. I assumed users might be having difficulty recording, so I asked the team to invite selected users for an additional round of interviews to validate my assumptions.

The engineering team also pointed out that too many low-quality and unfinished speeches would reduce the accuracy of the scoring engine. The team wanted a design solution that would help users avoid these “mistakes” and submit more high-quality speeches.

From a product design perspective, I also wanted to encourage users to practice more and submit more high-quality recordings, so that accurate, high-quality feedback would help them make progress.

3. Users ask for more learning content.

During testing, users gave a lot of positive feedback about the speech feedback content and the high-quality prompts. They also asked whether we would develop more content in the future, so they could keep practicing after the TOEFL test. Our team already had a plan to expand our learning content, so I raised this opportunity with the team to validate the need more deeply.

The Redesign Process

Review the data

The third round of usability testing of ELAI 1.0 collected a lot of data. The ELAI 1.0 prototype was tested with users from the TPO (TOEFL Practice Online) user pool, TAL Education Group, and Xiaozhan. We invited over 200 users from China and the US. In this pilot, we asked them to use the ELAI 1.0 prototype for a while and then sent them a survey for feedback.

I led the research team in synthesizing and sharing the research findings. There was a lot to share. I reorganized the problems into categories using these questions: “What challenges did users meet in different scenarios?” “What did users ask for?” “How does ELAI help them?” Then I listed the problems and opportunities with the team:

Define the design opportunities

After identifying the issues and generating suggestions for improvement, I discussed with my team which changes we could make to fix the identified problems. I generated numerous ideas during brainstorming sessions and then selected the most appropriate ones.

Then I invited the team to define the scope of the product redesign. One of the most effective ways to do that is to weigh the learner value gained from a potential improvement against the feasibility of that improvement.

We used this matrix to prioritize the design ideas I shared with the team. After this session, I sketched the prioritized ideas and led five rounds of usability testing to validate them with users. The feedback we received from the concept testing was valuable and helped me move forward to the high-fidelity wireframe designs.

The Solutions

01

Error handling and edge cases

To provide a more intuitive and friendly experience, I thoroughly considered different scenarios and edge cases. I also invited the engineering team to review all of the speeches from the previous user study to identify what was wrong with them. I designed various error-handling screens to avoid confusion when users hit edge situations.

02

Optimize the recording process and add a new “fox coins” system

Redesigning the recording process was a challenging task, harder in some ways than designing a recording flow from scratch. We had a tight timeline, and the release date was around the corner. I proposed several design ideas, discussed them with the engineering team, and selected the most appropriate one.

The key questions of the redesign were: How might we remind users, in an intuitive way, that their speech has to be longer than 30 seconds and shorter than 60 seconds? How might we encourage users to submit good-quality recordings and discard the ones they don’t want scored? How might we provide more tips and speaking ideas when they are stuck? Here are the design solutions:
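The 30–60 second rule can be expressed as a simple client-side check before a speech is submitted. This is a hypothetical sketch to illustrate the rule described above, not ELAI’s actual code; the function and message names are invented for the example.

```typescript
// Hypothetical pre-submission check for the 30–60 second recording rule.
// Returning a reason string lets the UI surface an actionable message
// instead of silently rejecting the recording.

const MIN_SECONDS = 30;
const MAX_SECONDS = 60;

type RecordingCheck =
  | { ok: true }
  | { ok: false; reason: string };

function checkRecording(durationSeconds: number): RecordingCheck {
  if (durationSeconds < MIN_SECONDS) {
    return {
      ok: false,
      reason: `Keep going! Speeches under ${MIN_SECONDS} seconds can't be scored reliably.`,
    };
  }
  if (durationSeconds > MAX_SECONDS) {
    return {
      ok: false,
      reason: `Time's up: recordings stop at ${MAX_SECONDS} seconds, so wrap up your answer.`,
    };
  }
  return { ok: true };
}
```

A check like this also gives the scoring engine cleaner input, since too-short or accidental recordings never reach it.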

03

Add new learning content: “My Journey Abroad”

My Journey Abroad is a new direction we wanted to try. Based on users’ needs and our team’s requirements, I collaborated with the instructional designer to develop new learning content that helps international students improve their speaking skills and learn about American culture. My Journey Abroad provides an immersive, role-play learning experience: the user plays a role and completes a set of tasks.

Implementation

Implement the changes and accessibility review

ELAI 2.0 was successfully launched in October 2020. We didn’t advertise it, but some users still found it and felt it was valuable (see Users talk about ELAI). Before we launched ELAI, we tested it in several rounds with stakeholders and collaborated with the accessibility department to conduct an accessibility review.

The Impact

The ELAI prototype provides guidance for the NLP Lab’s research roadmap. Our prototype was picked up by several business units and external partners for future product development and academic research. We continue to work with the business unit that adopted our solution and provide support from time to time. It is nice to see our budding prototype continue to grow there.

In October 2020, this project was presented at the ASU GSV Summit. After the presentation, we received many compliments and emails expressing interest in collaborating with us.

The final design