Inspired Cognition’s ExplainaBoard Platform
A platform that enables technology leaders to measure natural language processing (NLP) model performance.

Role
Project Lead, User Researcher, and Client Relationship Manager.
Timeline
3 months.
Team
4 designers and researchers, 1 mentor.
Stakeholders
Co-founder/CEO
In collaboration with Graham Neubig, Inspired Cognition’s CEO and co-founder, we investigated the ExplainaBoard platform’s issues with user churn.
At the time, Inspired Cognition’s team consisted of ~5 members, all focused on improving ExplainaBoard’s technical capabilities; we focused instead on building out design and research infrastructure that aligned with their tech stack.
Update: as of October 2023, Inspired Cognition has closed, and there have been many rapid changes in the AI/ML space since our team worked with Inspired Cognition.
Problem
How might we pivot ExplainaBoard to support enterprise customers and users instead of academics?
Outcome
75% reduction in user churn during onboarding (measured with NASA-TLX).
30% increase in user satisfaction for the dashboard experience (measured with NASA-TLX).
Spearheaded the development of 1 new feature and 30 significant changes to the product.
Concurrent design and research to understand a broad problem space.
Client Relationship Management
Closing the gap between our team’s technical knowledge and the client’s through collaborative meetings and whiteboarding sessions.
Design
Rapidly iterating through concepts and ideas that would bring the most value to ExplainaBoard with the least amount of technical effort.
3 Rounds of User Research
Measuring our impact before and after with usability testing (and NASA-TLX), as well as co-design sessions.
Technology
Crafting meaningful changes by understanding the tech stack (React) and identifying low-hanging fruit (Ant Design System).

Reducing user churn with a new onboarding flow.
When my team started the project, we found from ExplainaBoard’s usage analytics that first-time users rapidly dropped off after submitting their model data for the first time.
We resolved their difficulty in onboarding in 3 ways:
Creating a “Projects” feature to allow for the storage and comparison of multiple systems in a single dashboard.
Redesigning the “Create a System” form to enable faster configuration and provide meaningful guidance to first-time users.
Creating a “Project Overview” visualization to enable rapid comprehension.
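The drop-off we saw in the analytics can be framed as an onboarding funnel. A minimal sketch of computing step-by-step drop-off from raw analytics events (the event names and data here are hypothetical, not ExplainaBoard’s actual instrumentation):

```python
from collections import defaultdict

# Hypothetical onboarding funnel steps, in order.
FUNNEL = ["signed_up", "submitted_system", "viewed_analysis", "returned_within_7d"]

def funnel_counts(events):
    """events: iterable of (user_id, event_name) tuples.
    Returns the number of users surviving each funnel step."""
    seen = defaultdict(set)  # event_name -> set of user ids
    for user, name in events:
        seen[name].add(user)
    # A user counts for a step only if they also completed all earlier steps.
    counts, survivors = [], None
    for step in FUNNEL:
        survivors = seen[step] if survivors is None else survivors & seen[step]
        counts.append(len(survivors))
    return counts

def drop_off(counts):
    """Fraction of users lost between consecutive funnel steps."""
    return [1 - b / a if a else 0.0 for a, b in zip(counts, counts[1:])]

# Toy example: three users, only one reaches the analysis view.
events = [
    ("u1", "signed_up"), ("u1", "submitted_system"),
    ("u2", "signed_up"), ("u2", "submitted_system"), ("u2", "viewed_analysis"),
    ("u3", "signed_up"),
]
print(funnel_counts(events))  # [3, 2, 1, 0]
print(drop_off(funnel_counts(events)))
```

In this toy data, the largest losses occur after submission, which mirrors the pattern described above: users submit a system once and never return.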

Increasing user satisfaction with intuitive data visualizations.
From our initial research, our team identified 3 significant changes to the dashboard that would decrease users’ cognitive load and increase the actionability of ExplainaBoard’s insights:
Crafting a pattern library for ExplainaBoard’s visualizations (pictured below, before and after)
Indicating insights from each visualization.
Clearly defining results with an easy-to-skim table.

Before: users communicated difficulty (1) parsing through results, (2) identifying the axis, and (3) viewing trends for a singular system.

After: users appreciated the ability to (1) quickly find insights and (2) view systems directly in comparison with one another.
Establishing user research operations and participant pools for future research.
Without a dedicated designer or researcher, the founding team of Inspired Cognition had previously recruited participants from their own networks.
From my experience running studies in academic research labs, I created:
A research plan and estimated budget for compensating participants
A participant recruitment, consent, and retention plan
Processes for thematic coding of qualitative findings and for quantifying workload with post-task NASA-TLX scores
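For the quantitative side, the standard "raw" NASA-TLX score is simply the unweighted mean of six subscale ratings (each 0–100); the full instrument also supports a weighted variant using 15 pairwise comparisons, which this sketch omits. The ratings below are hypothetical, not our study data:

```python
# The six NASA-TLX workload subscales.
SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings: dict) -> float:
    """Return the raw (unweighted) NASA-TLX score, 0-100.
    Higher scores indicate higher perceived workload."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Example: one participant's post-task ratings, before and after a redesign.
before = raw_tlx({
    "mental_demand": 70, "physical_demand": 10, "temporal_demand": 55,
    "performance": 60, "effort": 65, "frustration": 75,
})
after = raw_tlx({
    "mental_demand": 40, "physical_demand": 10, "temporal_demand": 30,
    "performance": 25, "effort": 35, "frustration": 25,
})
print(round(before, 2), round(after, 2))  # 55.83 27.5
```

Comparing per-participant scores before and after a design change gives a simple, repeatable workload metric to track across study rounds.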
| | Expert Developers | Product Managers | Data Scientists | Junior and Mid-Level Developers | Professors and Researchers |
|---|---|---|---|---|---|
| Application | Industry | Industry | Industry | Industry and/or Academia | Academia |
| Model Expertise | High | Low to Medium | Low to Medium | Medium | Low to High (high variance) |
| Needs cross-functional communication | Yes | Yes | Depends | Depends | No |
| Needs model monitoring | Yes | Yes | No | Yes | Depends |
| Needs actionable insights | Yes | Depends | Depends | No | Depends |
| Needs to address bias in the dataset | No | Yes | Yes | No | Yes |
| Needs to troubleshoot granular model details | No | No | Yes | Yes | No |
We have existing models: supervised models that create easy-to-evaluate forecasting or prediction systems when we have actual data to evaluate. But when it comes to newer things, such as recommendation, it is difficult to evaluate without human feedback… NLP is more difficult, because there are more human decisions, and the text is more fuzzy. Tabular data is more regression, prediction, more concrete. With NLP there are more models and resources you can get; I can always get more data, more algorithms, but with tabular data we get weeks of sales data and that’s it. Someone else’s data doesn’t help me with my data. ExplainaBoard can provide metrics on the contribution of each feature to the prediction.
— Machine Learning Product Manager
Reflection: improving hand-off, quantitative research, and onboarding pilot
I enjoyed working with our client (Graham Neubig) and bringing our unique expertise to an ambiguous problem. I was also able to see every single team member grow throughout the experience: gaining confidence in client conversations, pitching different approaches, and taking ownership.
If I could go back and do things differently, I would have liked to:
Develop the reporting feature (cut from mid-to-high fidelity wireframes) with additional studies.
Conduct more quantitative user research comparing the old platform versus the proposed design.
Dive further into the onboarding experience with current users (testing documentation and other forms of guidance).
