
LAS Interface Design


Spring 2023


Interface Design
User Experience
Artificial Intelligence


An interface prototype that leverages AI to help translators interpret, contextualize, and transcribe foreign intelligence. This project was completed in partnership with the Laboratory for Analytic Sciences (LAS), a collaboration between the intelligence community and NC State University that develops innovative technology and tradecraft to solve mission-relevant problems.




**Important Note:** To present realistic data in the prototypes, the transcripts, names, and events are adapted from the Nixon tapes. The scenario dictated that these transcripts came from foreign intelligence in another language and were being translated into English.







Expanded User Persona
As-Is User Journey Map
To-Be User Journey Map

The Challenge



How might the design of an interface use the affordances of ML to enable voice language analysts to quickly produce reliable and robust intelligence that accurately conveys content, intent, and context?

This project followed a human-centered design process to create an interface experience that would support a specific persona assigned to each team. My team, made up of Ned Babbott and myself, was given the persona of Cameron, a novice translator.

To support our research process, we were given access to experts from the LAS team for interviews and feedback sessions. Their insights were invaluable in generating ideas for features and shaping the final design.

Access to experts also required us to use effective communication strategies so as not to overwhelm or confuse our collaborators. It also meant we needed to be able to justify features that ran up against the LAS team's preconceived ideas about AI and translation tools.







The Design



Design of the interface began with three concepts, each aimed at resolving a different pain point: correcting error-prone speech-to-text output, visualizing context searches, and mapping storylines.

Those concepts were narrowed and refined into two concepts presented three weeks later: 1) improving the translation environment for context and learning, and 2) adding visual depictions of intelligence information for better context and operator comments. I led concept 1; my partner led concept 2.
Led by Kevin Ward
Led by Ned Babbott






Final Prototype



The final sketches and user journey were combined to create a walkthrough video.


Credit to Ned Babbott for animation and voiceover work.