2022
User Experience
User Research
Interface
Branding
Illustration
Product Design
Google Facts is an extension of the Google Suite that helps people find and read credible information. I designed this project during my senior year at Cornish College of the Arts.
Launch video
Mobile Prototype
Desktop and Tablet Prototype
This is my thesis project for my BFA in design at Cornish College of the Arts. I was asked to research a problem and design a solution to exhibit at Cornish's 2022 BFA show. We had just under 15 weeks to deliver our designs.
I chose to research and develop a product around misinformation because it threatens public health. My interest grew out of my own experience with misinformation about COVID-19 vaccines: false and misleading information influenced the public's adoption of the vaccine, and it was clear that people needed help finding credible information.
A lot of my research centered around the concept of media literacy, a time-tested method of practical skepticism. As an extension of general literacy, media literacy goes beyond the ability to interpret text, images, and other mediums of communication; it looks at the 'why' and 'how' of the media we interact with. A media-literate user can look at a piece of content and investigate its creator, its agenda, and the methods it uses to accomplish its goals.
Today, we have more access than ever to tools for validating and certifying the information we read. Why, then, is misinformation still so pervasive?
My research suggested that people don't have enough media literacy to effectively use their resources to combat false and misleading information.
In most cases, the task of verifying information is left to fact-checking platforms. I spent time researching a variety of these tools and identified trends that point to some unarticulated needs.
Introduce fact-checking software into the digital spaces users already visit. Fact-checking is part of formal research, which many users don't do. Users who aren't fact-checking consistently might incorporate it into their research behavior if the tools were integrated into the places they already go to research.
Deliver verdicts using AI and NLP content aggregation. By creating a system that doesn't rely on humans to fact-check, queries could return more accurate results. The challenge is creating an unbiased system when its goal is to deliver decisions on the validity of a pressing media claim (a sketch of this approach follows this list).
Help users build conclusions rather than deliver them. Providing skeptical users with credible facts and resources could lead them to factual conclusions without the pressure of tying a political identity, positively or negatively, to a given verdict.
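To make the second trend concrete, here is a minimal sketch of how an automated verdict system might aggregate machine-read sources, assuming an upstream NLP pipeline that extracts a stance and a confidence score per source. Every type, name, and threshold below is invented for illustration; no real fact-checking platform is being described, and the bias problem lives precisely in choices like these weights.

```typescript
// One source's machine-extracted position on a claim. This shape is a
// hypothetical assumption, not the output of any real NLP library.
interface SourceStance {
  outlet: string;          // e.g. "example-news.com"
  supportsClaim: boolean;  // does this source affirm the claim?
  confidence: number;      // model confidence, 0 to 1
}

// Naive aggregation: average the signed, confidence-weighted stances and
// bucket the result into a verdict. The thresholds are arbitrary, which is
// exactly where bias can creep into a system like this.
function aggregateVerdict(stances: SourceStance[]): string {
  if (stances.length === 0) return "unverified";
  const score =
    stances.reduce(
      (sum, s) => sum + (s.supportsClaim ? 1 : -1) * s.confidence,
      0
    ) / stances.length;
  if (score > 0.25) return "likely true";
  if (score < -0.25) return "likely false";
  return "disputed";
}
```

Notably, my final design rejects this verdict-issuing approach in favor of the third trend: surfacing the underlying signals and letting users build their own conclusions.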
My users are politically moderate, generally informed, and seek out answers independently. They engage with conventional news, as well as other mediums like social media. They value brand recognition, data, and unbiased reporting. These users are most likely to engage in research concerning pressing or controversial news, but need help finding and evaluating credible information.
After screening my participants to make sure they fit my user profile, I began interviewing them. I wanted to answer two fundamental questions:
How do users engage with media literacy principles?
How do users interact with media?
My interviews included a range of questions concerning the participants’ experiences with misinformation and their opinions about it, as well as questions about their media literacy and political engagement. Participants also completed three exercises:
Sorting a list of media and content-source characteristics.
Reading a sample article for five minutes, then summarizing it, reporting assumptions about its credibility, and researching that credibility.
Reporting their self-perceived political identity, taking a political compass test, and comparing the results.
I learned that most users see misinformation as a pressing issue, but have little experience with media literacy. From my interviews, I was able to determine users’ habits and pain points concerning media consumption and research.
Most users can identify blatant misinformation but are unable to describe it with nuance.
Users trust search algorithms to sort their information for relevance and credibility, and do not actively analyze search results for validity.
Users are comfortable accepting the first search result as long as it is succinct and easy to understand.
Users are willing to interact with suspicious content on social media and rely on bias to determine its credibility and validity.
Some users consider facts to be politically charged, and show distrust of the scientific establishment.
Users say they don’t like being told what to think, but read opinionated and biased news nonetheless.
Users value author expertise over popularity, but primarily engage with popular headlines.
The brand and reputation of media outlets are the leading determinants of trust.
My interviewees all followed the same pattern while researching: filtering results, reading, and then analyzing. This pattern was the key to determining the architecture of my product.
Users are most likely to conduct light research, using mental models of quality content to guide them. While users are aware of the benefit of investigating their content’s language, sources, and authorship, doing so takes more time and effort than they are willing to spend.
Users value content that uses research and data. However, they tend to skim it instead of reading deeply because it can be dense, time-consuming, and outside their expertise. They end up making assumptions with incomplete comprehension.
Users struggle to remember more than a couple of pieces of unique information at a time. After a certain point, reading through multiple search results doesn’t correlate with better comprehension. Often, the first piece of content they interact with is the one they remember most.
Overall, the success of the design should be measured by the reduction in time and effort users need to find and read a diverse set of content. Additionally, success should be measured by improved ease of access to content metadata, like authorship and funding.
Organizing and analyzing my research surfaced a number of design implications, which characterized how a successful product would function.
A successful design will present its users with the information necessary to understand why and how a piece of content exists. Often, users are affected by misinformation because malicious content is designed to limit investigation. Giving users access to transparent metadata promotes media literacy awareness.
A successful design will expedite users’ processing time and expand how much information they can hold during research. Users should be able to pivot between high- and low-level comprehension with ease while limiting distraction and overload.
A successful design will assist users in sorting information. Users should have a baseline model of reference for standards of credibility, one that incentivizes practical media literacy.
My wireframes explored different ways to organize the stages of research (Filtering, Reading, and Analyzing).
I continued iterating in low fidelity, and landed on a couple of designs that worked well enough to test.
After testing, I realized that the hierarchy of my designs didn't match users' research behavior. Even though all the stages of research were present in the design, there wasn't a logical flow of information.
I revised the design to match that behavior, which resulted in the simplest layout possible. The visual flow carries straight across a series of columns, following the research flow from high-level to low-level information.
My solution needed to live as close to users’ research habits as possible. Google is the standard when it comes to delivering information, and its users trust it to provide relevant content to their queries. Why couldn’t Google also deliver a solution that filtered credible information and encouraged media literacy?
Media literacy is nuanced, and users need help filtering for credibility, so the solution ranks content. The ranking values quantity of sources, diversity of sources, multiple types of sources (primary and secondary), neutral language, and expertise in authorship.
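As a rough illustration, the ranking could be a weighted blend of those signals. The sketch below is a minimal assumption of how such a heuristic might look in TypeScript; the signal names, weights, and normalization are invented for this example and are not the product's actual model.

```typescript
// Hypothetical per-result signals, one per criterion named above.
interface ContentSignals {
  sourceCount: number;        // how many sources the piece cites
  sourceDiversity: number;    // 0 to 1, spread across independent outlets
  hasPrimarySources: boolean; // cites primary as well as secondary sources
  languageNeutrality: number; // 0 to 1, from a loaded-language check
  authorExpertise: number;    // 0 to 1, author credentials in the topic area
}

// Weighted sum over the signals; the per-signal breakdown is returned
// alongside the score so it can be displayed to users, not hidden.
function rankContent(s: ContentSignals) {
  const breakdown = {
    sources: Math.min(s.sourceCount / 10, 1) * 0.25, // caps at 10 sources
    diversity: s.sourceDiversity * 0.2,
    primary: (s.hasPrimarySources ? 1 : 0) * 0.15,
    neutrality: s.languageNeutrality * 0.2,
    expertise: s.authorExpertise * 0.2,
  };
  const score = Object.values(breakdown).reduce((a, b) => a + b, 0);
  return { score, breakdown };
}
```

Keeping the breakdown rather than just the final score is what makes the transparent metadata display described below possible.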
Users need help with comprehension so they can hold onto more information at once. The product generates a series of bite-sized summaries to speed up high-level comprehension. Then, users can dive into low-level understanding by toggling each summary to view the original portion of text.
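One way to picture that toggle: each passage pairs a generated summary with the original text it came from, and a flag tracks which one the reader sees. The shape and names below are hypothetical, sketched only to show the interaction model.

```typescript
// A passage of an article, pairing a bite-sized summary with its source text.
interface SummarizedPassage {
  summary: string;   // generated, high-level version
  original: string;  // the portion of source text it was generated from
  expanded: boolean; // whether the user has toggled to the original
}

// Toggling swaps what the user reads without losing their place, letting
// them pivot between high- and low-level comprehension one passage at a time.
function toggle(p: SummarizedPassage): SummarizedPassage {
  return { ...p, expanded: !p.expanded };
}

function textToRender(p: SummarizedPassage): string {
  return p.expanded ? p.original : p.summary;
}
```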
Users need a better way to learn about their content's quality and goals. The data used to rank each result is displayed, giving users transparent access to information about the content they read.
Throughout this process, I constantly asked myself, "What is the effect of being 'wrong' or 'right'?" But perhaps a more important line of inquiry questions our intent. Instead of asking ourselves to sort our knowledge into categories loaded with morality and ethics, we might ask different questions: "Does the information I spread contribute to a positive impact for others?" or "Does this information have the ability to affect others in a way that I might not experience?"
When we're held accountable to be aware of ourselves, our information, and others, we have the potential to do good.