Money Adventure Mobile App Testing

My team and I tested the Federal Reserve’s first mobile app geared toward kids in grades K-5. Through moderated, in-person usability testing, we uncovered key insights and generated recommendations for improving the prototype.


The U.S. Currency Education Program launched Money Adventure, its first mobile app geared toward educating kids about U.S. currency. The app, which includes augmented reality (AR) features on iOS, focuses on teaching kids about the different denominations of money. It also helps educate students about U.S. history.

The problem was that it wasn’t clear how easily kids would be able to use the app’s features or understand the content about U.S. history.


Conduct usability testing and comprehension testing activities with kids.

Our goals were to:

  • Discover how kids (grades 3-5) use the core features of the app including the augmented reality portion
  • Uncover any major usability issues
  • Identify improvements and generate recommendations for the prototype app
  • Assess how well the content is written for the appropriate age groups


Representations of the app menu screen on an iPad and iPhone
  • Kids performed better than expected in regard to the augmented reality portion of the app. Only one student struggled with the feature.
  • More visual cues were needed to switch between the different currency notes in the AR feature.
  • The weight of the iPad and ergonomics of it affected the way students used the app. Some felt more comfortable holding the iPad in their laps, which limited how well the augmented reality feature worked. We learned kids have limits for how long they want to hold the iPad.
  • U.S. history terms were often difficult for students to understand. Differing state curriculums meant some students were exposed to terms others weren’t. We recommended the app take advantage of this by using it to educate students about the U.S. history terms and concepts depicted on the back of U.S. currency.

Lessons learned

  • Logistics surrounding testing with children can be difficult and it’s important to follow all privacy guidelines.
  • Testing with kids can be tough — they may lose attention or not fully understand the testing activities. Plan for things to go wrong, and that’s OK!

Download on iOS

Download on Google Play


What UX research methods you should use and why

One of the hardest parts of UX research is being confident in research planning. You can easily get good at front-line skills like interviewing and prototype testing, but planning for different projects can be hard. No two UX projects are alike, so don’t expect the same process over and over, as in “we did a survey once so we must do surveys each time.”

Being able to articulate the what and why behind intentional user research will help you communicate to your organization the most efficient ways to get good information from users. It will also help squash the whole “Let’s do a survey!” suggestion from a product owner or developer when you know that won’t help.

Below are five example scenarios with two UX research activities that could directly apply to the situation. In a real project, you may have to do some stakeholder interviews and learn more about the problem you are trying to solve before making any concrete recommendations concerning research methods.

1. The University of Washington is planning to redesign the U of W library website and wants to ensure that students and faculty can get the best use out of all of the services offered.

Usability testing — Benchmark usability testing is a great way to kick off this project. In order to fully understand how to approach a redesign, we would first need to understand the current limitations and challenges of the site as it is now. This would involve recruiting users, coming up with common tasks to test, and analyzing the results.
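To make the benchmark concrete, here is a minimal sketch of how per-task success rates might be tallied from session notes. The participants, tasks, and outcomes below are made up for illustration, not real study data.

```python
from collections import defaultdict

# Hypothetical benchmark results: (participant, task, succeeded)
results = [
    ("P1", "renew a book", True),
    ("P1", "book a study room", False),
    ("P2", "renew a book", True),
    ("P2", "book a study room", True),
    ("P3", "renew a book", False),
    ("P3", "book a study room", False),
]

# Group outcomes by task so each task gets its own success rate.
by_task = defaultdict(list)
for _, task, succeeded in results:
    by_task[task].append(succeeded)

# Per-task success rates form the baseline a redesign is measured against.
for task, outcomes in by_task.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{task}: {rate:.0%} success")
```

After the redesign, the same tasks would be run again and the before/after rates compared.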

Analytics review — A secondary, more quantitative research method that involves analyzing web traffic on the current site. Analyzing traffic data can help answer questions like:

  • How are users getting to your site?
  • What are users doing on your site?
  • How often are users interacting with a particular feature/service?
  • How often do users abandon their task before completing it?

An analytics review is a great way to discover a baseline for a site’s user engagement and other key indicators.
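As a sketch of what that baseline might look like in practice, the key indicators reduce to a few simple aggregations. The page-view log below is made up for illustration; in a real review this data would come from an analytics tool's export.

```python
from collections import Counter

# Hypothetical page-view log entries: (session_id, page, completed_task)
log = [
    ("s1", "/home", False),
    ("s1", "/search", False),
    ("s1", "/reserve-room", True),
    ("s2", "/home", False),
    ("s2", "/search", False),
    ("s3", "/home", False),
]

# Baseline indicators an analytics review might report:
page_views = Counter(page for _, page, _ in log)        # what users do on the site
sessions = {sid for sid, _, _ in log}                   # distinct visits
completed = {sid for sid, _, done in log if done}       # visits that finished a task
abandonment_rate = 1 - len(completed) / len(sessions)   # visits that gave up

print(page_views.most_common(3))
print(f"Task abandonment rate: {abandonment_rate:.0%}")
```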

2. The photo printing company Shutterfly is looking to understand more about how, when and why people create, print, display, and share their photos … they are interested in directions for evolving their business to increase their customer base and find new potential revenue streams.

Survey — A great way to understand customer attitudes and behaviors is a survey. A well-constructed survey can incorporate closed and open questions and give organizations baseline data for what, how, and when customers do something. In this case, Shutterfly is interested in customers’ habits around creating, printing, and displaying photos. Particular insights from a survey may help refine additional research questions, or can be used to test an organization’s assumptions about users. The survey results may also help the business generate new concepts or ideas for products.

Focus groups — Surveys aren’t the best method for testing “hypothetical” scenarios, such as when Shutterfly wants to gauge interest in a new product idea. Instead, focus groups can be better suited to testing new concepts and ideas with users. Selecting this research method would depend on what is learned in a survey, and on whether Shutterfly wants feedback on a new idea. For smaller testing of services and features, usability testing may be more appropriate. However, a focus group can be helpful for generating ideas during early product development.

3. Apparel company J.Crew is growing their ‘omnichannel’ retail functionality to better support consumers’ changing shopping expectations. Now they are trying to prioritize between two particular service features: Buy online, pick up in store — the credit card is charged at the time of online order OR hold online, pick up in store — the credit card is charged in the store when the customer picks up the item.

Focus group — Since J.Crew is considering two options, a focus group with representative customers may prove the most valuable in helping decide which concept to go with. Focus groups, while they require a highly skilled moderator, help facilitate an organized discussion about how customers might use a product or service. Customers in the focus group may also be able to speak to their experiences with other companies’ online ordering and in-person pickup.

Literature review — J.Crew isn’t the first company to offer some sort of online shopping/store pickup experience. Likely, there is research on this buying habit and a literature review may be helpful to gain particular insights that would help J.Crew to make a decision. In a literature review, a researcher would search and identify previous academic or industry research conducted within the context of the business scenario.

*While research methods may help to pick between the two options for online ordering, implementing a solution requires special attention: simple things like a poorly designed ordering/payment service could cause user frustration and abandonment. Low-fidelity, formative usability testing is essential during the actual design and implementation of either decision.

4. The New York City subway system is going to overhaul their payment and ticket system for greater efficiency and customer satisfaction. They are looking for a recommendation on what the revised experience should be.

Field studies — To discover how customers currently interact with and use payment systems within the subway system, a field study would be helpful for researchers and designers working on this project. In a field study, you can observe customers in their natural setting and, with consent, ask questions about the payment systems. A field study isn’t a means to test new ideas, but rather a way to gain insights into customer behaviors and attitudes, environmental settings, and particular pain points in the payment systems/services.

Low-fidelity prototype testing — Simple low-fidelity wireframes or even sketches can be tested with users. This would be helpful during the ideation stage of the project, where we could test new concepts for both the payment system and service offered. This is a low cost method of testing early ideas in a project, and can be done after initial foundational research is conducted through field studies, observations and interviews.

5. Grocery delivery service Instacart is trying to expand, but they aren’t getting either the account signups they expected or the number of orders from those signups needed to support their business costs. Instacart wants to know why they aren’t getting more customers and orders.

Usability testing — It’s unclear if the issues with customer signups are due to usability problems with the site, so I’d recommend testing the current site to see how well it performs with users attempting common tasks. If there are usability issues, these can be addressed, and the results analyzed to see if the fixes improve signups and orders. If the site performs well during usability testing, there may be larger user experience issues that would require more intensive, exploratory research like user interviews.

Analytics review — Looking at the website traffic and metrics will help to identify what pages users are leaving the site from. It could also be helpful to understand how users are coming to the site, which could identify how well any promotional or marketing campaigns are working.


ReminderX Mobile App

I was given the opportunity to research and design improvements to an organization app focused on reminders. Using UX research methods and design iteration, I helped rethink the direction of the app.

The Problem

The ReminderX mobile app was in need of a re-think. The clients wanted to find out what direction the app should take, and ways it can be improved. My role was to help build out a road map for the app based on research. From there, I was tasked with re-designing the app.


Conversations were held with four separate participants. User interviews helped determine target users and their behaviors, patterns, and goals.

Research areas explored:

  • Identifying digital tool (reminder apps, calendars, etc.) usage by participants
  • Understanding behaviors of participants in relation to creating lists and setting reminders (note-taking, digital apps, etc.)
  • Understanding challenges or limitations of current processes

Additionally, participants offered feedback on the ReminderX app, which helped identify future enhancements and improvements to increase the overall user experience and adoption.


Analysis of the research helped identify findings and recommendations for ReminderX. Based on research, a user persona was developed to help the design process. Additionally, I created design tenets, or guiding principles for the app.

  1. Keep onboarding simple – many users are currently using Google apps that offer lists or event reminders, which don’t require additional logins or accounts. Where possible, leverage other apps’ logins (Google or Facebook) for ReminderX. In general, keep the signup process simple.
  2. Make information accessible – users expect their information to be stored and saved on the cloud where it’s easily accessed from device to device.
  3. Allow collaboration to encourage adoption – in the professional world, teams need to share information effectively, and this includes tasks and reminders for team projects or related disciplines. The app should offer sharing or collaboration features.
  4. Keep to-do lists flexible – users have a variety of needs when it comes to list types, from short-term tasks or reminders, like grocery lists, to longer-term lists of personal or work-related goals. Additionally, users may just want to create a simple note to jot down ideas. Categories or labels may be important for users to organize lists.
  5. Think post-it notes – emulate the positive experience people feel while checking off an item on their handwritten lists.
  6. Allow users to declutter lists – people often make many notes and lists throughout the week, and allowing them to easily organize them is paramount. Archiving of old information should be considered.
  7. Consider offline use – many users are creating lists for shopping. Lists should be cached or locally saved in the app so in the event of poor reception, lists still load in a store.

Design and interaction

Using the research findings as a foundation, I started to sketch new ideas about the app’s workflows and task flows. I iterated these sketches and concepts into digital wireframes. From there, I passed the designs along to the development team to build.

Lessons Learned

Sketching can be a really effective method for iterating on ideas. It allowed me not to worry too much about how the design actually looked. Instead, sketching kept me focused on the workflows and the larger picture.


WMATA Customer Payment Study

Following a user-centered design approach, I conducted research into the D.C. Metro’s customer experience. Through interviews and a competitive analysis, I gained key insights into user behaviors and attitudes, which led to key recommendations on how to improve the payment process.


WMATA (or Metro) is the transit agency that manages and operates metro rail service, bus service, and paratransit service in the Washington, DC metro area. The bus service and metro service collectively move nearly 1 million people a day throughout the District of Columbia, Maryland, and Virginia. Since 1999, the transit agency has offered a contactless, stored-value smart card for bus and metro payment, similar to the size of a credit card.

Despite some incremental improvements, the system hasn’t seen a major overhaul in nearly two decades. As this was a graduate school assignment, I envisioned WMATA had asked me to consult on and research what improvements it should adopt for its payment system.


My research was guided by three main goals:

  • Identify customer behaviors and usage patterns in relation to the SmarTrip payment system
  • Evaluate how well the current payment system is matching user needs and goals
  • Discover new ideas and customer insights that will help WMATA improve the user experience of the payment system as a whole

Main research methods were:

  • Conducting user interviews with WMATA customers
  • Performing a competitive analysis of the payment systems of Boston’s, Chicago’s, and San Francisco’s transit agencies


In a thematic analysis, I coded the interview data through an affinity diagramming activity, grouping key insights from the user interviews.
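The coding step can be thought of as tagging each interview note with a theme and then ranking themes by how many participants they came from. Below is a minimal sketch of that tallying; the notes and theme codes are invented for illustration and are not the study's actual data.

```python
from collections import defaultdict

# Illustrative coded insights: (participant, theme code, note)
coded_notes = [
    ("P1", "payment-flexibility", "Wants to pay with phone"),
    ("P2", "payment-flexibility", "Wishes card worked for family"),
    ("P2", "wayfinding", "Signage at stations is confusing"),
    ("P3", "payment-flexibility", "Online reload takes too long"),
    ("P3", "wayfinding", "Unsure which beep means what"),
]

# Group notes under their theme, as an affinity diagram would.
themes = defaultdict(list)
for participant, theme, note in coded_notes:
    themes[theme].append((participant, note))

# Rank themes by how many distinct participants mentioned them,
# a common way to prioritize findings after coding.
ranked = sorted(themes, key=lambda t: -len({p for p, _ in themes[t]}))
for theme in ranked:
    print(theme, "-", len({p for p, _ in themes[theme]}), "participants")
```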

Through our user research study, we learned that WMATA’s users were often very satisfied with the metro system overall. However, they felt there were several areas for improvement. Users want more flexible payment options: the ability to pay with their phones, to manage their accounts more easily, and to use one SmarTrip card for more than one person, as well as clearer instructions for tourists and new residents. There are also a few things that even lifelong riders of metro did not understand — what the different beeps mean when boarding a bus, or that you can reload your SmarTrip card at retail stores across the D.C. area.

While the D.C. metro was one of the first systems to offer a card to pay for service, other transit systems have caught up with and surpassed D.C.’s technology and offerings. Chicago’s transit agency offers a more flexible payment system: it allows riders to pay with a credit card directly at faregates, with Apple Pay or Google Pay, or with paper tickets and cards like D.C.’s. WMATA has the potential to create a more seamless and efficient user experience across its payment service.

Main takeaways
  • Overall, users mentioned how pleased and satisfied they were with the metro and bus system, as well as the SmarTrip card to pay for service.
    • “It’s been positive — it’s useful, the card has never broken and I’ve never had a card that didn’t work” – participant 2
  • There are a number of common things users are confused about regarding the payment features and functionality
    • Users don’t understand the bus payment beeps when boarding and were unaware of the option to reload SmarTrip cards at retail stores and on most buses
  • The time it takes for value to become active on a card after it’s added online frustrates users; they don’t understand why it differs from adding value at stations.
  • Chicago, Boston, and San Francisco’s transit systems are generally more flexible, giving users more options to pay for fares as well as mobile apps to help manage user accounts.
Top recommendations
  • Seek out technology upgrades that emphasize improved user experiences. Investigate mobile app payment, mobile apps in general, as well as improving the experience going in or out of the faregates.
  • Improve information and instructions at metro stations to help orient users to the SmarTrip payment program
  • Investigate the feasibility of allowing multiple riders per card, which would help tourists reduce the upfront cost to ride the system
  • Consider running advertising campaigns focused on common misunderstandings: what the beeps mean when boarding a bus, the option to add value on a bus, and the option to add value to SmarTrip cards at retail stores across the D.C. area
  • Evaluate station kiosks by conducting usability sessions or direct field observations
  • Discover how well the online account system works for users by conducting usability testing

Lessons learned

  • It can be helpful to set clear expectations with stakeholders. Exploratory research like this often brings back only high-level themes and additional research questions or opportunities for deeper dives.
  • If there was more time, conducting a field study at stations and on buses may have been an efficient way to gather customer insights.


How to convince your organization to invest in user research

User research is paramount to the success of an organization’s digital products. Building products without having done the background and foundation research about user needs, goals, and challenges, is akin to building a house without a blueprint. Research helps inform design decisions and gives decision makers confidence that their products will work for users.

While many user experience (UX) issues can be identified by an expert design review, or a heuristic analysis, often deeper issues with a system’s workflow, content, usability, and other factors are uncovered by involving real users.

Basics: what is user research?

User research is a process of understanding your users through a variety of activities that help to inform the design of a solution that matches user goals and needs. User research helps to support the design and development of systems that are successful — systems that are intuitive, usable, and even delightful. Research activities are always informed by research questions — what do we want to learn and why? 

Here are a few examples of user research activities:

User interviews

One-on-one interviews with actual users (or new users) of a product that uncover qualitative information, high-level themes, and direct user insights.

Usability testing

Tests current products (or redesigned ones) against a set of common tasks with typical users. Each task is measured for success or failure, which helps uncover usability issues and possible user frustrations.

Card sorting

An activity that asks real users to group items together in categories; helps to inform a site or app’s information architecture (navigation, taxonomies, labeling).
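One common way to analyze an open card sort is to count how often each pair of cards lands in the same pile; cards that are frequently grouped together are candidates for the same category in the site's navigation. The sketch below uses made-up cards and participants to illustrate the idea.

```python
from itertools import combinations
from collections import Counter

# Hypothetical results from three participants in an open card sort:
# each participant groups cards into named piles.
sorts = [
    {"Navigation": ["Home", "Search"], "Help": ["FAQ", "Contact"]},
    {"Main": ["Home", "Search", "FAQ"], "Support": ["Contact"]},
    {"Start": ["Home", "Search"], "Questions": ["FAQ", "Contact"]},
]

# Count how often each pair of cards ended up in the same pile.
pair_counts = Counter()
for participant in sorts:
    for pile in participant.values():
        for a, b in combinations(sorted(pile), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped by most participants suggest categories for the IA.
for pair, n in pair_counts.most_common():
    print(pair, n)
```

Dedicated card-sorting tools produce similar co-occurrence matrices (often visualized as dendrograms) automatically.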

Benefits of user research

Arin Bhowmick, the VP of design at IBM, describes user research as “help[ing] us to understand how people go about performing tasks and achieving goals that are important to them. It gives us context and perspective and puts us in a position to respond with useful, simplified, and productive design solutions.” Research not only helps set up a redesign project for success, it may even save money in the long term. Investing the time to conduct user research can save hours of development time on potential features that aren’t needed, or save countless hours refactoring code for a product that isn’t working for users. Having the research insights helps ensure we are building products that will work for users.

How would user research help your organization?

First, user research allows you to set a benchmark for your current product offerings. Research helps answer questions like “How well is the current site or app supporting users’ needs and goals?” User research not only helps you understand the current status of your organization’s products, but can also help uncover new site or app offerings and features that may greatly support your existing users or new users.

Next steps: How to begin user research

Any research is better than no research. Here are a few easy activities to start your user research journey, any of which could be used to kick off a new project.

  • Benchmark usability testing — involves recruiting a set of users to do usability testing of a website, app, or service. This will be helpful to identify how well any products are currently performing against a set of common tasks.
  • Stakeholder interviews — part of design research, stakeholder interviews are held with key people from the project team or organization. These help to identify any business requirements or insights from the organization that are helpful when approaching the redesign. They also help to minimize 11th hour changes by upper management. Engage them in the process early.
  • User interviews — these help you learn about user attitudes, behaviors, and usage patterns. Learning customer insights directly from the source will help you identify new features or areas of your product to refine. Once you begin a design phase, you may rely on additional research activities like card sorting, concept testing, or further usability testing.

U.S. Currency Education Program

As part of a user experience design and research team at the Federal Reserve, I participated in providing UX consulting on the U.S. Currency Education Program’s website. The project included a number of activities, but mainly focused on identifying ways to improve the site’s user engagement.

The Problem

The U.S. Currency Education Program (CEP) manages outreach and education initiatives aimed at teaching the public about money. One of the team’s main components is its website.

For this project, I was consulted to help boost the site’s falling engagement and to re-orient the site’s information architecture toward top tasks and users’ main goals. The site had grown considerably since launch, creating sprawling navigation and unwieldy organization. User engagement was down, and bounce rates kept rising.


My team proposed a number of different research activities to understand the site’s target audience, top user goals, and ways in which target users group the site’s content. We conducted stakeholder interviews, a heuristic review of the website, evaluated site analytics and past usability studies. In addition, we planned two activities that were focused on the website’s IA.

We started with a card sort (40+ cards) and tested it with more than 75 users using a digital card sorting tool. From there, we focused our efforts on taking a proposed IA and further testing it with a scenario-based tree test, a simple test to see if people can locate information on a website.


We translated our research findings into a proposed IA, which led the design team to spin up a new main navigation and footer. We incorporated top category names and utilized dropdowns, allowing users greater control. Additional usability findings drove a few other changes to the site, including breaking out currency by denomination. The site’s bounce rate has since decreased and user engagement is up.

Lessons Learned

  • Card sorts can be helpful for generating initial ideas about how users think about website content.
  • Tree testing is a great method of testing an initial site information architecture; however, scenarios can bias how users answer questions, so it’s particularly important to pre-test scenarios before conducting tree tests. Test with colleagues, friends, or family.