UX Research, UX Design
AC Goal Tracking
United Way is an organization that engages and brings together people and resources to drive sustainable improvements in the well-being of children, families, and individuals in the community, and it uses its funds to run several local programs. Achievement Club (AC) is one of these programs. Its goal is to help newly-housed homeless people set and achieve goals and develop constructive life skills. Our target users are Julia Kelly, other AC moderators like her, and their clients.
Some AC members report struggling with motivation, which impedes their progress toward their goals. We therefore developed a non-intrusive application to motivate goal completion. The application enables users to set and readjust their goals (and sub-goals), record and share their progress, and remind themselves of their incentives and current goal-related responsibilities.
Our users will be AC members, and to get to know them better, we ran a questionnaire-based user study with 11 subjects, all Achievement Club members. We chose this group because they are the potential target users of our system.
Most of the questions in the questionnaire relate to the potential tasks of our system. The task-related questions help us determine whether the tasks are truly meaningful to our users and how to make better design decisions. The obstacle-related question gives us insight into potential new tasks that could increase the goal completion rate. Finally, the question about the interface tells us about the users' habits and provides a point of reference for which platform to choose.
All of the questions are closed-ended, which makes them easier and faster to answer, and the results are all quantitative.
Because we visited the Achievement Club in person, we also had the chance to observe the members' daily lives, which proved very helpful. We learned that they are smartphone users and have access to a public computer and projectors. This opens the possibility of building a smartphone application, with the projectors and computers serving as secondary interfaces for our system.
We also performed a literature review, including articles about how to encourage and help people finish their goals and information about our potential users.
Within the current system, all tasks are divided into four sections: Create a goal, Get a goal approved, Complete a goal, and Revise goal steps.
We also developed scenarios for each task to help the team reach a consensus on the workflow.
Usability Criteria and Focus
1. Effectiveness: The current paper goal sheet does not motivate an individual to complete a goal nearly as much as the regular in-person meetings with Julia or the incentive of actually achieving the goal. Effectiveness will be used as a metric to assess how much the new system helps a client achieve his/her goal. This can be measured by having users review the interface and tell us how effective their experience was, via surveys, application reviews, and similar feedback.
2. Efficiency: The current paper form has a limited number of fields for filling out steps and tasks toward the goal, which either restricts the number of steps a person can take or makes them feel they must fill in every step on the worksheet. Coming up with a goal and the steps toward it takes more time than actually writing them down on the form, but updating the 'date completed' and 'proof' sections can be done instantly once a task is completed. Efficiency will be used to assess how much time a client has to spend on each given task. This can be measured by analyzing each interface to see how long it takes to complete a task (e.g., set a goal, update a goal).
3. Learnability: The current paper goal sheet is easy to understand from the start, since Julia meets all of her Achievement Club clients in person to explain how the Achievement Club works and what its purpose is. She also goes over the paper form with her clients to explain what kinds of goals they can set and what steps they need to take to complete them. Learnability will be used to assess how easy it is for a client to learn to use the new system. This can be measured by analyzing each interface to see how long it takes a new user to learn how to use it.
4. Memorability: The current paper goal form is not complicated to use; it is a simple piece of paper with goal sections and several progress sections, and it never changes since the interface is mostly static. Memorability will be used to assess how easily a client can remember how to use the new system without relearning it or going through a tutorial again. This can be measured by analyzing each interface to see how long it takes a returning user to complete each task without referring back to previous tasks.
5. Understandability: The contents of the current system are straightforward. A client is expected to write one or two goals he/she wants to achieve in the Get Started section and list the steps he/she has to take to achieve each goal in the Make Progress section. Additionally, Julia goes over the paper form with her clients to make sure they understand what they have to fill out. The understandability metric will be used to assess how easy it is for a client to understand the interface elements and the purpose of the new system. This can be measured by having new users review the interface after trying it without previously knowing what it does.
6. Flexibility: The current paper goal sheet offers limited ways to record the user's goals and steps. The paper form has only two goal sections and ten step sections shared between the two goals; if a client wants to add another goal, he/she has to either use another paper goal sheet or write it outside the goal section box. The flexibility metric will be used to assess whether the interface can be adjusted to the needs and behavior of each client. This can be measured by having users review the interface and tell us whether it adapts to their usage and what they feel they could not do with it.
The tasks in the current Achievement Club goal sheet are 'create a goal', 'get goal approved', 'complete goal steps', and 'revise the goal'. There are several approval processes involved, which can take a long time. We can improve efficiency by embedding some of these approval processes into our system.
From the analysis of existing systems, we find a need for highly customizable goals and steps that work for both numerically- and non-numerically-tracked goals. The way outcomes are measured should also be flexible, not limited to binary outcomes.
As a result, our system will support the following core tasks (#1-2) and features (#3-6):
1. Setting goals and goal steps.
2. Updating goals.
3. Setting reminders to complete goal steps.
4. Sharing goals with others.
5. Viewing the achievements of other users.
6. Viewing rewards for completion.
We brainstormed three different interfaces: a Mobile Interface, a Bulletin Board, and a Website Interface.
To choose among the three, we assessed them against our six usability criteria. The website prototype received the highest quantitative assessment scores, but our design also has to consider Achievement Club members' access to each platform. Our earlier research showed that a large majority of club members have smartphones, while only a small portion have Internet access. Given this limitation, we concluded that the mobile application is the best interface for the club members. We then built an interactive prototype using InVision.
We have six usability criteria: effectiveness, efficiency, learnability, memorability, understandability, and flexibility. Due to the limitations of our prototype, we are only able to measure three of them: effectiveness, learnability, and understandability.
Efficiency - assesses how much time a user has to spend on each given task, but cannot be measured during prototype testing due to the time limitation. We are given only half an hour for the whole assessment, which is not enough time for a subject to become an expert user of both systems. Therefore we are not able to acquire data on the time or clicks a user spends on the tasks we are going to test.
Flexibility - can be measured by having users review the interface and tell us whether it can be adjusted to their usage, but this cannot be measured due to the limitations of the prototype and scenarios. The current prototype does not support any customization for a user to test.
Memorability - can be measured by analyzing each interface to see how long it takes a returning user to complete each task without referring back to previous tasks, but this has the same time limitation as efficiency. We are only given about half an hour to run the whole assessment, which is not enough time to test a second set of similar tasks for memorability. Additionally, we will not be exposing the user to the application multiple times during a session: the half-hour schedule has to cover the consent form, three tasks, the questionnaire, and a short interview for each subject.
Effectiveness - can be measured by having users review the interface and tell us, via the questionnaire, how effective their experience was. Questions 20 through 25 of the questionnaire capture the users' experiences with our prototype: subjects are asked to select one of the given options and provide a reason. By analyzing each subject's answer choices and feedback, we can measure the expected effectiveness of the prototype.
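As a concrete illustration, the answer choices for Questions 20-25 could be coded on a Likert scale and averaged per subject. This is only a hypothetical sketch: the 5-point coding and the sample responses below are our own illustrative assumptions, not actual study data.

```python
# Hypothetical effectiveness scoring for questionnaire Questions 20-25.
# The 5-point Likert coding and sample answers are illustrative
# assumptions, not data from the actual study.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def effectiveness_score(responses):
    """Average the Likert-coded answers to the six effectiveness
    questions for one subject (higher = more effective)."""
    codes = [LIKERT[r] for r in responses]
    return sum(codes) / len(codes)

# One subject's (hypothetical) answers to Q20-Q25:
score = effectiveness_score(["agree", "agree", "neutral",
                             "strongly agree", "agree", "agree"])
print(score)  # 4.0
```

Averaging per subject keeps the written reasons separate for qualitative analysis while still yielding a comparable number per prototype.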
Learnability - can be measured by analyzing how long it takes a new user to learn to use the system. We use two measures for this criterion: task-completion time and number of clicks. We will compare the task-completion times and click counts of subject users against those of "master" users (Team LATTE members) performing exactly the same tasks, using the master users' times and minimum click counts as benchmarks. One limitation is that the subject users' task-completion times include the time they spend thinking aloud and the time it takes to understand the prototype itself, such as its automatic text-filling feature. We mitigate the impact of the think-aloud protocol on completion times by having the master users also talk aloud while we measure their times.
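The benchmark comparison above can be sketched as simple ratios against the master users' numbers. The task names, benchmark values, and scoring formula below are illustrative assumptions, not measurements from the study.

```python
# Hypothetical learnability comparison against master-user benchmarks.
# Task names and benchmark numbers are illustrative assumptions.

MASTER_BENCHMARKS = {
    # task: (master completion time in seconds, minimum clicks)
    "set a goal": (45.0, 6),
    "update a goal": (30.0, 4),
    "set a reminder": (25.0, 3),
}

def learnability_ratios(task, subject_time, subject_clicks):
    """Return how many times slower (and how many extra clicks,
    proportionally) a new user was versus the master benchmark.
    A ratio of 1.0 means the subject matched the expert."""
    master_time, master_clicks = MASTER_BENCHMARKS[task]
    return subject_time / master_time, subject_clicks / master_clicks

time_ratio, click_ratio = learnability_ratios("set a goal", 90.0, 9)
print(time_ratio, click_ratio)  # 2.0 1.5
```

Ratios close to 1.0 across tasks would suggest the prototype is quickly learnable; large ratios would flag tasks that need redesign.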
Understandability - can be measured by having new users review the interfaces after trying them without previously knowing what they do. We use three measures for this criterion: thinking out loud, error-click locations, and the number of questions a subject asks during a given task. By encouraging a user to think out loud, we can understand how the user interprets the UI components of the current page and how much information the page conveys; this method also lets us iterate on features that do not meet our initial intention. Our written scenarios and protocol have to be clear enough that any confusion a subject experiences comes from the prototype itself, not the task description. Second, we record the location of any interface element or icon a user clicks incorrectly during a given task; even when a user clicks somewhere at random, this data helps us understand the user's tendencies and level of understanding. Lastly, the number of questions a subject asks while completing a task tells us whether the user is completely lost on the current page. One limitation is that our prototype has a built-in feature that reveals the correct click position after an incorrect click; this can reduce both the number of questions users ask and the number of incorrect clicks, without the users actually understanding how to complete the task.
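The error-click and question counts described above could be tallied per task from a simple session log. The log format, signal names, and sample events below are our own illustrative assumptions, not the study's actual instrumentation.

```python
# Hypothetical tally of understandability signals per task.
# The (task, signal) event log format and sample data are
# illustrative assumptions.
from collections import Counter

def tally_signals(events):
    """Count understandability signals grouped by (task, signal),
    where signal is 'error_click' or 'question'."""
    return Counter(events)

# A (hypothetical) log from one half-hour session:
session = [
    ("set a goal", "error_click"),
    ("set a goal", "question"),
    ("set a goal", "error_click"),
    ("update a goal", "question"),
]
counts = tally_signals(session)
print(counts[("set a goal", "error_click")])  # 2
print(counts[("update a goal", "question")])  # 1
```

Per-task counts make it easy to spot which screens generate the most confusion across subjects.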
We interviewed 10 subjects, each for half an hour. We set three benchmark tasks for them to perform and recorded their performance.
Click here to read the Process, Outcome and Interpretation Report.