Too often, dog lovers live in conditions unsuited to adopting, yet still feel the void of not having a furry companion in their life. Meanwhile, shelter dogs often cannot get the attention they deserve due to a lack of volunteers. The app DogGo aims to connect dog lovers with shelter dogs in their area, setting up walking appointments so approved walkers can spend quality time with a new furry friend while giving the dogs exposure to new people.
DogGo connects dog lovers who are not yet able to adopt with shelter dogs in their area for walks, giving this demographic the canine companionship they still want. We will know the app is successful by measuring the feedback walkers and shelters give on their experience with it and with the walks coordinated through it.
To better understand what users would want most from an app like this, I posted a user research survey to the Dogs subreddit and compiled the results of over 400 responses.
From my findings, as the artifacts below highlight, I was able to gauge user behaviors and determine which features and information were most essential to their experience.
Using feedback from the survey, I next narrowed down which pieces of information the app would need to present, based on what users said they wanted available. After that, I used post-its to draft a few user journey flows.
For this project I wanted to work in iterations and gain experience with usability testing, so for my first draft I designed mid-fidelity screens to give users better context for what they were looking at, since I would be conducting the tests online and asynchronously. For testing, I again recruited users from the Dogs subreddit and was able to get a few testers.
To better understand the testers' mindsets, since I wouldn't be present during testing, I made sure to ask follow-up questions, such as asking them to rate the difficulty of each task from 0 (not difficult) to 5 (very difficult) and to note any aspects of the design that confused them.
For the first batch of testing, I used three main tasks to assess the overall usability and learnability of the app's structure. The green dots mark testers who clicked in the intended area and were able to advance to the next screen, whereas the red dots mark testers who thought the task would be achieved by clicking somewhere else.
The first task was to navigate to the reviews that previous walkers left for a dog, in this instance Alan.
The overall success rate was 67%; however, 100% of the users who reached Alan's screen were able to find the reviews.
The next task was to use the filters to find the nearest "extra-large" (100+ lb.), gentle dog.
The overall success rate was only 14%, though I attribute this in part to the complexity of the task, which was given without further guidance.
The last task for the initial designs was to navigate to the swipe view and only swipe right on a small dog who was available for a morning run.
The overall success rate was 0%. While this was a little disheartening, it revealed the underlying issue: the text showing each dog's next earliest available time was not eye-catching enough, so users judged only by the size of the dog.
Using the results of this testing, I revised my testing method by simplifying the task prompts, while still making the design changes that had tripped up users, such as making the earliest available time more eye-catching. Additionally, I changed some aspects of the existing design, such as swapping the home image, to test whether users liked it more or less.
Again, to better understand the testers' thought processes, I asked them to rate each task's difficulty and to explain why they clicked where they did.
For the second batch of testing, I focused on more individualized aspects of the design so as not to lose testers partway through, as had happened when testing was conducted without supervision or any way to enforce the task at hand.
The first task was to click where the user would go to sort the dogs by size, energy level, etc.
The overall success rate was 75%. Comments responding to my follow-up questions often explained that testers clicked on the dogs they thought were cute, so not every user fully understood the task at hand. I also noted that replacing the home icon with a grid icon created more confusion.
Since users in the previous batch of testing had no trouble finding the swipe view in the tab bar, the next task tested the legibility and visibility of the next-earliest-available-time text by asking users to swipe right on the dog available tomorrow for a morning run.
The second round was more successful: 68% of users were able to complete the task once the text was made more legible and the instructions less dense. One change I would make is styling the "swipe [direction]" buttons to look more clearly like buttons for tests like this, as a few users on the first screen seemed to try to swipe by clicking on the opposite side of the screen and presumably dragging.
In addition to conducting usability tasks, I also used the second set of testing to get feedback on which interface users preferred for certain instances. The first test was whether they preferred the grid view or swipe view, to determine whether both were necessary for the final design.
The overall results were 54% for the grid view and 45% for the swipe view, with grid voters commonly citing the ease of viewing their options, while swipe voters preferred the clearer photos and the instant access to more information without clicking around.
The second set of design iterations I tested preferences for were the output shelters would receive whenever a walker wanted to show interest in walking a dog.
59% of respondents voted for the second interface, saying they preferred having more information about the dog's availability up front, whereas 41% voted for the first interface, saying they would rather talk with the shelter a bit and meet the dog before setting up an appointment.
Taking into consideration some of the complications users had during the second round of testing, I decided to design an onboarding tutorial to teach users how to navigate the interface. I also labeled the tabs at the bottom so they would be more easily understood.
I first designed the onboarding experience, accompanied by visual cues showing which buttons trigger each feature being explained, walking the user through the process of finding dogs at nearby shelters that they're interested in meeting and messaging those shelters.
Due to the fairly even split in votes, I decided to include both designs for the initial interest message in my final design: one for cases where the shelter requires meeting the dog beforehand or an additional interview, and the other for when the dog is available to walk without further action.
The main purpose of this project was to gain experience testing iterations of my design rather than settling on the first draft, as I had done with previous projects. In the future I would like to test user flows and designs in person rather than remotely and asynchronously; however, given limited resources, I had to make do with asking strangers on the Internet for their feedback.