Choosing People Types for User Acceptance Testing
During the week of July 14, 2019, a news story appeared that illustrates the importance of choosing the “right” people for user acceptance testing (UAT) before software is released into production.
On Wednesday, July 17th, the New York Times published a story by Brian X. Chen – The Sad Truth About Sleep-Tracking Devices and Apps. Chen reports on how software intended to give people the tools to improve their sleep (the app he writes about is named AutoSleep) actually worked against itself. A few paragraphs in, he sums it up:
“Ultimately, the technology did not help me sleep more. It didn’t reveal anything that I didn’t already know, which is that I average about five and a half hours of slumber a night. And the data did not help me answer what I should do about my particular sleep problems. In fact, I’ve felt grumpier since I started these tests.”
Not only did AutoSleep fail to help him improve his sleep, it actually became an irritant and, presumably, ended up in his circular file.
Readers should be asking why. Ditto for the coders who brought the solution to market. It may have something to do with a thoughtless choice of terminology for packaging results and reporting them to the user. A lack of sleep actually has nothing to do with the notion of debt, and debt is, more often than not, looked upon as a bad thing. Put a “not good” condition together with a long process of climbing out of it, and it’s easy to see how Chen came to really dislike the app.
The software team building the app should have included someone experienced in the psychology of people suffering from unwanted chronic conditions like sleep deprivation. The flubbed word choice is a clue this likely wasn’t the case. Even if the coding team did include someone with that experience, along with a tech promotional content writer for an app like AutoSleep, it’s also pretty clear the app was never first subjected to UAT; in other words, no one asked a representative sample of likely users to try the app first.
Coders will argue AutoSleep is still “top rated” (Chen includes this point in his story). Sure, but my last check of downloads on the iOS App Store shows 18.5K, anemic compared to other “top rated” apps. What’s the possibility the download number would be exponentially higher (think 180.5K) if the app development team had taken some time to “know its customer” better? Since I use this app, I have to say “pretty high.”