[OSM-dev] User Testing OSM.org

SomeoneElse lists at mail.atownsend.org.uk
Sat Dec 7 20:30:01 UTC 2013

Ian Dees wrote:
> Hi everybody,
> I'm working with a Chicago organization called CUTGroup [0] to set up a user
> testing event for the OpenStreetMap website and the iD editor. The goal of
> this testing is to get feedback on the experience of navigating, signing up,
> and editing data from users that are new to OpenStreetMap. At the end of the
> day we'll generate a list of observations and actions that the OSM community
> can take to improve the new user experience.
Sounds like an excellent idea.  A few things spring to mind, but what 
you'll be able to measure depends on the volume of people you get 
through the door - too few and any "big picture" results may be 
statistically insignificant (although any common anecdotes that you get 
for particular parts of the site might still be useful).  Didn't someone 
have a go at this a few years ago (either at a London meetup or driven 
by some academic research)?  It'd be interesting to see if you can 
measure some of the same things and compare the results.

One thing that I'd definitely consider asking before anything else is 
"what they think OSM is, and what use they think it will be for them".

Another thought, but one that would only really work if you've got 
sufficient volume, is to divide a portion of the testers into 3 - one 
group to use iD, one to use Potlatch 2, and one to use JOSM. What does 
each group manage to accomplish? How frustrating did they find the 
experience?  How likely are they to try and add something to OSM again?  
There's been lots of talk in the past about "users of editor X causing 
problems such as Y".  Usually the observations are valid as individual 
observations, but there's been little attempt to measure anything 
statistically significant.  After iD went live, but before it became the 
default for most users, I did try to tot up some numbers for 
"mistakes made":


but that doesn't measure mapper frustration, or how much they achieved 
compared to what they had wanted to.

With regard to testing the usability of the site (and iD), it would be 
useful to get some measure of how much of the available functionality 
testers actually used, and how much they simply never found.  To take a 
specific example: I know which button in iD is the "help button" because 
I've pressed them all and seen what they do, but I really don't know 
what the icon is supposed to represent.  Obviously in a desktop browser 
you've got tooltips, but that's not the case on a tablet or phone.

In iD, can they figure out what the options on the radial menu are, and 
can they figure out how to make the radial menu appear somewhere else 
if they need it to?  For example, if there's an L-shaped hedge that goes 
from the southwest to the northeast, and then turns left toward the 
northwest, can they figure out how to split it in the middle and add a 
piece of hedge that runs from the middle to the southeast?  Also, can 
they work out why some options (e.g. "make this line go in the opposite 
direction") might be needed?
