On the second day of TotT’s workshop, which brought together interested academics and students of different backgrounds to discuss tech on the trail research, I (Gracie) had an opportunity to present my ongoing thesis work to everyone. The spotlight was particularly unnerving since the data from the several completed probes had only just come in, so I hadn’t yet had a chance to do a deep-dive analysis. Even so, both polishing a presentation and having discussions about my probes were a good way to kick off the data analysis for my thesis.
As a reminder, the basic gist of my probe can be read here; this post will be long enough without rehashing all that.
To start broadly, the most challenging question I received from the audience while presenting boiled down to, “Why? What’s the point of this?” The other day, I attended my friend’s dissertation defense about using big data sets in introductory CS education, and he received a similar question from the audience. In his case, the room was discussing the limitations of CS education research with regard to having no experimental control group and being rife with other uncontrollable variables, and someone finally asked, “Why should we believe any of these results?” For my friend’s research, I think having a good course outcome for students is a valuable goal in and of itself, but it seems to me that the qualitative side of HCI (and of social research in general) is often criticized as being pointless or inconclusive. In my case, even if I can draw solid conclusions about how hikers feel about technology on the trail… so what? How does that help anyone?
When I was designing my study, it would’ve been easy to craft a list of specific questions to ask participants. Do you ever bring a smartphone on a hike, yes or no? Do you feel like people are overly dependent on GPS, yes or no? The thing about asking specific questions is that you get limited and often expected answers. Questions can be more open-ended, and they could be posed in an interview that allows some back and forth, but the problem remains that I’m the one creating the questions, which limits the answers I can get. How do I know what questions to ask? Before joining TotT, I never knew trail angels existed for thru-hikers, so I wouldn’t have known to ask any questions in that vein. I don’t doubt there’s a wealth of other trail- and hiking-related knowledge that I haven’t been exposed to yet.
So how would cultural probes fix that? I’d never heard of a research method in this vein before coming to grad school here. (In fact, I hadn’t even really heard of Human-Computer Interaction despite doing Computer Science and Interactive Media in undergrad, but that’s a different discussion.) In my Models and Theories of HCI class, we spent a day talking about cultural probes as a way to creatively draw out a response to a particular prompt or idea. I loved the idea as soon as I read about it. Personally, I’m bad at speaking unprepared; I’m fine if I have a script or have thought about it in advance, but for something like an interview, it’s difficult for me to shape a response with any depth on the spot. Even taking written surveys can produce the same problem, as I continue to think of things relevant to the survey questions long after taking them. Having a probe stay with me over a period of time as I continue to shape my answer solves those issues, as well as a few others.
A cultural probe can be shaped for its audience, and the flexible nature of the responses lets it reach a wider one. Prefer to respond in writing? Go ahead. Prefer to record a video? Have at it. This is demonstrated well in my scrapbook responses, where one participant filled the pages with writing, and another pasted only photographs. Probes allow the questions to be fuzzier as well. One of the scrapbook pages is themed “Proof you were there,” which doesn’t suggest what form proof needs to take. The way that participants interpret and answer the questions will say a lot about what’s most salient to them regarding tech and the trail.
It also offers a lot more room for creativity, which is something I’m passionate about. There was a good example of this at the workshop as well. After my presentation, I divided the attendees into three smaller groups to have a hands-on look at my early data results. One group was given the responses for the Indoor Hike (2) and the Hike Club (1). Whenever I passed by, I heard this group discussing one particularly well-written Indoor Hike response, and by the end, they were happy to share with me another odd Indoor experience: have someone act like a trail guide to a group, but have it be inside a mall, so they’re giving their group a running dialogue of the environment and flora/fauna of the mall. They suggested a few phrases that might be passed around and had a good laugh. Later, one of the participants and I were chatting, and they brought up the Nacirema article, which really captures the idea of taking something familiar and making it “strange.” That was one of the main driving themes behind the Indoor Hike activity, so it was gratifying to hear it discussed.
So, what do I hope to get out of these probes? A lot of unique perspectives that I wouldn’t be able to get by just talking with someone. I want a broad range of thoughts and opinions. I want a better look at the diversity that exists in hiking and outdoor communities. And that knowledge should fuel the tech on the trail discussion, making our conversations more nuanced and our views more accurate. I don’t want anyone’s voice to get lost just because they didn’t thru-hike the entire Appalachian Trail. And really, there’s no point in designing technology if we don’t even know who we’re designing for or what need we’re filling.