
Friday, July 7, 2017

Jam that Data! (Jam It Good)


By Emily Becher, Research Associate — Extension Center for Family Development

This spring, Extension Center for Family Development’s Applied Research and Evaluation (ARE) team applied a new data analysis technique to a typically unruly type of data. The new technique: data jamming. The unruly data: qualitative data.

For those of you unfamiliar with qualitative data, and qualitative analysis and coding, what follows is a brief primer. For those of you who are all too familiar with the struggle, jump to Data Jams: What Are They Good For?

Qualitative Data and How to Analyze It

In a nutshell, qualitative data are made up of words, not numbers. These are the data from Family Development programs that participants share with us in words, rather than as responses on a scale that can easily be quantified (turned into numbers). Think of a survey with open-ended comments, focus groups, and interview transcripts. Because words can be interpreted in more ways than numbers, analyzing this type of data in a consistent, accurate, and valid way can be a challenge.

Qualitative data analysis (QDA) is a process that explores the meaning of those words in a systematic way. Often this takes the form of coding: taking chunks of text like sentences or paragraphs and assigning some interpretation to them as a code. For example, comments from class attendees on how they really loved the way an educator taught a course but wanted more resources after the class might get coded as “positive educator comments” and “wanting more resources.” After you’ve coded a bunch of chunks of text, you can see all of the statements that participants made that fit into the codes you’ve designed. Then you can make some interpretation about participants’ overall experience or course impact.
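For readers who think in code, here is a minimal sketch in Python of what coding amounts to as a data structure: chunks of text tagged with one or more codes, then grouped by code for interpretation. The example chunks and code names are hypothetical, and real QDA software does far more than this.

```python
from collections import defaultdict

# Hypothetical coded chunks: (participant text, assigned codes).
coded_chunks = [
    ("I loved how the educator explained each topic.",
     ["positive educator comments"]),
    ("I wish we had a handout to take home.",
     ["wanting more resources"]),
    ("Great teaching style, but please share more links.",
     ["positive educator comments", "wanting more resources"]),
]

# Group all statements under each code so they can be reviewed together
# when interpreting participants' overall experience.
by_code = defaultdict(list)
for text, codes in coded_chunks:
    for code in codes:
        by_code[code].append(text)

for code, statements in by_code.items():
    print(f"{code}: {len(statements)} statement(s)")
```

The thinking still happens in the analyst's head; the structure just keeps every statement attached to the interpretations it supports.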

Qualitative analysis can get much more complicated and go far deeper than described here, so if you’re looking for a deeper dive, two great resources are Qualitative Research and Evaluation Methods by Michael Quinn Patton and The Coding Manual for Qualitative Researchers by Johnny Saldana. Or, talk to someone on the Applied Research and Evaluation team.


Qualitative coding software helps with this coding process. You still have to do the thinking behind creating codes and assigning text to the codes, but the software can make the process easier and more organized. If we have mixed method data (numbers and words), the software helps make some connections between a participant's response to a number-based question (for example, their self-reported income) and how they responded to a qualitative question (for example, what they found most useful about the course). Some examples of commonly used qualitative coding software are NVivo, MAXQDA, and ATLAS.ti.
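As a hedged illustration of that mixed-methods linking, the sketch below joins a numeric survey answer to an open-ended response in plain Python with pandas. This is not how NVivo, MAXQDA, or ATLAS.ti work internally; the column names and values are made up.

```python
import pandas as pd

# Hypothetical number-based responses (e.g., self-reported income).
survey = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "self_reported_income": [28000, 45000, 61000],
})

# Hypothetical qualitative responses (what participants found most useful).
comments = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "most_useful": [
        "The meal-planning worksheet",
        "Talking with other parents",
        "The budgeting examples",
    ],
})

# Linking the two lets you ask, for example, whether what participants
# found most useful differs across income levels.
merged = survey.merge(comments, on="participant_id")
print(merged)
```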

As Christian Schmieder, a qualitative research specialist at University of Wisconsin Extension, wrote in a blog post titled A Response to the Data Challenge, “Extensions and other complex organizations are expected to use data when they develop their programs and services; they are also expected to ground their communications and reports to stakeholders in rigorous data analysis.”

Data Jams: What Are They Good For? (Absolutely Something!)

[Photo: Christian Schmieder. Credit: LinkedIn]
The Extension ARE team learned about data jams from two eXtension webinars this winter. eXtension shares resources and tools to enable Extension professionals to make a bigger impact at both local and national levels. In this case, eXtension supported Christian through a fellowship to bring his data jam approach to a national audience.

Data jamming is a strategy for both analyzing qualitative data and building educator capacity in qualitative analysis. In a data jam, a group of colleagues sets aside a sizeable chunk of time to work together on a qualitative or mixed-method data set. The goal is to produce write-ups, models, initial theories, and visualizations that give researchers tangible results quickly.

In contrast, in a more traditional QDA approach, the goal is to digest the whole data set from beginning to end, and tangible results can take a long time to develop. I think the sheer size of this task can sometimes be a major log-jam in the QDA process. Data jams tackle the beginning of the QDA process, not the entirety. The jam gives you a good start so that each team member (alone or in pairs) can then take a piece of the data set and code it in consistent ways, using software you now know how to use.

Another difference between a data jam and conventional QDA is the creation of an initial document that describes preliminary insights. Sometimes the process of making meaning from qualitative data can lead to big, open-ended conversations with few tangible outcomes. A data jam is different in that a tangible outcome, such as a write-up and list of initial theories, is built into the process.

How Data Jams Fit into Family Development Work

The ARE team really wanted to try this method for two reasons. First, educators often don’t get a chance to really explore or make use of their qualitative data because conventional QDA seems overwhelming.

[Photo: a single arm sticking out of a body of water. Image credit: Nikko Macaspac.]

In contrast, data jamming seemed like an approachable QDA method.

Second, the ARE team doesn’t have the capacity to support many QDA projects because they take so much time. So often what happens with qualitative data in Family Development is that choice quotes are pulled from the data, but the entire data set is never fully explored for how it could, for example, inform the development or revision of a curriculum or tell the whole story of participants’ experience.

So the ARE team wanted to try the method, but couldn’t find the data to try it with. Then along came a team of Health and Nutrition (H&N) educators working on the Systems Approaches for Healthy Communities (SAHC) Michigan pilot data. The H&N team had lots of qualitative data to analyze and was looking for support from the ARE team to move the analysis along. I suggested we use the SAHC pilot data as an opportunity to try out the data jam process, and the H&N team was on board. I then contacted Christian from UW Extension, and he was willing to help facilitate the process as a part of his eXtension fellowship.

The Day(s) of the Jam

And that is how we found ourselves in a room in the basement of Coffey Hall for eight hours this May: four hours on each of two Wednesdays, data jamming as a group with Christian joining via Google Hangouts. What did this look like?

First, prior to the data jam, we created a Google doc with the following key questions:
  • What’s the question we are asking of the data?
  • What’s the scope of data we need to analyze?
  • What will we do with the findings?
  • What’s our approach for looking at the data?
Everyone gave input on the shared doc and arrived with a common focus (and drinks and snacks for sustenance).

Then, on the day of the jam, one laptop was pre-loaded with qualitative coding software — NVivo in this case — and the H&N team’s data. We chose a room with a large monitor so that everyone could see the same information. Everyone also had their own laptop so they could work on the shared Google doc, to which we were adding observations and insights.

After Christian and I demonstrated how to use the software, one “driver” took charge of the main laptop, a role that switched every 30 to 45 minutes so everyone had a chance to work with the software and build their capacity for QDA. The driver coded data, chunk by chunk, based on group consensus. This meant the group discussed which data to code, why, and how. In the meantime, the group documented remaining questions and tracked the process both as memos in the software (the driver’s responsibility) and in the shared Google doc. Remaining questions included things like: “Only one person said this, but it seems important; will other people corroborate it, or is it an isolated experience?”

In addition to the large monitor, a whiteboard was extremely helpful. When we felt stuck, Christian encouraged us to map our thoughts on the whiteboard. We then took pictures of the whiteboard and uploaded them to the same Google Drive folder as our shared Google doc.

[Photo: the well-used whiteboard of 50C Coffey Hall.]

Finally, when we had a good chunk of data coded, felt like we had a good handle on the software and process, and had lots of memos written in the software, we went to the Google doc. That’s when everyone started to write down the shared understanding of the data that had developed through the coding process. We used this format and included supporting “anchor” quotes:
  • Some participants are saying this…
  • Some participants are saying that…
  • A few participants are saying the other thing…
So after eight hours in a small room together, what did we have? Some preliminary data analysis, basic skill building, and some uniformity in how to code the data.

What We Thought of the Data Jam

After engaging in the new process, the U of M Extension data jam team felt we had spent eight hours in a productive way, which can sometimes be a challenge with the fuzzy process of qualitative coding. Here are reflections from some of those involved:

This was way better than any qualitative analysis I’ve done in the past! I really liked that at the end of a 4-hour session, we had a (very rough draft) write up started. In my previous experience, it takes months to get that far!
— Laura Perdue, H&N Extension educator

We started slow in order to go fast. I felt our team did a great job trusting the process. Also, Emily and Christian did a great job creating a safe place for us to talk openly and think critically. It was super helpful to have Christian’s outside perspective challenging us along the way. 
— Stephanie Heim, H&N associate program director 

As someone with no formal training in qualitative data coding or software, this experience was very helpful. We were able to make meaning from participant responses as a team by working through various perspectives and different ways of understanding. Plus, Stephanie brought chocolate.
— Anne Dybsetter, H&N Extension educator

Overall, I would recommend this process for other teams with large amounts of qualitative data. My one concern is related to software. I have NVivo on my laptop, but most educators don’t. So how exactly are we supposed to break up the coding after the data jam? I think if we knew what teams were going to have qualitative data to analyze, we could purchase a batch of licenses and rotate them among educators, as long as the coding was spaced out. I think this could be a good way to support educators at an institutional level who are engaging in deeper qualitative analysis of their evaluation data.
— Emily Becher, research associate

Do you have a large data set of qualitative data that you’ve been putting off analyzing, or even collecting? Contact the ARE team to discuss scheduling a data jam with your team!

Contributors to this article included Laura Bohen, Evalyn Carbrey, Anne Dybsetter, Stephanie Heim, Mary Marczak, and Laura Perdue.

