Hot off the presses – get your “good data” right here!


It’s finally out! Nine long months ago, we received the good news that our second Snapshot Serengeti paper had been accepted for publication in Conservation Biology as part of a special section on citizen science. Patience has never been my strong suit, so I’m overjoyed to announce that the special section is finally published!

The paper takes a pretty detailed look at how we turn your answers into our final dataset (the same one that was published in Nature Scientific Data last June). Remember that you guys are good, and even when you’re not sure about what you’re seeing, the wrong answers help us determine just how difficult an image is. My favourite demonstration of this is the boxplot below:

[Boxplot of disagreement scores for images the experts judged correct vs. incorrect, alongside a histogram of scores across all images]


Now, I’ve written about this guy (and how to read boxplots) before. The gist of it is that we calculate a measure of disagreement across all of your answers for a given image. The disagreement score (also called evenness) ranges from 0 to 1, with 0 meaning that everyone agreed on what they saw and 1 meaning everyone said something different. You can see from the histogram on the right side of the plot that the vast majority of images are easy: everyone says the same thing! A good number of images are easy-ish, and a very small portion of the images are hard, with high disagreement scores.
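If you’re curious what that calculation looks like, here’s a minimal sketch in Python. It uses Pielou’s evenness (the Shannon entropy of the answer proportions, divided by its maximum possible value), which behaves the way the score described above does; the paper’s exact formula may differ in the details.

```python
import math
from collections import Counter

def disagreement(answers):
    """Evenness of the volunteer answers for one image: 0 when everyone
    agreed, 1 when every answer was different. A sketch using Pielou's
    evenness; not necessarily the paper's exact formula."""
    counts = Counter(answers)
    n = len(answers)
    if len(counts) <= 1:
        return 0.0  # unanimous (or empty): no disagreement
    # Shannon entropy of the answer proportions...
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    # ...normalised by the maximum entropy for this many distinct answers
    return entropy / math.log(len(counts))

print(disagreement(["wildebeest"] * 10))               # 0.0: everyone agreed
print(disagreement(["wildebeest"] * 9 + ["buffalo"]))  # ~0.47: one dissenter
print(disagreement(["lion", "leopard", "cheetah"]))    # 1.0: total disagreement
```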

When we compare the consensus answers to the experts’ answers, and look at the disagreement scores for images that were identified correctly or incorrectly, we see that correct images generally have lower disagreement scores (box on the left) than incorrect ones (box on the right). That means we can use the disagreement score to predict whether images are probably right or wrong. If an image has a high disagreement score, it’s probably wrong or impossible, and we might want to have an expert review it before using it in an analysis.

For example, across all images (the full 0–1 range of disagreement scores), we know that 97% are correct. But say we want higher accuracy. We can set a threshold for the images we accept and target the rest for review: 98.2% of images with a disagreement score below 0.75 are correct, so we could accept all the images with scores below 0.75 and target all images above 0.75 for review. Looking at the histogram, that’s a pretty small percent of images needing a second look.

[Slide: histogram of disagreement scores with the 0.75 review threshold marked]

If 98.2% isn’t good enough, we can make that threshold stricter.

[Slide: histogram of disagreement scores with the stricter 0.5 review threshold marked]

99.7% of images with disagreement scores <0.5 are correct, so we could set that as our threshold, and conduct an expert review of all images with scores above that. It’s still a relatively small number of images we need to look at.
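To make the threshold idea concrete, here’s a hypothetical sketch of how you could pick one: take a subset of images that experts have already checked, then for any candidate threshold, compute the accuracy of the images below it and the fraction of images you’d have to send for review. The function and the numbers below are purely illustrative; this isn’t the paper’s actual code or data.

```python
def evaluate_threshold(validated, threshold):
    """validated: (disagreement_score, was_correct) pairs from an
    expert-checked subset. Returns the accuracy of images below the
    threshold and the fraction flagged for expert review."""
    below = [ok for score, ok in validated if score < threshold]
    accuracy = sum(below) / len(below) if below else float("nan")
    review_fraction = 1 - len(below) / len(validated)
    return accuracy, review_fraction

# Made-up validation data: 1,000 images, most of them easy and correct.
validated = ([(0.0, True)] * 900 + [(0.4, True)] * 58 + [(0.4, False)] * 2
             + [(0.6, False)] * 15 + [(0.9, False)] * 25)

for threshold in (0.75, 0.5):
    acc, review = evaluate_threshold(validated, threshold)
    print(f"threshold {threshold}: {acc:.1%} accurate, {review:.1%} sent for review")
```

Lowering the threshold buys accuracy at the cost of a larger review pile, which is exactly the trade-off in the figures above.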

Anyway. I know I’ve written about this before, but I think this really gets at the heart of why the Zooniverse/Snapshot Serengeti approach works for producing usable scientific data, and why your answers, even when you’re not confident in them, are so incredibly valuable. This approach means that ecologists and conservation biologists can engage volunteers like you to tackle their own enormous camera trap datasets – enabling bigger, broader research, much faster.

You can check out the paper for the full details of this analysis, and more! It’s an open access publication, so you can read it for free, no university library account needed.

As always, this wouldn’t be possible without your help. So thank you, again, for your time and your clicks. And I can’t wait to see what you help us discover next!


About Ali Swanson

I'm an ecologist studying how large carnivores coexist. I spend way too much of my time trying to stop hyenas and elephants from munching my camera traps!
