
Rarely discussed reptiles…

This week we have a guest post from herpetologist and Zooniverse volunteer Steve Allain (find him as “The Newt Guy” on Zooniverse), who has used Snapshot Serengeti data (available here) to dig a little deeper into our little-studied reptiles. Steve is a zoology graduate from Anglia Ruskin University, Cambridge, with a particular passion for British amphibian and reptile species. He is the current chairman of the Cambridgeshire and Peterborough Amphibian & Reptile Group (CPARG), where he helps to organise and coordinate amphibian and reptile surveys around the county and to map the distribution of amphibians within Cambridgeshire. More recently, Steve has joined the IUCN SSC Amphibian Red List Authority as an intern.


Agama Lizard


In the summer of 2014 I visited Tanzania and toured the north of the country, visiting places such as Arusha, Mount Meru, the Ngorongoro Crater, and the Serengeti. Before I went, I prepared myself for the wildlife I would encounter by helping out with the Snapshot Serengeti project. As a herpetologist (someone who studies amphibians and reptiles), I was not familiar with the mammalian fauna of Africa, apart from the large and obvious animals that you are taught as a child. When I was in Africa, the identification skills I’d learnt through helping with the project really did pay off when it came to narrowing down which species we had seen.

Recently I was reading a scientific paper about monitoring Komodo dragons with camera traps. This is an unusual method, as reptiles generally don’t trigger camera traps: the traps’ passive infrared sensors fire on body heat, and reptiles, being ectotherms, are usually close to the temperature of their surroundings. I pondered this for a while, and then it dawned on me that I knew of a project that had recently published a large amount of data from which I could filter out the occasions when reptiles had been captured by the camera traps. I decided to get in contact with some of the people involved with Snapshot Serengeti to help me get started.

One of my main questions is this: when is a camera trap most likely to capture a reptile, be it a snake, a lizard, or something else? Is it in the morning or the afternoon? With the data published by the Snapshot Serengeti project, I have been investigating this by first identifying all of the trapping events that contain reptiles. The original project identified 131 such events, which has been a good baseline to work from, but with some extra digging I have identified another 120 events, and I’m only just getting started.

Once I have a list of all of the trapping events, I intend to collate the data for my first question using the time stamps, as well as identifying which species are present. There are other questions that I am still formulating. So far, most of the animals I’ve managed to identify have been rock lizards, which like to bask on the rocky outcrops known as kopjes. I’m hoping that my findings will inform scientists about the possibilities of using camera traps to study the behaviour and distribution of reptiles over a large area.

While you’re waiting…

We know you’re eager to get back to classifying wildebeest and other crazy critters, and the good news is that Meredith has recently returned from the field with the next instalment of Snapshot Serengeti! So get ready! But we’re still in the process of uploading the photographs, checking timestamps, and doing all the other tedious but necessary pre-processing, so it will be a few more weeks before we get the next season online.


So while you’re waiting, why not check out the Zooniverse’s newest camera trapping project: Wildcam Gorongosa?

Nestled in nearby Mozambique, Wildcam Gorongosa was developed as a joint effort between the Howard Hughes Medical Institute BioInteractive Program, the Gorongosa Restoration Project, and, of course, the Zooniverse. Previously decimated by almost 20 years of civil war, Gorongosa National Park’s wildlife is rebounding thanks to an enormous conservation initiative. As part of that initiative, researchers have set out a grid of cameras, much like ours in the Serengeti. And now they need your help to identify the animals caught on their cameras. While many of the animals present in Gorongosa are the same as in the Serengeti, they also have some critters we don’t: otters, nyala, oribi, and – my personal favorite – African wild dogs.

You can read up a bit more on the project here, but why not head on over to Wildcam Gorongosa and see what you can see!

New jobs in the Zooniverse!

The Zooniverse is currently looking for a front-end developer to join the Oxford team. The key aim of the position is to help build data-querying and visualization tools for educators, researchers, and, well, everyone, to better explore and engage with data from Snapshot Serengeti-style projects.

More details can be found here.

We are accepting applications from *now* until August 10, so please share this with anyone you know who might be interested.

Wander over to Wildebeest Watch!

Can’t get enough of these gnarly gnus? Head on over to our new spinoff project, Wildebeest Watch!

In collaboration with Dr Andrew Berdahl of the Santa Fe Institute and Dr Allison Shaw of the University of Minnesota, we are taking a closer look at what the wildebeest are doing in the Snapshot Serengeti images, to try to better understand the details of the world’s largest mammal migration.

Every year, 1.3 million wildebeest chase the rain and fresh grass growth from the northern edge of the ecosystem down to the short-grass plains in the southeast. We have a broad-scale understanding of where they are moving across the landscape, but we don’t understand how they make the detailed, moment-to-moment decisions of where and when to move. Wildebeest as individuals aren’t known for being particularly smart — so we want to know how they use the “wisdom of the crowd” to make herd-level decisions that get them where they need to go.

So while you’re waiting for more photos of lions, hyenas, and other sharp-toothed beasts, why not wander over to Wildebeest Watch to help us understand the collective social behavior of these countless critters?

Snapshot Serengeti’s first scientific publication — today!

“Yay!” Says cheetah.

Champagne corks will be popping tonight. Snapshot Serengeti’s first peer-reviewed scientific publication comes out today in Nature’s Scientific Data journal. Please give yourselves a round of applause, because we’d never have been able to do this without you.

The paper is a “data descriptor” rather than a traditional research article, meaning that we describe the detailed methods behind the Snapshot Serengeti consensus dataset. In addition to describing all the excruciating details of how we set the cameras in the field, we talk about the design of Snapshot Serengeti, including the retirement rules and the aggregation algorithms that combine all of your answers into a single expert-quality dataset. We don’t talk about the cool ecological results just yet (those are still only published in my dissertation), but we do talk about all the cool things we hope the dataset will lead to. The dataset is publicly available here. Anyone can use it — to ask ecological questions about Serengeti species, to evaluate better aggregation algorithms for citizen science research, or — we get this a lot — to use the images plus consensus data to train and test better computer recognition algorithms.

Feel free to download the dataset and explore the data on your own. We’d love to hear what you find!
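For a flavor of what “aggregation” means here, a plurality vote can be sketched in a few lines of Python (a toy illustration only; the project's actual pipeline, described in the paper, is more involved):

```python
from collections import Counter

def plurality_answer(classifications):
    """Consensus by plurality: the species named by the most classifiers."""
    return Counter(classifications).most_common(1)[0][0]

# Toy example: ten volunteers classify the same image.
answer = plurality_answer(["giraffe"] * 9 + ["eland"])
print(answer)
```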

Back to the Field!


Good news: thanks to funding from National Geographic, we’re heading back out to Tanzania with some new camera traps for Snapshot Serengeti!

It’s a bit short notice, but I’ll be heading back out to the field in just under two weeks to dive back into camera maintenance and data collection. I’ve been frantically ordering field equipment and gathering together all the supplies I need in Serengeti, including 50 cameras and what feels like twice my weight in rechargeable batteries. I’ll be adding new cameras back into the grid to replace those that have been damaged or stolen, in addition to following up on some playback experiments I conducted last summer and continuing to monitor changes in the habitat around each of our camera sites. New data we’ll be collecting this year include changes in soil quality across the camera-trap grid and the diversity of the plant communities in the immediate vicinity of our camera traps. Both of these factors contribute to forage quality for our ungulates and affect how appealing a particular site is to different animal species. I might even attempt to collect samples of dung (ah, the glamour of field work) from around our cameras to see whether our photos are actually catching all the animals hanging out in these areas.

After a few months in Tanzania, I’ll be heading down to South Africa to conduct additional experiments in a small private reserve in the Kalahari. Look forward to updates from the field, and wish me luck!


Help us find animal selfies!

We’re partnering with National Geographic to put together a photo book of animal selfies from Snapshot Serengeti. We’ve got some selfies already from the first seven seasons, but because no one has looked through Season 8 yet, we don’t know what great selfies might be in there.

You can help! If you find an animal selfie, please tag it as #selfie in Talk. (Click the ‘Discuss’ button after you’ve classified the image and then enter #selfie below the image on the Talk page. You can get back to classifying using the button in the upper right.)

All proceeds from book sales will go to supporting Snapshot Serengeti. We’re planning for a fall 2016 publication date, so it will be a while. But we’re excited to get working on it.


Season 8 Release!

And now, the moment you’ve all been waiting for… May I present to you:



I’m particularly proud of this, the first season that I’ve helped to bring all the way from the field to your computers. We’ve got a lot of data here, and I can’t wait for you guys to discover a whole host of exciting things in this new season.

This season is accompanied by IMPORTANT changes to our interface!

There are a few more bits of data we think we can pull out of the camera trap photos this time around, in addition to all the great information we already get. One thing we’re particularly interested in is the occurrence of fire. Now, fire is no fun for camera traps (because they tend to melt), but these wildfires are incredibly important to the cycle of ecosystem functioning in the Serengeti. Burns refresh the soil and encourage new grass growth, which attracts herbivores and may in turn draw in the predators. We have added a fire checkbox for you to tick if things look hot. And because we’re now looking for things other than just animals, we have replaced the “nothing there” option with “no animals visible”, just to avoid confusion.

Some of the more savvy creature-identifiers among you may have noticed that there are a few Serengeti animals that wander into our pictures that we didn’t have options for. For this new season, we’ve added six new animal choices: duiker, steenbok, cattle, bat, insect/spider, and vultures. Keep an eye out for the following:



This season runs all the way from September 2013 until July 2014, when I retrieved the photos this summer during my first field season. Our field assistants, Norbert and Daniel, were invaluable (and inhumanly patient) in helping me learn to navigate the plains, ford dry river beds, and avoid, as much as possible, driving the truck into too many holes. Together, we set out new cameras, patched up some holes in our camera trap grid, and spent some amazing nights camped out in the bush.

Once I got the hang of the field, I spent my mornings running around to a subset of the cameras conducting a pilot playback experiment, to see if I could artificially “elevate” the predation risk in an area by making it seem as though it were frequented by lions. (I’m interested in the reactions of the lions’ prey: whether they change their behaviors in these areas and how long it takes them to go back to normal.) I’m more than a bit camera-shy (and put a lot of effort into carefully sneaking up through the cameras’ blind spots), but perhaps you’ll catch a rare glimpse of me waving my bullhorn around, blaring lion roars…

Back in the lab, there’s been a multi-continental collaboration to get these data cleaned up and ready for identification. We’ve been making some changes to the way we store our data, and the restructuring, sorting, and preparing process has been possible only through the substantial efforts of Margaret, over here with me in the States, and Ali, all the way across the pond, running things from the Zooniverse itself!

But for now, our hard work on this season is over – it’s your turn! Dig in!

P.S. Our awesome developers have added some fancy code, so the site looks great even on small phone and tablet screens. Check it out!

Getting good data: part 1 (of many)

Despite the site being quiet, there’s a lot going on behind the scenes at Snapshot Serengeti at the moment. Season 8 is all prepped, is currently being uploaded, and should be online next week! And on my end, I’ve been busily evaluating Snapshot Serengeti data quality, to try to develop some generalisable guidelines for producing expert-quality data from a citizen science project. These will actually be submitted as a journal article in a special section of the journal Conservation Biology, but as that is a slowwwwwww process, I thought I’d share them with you in the meanwhile.
So! Recall that we use a “plurality” algorithm to turn your many different answers into a “consensus dataset” — this gives one final answer for what is in any given image, as well as various measures of certainty about that image. For example, back in 2013, Margaret described how we calculate an “evenness score” for each image: higher evenness means more disagreement about what is in an image, which typically means that an image is hard. For example, everyone who looks at this photo
would probably say there is 1 giraffe, but we’d expect a lot more diversity of answers for this photo:
(It’s a warthog, btw.)
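For readers curious about the mechanics, one standard way to compute such a score is Pielou's evenness: the Shannon entropy of the vote distribution divided by its maximum possible value. This is a sketch of that formulation; the score Margaret described may differ in detail:

```python
import math
from collections import Counter

def evenness(classifications):
    """Pielou's evenness of the vote distribution: 0 when everyone
    agrees, near 1 when votes are split evenly across many species.
    (An illustrative formulation; the project's score may differ.)"""
    counts = list(Counter(classifications).values())
    total = sum(counts)
    if len(counts) == 1:
        return 0.0  # unanimous vote: no disagreement at all
    shannon = -sum((c / total) * math.log(c / total) for c in counts)
    return shannon / math.log(len(counts))

easy = ["giraffe"] * 10                                        # everyone agrees
hard = ["warthog"] * 4 + ["buffalo"] * 3 + ["wildebeest"] * 3  # split vote
print(evenness(easy), evenness(hard))
```

The easy image scores 0.0 and the split vote scores close to 1.0, matching the intuition that high evenness flags hard images.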
To test how good the plurality algorithm’s answers were, we created a “gold-standard dataset” by asking experts to identify ~4,000 images. Overall, we found that consensus answers from your contributions agreed with the experts nearly 97% of the time! Which is awesome. But now I want to take a closer look. So I took all the images that had gold-standard data, looked at the evenness score, the number of “nothing here” responses, and the percent support for the final species, and evaluated how those measures related to whether the answer was right or wrong (or impossible). Even though we don’t have an “impossible” button on the SS interface, some images simply are impossible, and we let the experts tell us so, so that these wouldn’t get counted as just plain “wrong.”
A note on boxplots: If you’re not familiar with a boxplot, what you need to know is this: the dark line in the middle shows the median value for that variable; the top and bottom of the box show the 25th and 75th percentiles; and the “whiskers” out the ends show the main range of values (extending up to 1.5 × the interquartile range, details here). Any outliers are presented as dots.
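Those ingredients can be computed directly. Here is a sketch using the common Tukey convention (note that plotting libraries differ slightly in how they estimate quartiles):

```python
import statistics

def boxplot_stats(values):
    """The ingredients of a Tukey boxplot: median, quartile box,
    whiskers reaching the most extreme points within 1.5 * IQR of
    the box, and any outliers beyond the whiskers."""
    values = sorted(values)
    q1, med, q3 = statistics.quantiles(values, n=4)  # one common quartile method
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = [v for v in values if lo <= v <= hi]
    return {"median": med, "box": (q1, q3),
            "whiskers": (inside[0], inside[-1]),
            "outliers": [v for v in values if v < lo or v > hi]}

stats = boxplot_stats([0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.9])
print(stats)
```

In this toy sample the value 0.9 falls beyond the upper fence, so it would be drawn as an outlier dot.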
Evenness: The boxplot below shows the “evenness” score described above vs. whether the consensus answer matched the gold-standard answer. What you can see is that the typical evenness score for “correct” answers is about 0.25, while for wrong and impossible answers it is about 0.75. Although there are some correct answers with high evenness scores, there are almost no wrong/impossible answers with evenness scores below 0.5.
Percent support: This number tells us how many people voted for the final answer out of the total number of classifiers. So, if 9 out of 10 classifiers said something was a giraffe, it would have 90% support. It’s similar to evenness, but simpler, and it shows essentially the same trend: correct answers tended to have more votes for what was ultimately decided as the final species.
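Percent support is simple enough to sketch in a couple of lines (the function name here is ours, for illustration):

```python
from collections import Counter

def percent_support(classifications):
    """Fraction of classifiers whose vote matches the winning answer."""
    winner_votes = Counter(classifications).most_common(1)[0][1]
    return winner_votes / len(classifications)

# 9 of 10 classifiers say "giraffe": 90% support.
support = percent_support(["giraffe"] * 9 + ["zebra"])
print(support)
```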
NumBlanks: With the evenness and percent support scores, we can do a decent job of predicting whether the consensus answer for an image is likely to be right or wrong. But with the number of blanks we can get a sense of whether it is identifiable at all. Margaret noticed a while back that people sometimes say “nothing here” if they aren’t sure about an animal, so the number of “nothing here” votes for an image ultimately classified as an animal also reflects how hard it is. We see that there isn’t a huge difference in the number of “nothing here” answers between images that are right and wrong — but images that experts ultimately said were impossible have much higher average numbers of “nothing here” answers.
So, what does this tell us? Well, we can use these metrics on the entire dataset to target images that are likely to be incorrect. In any given analysis, we might limit our dataset to just those images with evenness below 0.50, or go back through all those images with evenness above 0.50 to see if we can come up with a final answer. We can’t go through the millions of Snapshot Serengeti images on our own, but we can take a second look at a few thousand really tricky ones.
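As a sketch of that kind of triage, with made-up image IDs and evenness values standing in for the real consensus data:

```python
# Made-up (image_id, evenness) pairs standing in for the real
# consensus dataset; 0.50 is the cutoff discussed above.
consensus = [
    ("IMG_0001", 0.05),
    ("IMG_0002", 0.82),
    ("IMG_0003", 0.31),
    ("IMG_0004", 0.67),
]
THRESHOLD = 0.50

trusted = [img for img, e in consensus if e <= THRESHOLD]       # keep as-is
needs_review = [img for img, e in consensus if e > THRESHOLD]   # second look
print(trusted, needs_review)
```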
There are all sorts of cool analyses still to come — which species are the hardest, and what’s most often mistaken for what. So stay tuned!
