Archive | Data Analysis

Getting Good Data, Part II (of many)

Okay, so by now you’ve heard dozens and dozens of times that you guys produce really good data: your aggregated answers are 97% correct overall (see here and here and here). But we also know that not all images are equally easy. More specifically, not all species are equally easy. It’s a lot easier to identify a giraffe or zebra than it is to decide between an aardwolf and striped hyena.

The plot below shows the different error rates for each species. Note that error comes in two forms. You can have a “false negative,” which means you miss a species that is truly there. And you can have a “false positive,” in which you report a species as being there when it really isn’t. Error rates are proportions from 0 to 1.

Species specific error rates.

We calculated this by comparing the consensus data to the gold standard dataset that Margaret collated last year. Note that at the bottom of the chart there are a handful of species that don’t have any values for false negatives. That’s because, for statistical reasons, we could only calculate false negative error rates from completely randomly sampled images, and those species are so rare that they didn’t appear in the gold standard dataset. But for false positives, we could randomly sample images from any consensus classification – so I gathered a bunch of images that had been identified as these rare species and checked them to calculate false positive rates.
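(If you’re curious what that calculation looks like in practice, here is a rough Python sketch, not our actual analysis code and with made-up column names, that computes per-species false negative and false positive rates by comparing consensus IDs against the gold standard.)

```python
# Rough sketch (hypothetical column names): per-species false negative and
# false positive rates from a gold standard vs. the volunteer consensus.
# Both DataFrames are assumed to have "capture_id" and "species" columns.
import pandas as pd

def error_rates(gold: pd.DataFrame, consensus: pd.DataFrame) -> pd.DataFrame:
    gold_sets = gold.groupby("capture_id")["species"].apply(set)
    cons_sets = consensus.groupby("capture_id")["species"].apply(set)
    rows = []
    for sp in sorted(set(gold["species"]) | set(consensus["species"])):
        present = [i for i, s in gold_sets.items() if sp in s]    # truly there
        reported = [i for i, s in cons_sets.items() if sp in s]   # volunteers said it was there
        fn = sum(sp not in cons_sets.get(i, set()) for i in present)   # missed
        fp = sum(sp not in gold_sets.get(i, set()) for i in reported)  # wrongly reported
        rows.append({
            "species": sp,
            "false_negative": fn / len(present) if present else float("nan"),
            "false_positive": fp / len(reported) if reported else float("nan"),
        })
    return pd.DataFrame(rows)
```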

Now, if a species has really low rates of false negatives and really low rates of false positives, then it’s one people are really good at identifying all the time. Note that in general, species have pretty low rates of both types of error. Furthermore, species with lower rates of false negatives have higher rates of false positives. There aren’t really any species with high rates of both types of error. Take rhinos, for example: folks often identify a rhino when it’s not actually there, but never miss a rhino if it is there.

Also: we see that rare species are just generally harder to identify correctly than common species. The plot below shows the same false negative and false positive error rates plotted against the total number of pictures for every species. Even though there is some noise, those lines reflect significant trends: in general, the more pictures of an animal, the more often folks get it right!

Error rates vs. species commonness, measured by the total number of pictures of that species

This makes intuitive sense. It’s really hard to get a good “search image” for something you never see. But also folks are especially excited to see something rare. You can see this if you search the talk pages for “rhino” or “zorilla.” Both of these have high false positive rates, meaning people say it’s a rhino or zorilla when it’s really not. Thus, most of the images that show up tagged as these really rare creatures turn out not to contain them.

But that’s okay for the science. Recall that we can assess how confident we are in an answer based on the evenness score, fraction support, and fraction blanks. Because these critters are so rare, we want to be really sure those IDs are right; and because you agree so strongly on the vast majority of images, it’s actually easy to review any “uncertain” image that’s been ID’d as a rare species.
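For the curious, here is a rough sketch of how those confidence measures can be computed for a single image. This is my own simplified illustration rather than the project’s actual pipeline, and the vote format is hypothetical.

```python
# Simplified illustration (not the real pipeline) of per-image confidence
# metrics: evenness of the votes, support for the consensus species, and the
# fraction of classifiers who called the image blank.
import math
from collections import Counter

def confidence_metrics(votes):
    """votes: list of species names, one per classifier, with "blank" for empty."""
    fraction_blanks = votes.count("blank") / len(votes)
    counts = Counter(v for v in votes if v != "blank")
    n = sum(counts.values())
    if n == 0:
        return {"consensus": "blank", "evenness": 0.0,
                "fraction_support": 0.0, "fraction_blanks": fraction_blanks}
    props = [c / n for c in counts.values()]
    shannon = -sum(p * math.log(p) for p in props)
    # Pielou-style evenness: 0 = everyone agrees, 1 = votes spread evenly
    evenness = shannon / math.log(len(counts)) if len(counts) > 1 else 0.0
    consensus, top = counts.most_common(1)[0]
    return {"consensus": consensus, "evenness": evenness,
            "fraction_support": top / n, "fraction_blanks": fraction_blanks}

# confidence_metrics(["rhino", "rhino", "buffalo", "blank", "rhino"])
# -> consensus "rhino", evenness ~0.81, fraction_support 0.75, fraction_blanks 0.2
```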

Pretty cool, huh?

Season 8 Release!

And now, the moment you’ve all been waiting for… may I present to you:



I’m particularly proud of this, the first season that I’ve helped to bring all the way from the field to your computers. We’ve got a lot of data here, and I can’t wait for you guys to discover a whole host of exciting things in this new season.

This season is accompanied by IMPORTANT changes to our interface!

There are a few more bits of data we think we can pull out of the camera trap photos this time around, in addition to all the great information we already get. One thing we’re particularly interested in is the occurrence of fire. Now, fire is no fun for camera traps (because they tend to melt), but wildfires are incredibly important to the cycle of ecosystem functioning in Serengeti. Burns refresh the soil and encourage new grass growth, which attracts herbivores and may in turn draw in the predators. We have added a fire checkbox for you to tick if things look hot. Now, because we’re looking for things other than just animals, we replaced your option to click on “nothing there” with “no animals visible”, just to avoid confusion.

Some of the more savvy creature-identifiers among you may have noticed that there are a few Serengeti animals that wander into our pictures that we didn’t have options for. For this new season, we’ve added six new animal choices: duiker, steenbok, cattle, bat, insect/spider, and vultures. Keep an eye out for the following:



This season runs all the way from September 2013 until July 2014, when I retrieved the images this summer during my first field season. Our field assistants, Norbert and Daniel, were invaluable (and inhumanly patient) in helping me learn to navigate the plains, ford dry river beds, and avoid, as much as possible, driving the truck into too many holes. Together, we set out new cameras, patched up some holes in our camera trap grid, and spent some amazing nights camped out in the bush.

Once I got the hang of the field, I spent my mornings running around to a subset of the cameras conducting a pilot playback experiment to see if I could artificially “elevate” the predation risk in an area by making it seem as though it were frequented by lions (I’m interested in how the lions’ prey react: whether they change their behaviors in these areas and how long it takes them to go back to normal). I’m more than a bit camera-shy (and put a lot of effort into carefully sneaking up around the cameras’ blind spots), but perhaps you’ll catch a rare glimpse of me waving my bullhorn around blaring lion roars…

Back in the lab, there’s been a multi-continental collaboration to get these data cleaned up and ready for identification. We’ve been making some changes to the way we store our data, and the restructuring, sorting, and preparing process has been possible only through the substantial efforts of Margaret, over here with me in the States, and Ali, all the way across the pond, running things from the Zooniverse itself!

But for now, our hard work on this season is over – it’s your turn! Dig in!

P.S. Our awesome developers have added some fancy code, so the site looks great even on small phone and tablet screens. Check it out!

More results!

As I’m writing up my dissertation (ahh!), I’ve been geeking out with graphs and statistics (and the beloved/hated stats program R). I thought I’d share a cool little tidbit.

Full disclosure: this is just a bit of an expansion on something I posted back in March about how well the camera traps reflect known densities. Basically, as camera traps become more popular, researchers are increasingly looking for simple analytical techniques that allow them to rapidly process data. Using the raw number of photographs or animals counted is pretty straightforward, but it’s risky because not all animals are equally “detectable”: some animals behave in ways that make them more likely to be seen than others. There are more sophisticated methods out there to deal with these detectability issues, and they work really well, but they are complicated and take a long time to work out. So there’s a fair amount of ongoing debate about whether or not raw capture rates should ever be used, even for quick and dirty rapid assessments of an area.

Since the Serengeti has a lot of other long term monitoring, we were able to compare camera trap capture rates (# of photographs weighted by group size) to actual population sizes for 17 different herbivores. Now, it’s not perfect — the “known” population sizes reflect herbivore numbers in the whole park, and we only cover a small fraction of it. But from the graph below, you’ll see we did pretty well.


Actual herbivore densities (as estimated from long-term monitoring) are given on the x-axis, and the # of photographic captures from our camera survey is on the y-axis. Each species is in a different color (migratory animals are in gray-scale). Some of the species had multiple population estimates produced from different monitoring projects — those are represented by all the smaller dots, and connected by a line for each species. We took the average population estimate for each species (bigger dots).

We see a very strong positive relationship between our photos and actual population sizes: we get more photos for species that are more abundant. Which is good! Really good! The dashed line shows the relationship between our capture rates and actual densities for all species. We wanted to make sure, however, that this relationship wasn’t totally dependent on the huge influx of wildebeest and zebra and gazelle — so we ran the same analysis without them. The black line shows that relationship. It’s still there, it’s still strong, and it’s still statistically significant.
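If you want to run this kind of sanity check on your own camera data, a minimal version of the analysis looks something like the sketch below (the actual work was done in R; this Python version uses made-up numbers purely for illustration).

```python
# Illustration only: regress log10(capture rate) on log10(known density),
# with and without the migratory species. All numbers below are made up.
import numpy as np
from scipy import stats

density   = np.array([1_300_000, 200_000, 350_000, 30_000, 40_000, 5_000])  # "known" sizes
captures  = np.array([110_000, 25_000, 30_000, 4_000, 2_500, 300])          # photo counts
migratory = np.array([True, True, True, False, False, False])               # e.g. wildebeest, zebra, gazelle

def fit(x, y):
    slope, intercept, r, p, se = stats.linregress(np.log10(x), np.log10(y))
    return slope, r ** 2, p

print("all species:     slope=%.2f  R2=%.2f  p=%.3g" % fit(density, captures))
print("residents only:  slope=%.2f  R2=%.2f  p=%.3g" % fit(density[~migratory], captures[~migratory]))
```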

Now, the relationship isn’t perfect. Some species fall above the line, and some below the line. For example, reedbuck and topi fall below the line – meaning that given how many topi really live in Serengeti, we should have gotten more pictures. This might be because topi mostly live in the northern and western parts of Serengeti, so we’re just capturing the edge of their range. And reedbuck? This might be a detectability issue — they tend to hide in thickets and so might not pass in front of cameras as often as animals that wander a little more actively.

Ultimately, however, we see that the cameras do a good overall job of catching more photos of more abundant species. Even though it’s not perfect, it seems that raw capture rates give us a pretty good quick look at a system.

Lions and cheetahs and dogs, oh my! (final installment)

I’ve written a handful of posts (here and here and here) about how lions are big and mean and nasty…and about how even though they are nasty enough to keep wild dog populations in check, they don’t seem to be suppressing cheetah numbers.

Well, now that research is officially out! It’s just been accepted by the Journal of Animal Ecology and is available here. Virginia Morrell over at ScienceNews did a nice summary of the story and its conservation implications here.

One dissertation chapter down, just two more to go!




What we’ve seen so far, Part IV

Last week I wrote about using really simple approaches to interpret camera trap data. Doing so makes the cameras a really powerful tool that virtually any research team around the world can use to quickly survey an ecosystem.

Existing monitoring projects in Serengeti give us a really rare opportunity to actually validate our results from Snapshot Serengeti: we can compare what we’re seeing in the cameras to what we see, say, from radio-tracking collared lions, or to the number of buffalo and elephants counted during routine flight surveys.

Ingela scanning for lions from the roof of the car.

One of the things we’ve been hoping to do with the cameras is to use them to understand where species are, and how those distributions change. As you know, I’ve struggled a bit with matching lion photographs to known lion ranging patterns. Lions like shade, and because of that, they are drawn to camera traps on lone, shady trees on the plains from miles and miles away.

But I’ve finally been able to compare camera trap captures to known distributions for other animals. Well, one other animal: giraffes. From 2008-2010, another UMN graduate student, Megan Strauss, studied Serengeti giraffes and recorded where they were. By comparing her data with camera trap data, we can see that the cameras do okay.

The graph below compares camera trap captures to known densities of giraffes and lions. Each circle represents a camera trap; the bigger the circle, the more photos of giraffes (top row) or lions (bottom row). The background colors reflect known relative densities measured from long-term monitoring: green means more giraffes or lions; tan/white means fewer. For giraffes, on the whole, we get more giraffe photos in places that have more giraffes. That’s a good sign. The scatterplot visualizes the map in a different way, showing the number of photos on the y-axis vs. the known relative densities on the x-axis.



What we see is that the cameras work okay for giraffes, but not so well for lions. Again, I suspect that this has a lot to do with the fact that lions are incredibly heat-stressed and actively seek out shade (which they then sleep in for 20 hours!). But lions are pretty unique in their extreme need for shade, so the cameras probably work better for most other species.

We’ve got plans to explore this further. In fact, Season 7 will overlap with a wildebeest study that put GPS collars on a whole bunch of migratory wildebeest. For the first time, we’ll be able to compare really fine scale data on the wildebeest movements to the camera trap photos, and we can test even more precisely just how well the cameras work for tracking large-scale animal movements.  Exciting!

What we’ve seen so far, Part III

Over the last few weeks, I’ve shared some of our preliminary findings from Seasons 1-6 here and here. As we’re still wrapping up the final stages of preparation for Season 7, I thought I’d continue in that vein.

One of the coolest things about camera traps is our ability to monitor many different animal species all at once. This is a big deal. If we want to protect the world around us, we need to understand how it works. But the world is incredibly complex, and the dynamics of natural systems are driven by many different species interacting with many others. And since some of these critters roam for hundreds or thousands of miles, studying them is really hard.

I have for a while now been really excited about the ability of camera traps to help scientists study all of these different species all at once. But cameras are tricky, because turning those photographs into actual data on species isn’t always straightforward. Some species, for example, seem to really like cameras,

so we see them more often than we really should — meaning we might think there are more of that critter than there really are.  There are statistical approaches to deal with this kind of bias in the photos, but these statistics are really complex and time consuming.

This has actually sparked a bit of a debate among researchers who use camera traps. Researchers and conservationists have begun to advocate camera traps as a cost-effective, efficient, and accessible way to quickly survey many understudied, threatened ecosystems around the world. They argue that basic counting of photographs of different species is okay as a first pass to understand what animals are there and how many of them there are. And that requiring the use of the really complex stats might hinder our ability to quickly survey threatened ecosystems.

So, what do we do?  Are these simple counts of photographs actually any good? Or do we need to spend months turning them into more accurate numbers?

Snapshot Serengeti is really lucky in that many animals have been studied in Serengeti over the years, meaning that, unlike many camera trap surveys, we can actually check our data against a big pile of existing knowledge. In doing so, we can figure out what sorts of things cameras are good at and what they’re not.

Comparing the raw photographic capture rates of major Serengeti herbivores to their population sizes as estimated in the early 2000s, we see that the cameras do an okay job of reflecting the relative abundance of different species. The scatterplot below shows the population sizes of 14 major herbivores estimated from Serengeti monitoring projects on the x-axis, and camera trap photograph rates of those herbivores on the y-axis. (We take the logarithm of the value for statistical reasons.) There really are more wildebeest than zebra than buffalo than eland, and we see these patterns in the number of photographs taken.


Like we saw the other week, the monthly captures show that we can get a decent sense of how these relative abundances change through time.


So, by comparing the camera trap photos to known data, we see that they do a pretty good job of sketching out some basics about the animals. But the relationship also isn’t perfect.

So, in the end, I think that our Snapshot Serengeti data suggests that cameras are a fantastic tool and that raw photographic capture rates can be used to quickly develop a rough understanding of new places, especially when researchers need to move quickly.  But to actually produce specific numbers, say, how many buffalo per square-km there are, we need to dive in to the more complicated statistics. And that’s okay.

Data from Afar

Earth, rendered from MODIS data

Look at this picture of the world – it’s blue, it’s green, it’s dynamic. It is covered in swirling clouds beneath which we can see hints of landforms, their shapes and their colors. Satellites tirelessly orbiting the Earth gathered the information to construct this image. And every pixel of this awe-inspiring rendition of our planetary home is packed with data on geology, topography, climatology, and broad-scale biological processes.

I still find it funny that I can sit in my office and watch weather patterns in Asia, cloud formation over the Pacific, or even examine the contours of the moon in minute detail, thanks to remote sensing programs. Not that lunar geomorphology is particularly pertinent to lion behavior, at least, in any way we’ve discovered so far. Still, an incredible amount of information on the Serengeti landscape can be collected by remote sensing and incorporated into our research. “Remote sensing” simply refers to gathering information from an object without actually making physical contact with the object itself. Primarily, this involves the use of aerial platforms (some kind of satellite or aircraft) carrying sensor technologies that detect and classify objects by means of propagated signals. Most people are passingly familiar with RADAR (“radio detection and ranging”) and SONAR (“sound navigation and ranging”), both examples of remote sensing technologies where radio waves and sound, respectively, are emitted and information retrieved from the signal bouncing back off of other objects. The broad-scale biotic or abiotic environmental information gathered can then be used in our analyses to help predict and explain patterns of interest. People are using remote sensing to monitor deforestation in the Amazon Basin, glacial features in Arctic and Antarctic regions, and processes in coastal and deep oceans. Here are brief vignettes of several kinds of remote sensing data we draw upon for our own biological studies.

Herbivore distributions overlaid on NDVI readings

NDVI: Normalized Difference Vegetation Index

NDVI is collected using the National Oceanic and Atmospheric Administration (NOAA)’s Advanced Very High Resolution Radiometer and is an assessment of whether a given bit of landscape contains live green vegetation or not. And yes, it’s far more complicated than simply picking out the color “green”. In live plants, chlorophyll in the leaves absorbs solar radiation in the visible light spectrum as a source of energy for the process of photosynthesis. Light in the near-infrared spectral region, however, is of little use for photosynthesis, and if the plant were to absorb these wavelengths it would simply overheat and become damaged, so these wavelengths are reflected away. This means that if you look at the spectral readings from vegetation, live green plants appear relatively dark in the visible light spectral area and bright in the near-infrared. You can exploit the strong differences in plant reflectance to determine their distribution in satellite images. Clever, right? NDVI readings are normalized on a scale of -1 to 1, where negative values correspond to water, values closer to zero indicate barren areas of tundra, desert, or bare rock, and increasingly positive values represent increasingly vegetated areas. As you can see in the image above, we have NDVI readings for our study sites, which can be used to examine temporal and spatial patterns of vegetation cover, biomass, or productivity — factors important in driving herbivore distribution patterns.
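The index itself is just a normalized difference of the two bands. Here is a quick generic illustration (not the actual AVHRR processing chain):

```python
# NDVI = (NIR - Red) / (NIR + Red): values near 1 for dense green vegetation,
# near 0 for bare ground, and negative for water.
import numpy as np

def ndvi(nir, red):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # tiny epsilon avoids division by zero

# A healthy green pixel reflects strongly in the near-infrared:
# ndvi([0.50], [0.08]) -> about 0.72
```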

Wildfire occurrence data gathered from MODIS satellites

MODIS: Moderate-resolution Imaging Spectroradiometer

The MODIS monitoring system is being carried in orbit aboard a pair of satellites, the Terra and Aqua spacecraft, launched by NASA in the early 2000s. The two instruments image the entire surface of the Earth every 1 to 2 days, collecting measurements on a range of spectral bands and spatial resolutions. Their readings provide information on large-scale global processes, including pretty much anything that can occur in the oceans, on land, or throughout the lower atmosphere. Many of the beautiful Earth images, such as the one at the head of this post, are constructed using MODIS data. We hope to use MODIS information for the detection and mapping of wildfires, which impact organisms at every level of the Serengeti food web.

LiDAR: Apparently, a common misconception is that “LiDAR” is an acronym for Light Detection and Ranging, while the official Oxford English Dictionary (the be-all-end-all for etymology) maintains that the word is merely a combination of light and radar. Either way, it’s less of a mouthful than the other two techniques just discussed!

LiDAR is quite well-known for its applications in homing missiles and weapons ranging, and was used in the 1971 Apollo 15 mission to map the surface of the moon. We also use this for biology, I promise. What LiDAR does, and does far better than RADAR technology, is to calculate distances by illuminating a target with a laser and measuring the amount of time it takes for the reflected signal to return. High resolution maps can be produced detailing heights of objects and structural features of any material that can reflect the laser, including metallic and non-metallic objects, rocks, rain, clouds, and even, get this, single molecules. There are two types of LiDAR: topographic, for mapping land, and bathymetric, which can penetrate water. To acquire these types of data for your site, you load up your sensors into an airplane, helicopter, or drone and use these aerial platforms to cover broad areas of land. I first became aware of LiDAR from a study that used this technology in South Africa to map lion habitat and correlate landscape features with hunting success. I’ve also seen it used to map habitat for wolves and elk, determine canopy structure, and, interestingly enough, to remotely distinguish between different types of fish (weird, and also really neat). Now we don’t have LiDAR information for the Serengeti, so keep an eye out for anyone who might be able to lend us a couple of small aircraft and some very expensive sensing equipment!
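The ranging calculation at the heart of LiDAR is wonderfully simple: distance is just the round-trip travel time of the pulse, times the speed of light, divided by two. A toy example:

```python
# Toy time-of-flight calculation behind LiDAR ranging (RADAR and SONAR work
# the same way, just with different signals and propagation speeds).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(round_trip_seconds):
    """Distance to the target: the pulse travels out and back, so divide by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return 10 microseconds after the pulse puts the target about 1.5 km away:
# lidar_range(10e-6) -> roughly 1499 m
```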

What we’ve seen so far, cont’d.

Playing with data is one of the many things I love about research. Yes, it is super nerdy. I embrace that.

Last week I shared with you the various critters we’re getting to *see* in the Snapshot Serengeti data. Over 100,000 wildebeest photos! Over 4,000 lions! And the occasional really cool rarity like pangolins


and rhinos.


But the photographs carry a lot more information than just simply what species was caught in the frame. For example, because the photos all have times recorded, we can see how the Serengeti changes through time.

This graph shows the number of daily pictures of wildebeest and buffalo, and how the daily capture rates change through the seasons. Each set of bars represents a different month, starting in July 2010. Wildebeest are in dark green, buffalo in light green. The y-axis is on a square-root scale, meaning that the top is kind of squished: the distance from 30 to 40 looks smaller than the distance from 0 to 10. Otherwise, we’d either have to make the graph very very tall, or wouldn’t really be able to see the buffalo counts at all.
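If you’d like to recreate a plot like this yourself, here is a minimal matplotlib sketch with made-up counts; the square-root axis is the important part.

```python
# Minimal sketch of a monthly-captures bar chart with a square-root y-axis.
# The counts below are made up purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

months     = ["Jul", "Aug", "Sep", "Oct", "Nov", "Dec", "Jan", "Feb"]
wildebeest = np.array([20, 15, 10, 30, 250, 900, 1200, 400])
buffalo    = np.array([12, 9, 14, 11, 10, 13, 8, 15])

x = np.arange(len(months))
width = 0.4
fig, ax = plt.subplots()
ax.bar(x - width / 2, wildebeest, width, label="wildebeest", color="darkgreen")
ax.bar(x + width / 2, buffalo, width, label="buffalo", color="yellowgreen")
ax.set_yscale("function", functions=(np.sqrt, np.square))  # square-root scale
ax.set_ylim(0, wildebeest.max() * 1.1)
ax.set_xticks(x)
ax.set_xticklabels(months)
ax.set_ylabel("captures per month")
ax.legend()
plt.show()
```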


Buffalo are captured more-or-less evenly across the different months. But the wildebeest show vast spikes in capture rates during the wet season. These spikes in numbers coincide with the migration, when the vast herds of wildebeest come sweeping through the study area.

Now, the number of photos doesn’t translate directly into the number of wildebeest in the study area, and these spikes aren’t changes in population size but rather changes in the distribution of the wildebeest. But it’s pretty cool that with something as simple as just the number of photographs, we can see these huge changes that accurately reflect what’s going on in the system.

What we’ve seen so far…

As we prepare to launch Season 7 (yes! it’s coming soon! stay tuned!), I thought I’d share with you some things we’ve seen in seasons 1-6.

Snapshot Serengeti is over a year old now, but the camera survey itself has been going on since 2010; you guys have helped us process three years of pictures to date!

First, of the >1.2 million capture events you’ve looked through, about two-thirds were empty. That’s a lot of pictures of grass!


But about 330,000 photos are of the wildlife we’re trying to study. A *lot* of those photos are of wildebeest. From all the seasons so far, wildebeest made up just over 100,000 photos! That’s nearly a third of all non-empty images altogether.

We also get a lot of zebra and gazelle – both of which hang out with the wildebeest as they migrate across the study area. We also see a lot of buffalo, hartebeest, and warthog — all of which lions love to eat.


We also get a surprising number of photos of the large carnivores. Nearly 5,000 hyena photos! And over 4,000 lion photos! (Granted, for lions, many of those photos are of them just lyin’ around.)

Curious what else? Check out the full breakdown below…


The joys of poster presentation

As Meredith mentioned last week, she, Craig, and I are counting down the days until we head out to sunny California for an academic conference. I am really looking forward to above-zero temperatures. I am rather less enthused about the prospect of presenting a poster. Yes, it is good networking. Yes, I get to personally advertise results from a study that is currently in review at a journal (and hopefully will be published “soon”). Yes, I get to engage with brilliant minds whose research I have read forwards, backwards, and sideways. Despite all of that, I’m still not excited.

Poster-ing is perhaps the most awkward component of an academic conference. Academics are not known for their mingling skills. Add to that the inherent awkwardness of having to lurk like an ambush predator by your poster while fellow ever-so-socially-savvy scientists trudge through the narrow aisles, trying to sneak non-committal glances at figures and headings without pausing long enough for the poster-presenter to pounce with their “poster spiel.” For the browsers who do stop and study your poster, you have to stand there pretending that you aren’t breathing down their necks while they try to read, until they decide that a) this is really interesting and they want to talk to you, or b) phew, that was close, they almost got roped into having to talk to you about something they know/care nothing about. Most conferences have figured out that poster sessions are a lot less painful if beer is served.

Working with big, fuzzy animals means that I usually get a pretty decent-sized crowd at my posters. About half of those people want to ask me about job opportunities or to tell me about the time that they worked in a wildlife sanctuary and got to hug a lion and do I get to hug lions when I’m working? I once had a Pleistocene re-wilding advocate approach me for advice on – no joke – introducing African lions into suburban America. But they aren’t all bad. I’ve met a number of people in poster sessions who have gone on to become respected colleagues and casual friends. I’ve met faculty members whose labs I am now applying to for post-doctoral research positions. And I’ve learned how to condense a 20-page paper into a 2 minute monologue — which is a remarkably handy skill to have.

As much as I gripe and grumble about poster sessions, I know they’re good for me. At least with this one, I’ll be close to the beach!!

Below is a copy of my (draft) poster for the upcoming Gordon Research Conference that a chunk of the Snapshot Serengeti team will be at. It’s mostly on data outside of Snapshot Serengeti, but you might find it interesting nonetheless! (Minor suggestions and typo corrections welcome! I know I still have to add a legend or two…)

At 4 feet by 4 feet, this thing is a beast!

