As I’m writing up my dissertation (ahh!), I’ve been geeking out with graphs and statistics (and the beloved/hated stats program R). I thought I’d share a cool little tidbit.
Full disclosure: this is just a bit of an expansion on something I posted back in March about how well the camera traps reflect known densities. Basically, as camera traps become more popular, researchers are increasingly looking for simple analytical techniques that allow them to rapidly process data. Using the raw number of photographs or animals counted is pretty straightforward, but risky, because not all animals are equally “detectable”: some animals behave in ways that make them more likely to be seen than others. There are more sophisticated methods out there to deal with these detectability issues, and they work really well — but they are complex and take a long time to work out. So there’s a fair amount of ongoing debate about whether raw capture rates should ever be used, even for quick and dirty rapid assessments of an area.
Since the Serengeti has a lot of other long-term monitoring, we were able to compare camera trap capture rates (# of photographs weighted by group size) to actual population sizes for 17 different herbivores. Now, it’s not perfect — the “known” population sizes reflect herbivore numbers in the whole park, and our cameras cover only a small fraction of it. But from the graph below, you’ll see we did pretty well.
Actual herbivore densities (as estimated from long-term monitoring) are given on the x-axis, and the # photographic captures from our camera survey are on the y-axis. Each species is in a different color (migratory animals are in gray-scale). Some of the species had multiple population estimates produced from different monitoring projects — those are represented by all the smaller dots, and connected by a line for each species. We took the average population estimate for each species (bigger dots).
We see a very strong positive relationship between our photos and actual population sizes: we get more photos for species that are more abundant. Which is good! Really good! The dashed line shows the relationship between our capture rates and actual densities for all species. We wanted to make sure, however, that this relationship wasn’t totally dependent on the huge influx of wildebeest and zebra and gazelle — so we ran the same analysis without them. The black line shows that relationship. It’s still there, it’s still strong, and it’s still statistically significant.
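(For the R-curious: here’s roughly how you’d set up that comparison. This is a sketch, not our actual analysis code, and the file and column names are stand-ins I made up; the logic is to fit the regression on log scales, then refit without the migratory species.)

```r
# Assumed (made-up) file and columns: "density" = long-term population
# estimate, "captures" = group-size-weighted photo count, "migratory" =
# TRUE for wildebeest, zebra, and gazelle.
herb <- read.csv("herbivore_captures.csv")

# Fit on log scales, once with everything and once without the migrants
fit_all      <- lm(log(captures) ~ log(density), data = herb)
fit_resident <- lm(log(captures) ~ log(density), data = subset(herb, !migratory))

plot(log(captures) ~ log(density), data = herb,
     xlab = "log(known density)", ylab = "log(photo captures)")
abline(fit_all, lty = 2)   # dashed line: all species
abline(fit_resident)       # solid line: migrants excluded

summary(fit_resident)      # still strong and significant without them?
```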
Now, the relationship isn’t perfect. Some species fall above the line, and some below the line. For example, reedbuck and topi fall below the line – meaning that given how many topi really live in Serengeti, we should have gotten more pictures. This might be because topi mostly live in the northern and western parts of Serengeti, so we’re just capturing the edge of their range. And reedbuck? This might be a detectability issue — they tend to hide in thickets and so might not pass in front of cameras as often as animals that wander a little more actively.
Ultimately, however, we see that the cameras do a good overall job of catching more photos of more abundant species. Even though it’s not perfect, it seems that raw capture rates give us a pretty good quick look at a system.
I’ve written a handful of posts (here and here and here) about how lions are big and mean and nasty…and about how even though they are nasty enough to keep wild dog populations in check, they don’t seem to be suppressing cheetah numbers.
Well, now that research is officially out! It’s just been accepted by the Journal of Animal Ecology and is available here. Virginia Morrell over at ScienceNews did a nice summary of the story and its conservation implications here.
One dissertation chapter down, just two more to go!
Last week I wrote about using really simple approaches to interpret camera trap data. Doing so makes the cameras a really powerful tool that virtually any research team around the world can use to quickly survey an ecosystem.
Existing monitoring projects in Serengeti give us a really rare opportunity to actually validate our results from Snapshot Serengeti: we can compare what we’re seeing in the cameras to what we see, say, from radio-tracking collared lions, or to the number of buffalo and elephants counted during routine flight surveys.
One of the things we’ve been hoping to do with the cameras is to use them to understand where species are, and how those distributions change. As you know, I’ve struggled a bit with matching lion photographs to known lion ranging patterns. Lions like shade, and because of that, they are drawn to camera traps on lone, shady trees on the plains from miles and miles away.
But I’ve finally been able to compare camera trap captures to known distributions for other animals. Well, one other animal: giraffes. From 2008-2010, another UMN graduate student, Megan Strauss, studied Serengeti giraffes and recorded where they were. By comparing her data with camera trap data, we can see that the cameras do okay.
The graph below compares camera trap captures to known densities of giraffes and lions. Each circle represents a camera trap; the bigger the circle, the more photos of giraffes (top row) or lions (bottom row). The background colors reflect known relative densities measured from long-term monitoring: green means more giraffes or lions; tan/white means fewer. For giraffes, on the whole, we get more giraffe photos in places that have more giraffes. That’s a good sign. The scatterplot visualizes the map in a different way, showing the number of photos on the y-axis vs. the known relative densities on the x-axis.
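(If you’re curious how a figure like this gets made, here’s a rough R sketch. The per-camera table and its column names are invented for illustration, not our real data files.)

```r
# Invented per-camera table: coordinates for each trap, giraffe photo
# counts, and the known relative density at that spot from monitoring data
cams <- read.csv("camera_summaries.csv")

# Bubble map: circle size scales with the number of giraffe photos
# (sqrt so that circle *area* tracks the count)
symbols(cams$easting, cams$northing,
        circles = sqrt(cams$giraffe_photos), inches = 0.15,
        xlab = "Easting", ylab = "Northing")

# Companion scatterplot: photos per camera vs. known relative density
plot(giraffe_photos ~ known_density, data = cams,
     xlab = "Known relative density", ylab = "# giraffe photos")
```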
What we see is that the cameras work okay for giraffes, but not so well for lions. Again, I suspect that this has a lot to do with the fact that lions are incredibly heat stressed, and actively seek out shade (which they then sleep in for 20 hours!). But lions are pretty unique in their extreme need for shade, so the cameras probably work better for most other species; seeing them do well for giraffes is a good sign.
We’ve got plans to explore this further. In fact, Season 7 will overlap with a wildebeest study that put GPS collars on a whole bunch of migratory wildebeest. For the first time, we’ll be able to compare really fine scale data on the wildebeest movements to the camera trap photos, and we can test even more precisely just how well the cameras work for tracking large-scale animal movements. Exciting!
Over the last few weeks, I’ve shared some of our preliminary findings from Seasons 1-6 here and here. As we’re still wrapping up the final stages of preparation for Season 7, I thought I’d continue in that vein.
One of the coolest things about camera traps is our ability to monitor many different animal species all at once. This is a big deal. If we want to protect the world around us, we need to understand how it works. But the world is incredibly complex, and the dynamics of natural systems are driven by many different species interacting with many others. And since some of these critters roam for hundreds or thousands of miles, studying them is really hard.
I have for a while now been really excited about the ability of camera traps to help scientists study all of these different species at once. But cameras are tricky, because turning those photographs into actual data on species isn’t always straightforward. Some species, for example, seem to really like cameras, so we see them more often than we really should — meaning we might think there are more of that critter than there really are. There are statistical approaches to deal with this kind of bias in the photos, but those statistics are really complex and time consuming.
This has actually sparked a bit of a debate among researchers who use camera traps. Researchers and conservationists have begun to advocate camera traps as a cost-effective, efficient, and accessible way to quickly survey many understudied, threatened ecosystems around the world. They argue that a basic count of photographs of different species is okay as a first pass to understand what animals are there and how many of them there are, and that requiring the really complex stats might hinder our ability to quickly survey threatened ecosystems.
So, what do we do? Are these simple counts of photographs actually any good? Or do we need to spend months turning them into more accurate numbers?
Snapshot Serengeti is really lucky in that many animals have been studied in Serengeti over the years. That means that, unlike many camera trap surveys, we can actually check our data against a big pile of existing knowledge. In doing so, we can figure out what sorts of things cameras are good at and what they’re not.
Comparing the raw photographic capture rates of major Serengeti herbivores to their population sizes as estimated in the early 2000s, we see that the cameras do an okay job of reflecting the relative abundance of different species. The scatterplot below shows the population sizes of 14 major herbivores estimated from Serengeti monitoring projects on the x-axis, and camera trap photograph rates of those herbivores on the y-axis. (We take the logarithm of the value for statistical reasons.) There are really more wildebeest than zebra than buffalo than eland, and we see these patterns in the number of photographs taken.
As we saw the other week, looking at monthly captures shows that we can get a decent sense of how these relative abundances change through time.
So, by comparing the camera trap photos to known data, we see that they do a pretty good job of sketching out some basics about the animals. But the relationship also isn’t perfect.
So, in the end, I think that our Snapshot Serengeti data suggests that cameras are a fantastic tool and that raw photographic capture rates can be used to quickly develop a rough understanding of new places, especially when researchers need to move quickly. But to actually produce specific numbers, say, how many buffalo per square kilometer there are, we need to dive into the more complicated statistics. And that’s okay.
Look at this picture of the world – it’s blue, it’s green, it’s dynamic. It is covered in swirling clouds beneath which we can see hints of landforms, their shapes and their colors. Satellites tirelessly orbiting the Earth gathered the information to construct this image. And every pixel of this awe-inspiring rendition of our planetary home is packed with data on geology, topography, climatology, and broad-scale biological processes.
I still find it funny that I can sit in my office and watch weather patterns in Asia, cloud formation over the Pacific, or even examine the contours of the moon in minute detail, thanks to remote sensing programs. Not that lunar geomorphology is particularly pertinent to lion behavior, at least, in any way we’ve discovered so far. Still, an incredible amount of information on the Serengeti landscape can be collected by remote sensing and incorporated into our research.

“Remote sensing” simply refers to gathering information from an object without actually making physical contact with the object itself. Primarily, this involves the use of aerial platforms (some kind of satellite or aircraft) carrying sensor technologies that detect and classify objects by means of propagated signals. Most people are passingly familiar with RADAR (“radio detection and ranging”) and SONAR (“sound navigation and ranging”), both examples of remote sensing technologies where radio waves and sound, respectively, are emitted and information is retrieved from the signal bouncing back off of other objects.

The broad-scale biotic or abiotic environmental information gathered can then be used in our analyses to help predict and explain patterns of interest. People are using remote sensing to monitor deforestation in the Amazon Basin, glacial features in Arctic and Antarctic regions, and processes in coastal and deep oceans. Here are brief vignettes of several kinds of remote sensing data we draw upon for our own biological studies.
NDVI: Normalized Difference Vegetation Index
NDVI is collected using the National Oceanic and Atmospheric Administration (NOAA)’s Advanced Very High Resolution Radiometer and is an assessment of whether a bit of landscape in question contains live green vegetation or not. And yes, it’s far more complicated than simply picking out the color “green”. In live plants, chlorophyll in the leaves absorbs solar radiation in the visible light spectrum as a source of energy for the process of photosynthesis. Light in the near-infrared spectral region, however, can’t be used for photosynthesis, and if the plant were to absorb these wavelengths it would simply overheat and become damaged. So these wavelengths are reflected away. This means that if you look at the spectral readings from vegetation, live green plants appear relatively dark in the visible light region and bright in the near-infrared. You can exploit this strong difference in plant reflectance to determine vegetation distribution in satellite images. Clever, right? NDVI readings are normalized on a scale of -1 to 1, where negative values correspond to water, values near zero indicate barren areas of tundra, desert, or bare rock, and increasingly positive values represent increasingly vegetated areas. As you can see in the image above, we have NDVI readings for our study sites which can be used to examine temporal and spatial patterns of vegetation cover, biomass, or productivity — factors important in driving herbivore distribution patterns.
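If you want to see just how simple the underlying arithmetic is, here’s a toy calculation in R. NDVI is just the normalized difference of the near-infrared and red bands; the reflectance values below are invented for illustration.

```r
# NDVI = (NIR - red) / (NIR + red). Toy reflectance values for a single
# healthy-vegetation pixel (numbers invented for illustration):
red  <- 0.08                      # chlorophyll absorbs most visible red
nir  <- 0.50                      # leaves reflect near-infrared strongly
ndvi <- (nir - red) / (nir + red)
ndvi                              # ~0.72: dense green vegetation

# On real imagery you'd do the same arithmetic on entire rasters of
# red and near-infrared reflectance, pixel by pixel.
```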
MODIS: Moderate-resolution Imaging Spectroradiometer
The MODIS monitoring system is carried in orbit aboard a pair of satellites, the Terra and Aqua spacecraft, launched by NASA in the early 2000s. The two instruments image the entire surface of the Earth every 1 to 2 days, collecting measurements across a range of spectral bands and spatial resolutions. Their readings provide information on large-scale global processes, including pretty much anything that occurs in the oceans, on land, or throughout the lower atmosphere. Many of the beautiful Earth images, such as the one at the head of this post, are constructed using MODIS data. We hope to use MODIS information for the detection and mapping of wildfires, which impact organisms at every level of the Serengeti food web.
LiDAR: Apparently, it’s a common misconception that “LiDAR” is an acronym for Light Detection and Ranging; the official Oxford English Dictionary (the be-all-end-all for etymology) maintains that the word is merely a combination of light and radar. Either way, it’s less of a mouthful than the other two techniques just discussed!
LiDAR is quite well-known for its applications in homing missiles and weapons ranging, and was used in the 1971 Apollo 15 mission to map the surface of the moon. We also use this for biology, I promise. What LiDAR does, and does far better than RADAR technology, is calculate distances by illuminating a target with a laser and measuring the amount of time it takes for the reflected signal to return. High-resolution maps can be produced detailing heights of objects and structural features of any material that can reflect the laser, including metallic and non-metallic objects, rocks, rain, clouds, and even, get this, single molecules. There are two types of LiDAR: topographic, for mapping land, and bathymetric, which can penetrate water. To acquire these types of data for your site, you load up your sensors into an airplane, helicopter, or drone and use these aerial platforms to cover broad areas of land. I first became aware of LiDAR from a study that used this technology in South Africa to map lion habitat and correlate landscape features with hunting success. I’ve also seen it used to map habitat for wolves and elk, determine canopy structure, and, interestingly enough, to remotely distinguish between different types of fish (weird, and also really neat). Now, we don’t have LiDAR information for the Serengeti, so keep an eye out for anyone who might be able to lend us a couple of small aircraft and some very expensive sensing equipment!
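The ranging math itself is delightfully simple: the laser pulse travels out and back, so you take the speed of light times the travel time and halve it. A toy R calculation, with a made-up return time:

```r
# distance = (speed of light * round-trip time) / 2
c_m_per_s  <- 299792458      # speed of light, m/s
t_s        <- 6.7e-7         # invented round-trip time: 670 nanoseconds
distance_m <- c_m_per_s * t_s / 2
distance_m                   # ~100 meters to the target
```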
Playing with data is one of the many things I love about research. Yes, it is super nerdy. I embrace that.
Last week I shared with you the various critters we’re getting to *see* in the Snapshot Serengeti data. Over 100,000 wildebeest photos! Over 4,000 lions! And the occasional really cool rarity, like pangolins.
But the photographs carry a lot more information than just simply what species was caught in the frame. For example, because the photos all have times recorded, we can see how the Serengeti changes through time.
This graph shows the number of daily pictures of wildebeest and buffalo, and how the daily capture rates change through the seasons. Each set of bars represents a different month, starting in July 2010. Wildebeest are in dark green, buffalo in light green. The y-axis is on a square-root scale, meaning that the top is kind of squished: the distance on the axis from 30 to 40 is smaller than from 0 to 10. Otherwise, we’d either have to make the graph very very tall, or wouldn’t really be able to see the buffalo counts at all.
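(For fellow R nerds: this kind of plot is easy to mock up with ggplot2. The numbers below are invented, just to demonstrate the square-root axis trick.)

```r
library(ggplot2)

# Invented counts for six months, just to demonstrate the axis trick
counts <- data.frame(
  month   = factor(rep(month.abb[1:6], each = 2), levels = month.abb[1:6]),
  species = rep(c("wildebeest", "buffalo"), times = 6),
  photos  = c(900, 12, 1400, 15, 300, 10, 40, 14, 25, 11, 30, 13)
)

ggplot(counts, aes(x = month, y = photos, fill = species)) +
  geom_col(position = "dodge") +
  scale_y_sqrt() +   # square-root axis keeps the buffalo bars visible
  labs(x = "Month", y = "Daily photo captures")
```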
Buffalo are captured more-or-less evenly across the different months. But the wildebeest show vast spikes in capture rates during the wet season. These spikes in numbers coincide with the migration, when the vast herds of wildebeest come sweeping through the study area.
Now, the number of photos doesn’t directly translate into the number of wildebeest in the study area, and these spikes aren’t changes in population size, but rather changes in the wildebeest’s distribution. But it’s pretty cool that with something as simple as the number of photographs, we can see these huge changes that accurately reflect what’s going on in the system.
As we prepare to launch Season 7 (yes! it’s coming soon! stay tuned!), I thought I’d share with you some things we’ve seen in seasons 1-6.
Snapshot Serengeti is over a year old now, but the camera survey itself has been going on since 2010; you guys have helped us process three years of pictures to date!
First, of the >1.2 million capture events you’ve looked through, about two-thirds were empty. That’s a lot of pictures of grass!
But about 330,000 photos are of the wildlife we’re trying to study. A *lot* of those photos are of wildebeest. From all the seasons so far, wildebeest made up just over 100,000 photos! That’s nearly a third of all non-empty images altogether.
We also get a lot of zebra and gazelle – both of which hang out with the wildebeest as they migrate across the study area. We also see a lot of buffalo, hartebeest, and warthog — all of which lions love to eat.
We also get a surprising number of photos of the large carnivores. Nearly 5,000 hyena photos! And over 4,000 lion photos! (Granted, for lions, many of those photos are of them just lyin’ around.)
Curious what else? Check out the full breakdown below…
As Meredith mentioned last week, she, Craig, and I are counting down the days until we head out to sunny California for an academic conference. I am really looking forward to above-zero temperatures. I am rather less enthused about the prospect of presenting a poster. Yes, it is good networking. Yes, I get to personally advertise results from a study that is currently in review at a journal (and hopefully will be published “soon”). Yes, I get to engage with brilliant minds whose research I have read forwards, backwards, and sideways. Despite all of that, I’m still not excited.
Poster-ing is perhaps the most awkward component of an academic conference. Academics are not known for their mingling skills. Add to that the inherent awkwardness of having to lurk like an ambush predator by your poster while fellow ever-so-socially-savvy scientists trudge through the narrow aisles, trying to sneak non-committal glances at figures and headings without pausing long enough for the poster-presenter to pounce with their “poster spiel.” For the browsers who do stop and study your poster, you have to stand there pretending you aren’t breathing down their necks while they read, until they decide that a) this is really interesting and they want to talk to you, or b) phew, that was close, they almost got roped into talking to you about something they know/care nothing about. Most conferences have figured out that poster sessions are a lot less painful if beer is served.
Working with big, fuzzy animals means that I usually get a pretty decent-sized crowd at my posters. About half of those people want to ask me about job opportunities or to tell me about the time that they worked in a wildlife sanctuary and got to hug a lion and do I get to hug lions when I’m working? I once had a Pleistocene rewilding advocate approach me for advice on – no joke – introducing African lions into suburban America. But they aren’t all bad. I’ve met a number of people in poster sessions who have gone on to become respected colleagues and casual friends. I’ve met faculty members whose labs I am now applying to for post-doctoral research positions. And I’ve learned how to condense a 20-page paper into a 2-minute monologue — which is a remarkably handy skill to have.
As much as I gripe and grumble about poster sessions, I know they’re good for me. At least with this one, I’ll be close to the beach!!
Below is a copy of my (draft) poster for the upcoming Gordon Research Conference that a chunk of the Snapshot Serengeti team will be at. It’s mostly on data outside of Snapshot Serengeti, but you might find it interesting nonetheless! (Minor suggestions and typo corrections welcome! I know I still have to add a legend or two…)
I have successfully survived the trials and tribulations of my first semester of graduate school! Huzzah! That being said, a student’s work is never done – you can still find me sitting in my office, plugging away at data and up to my eyeballs in pdfs and textbooks. Although it certainly helps when I know that, in a few short weeks, I’ll be showing off my preliminary data on a nice warm beach in California. Well, the Gordon Research Conference that Ali and I will both be attending will probably not be held directly ON the beach, but it’s a nice fantasy to have when your fingers are freezing off in Minnesota.
The theme of the conference is predator-prey interactions, but approached from a very interdisciplinary standpoint. Topics range from genes and the causes of childhood anxiety up through ecosystems, evolution, and Craig’s presentation on man-eating lions. It’s been over a year since I last attended a conference, and it’s going to be intimidating and inspiring to meet the Who’s Who in our field. All the papers piled up around my desk, underlined and annotated and thoroughly mulled over? Hopefully I’ll have a chance to chat with their authors in person and get these scientists’ input on the direction of my current research ideas.
My particular focus, predator intimidation (“fear”), is delightfully billed in the conference descriptions as “the persistent threat of immediate violent death.” The blurb continues on to state that “most wild animals are in peril every moment of every day of being torn limb from limb by any number of predators.” Language far more colorful than I can get away with in most of my proposals, but certainly right on point! There will be talks on fear’s impacts on evolutionary ecology and on population- and ecosystem-level processes, as well as on the effect of predators as stressors, all of which I am particularly keen to attend.
As excited as I am, I’m honestly a bit frantic trying to synthesize our Snapshot data to produce distribution graphs and other basic preliminary results. A few months ago, I couldn’t have programmed my own name into “R” – the bread-and-butter statistical program beloved (well, it’s a bittersweet relationship) by biologists. With long evenings in front of the computer and by the generous grace and goodwill of Ali, I’ve been making progress. Ideally, I would like to show up to this conference with not only an outline of my research to be picked apart by the aforementioned greatest minds in the field, but also with maps of the monthly distributions of several herbivore species in relation to the changing vegetative landscape and predator movements. No breakthroughs so far; I foresee a great deal of coffee in my future between now and January…
P.S. Congrats to Margaret for defending her PhD!!!
I’ve got to echo Margaret’s apology for our sporadic blog posts lately. Things have been a bit hectic for all of us — Dr (!!!) Margaret Kosmala is finishing up her dissertation revisions and moving on to an exciting post-doctoral position at Harvard, our latest addition, Meredith, is finishing up her first semester (finals! ah!), and I’m knee deep in analyses (and snow!).
So, please bear with us through the craziness and rest assured that we’ll pick up the blog posts again after the holidays. In the meanwhile, I’ll show you something that got me really excited last week. (Warning: this involves graphs, not cute pictures.)
Last week, I was summarizing some of the Snapshot Serengeti data to present to my committee members. (My committee is the group of faculty members that eventually decide whether my research warrants a PhD, so holding these meetings is always a little nerve-wracking.) As a quick summary, I made this graph of the total number of photographs of the top carnivores. Note that I’m currently only working with data from Seasons 1-3, since we’re having trouble with the timestamps from Seasons 4-6, so the numbers below are about half of what I’ll eventually be able to analyze.
The height of each bar represents the total number of pictures for each species. The color of the bar reflects whether or not a sighting is “unique” or “repeat.” Repeated sightings happen when an animal plops down in front of the camera for a period of time, and we get lots and lots of photos of it. This most likely happens when animals seek out shade to lie in. Notice that lions have wayyyy more repeated sightings percentage-wise than other species. This makes sense — while we do occasionally see cheetahs and hyenas conked out in front of a well-shaded camera, this is a much bigger issue for lions.
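(In case you’re wondering how captures get split into “unique” and “repeat”: here’s one simple way to do it in R. The table, its column names, and the 30-minute cutoff are all stand-ins I made up for illustration, not our actual pipeline.)

```r
# Invented capture table: one row per capture, with columns for camera
# site, species, and a timestamp
caps <- read.csv("captures.csv")
caps$datetime <- as.POSIXct(caps$datetime)
caps <- caps[order(caps$site, caps$species, caps$datetime), ]

# Seconds since the previous capture of the same species at the same camera
gap <- ave(as.numeric(caps$datetime), caps$site, caps$species,
           FUN = function(t) c(Inf, diff(t)))

# Flag anything within 30 minutes of the previous capture as a "repeat"
# (the 30-minute cutoff is my assumption, not necessarily what we use)
caps$repeat_sighting <- gap < 30 * 60
table(caps$species, caps$repeat_sighting)
```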
I also dived a little deeper into the temporal patterns of activity for each species. The next graph shows the number of unique camera trap captures of each species for every hour of the day. See the huge spike in lion photos from 10am-2pm? It’s weird, right? Lions, like the other carnivores, are mostly nocturnal….so why are there so many photos of them at midday? Well, these photos are almost always lions who have wandered over for a well-shaded naptime snoozing spot. While there are a fair number of cheetahs who seem to do this too, it doesn’t seem to be as big of a deal for hyenas or leopards.
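(And the hourly tally is just a short step beyond that, building on the same made-up capture table from the sketch above:)

```r
# Keep only the unique (non-repeat) captures, then tally by hour of day
uniq      <- subset(caps, !repeat_sighting)
uniq$hour <- as.integer(format(uniq$datetime, "%H"))

lion_by_hour <- table(factor(uniq$hour[uniq$species == "lion"], levels = 0:23))
barplot(lion_by_hour, xlab = "Hour of day", ylab = "# unique captures",
        main = "Lion captures by hour")
```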
Why is this so exciting? Well, recall how I’ve repeatedly lamented about the way shade biases camera trap captures of lions? Because lions are so drawn to nice, shady trees, we get these camera trap hotspots that don’t match up with our lion radio-collar data. The map below shows lion densities, with highest densities in green, and camera traps in circles. The bigger the circle, the more lions were seen there.
The “lion hotspots” in relatively low density lion areas have been driving me mad all year. These are nice, shady trees that lions are drawn to from up to several kilometers away, and I’ve been struggling to reconcile the lion radio-collar data with the camera trapping data.
What the graphs above suggest, though, is that there is likely to be much less bias for hyenas and leopards. Lions are drawn to shade because they are big and bulky and easily overheated. We see this in the data in the form of many repeated sightings (indicating that lions like to lie down in one spot for hours) and in the midday “naptime spike” in the timing of camera trap captures. Although shade-seeking remains a bit of an issue for cheetahs, using camera traps to understand hyena and leopard activity should be much less biased and much more straightforward — ultimately, much easier than it is for lions. And this is really good news for me.