Our lions in National Geographic Magazine
The August edition of National Geographic Magazine has a cover story on the Serengeti lions that Craig has been studying for decades. And because Ali set out the camera trap grid in the same place as Craig’s lion study area, you see the same lions (plus more) on Snapshot Serengeti as those featured in the article. In fact, photographer Michael Nichols was out in the Serengeti during Season 5, so his pictures are contemporaneous with the ones up on Snapshot Serengeti right now.
So if you have a moment, go check out “The Short Happy Life of a Serengeti Lion,” which is entertaining and gives a nice history of the foundational research on which the Snapshot Serengeti science rests. And take a gander at the editor’s note, which accompanies this picture.
Weavers
Every once in a while, a camera gets knocked off a tree and ends up pointing up into the tree where there are many grassy balls hanging from the branches. We have one of these cameras in Season 5, and it is taking pictures like this one:
What are those odd grassy balls? Why, they’re the nests of weaver birds. My Birds of East Africa book lists a dozen species of weavers in the Serengeti, and most of them have a yellow and black pattern. Here’s what some of these guys look like close up.
Several years ago, I watched through a Lion House window as a weaver bird built its nest from scratch. The bird started with just a branch, one with something of a knot at the end where a twig may have split off in the past. The weaver grabbed a long blade of grass, wrapped it around that knobby joint, and tucked the blade under itself, as you might do if you were tying your shoe. Then it got another blade of grass and wove that through the loop it had created with the first blade, tucking it securely back under and through the loop a second time. It continued to add blades for the next twenty minutes or so, such that the grass formed two clumps, one sticking out of either side of the knot.
(Aside: the soundtrack is completely coincidental; field assistant John was cooking something in the kitchen while listening to music.)
Straddling the two clumps, with one talon hanging on to each, the weaver then took a long blade from one clump and wove its end back up into the other clump. The result was a loop. The bird pulled additional grass from one clump to the other and strengthened the loop. Bit by bit.
I watched for over a half-hour, but I had work to do, too. So I left the little weaver to its task, and checked in again that evening before the sun set. There it was, a hefty wreath of grass hanging from the end of a tree branch.
I checked again a couple days later. The weaver had been working on filling in grass around the sides to form the ball shape.
Three days later the ball shape was becoming apparent (and I finally decided to take pictures outdoors instead of through the window, so they're in better focus).
Aha! I caught a decent shot of the builder. My bird appears to be a male Vitelline Masked Weaver. (Although my book also says that the top of the head ought to be a chestnut color, and this guy has maybe only a little chestnut, with brownish rather than black markings on the back. Maybe it's a young male?) These weavers are generally found alone or in pairs, which explains why I saw just one of them building a nest in a tree all by itself. And their nests are "distinctive onion-shaped nests with an entrance hole at the bottom." Looking good…
Five days later Mr. Vitelline’s work was looking very much like a nest.
Five days later was also my last day in the Serengeti, so I didn’t see further developments of this nest. But I suspect it was completed and became a comfortable abode for its industrious builder.
Rare Romping Rhinoceros
Thanks to Snapshotters Jihang and parsfan, who posted it to Talk, we can all marvel at the best picture of a black rhinoceros ever taken by a Snapshot Serengeti camera.
(And yes, it’s a set of three images, so you can animate them.)
There’s no mistaking this beast!
There are two types of rhinoceroses in Africa: the white rhino and the black rhino. Despite their names, they’re both gray, but you can tell them apart by their lips: white rhinos have broad lips, while black rhinos have a pointed and curved upper lip that is used to grasp vegetation. There are only black rhinos in the Serengeti (and in all of Tanzania), and they can grow to weigh up to 3,000 lbs (1,360 kg).
Despite their size and bulk, these animals can really move — at speeds up to 28 mph (45 kph). Several years ago, I had the chance to see the rhinoceros rehabilitation center at Kruger National Park in South Africa, which let me get fairly close to the creatures. At one point I was on the other side of a fence from a rhino and it charged me; the fence was solid wood, so I was fine and the rhino just bounced off. But I was amazed at how fast it went from standing still to ramming speed — and how quiet it was doing so.
Black rhinos are critically endangered, with fewer than 5,000 alive in the wild (as of the end of 2010, the most recent statistic). Fully 95% of all wild black rhinos live in one of four countries: South Africa, Namibia, Kenya, and Zimbabwe. Tanzania has only about 100 of them, with a quarter of those in the Serengeti.
It's thought that about 1,000 rhinos originally inhabited the Serengeti. But in the 1970s and early 1980s, poaching increased severely. A park survey conducted in 1982 found only two remaining rhinos, both female. Efforts were made to actively protect these remaining two, and plans were drawn up to bring in a male. But before those plans came to anything, a male rhino showed up on his own in 1994. It's thought that he came from the Ngorongoro population, which contained only about a dozen animals at the time. And it was quite a hike: 70 miles (113 km). As I said, these animals can move!
It’s something of a mystery how this male rhino found the females in so vast an area — sound? smell? — but he stayed. Within a short time there were four babies, and since then, the population has steadily grown. Now there are twenty or so black rhinoceroses living around Moru Kopjes. These Kopjes are about 20 miles (32 km) south of our camera trap area, so we only rarely catch a glimpse of one (out for a walk?).
While the Moru population is growing, it still faces two major threats. The first is continued poaching. Demand for rhino horn has been rapidly escalating since 2009, with black market prices in Asia skyrocketing and organized crime getting in on the action. In May 2012, two of the Moru rhinos were found dead with their horns missing. The second threat to the Moru population is inbreeding, which over time can cause lower reproduction rates and increased genetic disease. Fortunately, there are ongoing efforts both to protect the existing rhinos and to bring more black rhino genetic diversity to the Serengeti by translocating animals from South Africa.
Perhaps in several decades these massive beasts will once again make regular appearances in central Serengeti; until then, keep your eyes peeled for their rare cameos.
Grass
You’ve undoubtedly seen it: Grass. Tall waving grass. Lots of it. From here to the horizon. If you’re itching to get images of animals to classify, the “nothing here” grass images can seem annoying. Some people find the grass images soothing. The animals themselves, well, a lot of them seem to like it.
Some animals find that tall grass is nice for concealing themselves from predators, like these guys:
Or this impala:
And some animals think the grass is nice for eating, like here:
Or here:
This post is brought to you by Faulty Cameras that switch into video mode when they're not supposed to. These Season 5 videos have no sound, but they capture some of the movement you don't get with the photographs, so I thought you might like them.
Conference on Science Communication
Last week I attended a conference on science communication in Cambridge, Massachusetts. It was an intense few days, but totally worthwhile and interesting. There were fifty of us grad students, seven 3-person panels of various experts, and more food than you can possibly imagine. (The sheer quantity of food rivaled that put out by Zooniverse for its workshops — and that's saying something.) The grad students spanned all sorts of science disciplines, but the conference was arranged by astronomers, so there was, I think, a disproportionate number of people there who like to try to figure out what's going on up in space. I really enjoy talking with researchers in other disciplines because there are rather distinct cultures across the different sciences. It's interesting to see what various fields value and how they do things. And frankly, I don't want to reinvent the wheel and so prefer to borrow best practices from elsewhere rather than figure them out from scratch.
The more I talk to astronomers, the more I think ecologists can borrow stuff from them. I mean, astronomers are pretty constrained in their science. All they can do is observe stuff out there in space and then try to be super clever to figure out what's going on. Meanwhile, here on earth, we ecologists can do all that sort of observing PLUS we can manipulate the world to do experiments. Hands-on experiments are a big part of ecology precisely because we can do them. But as the tools for collecting the sort of large-scale observational data that astronomers already have get more sophisticated, I think we may be able to learn new things about the living world that are hard to figure out from experiments alone. And we might be able to borrow ideas from astronomers on how to do so.
For example, check this out. It’s a hand-out from one of our panel speakers, Dr. Alyssa Goodman, an astronomer at Harvard, who talked with us about communicating science with other scientists in different disciplines.
So the cool thing that caught my eye was: Zooniverse! (I added the red oval and arrow; the rest is original.) But this whole Seamless Astronomy thing sounds like a neat effort to integrate large amounts of data, visualization, research, and social media into something coherent that people can use to explore and combine some large astronomy data sets. There’s nothing like this (that I am aware of) going on in ecology, but the sorts of things this project figures out in the astronomy world could be useful to us over in ecology.
Of course some things will always be different in different disciplines. One thing we did at this conference was to introduce ourselves and our research in short one-minute “pop talks.” We had to avoid using jargon, which is hard when you’re steeped in your science all day every day. To reinforce the no-jargon rule, everyone was given big, brightly colored sheets of paper — one that read JARGON and one that read AWESOME. If someone used jargon, the audience would all hold up their JARGON flags. If someone explained something well without jargon, up went the AWESOME signs. This sort of feedback worked really well and we all got good at speaking without jargon fairly quickly.
But it was easier for some of us than others. I got to stand up and talk about how I study “plants and animals and how they interact with one another,” which is pretty understandable to anyone. I felt bad for the particle physicists and molecular chemists who had to try to describe their work without using the technical terms for the things they study; but they did well: “The world is made up of little tiny particles. I study how these particles wobble, and in particular how they wobble when you shine really bright lights on them.”
Lucky for us, we get to look at savanna landscapes and amazing animals as we do our research, so I’ll appreciate the perks of ecology as I get back to work now that the conference is over.
Cute Baby Elephant
I hope you’ve been having fun with the new Season 5 images. I have. It’s been about a week since we went live with Season 5, and we’re making good progress. It took under two weeks to go through the first three seasons in December. (We had some media attention then and lots of people checking out the site.) It took about three weeks to finish Season 4 in January. According to my super science-y image copy-and-paste method, it may take us about two months to do Season 5:
And that’s fine. But I was curious about who’s working on Season 5. The Talk discussion boards are particularly quiet, with almost no newbie questions. So is everyone working on Season 5 a returnee? Or do we have new folks on board?
I looked at the user data from a data dump done on Sunday, so it covers the first five or so days of Season 5. In total, 2,000 volunteers had contributed to the 280,000 classifications made by Sunday! I was actually quite amazed to see that 6% of the classifications are being done by folks not logged in. Is that because they're new people trying out the site — or because there are some folks who like to classify without logging in? I can't tell.
But I can compare Season 5 to Season 4. We had 8,300 logged-in volunteers working on Season 4, and of all the classifications, 9% were done by not-logged-in folks. That suggests we have fewer newcomers so far for Season 5. But then we get to an intriguing statistic: of those 2,000 volunteers working on Season 5 in its first five days, 33% did not work on Season 4 at all! And those apparently new folks have contributed 50% of the (logged-in) classifications!
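In case you're curious how those percentages fall out of the data dump, the newcomer numbers are basically a comparison of user lists between the two seasons. Here's a rough sketch in Python, assuming (hypothetically) that each logged-in classification comes as a simple (user_id, season) record; the real data-dump format is certainly more involved.

```python
# Sketch of the newcomer calculation. The (user_id, season) records below are
# made up for illustration; they stand in for the real per-classification data.
classifications = [
    ("alice", 4), ("bob", 4), ("alice", 5), ("carol", 5), ("carol", 5),
    # ... one tuple per logged-in classification ...
]

season4_users = {user for user, season in classifications if season == 4}
season5_users = {user for user, season in classifications if season == 5}
season5_total = sum(1 for _, season in classifications if season == 5)

# Volunteers active in Season 5 who never classified anything in Season 4.
newcomers = season5_users - season4_users
newcomer_classifications = sum(
    1 for user, season in classifications if season == 5 and user in newcomers
)

print(f"{len(newcomers) / len(season5_users):.0%} of Season 5 volunteers are new,")
print(f"and they made {newcomer_classifications / season5_total:.0%} of the classifications.")
```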
So what’s going on? Maybe we’re getting these new volunteers from other Zooniverse projects that have launched since January. Maybe they’re finding us in other ways. (Have you seen that the site can be displayed in Finnish in addition to Polish now?) But in any case, welcome everyone and I hope you spot your favorite animal.
Me, I found this super cute baby elephant just the other day:
Plurality algorithm
On Wednesday, I wrote about how well the simple algorithm I came up with does against the experts. The algorithm looks for species that have more than 50% of the votes in a given capture (i.e. species that have a majority). Commenter Tor suggested that I try looking at which species have the most votes, regardless of whether they cross the 50% mark (i.e. a plurality). It’s a great idea, and easy to implement because any species that has more than 50% of the vote ALSO has the plurality. Which means all I have to do is look at the handful of captures that the majority algorithm had no answer for.
You can see why it might be a good idea in this example. Say that for a particular capture, you had these votes:
| Votes | Species |
| --- | --- |
| 10 | impala |
| 4 | gazelleThomsons |
| 4 | dikDik |
| 3 | bushbuck |
You’d have 21 votes total, but the leading candidate, impala, would be just shy of the 11 needed to have a majority. It really does seem like impala is the likely candidate here, but my majority algorithm would come up with “no answer” for this capture.
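Here's a minimal sketch, in Python, of the majority rule with Tor's plurality fallback, run on the vote counts from the example above. The function name and the one-species-per-capture simplification are mine for illustration; the real pipeline also has to handle captures containing more than one species.

```python
from collections import Counter

def classify_capture(votes, use_plurality=False):
    """Pick a species for one capture from a list of volunteer votes.

    Returns the winning species, or "no answer" / "tie" when no species
    qualifies. (A sketch only; multi-species captures are ignored here.)
    """
    counts = Counter(votes)
    total = len(votes)
    top_species, top_votes = counts.most_common(1)[0]

    # Majority rule: the winner needs strictly more than 50% of the votes.
    if top_votes * 2 > total:
        return top_species

    if not use_plurality:
        return "no answer"

    # Plurality fallback: take the most-voted species, unless two or more
    # species share the top vote count.
    leaders = [species for species, n in counts.items() if n == top_votes]
    return top_species if len(leaders) == 1 else "tie"


# The example above: impala gets a plurality (10 of 21) but falls one vote
# short of the 11 needed for a majority.
votes = ["impala"] * 10 + ["gazelleThomsons"] * 4 + ["dikDik"] * 4 + ["bushbuck"] * 3
print(classify_capture(votes))                      # -> no answer
print(classify_capture(votes, use_plurality=True))  # -> impala
```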
So I tried out Tor’s plurality algorithm. The good news is that 57% of those “no answers” got the correct answer with the plurality algorithm. So that brings our correct percentage from 95.8% to 96.6%. Not bad! Here’s how that other 3.4% shakes out:
So now we have a few more errors. (About a quarter of the “no answers” were errors when the plurality algorithm was applied.) And we’ve got a new category called “Ties”. When you look for a plurality that isn’t over 50%, there can be ties. And there were. Five of them. And in every case the right answer was one of the two that tied.
And now, because it’s Friday, a few images I’ve stumbled upon so far in Season 5. What will you find?
Algorithm vs. Experts
Recently, I’ve been analyzing how good our simple algorithm is for turning volunteer classifications into authoritative species identifications. I’ve written about this algorithm before. Basically, it counts up how many “votes” each species got for every capture event (set of images). Then, species that get more than 50% of the votes are considered the “right” species.
To test how well this algorithm fares against expert classifiers (i.e. people who we know to be very good at correctly identifying animals), I asked a handful of volunteers to classify several thousand randomly selected captures from Season 4. I stopped everyone as soon as I knew 4,000 captures had been looked at, and we ended up with 4,149 captures. I asked the experts to note any captures that they thought were particularly tricky, and I sent these on to Ali for a final classification.
Then I ran the simple algorithm on those same 4,149 captures and compared the experts’ species identifications with the algorithm’s identifications. Here’s what I found:
For a whopping 95.8% of the captures, the simple algorithm (due to the great classifying of all the volunteers!) agrees with the experts. But, I wondered, what's going on with that other 4.2%? So I had a look:
Of the captures that didn't agree, about 30% were cases where the algorithm came up with no answer but the experts did. This is "No answer" in the pie chart. The algorithm fails to come up with an answer when the classifications vary so much that no single species (or combination, if there are multiple species in a capture) takes more than 50% of the vote. These are probably rather difficult images, though I haven't looked at them yet.
Another small group, about 15% of the non-agreeing captures, was marked as "impossible" by the experts. (This was just 24 captures out of the 4,149.) And five captures were both marked as "impossible" and left unanswered by the algorithm; so in some strange way, we might consider these five captures to be in agreement.
Just over a quarter of the captures didn't agree because either the experts or the algorithm saw an extra species in a capture. This is labeled "Subset" in the pie chart. Most of the extra animals were Other Birds, zebras in primarily wildebeest captures, or wildebeest in primarily zebra captures. The extra species really is there; it was just missed by the other party. For most of these, it's the experts who see the extra species.
Then we have our awesome but difficulty-causing duiker. There was no way for the algorithm to match the experts, because we didn't have "duiker" on the list of animals that volunteers could choose from. I've labeled this duiker as "New animal" on the pie chart.
Then the rest of the captures — just over a quarter of them — were what I’d call real errors. Grant’s gazelles mistaken for Tommies. Buffalo mistaken for wildebeest. Aardwolves mistaken for striped hyenas. That sort of thing. They account for just 1.1% of all the 4,149 captures.
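To make the bookkeeping concrete, here's roughly how each non-agreeing capture could be sorted into those pie-chart bins. It's a sketch under my own simplifying assumptions (plain sets of species names per capture, with the experts' "impossible" verdict as a special label); it is not the actual analysis script.

```python
def disagreement_category(algorithm_species, expert_species):
    """Sort one non-agreeing capture into a pie-chart bin.

    algorithm_species: set of species the algorithm settled on
                       (an empty set means it found no majority).
    expert_species:    set of species the experts reported
                       ({"impossible"} if they couldn't tell).
    """
    if expert_species == {"impossible"}:
        return "Impossible"
    if not algorithm_species:
        return "No answer"
    if "duiker" in expert_species:  # a species missing from the site's list
        return "New animal"
    # One side saw everything the other did, plus at least one extra species.
    if algorithm_species < expert_species or expert_species < algorithm_species:
        return "Subset"
    return "Error"


# Hypothetical example: the experts also spotted a zebra in a capture the
# algorithm called pure wildebeest.
print(disagreement_category({"wildebeest"}, {"wildebeest", "zebra"}))  # -> Subset
```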
I’ve given the above Non-agreement pie chart some hideous colors. The regions in purple are what scientists call Type II errors, or “false negatives.” That is, the algorithm is failing to identify a species that we know is there — either because it comes up with no answer, or because it misses extra species in a capture. I’m not too terribly worried about these Type II errors. The “Subset” ones happen mainly with very common animals (like zebra or wildebeest) or animals that we’re not directly studying (like Other Birds), so they won’t affect our analyses. The “No answers” may mean we miss some rare species, but if we’re analyzing common species, it won’t be a problem to be missing a small fraction of them.
The regions in orange are a little more concerning; these are the Type I errors, or “false positives.” These are images that should be discarded from analysis because there is no useful information in them for the research we want to do. But our algorithm identifies a species in the images anyway. These may be some of the hardest captures to deal with as we work on our algorithm.
And the red-colored errors are obviously a concern, too. The next step is to incorporate some smarts into our simple algorithm. Camera location, time of day, and the species identified in the captures immediately before and after a given capture can all provide extra information to help shrink that 4.2% non-agreement even further.
Update on Season 5
In short, delay. 😦
In long, we’ve processed all the images and are uploading them onto the Zooniverse servers. However, it’s taking a long time. A really long time. Since Season 4, the Minnesota Supercomputer Institute (MSI) has switched over to a new system, and it seems like the upload time from this new system is painfully slow. We’ve uploaded over 25% of the images, but it’s taken a couple days uploading non-stop. So best estimate is mid to late next week for when they’ll all be uploaded. We’re trying to coordinate with the staff at MSI to see if they can increase upload speeds for us, but no guarantees.
(Man, I wish we had some images of turtles or snails or sloths or something from Serengeti… Wait! I know what’s slow — stationary, actually.)
Meanwhile, you can read a guest blog post that I wrote over at Dynamic Ecology. Dynamic Ecology is read by ecologists, so my blog post introduces the concept of citizen science (and Snapshot Serengeti, of course) to professional ecologists who may not be very familiar with it. One question that comes up in the comments is: can you do citizen science if you don’t have cool, awesome animals? Like, what if you have flies or worms or plankton instead? I think the answer is yes. But feel free to give your perspectives in the comments there, too.
Not on the A-List
I’m working on an analysis that compares the classifications of volunteers at Snapshot Serengeti with the classifications of experts for several thousand images from Season 4. This analysis will do two things. First, it will give us an idea of how good (or bad) our simple vote-counting method is for figuring out species in pictures. Second, it will allow us to see if more complicated systems for combining the volunteer data work any better. (Hopefully I’ll have something interesting to say about it next week.)
Right now I’m curating the expert classifications. I’ve allowed the experts to classify an image as “impossible,” which, I know, is totally unfair, since Snapshot Serengeti volunteers don’t get that option. But we all recognize that for some images, it really isn’t possible to figure out what the species is — either because it’s too close or too far or too off the side of the image or too blurry or …. The goal is that whatever our combining method is, it should be able to figure out “impossible” images by combining the non-“impossible” classifications of volunteers. We’ll see if we can do it.
Another challenge that I’m just running into is that our data set of several thousand images contains a duiker. A what? A common duiker, also known as a bush duiker:
You’ve probably noticed that “duiker” is not on the list of animals we provide. While the common duiker is widespread, it’s not commonly seen in the Serengeti, being small and active mainly at night. So we forgot to include it on the list. (Sorry about that.)
The result is that it’s technically impossible for volunteers to properly classify this image. Which means that it’s unlikely that we’ll be able to come up with the correct species identification when we combine volunteer classifications. (Interested in what the votes were for this image? 10 reedbuck, 6 dik dik, and 1 each of bushbuck, wildebeest(!), and impala.)
The duiker is not the only animal that’s popped up unexpectedly since we put together the animal list and launched the site. I never expected we’d catch a bat on film:
Our friends over at Bat Detective tell us that the glare on the face makes a definite identification impossible, but they did confirm that it's a large, insect-eating bat. Anyway, how to classify it? It's not a bird. It's not a rodent. And we didn't allow for an "other" category.
I also didn’t think we’d see insects or spiders.
Moths fly by, ticks appear on mammal bodies, spiders spin webs in front of the camera and even ants have been seen walking on nearby branches. Again, how should they be classified?
And here’s one more uncommon antelope that we’ve seen:
It’s a steenbok, again not commonly seen in Serengeti. And so we forgot to put it on the list. (Sorry.)
Luckily, all these animals missing from the list are rare enough in our data that when we analyze thousands of images, the small error in species identification won't matter much. But it's good to know that these rarely seen animals are there. When Season 5 comes out (soon!), if you run into anything you think isn't on our list, please comment in Talk with a hashtag, so we can make a note of these rarities. Thanks!



















