Image Quality Explained


A Snapshot Serengeti Camera-trap image

Those of you who have been with us for some time will probably have noticed that image quality has declined, sometimes dramatically, since we switched to the Snapshot Safari platform. Before I go any further: we are trying hard to fix this, but in the meantime I thought I would try to explain what the issues are, in the hope that it may induce a little more patience from you. I am afraid that I really am technically challenged when it comes to computer stuff, so I am going to be a little vague here. But please, if there is anyone out there with more knowledge who can either explain this more accurately or, better still, offer our team help, don’t hesitate to get in touch.

So the trouble all started when Snapshot Serengeti joined the bigger Snapshot Safari platform at the start of this year. At the time, Zooniverse was having a big overhaul, with older projects moving from Ouroboros over to the Panoptes format. Essentially, Ouroboros and Panoptes are both software platforms that projects use to build and run their pages.

Of course Snapshot Serengeti, being one of the oldest Zooniverse projects, was designed using Ouroboros and has had some teething problems with the switchover. One thing to remember is that the teams involved in bringing all the camera-trap images to the Snapshot Serengeti platform are, for the most part, unpaid graduate and undergraduate students studying ecology. They are not experts in computer programming, yet they have to keep the platform running and fix all the problems.

In the old days the University of Minnesota–based team would upload the batches of images from the camera traps and send them to Zooniverse, who would process them and put them on the platform. That was when there were a dozen or so projects; there are now over 50 active projects. Can you imagine how long it would take Zooniverse to do all the uploading? To address this problem they have asked individual projects to manage the uploading themselves. To complicate the process a little more, they have also placed a 600GB maximum file size on the images.

This all means that the team of ecologists at Minnesota have to engage developers to write custom scripts so that their supercomputers can interact with the Zooniverse web platform. The image quality issue, then, is not because we have started using different cameras or taking images at a lower resolution; it is due to the code that compresses the images from their full size down below the 600GB limit. Images that were smaller in the first place have been less affected than the larger ones, hence the mixture of quality that we are seeing.
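For the technically curious, a compression step like the one described above can be sketched as follows. This is only an illustration, assuming Python with the Pillow imaging library; the function name, the quality steps, and the 600 KB demo cap are my own stand-ins, not the team’s actual script or the real limit.

```python
from io import BytesIO
from PIL import Image

def compress_under_cap(img, max_bytes):
    """Lower JPEG quality step by step until the encoded file fits the cap."""
    for quality in range(95, 10, -5):
        buf = BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        if buf.tell() <= max_bytes:
            return buf.getvalue()
    # Still too big at minimum quality: halve the pixel dimensions and retry.
    w, h = img.size
    return compress_under_cap(img.resize((w // 2, h // 2)), max_bytes)

# Demo with a synthetic 2500-pixel-wide frame standing in for a trap image.
frame = Image.new("RGB", (2500, 1400), (120, 150, 90))
data = compress_under_cap(frame, max_bytes=600 * 1024)  # illustrative 600 KB cap
print(len(data) <= 600 * 1024)
```

The key point is the trade-off inside the loop: a script like this can squeeze a large image under the cap either by throwing away color detail (lower quality) or by throwing away pixels (resizing), which is why larger originals come out looking worse than ones that already fit.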

So, as I said earlier, we are trying hard to get this problem sorted and bring you back the kind of top-rate images you are used to, and we hope to have things fixed with the next batch of images we upload. In the meantime, please spare a thought for the team and remember that, like you, they are all volunteers, albeit with a slightly more vested interest in the research project. I hope that you will bear with us and keep up the much-needed support you have always given us.



About lucy Hughes

I am a moderator on Snapshot Serengeti; you will see me post as lucycawte. In my spare time I am studying for an MSc in Wildlife Biology and Conservation. After living on a nature reserve in southern Africa for several years, my passion for all things wild is well and truly fired!

2 responses to “Image Quality Explained”

  1. David Bygott says :

    I’m an old timer and not very digitally savvy but I do work a lot with images. “600 GB” file size is bigger than the hard drive capacity of many laptops, so I am not sure what you mean…maybe 60 KB?

    Here’s the thing. Imagine a picture made up of a lot of little colored tiles called pixels. The data in the file that enables the computer to display the picture describes the color of each tile and its position. You can reduce that file size in two main ways: A) by using fewer different colors, or B) by using fewer tiles overall.

    In the old style SSS you used method B and got a small image, 600 tiles wide with a full range of colors. Average size of 1 frame (as downloaded on my computer) ~60 KB.

    Right now you are using method A, where you get a big image over 2500 tiles wide. To my mind this is unnecessarily big, twice the width of the screens most people use to view the pictures. And the number of colors is severely reduced, which is why skies look banded and shadow detail is lost. Average file size of 1 frame is ~190 KB. So your files are now 3x as big but they provide less information: it is now harder to manipulate dark photos to see what is in the shadows. I think that is what most of us are complaining about.

    Seems like all you need to do is set different parameters when optimizing your raw photos for the web. Fewer pixels, more colors.

    • lucy Hughes says :

      Hi David,
      Thanks for your response. I will pass it on to Sarah, who is in charge of this side of things; I believe you have already been in contact with her? As you can see, I am very challenged in this department and was just passing on what Sarah reported to me, so I am really not sure either what the 600GB maximum means. I will have to get clarification from Sarah.
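David’s suggestion in the thread above (his method B: shrink the pixel dimensions but keep the full color range) might look something like this, again assuming Python with the Pillow library; the 1250-pixel target width and the quality setting are illustrative choices, not the project’s actual parameters.

```python
from io import BytesIO
from PIL import Image

def optimize_for_web(img, target_width=1250):
    """Shrink pixel dimensions, but keep a rich color range (high JPEG quality)."""
    w, h = img.size
    if w > target_width:
        # Lanczos resampling keeps the downscaled image sharp.
        img = img.resize((target_width, h * target_width // w), Image.LANCZOS)
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=85)
    return buf.getvalue()

# A synthetic 2500-pixel-wide frame, like the current oversized uploads.
big = Image.new("RGB", (2500, 1400), (80, 100, 60))
small = optimize_for_web(big)
print(Image.open(BytesIO(small)).size)  # half the tiles of the original
```

Keeping the JPEG quality high while halving the width is what preserves smooth skies and shadow detail, at a fraction of the pixel count.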
