Client Side Image Processing for Mobile Networks

When we were working on the Suzanne app, we ran into difficulties with the size of the images we were uploading; here's how we dealt with them. Many people have run into slow, laggy performance on 3G/4G cellular networks. Because carriers prioritize download speeds over uploads, the problem is especially noticeable when uploading images. One way to improve the user experience and reduce the chance of errors is to pre-process images on the device before the upload ever occurs.

Benefits

Processing images on the client's device offers a lot of benefits, not the least of which is lower server costs, since the image processing burden is placed on the client's device. It also gives the user a much better experience, because the longest delay (the upload of a large image) is significantly reduced. In turn, we use less of the user's data allowance, which is good for all parties.

When users take photos, they almost always have their phones set to the maximum image size, since most of the time they will be saving the images to disk and may need high resolution for printing (or they simply pick the highest value because high is better than low). If your application doesn't need high resolution images, you can save significant server space when storing large numbers of images, especially if you intend to keep the original upload.

Process

The general process requires a few basic settings based on the intended use. For this example, we're looking at an application built to simplify the process of getting a trade-in on a car. The "Suzanne" app requires the user to take and upload a series of photos of the car, which probably means they're outside, where Wi-Fi isn't available. This is pretty much the exact type of application that benefits from client side image processing.

Image Size

Suzanne doesn't require particularly large images, just something larger than 800 x 600 pixels. The phone we used during development, a Droid Turbo 2, takes images at 5344 x 4008 pixels, roughly 21 megapixels, which is significantly larger than Suzanne actually needs. Even with fairly efficient compression, we're looking at about 4.67 MB, which on Verizon's network (with upload speed maxing out around 5 Mbps for 4G LTE) takes about 4.67 / (5 / 8) = 7.472 seconds when everything operates perfectly.
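The arithmetic above can be sketched as a tiny helper (the function name is ours, not part of the app):

```javascript
// Estimate the best-case upload time for a file.
// sizeMB is the file size in megabytes; speedMbps is the uplink speed in
// megabits per second, hence the factor of 8 to convert bytes to bits.
function uploadSeconds(sizeMB, speedMbps) {
  return (sizeMB * 8) / speedMbps; // seconds, assuming zero protocol overhead
}

console.log(uploadSeconds(4.67, 5)); // ≈ 7.47 seconds
```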

While this may not seem like a huge amount of time, remember that not only is this the absolute fastest time we can expect (under ideal network conditions), but we have to repeat this at least 6 times, and as many as 12. Someone with a low to average cellular data signal, at a decently busy time could easily see ~20 second upload times (we noted anywhere from 15–30 second uploads during the development process).

Since upload speed is clearly the weak link in the performance chain, we needed to reduce the image as much as possible. In this specific case, we will significantly reduce image size (max dimension will be 900 pixels) while maintaining the image’s aspect ratio.

If you're targeting a large audience, it's worth adding an extra step that caps the canvas dimensions at 1448 pixels, as this should work on all canvas-compatible phones. If you're not worried about covering older phones (as in this example), you can specify a larger value.
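A minimal sketch of how those two limits could be combined when picking target dimensions (the helper name and structure are ours; the full implementation later in this post handles the actual scaling):

```javascript
// Hypothetical helper: pick output dimensions that preserve aspect ratio
// while honoring both the app's max dimension and a conservative canvas cap.
var CANVAS_CAP = 1448; // safe maximum canvas dimension for older phones

function targetSize(width, height, maxDimension) {
  var limit = Math.min(maxDimension, CANVAS_CAP);
  var largest = Math.max(width, height);
  if (largest <= limit) {
    return { width: width, height: height }; // already small enough
  }
  var scale = limit / largest;
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale)
  };
}

console.log(targetSize(5344, 4008, 900)); // { width: 900, height: 675 }
```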

Minimizing

When reducing an image, we want to downscale over a number of steps to reduce aliasing in the result. Specifying more steps takes more processing time but produces a better image; our testing showed that three steps was about right for most uses. Lastly, we can also specify the JPEG quality when generating the resulting image, which can further reduce its size if you don't need the best possible clarity. In our case, we just want to see the basic images of the car and any obvious damage, so maximum clarity isn't a priority.

Code

I'll break things down into sections and skip type/value and callback checking for brevity. Let's start with the first task: detecting when the user has selected a file and it's ready for upload.

$(document).ready(function() {
  $('#fileInput').on('change', function(evt) {
    var files = evt.target.files;

    if (files && files.length > 0) {
      var file = files[0];

      var maxDimension = 900; // longest side of the output image, in pixels
      var quality = 0.8;      // JPEG quality for toDataURL (range 0-1)
      var steps = 3;          // number of downscaling passes
      
      processFile(file, maxDimension, quality, steps, function(data) {
          var img = new Image();
          img.width = data.width;
          img.height = data.height;
          img.src = data.data;
          document.body.appendChild(img);
      });
    }
  });
});

Here we monitor the file input change event, which fires when the user selects a file to upload. We make sure that we have a file, get its file object, and pass it to the processFile function with a max dimension of 900 pixels, a quality value of 0.8 (max is 1, the scale toDataURL expects), and a 3 step reduction. Next we read the file in with the processFile function.

var processFile = function(fileObj, maxDimension, quality, steps, callback) {
  var reader = new FileReader();
  reader.onloadend = function (e) {
    getScaledImage(reader.result, maxDimension, quality, steps, function (data /*{data, width, height}*/) {
      callback(data);
    });
  };
          
  reader.readAsDataURL(fileObj);
};

This function simply reads the file in and calls the getScaledImage function on the result. Now we handle the actual processing. If you need to preserve orientation data for the image, make sure to save it before processing the image, or adjust the processing portion accordingly.

var getScaledImage = function(imageData, maxDimension, quality, steps, callback) {
    var img = new Image();
    
    img.onload = function () {
        var width = img.width;
        var height = img.height;
        var max = Math.max(width, height);
        var scaleFactor = maxDimension / max;
        
        var stepFactors = [];
        var stepInc = (1 - scaleFactor) / steps;
        for (var i = 1; i <= steps; i++) {
            stepFactors.push(1 - (stepInc * i));
        }
        
        var scaleCanvas = document.createElement("canvas");
        var tempCanvas = document.createElement("canvas");
        var nWidth, nHeight;
        
        for (var i = 0; i < steps; i++) {
            var inc = stepFactors[i];
            nWidth = Math.round(width * inc);
            nHeight = Math.round(height * inc);
            
            if (i === 0) {
                scaleCanvas.width = nWidth;
                scaleCanvas.height = nHeight;
                scaleCanvas.getContext("2d").drawImage(img, 0, 0, nWidth, nHeight);
            } else {
                tempCanvas.width = nWidth;
                tempCanvas.height = nHeight;
                tempCanvas.getContext("2d").drawImage(scaleCanvas, 0, 0, nWidth, nHeight);
                
                scaleCanvas.width = nWidth;
                scaleCanvas.height = nHeight;
                scaleCanvas.getContext("2d").drawImage(tempCanvas, 0, 0, nWidth, nHeight);
            }
        }
        var result = {
            width: nWidth,
            height: nHeight,
            data: scaleCanvas.toDataURL("image/jpeg", quality)
        };
        
        callback(result);
    };
    
    img.src = imageData;
};
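Once the scaled data URL comes back, you'll usually want to send it to the server. Here's a minimal sketch, assuming a hypothetical /upload endpoint (the conversion helper is ours; modern browsers also offer canvas.toBlob for this):

```javascript
// Convert a base64 data URL into a Blob so it can be posted as multipart
// form data instead of a huge base64 string.
function dataURLToBlob(dataURL) {
  var parts = dataURL.split(',');
  var mime = parts[0].match(/data:(.*?);base64/)[1];
  var binary = atob(parts[1]); // decode the base64 payload
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mime });
}

// Inside the processFile callback ("/upload" is a placeholder endpoint):
// var form = new FormData();
// form.append('photo', dataURLToBlob(data.data), 'photo.jpg');
// $.ajax({ url: '/upload', method: 'POST', data: form,
//          processData: false, contentType: false });
```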

Conclusion

The example image for this post was 3840 x 2400 pixels, with a compressed size of just over 2 MB.

Running with the settings listed in the example reduced it to 127 KB, a significant savings in size. It also allowed for a sub-second upload time, which is much more tolerable for the user.
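As a quick sanity check on those numbers (approximating "just over 2 MB" as 2048 KB, which is our assumption, not a measured value):

```javascript
// Rough check of the reported savings: a ~2 MB original reduced to 127 KB,
// uploaded over the same 5 Mbps uplink used earlier in the post.
var originalKB = 2048; // assumption: "just over 2 MB"
var reducedKB = 127;

var reductionPercent = (1 - reducedKB / originalKB) * 100;
var uploadTime = (reducedKB / 1024) * 8 / 5; // KB -> MB -> megabits -> seconds

console.log(reductionPercent.toFixed(1) + "% smaller"); // ≈ 93.8% smaller
console.log(uploadTime.toFixed(2) + "s to upload");     // ≈ 0.20s
```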

Share and enjoy!