Minimalist-Style Demo of Running Neural Networks in Web Browser

This demo shows how to run a pre-trained neural network in the web browser. The user first downloads the pre-trained style transfer model simply by opening the webpage. After that, everything is processed locally without touching any remote resource. The user can then open a picture from their hard drive and click “run” to let the style-transfer network do its job.

I didn’t put in extra effort to make it look nicer. The browser may even freeze for a few seconds after you click the “run” button while the model inference is hogging your local resources. But I consider this a good-enough minimalist demo for understanding how these things work.

The demo can be accessed here:

https://pppoe.github.io/ONNX-minimal-web-demo/

The code is open-sourced on GitHub. A very rough walk-through of the code follows.
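
The page itself only needs a handful of elements whose IDs the scripts below refer to. Here is a minimal sketch of what the markup might look like (onnxruntime-web is pulled in as a plain script tag, e.g. from a CDN; the exact markup in the repo may differ, and the 224x224 canvas size is only an assumption matching the mosaic model’s input):

<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>

<p id="info">Downloading model ...</p>
<div id="main" style="display:none">
  <input type="file" id="file_selector" accept="image/*" />
  <button id="run_btn">run</button>
  <canvas id="preview" width="224" height="224"></canvas>
  <canvas id="result" width="224" height="224"></canvas>
</div>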

After loading the web page, it starts downloading the model.

$( document ).ready(function() {
  main();
});

async function main() {
  try {
    // Download the model and build an inference session from it.
    session = await ort.InferenceSession.create('./mosaic-8.onnx');
    // Model is ready: hide the loading message and reveal the UI.
    $('#info').hide();
    $('#main').show();
    $('#run_btn').hide();
  } catch (e) {
    document.write(`failed to load the ONNX model: ${e}.`);
  }
}
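
By default, onnxruntime-web runs the model on its WebAssembly backend on the main thread, which is what makes the page freeze during inference. If you want to experiment, the session options let you ask for a different execution provider; a minimal sketch (whether the WebGL backend supports every op in this particular model is something you would have to try):

// Sketch: request the WebGL execution provider instead of the default WASM one.
session = await ort.InferenceSession.create('./mosaic-8.onnx', {
  executionProviders: ['webgl']
});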

This is the user-interaction part. It was my first time using the Canvas APIs; they are quite neat.

let inputFile = async function(event) {
  $('#info').show();
  $('#info').text('Click Run to Continue, browser may freeze for a few seconds.');
  // Load the selected file into an Image object.
  output_src = URL.createObjectURL(event.target.files[0]);
  var img = new Image();
  img.src = output_src;
  img.onload = function() {
    $('#file_selector').hide();
    // Draw the image onto the preview canvas, rescaled to the canvas size.
    var canvas = document.getElementById('preview');
    ctx = canvas.getContext('2d');
    w = canvas.width;
    h = canvas.height;
    ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, w, h);
    $('#run_btn').show();
  };
};
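
For completeness, these handlers still have to be attached to the DOM elements somewhere. The repo does this in its own way; a minimal sketch, assuming the inference code below lives in a function called run:

// Hook the handlers up once the page is ready.
$('#file_selector').on('change', inputFile);
$('#run_btn').on('click', run);  // run() holds the inference code shown below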

The tricky parts are in the run function. The input has to be transposed into a channel-first tensor, as expected by most deep learning frameworks. I couldn’t find an ONNX Runtime API for this, and I try to avoid unnecessary imports. The canvas stores pixel data in the classic channel-last fashion, in RGBA format.

  var canvas = document.getElementById('preview');
  ctx = canvas.getContext('2d');
  const float32Data = new Float32Array(3*h*w);
  // Read the canvas pixel by pixel and scatter R, G, B into channel-first planes.
  for (let i = 0; i < w; i += 1) {
    for (let j = 0; j < h; j += 1) {
      var pixel = ctx.getImageData(i, j, 1, 1).data;  // [R, G, B, A] of a single pixel
      float32Data[0*h*w + j*w + i] = pixel[0];
      float32Data[1*h*w + j*w + i] = pixel[1];
      float32Data[2*h*w + j*w + i] = pixel[2];
    }
  }
  input_ts = new ort.Tensor('float32', float32Data, [1, 3, h, w]);
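
Calling getImageData once per pixel works, but it is slow. A sketch of the same channel-last RGBA to channel-first RGB conversion that reads the whole canvas in a single call instead:

  // Sketch: one bulk read, then scatter R, G, B into channel-first planes.
  const rgba = ctx.getImageData(0, 0, w, h).data;  // length = h*w*4, row-major RGBA
  const float32Data = new Float32Array(3*h*w);
  for (let j = 0; j < h; j += 1) {
    for (let i = 0; i < w; i += 1) {
      const src = (j*w + i) * 4;              // offset of this pixel in the RGBA buffer
      const dst = j*w + i;                    // offset within one channel plane
      float32Data[0*h*w + dst] = rgba[src];       // R
      float32Data[1*h*w + dst] = rgba[src + 1];   // G
      float32Data[2*h*w + dst] = rgba[src + 2];   // B
    }
  }
  input_ts = new ort.Tensor('float32', float32Data, [1, 3, h, w]);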

The input and output tensor names are not documented with the model file, but they are essential for feeding the input in and reading the result out. You can check them programmatically.

  output_name = session.outputNames[0]; // output1
  input_name = session.inputNames[0]; // input1
  feeds = { 'input1': input_ts };
  const results = await session.run(feeds);
  const result_data = results.output1.data;
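
Nothing stops you from using the discovered names directly instead of hardcoding them, which also makes swapping models easier. The same call in name-agnostic form:

  // Same inference call, keyed by whatever names the session reports.
  const feeds = {};
  feeds[session.inputNames[0]] = input_ts;
  const results = await session.run(feeds);
  const result_data = results[session.outputNames[0]].data;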

Similar to the pre-processing, we need to put the output values back into canvas pixels.

  result_h = results.output1.dims[2];
  result_w = results.output1.dims[3];
  step0 = result_w*result_h;  // size of one channel plane
  canvas = document.getElementById('result');
  ctx = canvas.getContext('2d');
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;
  // Walk the RGBA pixels of the result canvas and gather from the channel-first output.
  // Assigning into the Uint8ClampedArray clamps values outside [0, 255] automatically.
  for (var i = 0; i < data.length; i += 4) {
    iw = (i/4) % result_w;            // column index
    ih = Math.floor(i/4/result_w);    // row index
    data[i]     = result_data[0*step0 + ih*result_w + iw];
    data[i + 1] = result_data[1*step0 + ih*result_w + iw];
    data[i + 2] = result_data[2*step0 + ih*result_w + iw];
    data[i + 3] = 255;                // alpha
  }
  ctx.putImageData(imageData, 0, 0);

The code is here. You can easily swap in other style transfer models from the ONNX Model Zoo.
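
Swapping styles then only means pointing the session at a different file (and re-checking the tensor names if they differ). For example, assuming you have downloaded another fast-neural-style model such as candy-8.onnx next to the page:

// Sketch: load a different style-transfer model; 'candy-8.onnx' is an example file name.
session = await ort.InferenceSession.create('./candy-8.onnx');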

Personally, I have always believed that the large-scale deployment of deep learning won’t happen until web developers start to use neural networks. Although ONNX is relatively (if not totally) irrelevant in the deep learning research community, it seems to be a leading player in web deployment. I hope their efforts keep going.
