How I landed my first contribution to Go

I have been writing open-source software in Go for quite some time now, and only recently did an opportunity come along that allowed me to write Go code at work too. I happily shifted gears from being a free-time Go coder to coding in Go full-time.

All was fine until the last GopherCon, where a contributor's workshop was held. Suddenly, seeing all these people committing code to Go gave me an itch to do something myself. And within a few days, Francesc published a wonderful video on the steps to contribute to the Go project on his JustForFunc channel.

The urge was too much. Without even an inkling of an idea of what to contribute, I at least decided to download the source code and compile it. Thus began my journey to becoming a Go contributor!

I started reading the contribution guide and followed along with the steps. Signing the CLA was a bit of a struggle because the instructions were slightly incorrect. Well, why not raise an issue and offer to fix it on my own? That could well be my first CL! Excited, I filed this issue. It turned out to be a classic n00b mistake: the issue was already fixed at tip, and I hadn't even bothered to look. Shame!

Anyway, now that everything was set up, I started wading aimlessly through the standard library. After writing Go continuously for a few months at work, a few areas of the standard library had consistently come up as hotspots in the CPU profiles. One of them was the fmt package. I decided to look at the fmt package and see if something could be done. After an hour or so, something turned up.

The fmt_sbx function in fmt/format.go starts like this -

func (f *fmt) fmt_sbx(s string, b []byte, digits string) {
	length := len(b)
	if b == nil {
		// No byte slice present. Assume string s should be encoded.
		length = len(s)
	}

It was clear that when b was nil, len() was being called twice, whereas if the first call were moved to an else branch, only one call would happen in either case. It was an extremely tiny thing, but it was something. Eventually, I decided to send a CL just to see what others would say about it.
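In other words, the change I had in mind was roughly this (a sketch, not the exact diff from the CL) -

var length int
if b == nil {
	// No byte slice present. Assume string s should be encoded.
	length = len(s)
} else {
	length = len(b)
}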

Within a few minutes of my pushing the CL, Ian gave a +2, and after that Avelino gave a +1. It was unbelievable!

And then things took a darker turn. Dave gave a -1, and Martin concurred. Martin actually took binary dumps of the code and verified that there was no difference at all in the generated assembly. Dave had already suspected that the compiler was smart enough to perform this optimization itself, so the change was a net loss overall: the else branch hurt readability with no measurable gain in performance.

The CL had to be abandoned.

But I learnt a lot along the way, getting new tools like benchstat and benchcmp under my belt. Moreover, I was now comfortable with the whole process, so there was no harm in trying again. :sweat_smile:

A few days back, I found out that a plain string concatenation is a lot faster than an fmt.Sprintf() call on strings. I started searching for a victim, and it didn't take much time: the archive/tar package. The formatPAXRecord function in archive/tar/strconv.go has some code like this -

size := len(k) + len(v) + padding
size += len(strconv.Itoa(size))
record := fmt.Sprintf("%d %s=%s\n", size, k, v)

On changing the last line to - record := fmt.Sprint(size) + " " + k + "=" + v + "\n", I saw pretty significant improvements -

name             old time/op    new time/op    delta
FormatPAXRecord     683ns ± 2%     457ns ± 1%  -33.05%  (p=0.000 n=10+10)

name             old alloc/op   new alloc/op   delta
FormatPAXRecord      112B ± 0%       64B ± 0%  -42.86%  (p=0.000 n=10+10)

name             old allocs/op  new allocs/op  delta
FormatPAXRecord      8.00 ± 0%      6.00 ± 0%  -25.00%  (p=0.000 n=10+10)
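For the curious, these tables come from benchstat. The workflow is roughly: run the benchmark a number of times before and after the change, save each output to a file, and compare the two. Something like this, with old.txt and new.txt being placeholder names -

$ cd src/archive/tar
$ go test -run=NONE -bench=FormatPAXRecord -count=10 > old.txt
# apply the change, then run the same benchmark again
$ go test -run=NONE -bench=FormatPAXRecord -count=10 > new.txt
$ benchstat old.txt new.txt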

The rest, as they say, is history :stuck_out_tongue_closed_eyes:. This time, Joe reviewed it, and after some small improvements, it got merged! Yay! I was a Go contributor. From being an average open-source contributor, I had actually made a contribution to the Go programming language.

This is in no way the end for me. I am starting to grasp the language much better and will keep sending CLs as and when I find things to do. Full marks to the Go team for tirelessly managing such a complex project so beautifully.

P.S. For reference -

This is my first CL which was rejected: https://go-review.googlesource.com/c/54952/

And this is the second CL which got merged: https://go-review.googlesource.com/c/55210/

Running JS Promises in series

After having read the absolutely wonderful Exploring ES6, I wanted to use my newly acquired ES6 skills in a new project. And promises were always the crown jewel of esoteric topics to me (after monads, of course :P).

Finally a new project came along, and I excitedly sat down to put all my knowledge into practice. I started nice and easy, moved on to Promise.all() to run multiple promises in parallel, but then a use case cropped up where I had to run promises in series. No sweat, just head over to SO and look up the answer. Surely, I am not the only one with this requirement. Sadly, most of the answers pointed to using async and other similar libraries. Nevertheless, I did find an answer which used just plain ES6 code. Aww yiss! Problemo solved.

I couldn't declare the functions in an array like the example did, because I had a single function. So I modified the code a bit to fit my use case. This is how it came out -

'use strict';
const load = require('request');

let myAsyncFuncs = [
  computeFn(1),
  computeFn(2),
  computeFn(3)
];

function computeFn(val) {
  return new Promise((resolve, reject) => {
    console.log(val);
    // I have used load() but this can be any async call
    load('http://exploringjs.com/es6/ch_promises.html', (err, resp, body) => {
      if (err) {
        return reject(err);
      }
      console.log("resolved")
      resolve(val);
    });
  });
}

myAsyncFuncs.reduce((prev, curr) => {
  console.log("returned one promise");
  return prev.then(curr);
}, Promise.resolve(0))
.then((result) => {
  console.log("At the end of everything");
})
.catch(err => {
  console.error(err);
});

Not so fast. As you can guess, it didn’t work out. This was the output I got -

1
2
3
returned one promise
returned one promise
returned one promise
At the end of everything
resolved
resolved
resolved

The promises were all executing immediately, without waiting for the previous promise to finish. What was going on? After some more digging, I found this (Advanced mistake #3: promises vs promise factories).

Aha! So a promise starts executing as soon as it is instantiated, not when it is chained with then(). So all I had to do was delay the creation of each promise until the previous one had finished, i.e. pass around promise factories instead of promises. bind to the rescue!

'use strict';
const load = require('request');

let myAsyncFuncs = [
  computeFn.bind(null, 1),
  computeFn.bind(null, 2),
  computeFn.bind(null, 3)
];

function computeFn(val) {
  return new Promise((resolve, reject) => {
    console.log(val);
    // I have used load() but this can be any async call
    load('http://exploringjs.com/es6/ch_promises.html', (err, resp, body) => {
      if (err) {
        return reject(err);
      }
      console.log("resolved")
      resolve(val);
    });
  });
}

myAsyncFuncs.reduce((prev, curr) => {
  console.log("returned one promise");
  return prev.then(curr);
}, Promise.resolve(0))
.then((result) => {
  console.log("At the end of everything");
})
.catch(err => {
  console.error(err);
});

And now -

returned one promise
returned one promise
returned one promise
1
resolved
2
resolved
3
resolved
At the end of everything

Finally :)

Conclusion - If you want to execute promises in series, don't create the promises upfront, because they start executing immediately. Delay their creation until the previous promise has finished.
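To make the pattern reusable, the whole thing can be wrapped into a small helper. Here is a sketch (runSerial is just a name I made up; it expects an array of promise factories) -

'use strict';

// Runs an array of promise factories (functions returning promises)
// one after another, and resolves to an array of their results.
function runSerial(factories) {
  const results = [];
  return factories.reduce(
    (prev, factory) => prev.then(() => factory()).then(res => {
      results.push(res);
      return results;
    }),
    Promise.resolve()
  );
}

// Usage with the computeFn from above:
// runSerial([1, 2, 3].map(val => () => computeFn(val)))
//   .then(results => console.log(results)); // [1, 2, 3]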

How to smoothen contours in OpenCV

Disclaimer: I am in no way an expert in statistics, so many of the details are beyond me. This is just an explanation of my attempt to solve the problem I had.


Recently, I was working on some cool stuff in image processing. I had to extract some shapes after binarizing some images. The final task was to smoothen the contours extracted from the shapes to give them a better feel.

After researching around a bit, the task was clear. All I had to do was resample the points of each contour at regular intervals and draw a spline through the resampled control points. But OpenCV has no native function to do this, so I had to resort to scipy and numpy. Now, another problem was the data representation: though OpenCV uses numpy arrays internally, you have to jump through a couple of hoops to get everything running along smoothly.

Without wasting further time, here’s the code -

Get the contours from the binary image -

import cv2

# Load the source image as grayscale (thresholding works on a single channel).
# 'shapes.png' is just a placeholder for your own image.
img = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)

ret, thresh_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh_img,
                                       cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)

Now comes the scipy and numpy code to smoothen each contour -

import numpy
import cv2
from scipy.interpolate import splprep, splev

smoothened = []
for contour in contours:
    x,y = contour.T
    # Convert from numpy arrays to normal arrays
    x = x.tolist()[0]
    y = y.tolist()[0]
    # https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.splprep.html
    tck, u = splprep([x,y], u=None, s=1.0, per=1)
    # https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.linspace.html
    u_new = numpy.linspace(u.min(), u.max(), 25)
    # https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.splev.html
    x_new, y_new = splev(u_new, tck, der=0)
    # Convert it back to numpy format for opencv to be able to display it
    res_array = [[[int(i[0]), int(i[1])]] for i in zip(x_new,y_new)]
    smoothened.append(numpy.asarray(res_array, dtype=numpy.int32))

# Overlay the smoothed contours on the original image
cv2.drawContours(original_img, smoothened, -1, (255,255,255), 2)

P.S.: Credit has to be given to this SO answer which served as the starting point.

As you can see, data conversion is required before passing the points to splprep, and then again when converting the result back into the format OpenCV expects for drawing.

Hope you found it useful. If you have a better way to achieve the same result, please do not hesitate to let me know in the comments!

Quick and Dirty intro to Debian packaging

Required background

I assume you have installed a Debian package at least once in your life, and that you are reading this because you want to know how they are created or because you want to actually create one.

Back story

Over my career as a software engineer, there have been several times when I had to create a Debian package. I always managed to avoid learning how to actually create one, sometimes by using company-internal tools and sometimes fpm.

Recently, I had the opportunity to create a Debian package to deploy a project for a client, and I decided to learn how Debian packages are "actually" created - "the whole nine yards". Well, this is an account of that adventure. :)

As usual, I looked through a couple of blog posts on the internet, but most of them had the same "man page" look and feel. And I absolutely dread man pages. Without getting discouraged, I decided to plough through, and I came across this page, which finally gave me some much-needed clarity.

Into the real stuff!

So, these are the things that I wanted to happen when I did dpkg -i on my package -

  1. Put the source files inside a “/opt/<project-name>/” folder.
  2. Put an upstart script inside the “/etc/init/” folder.
  3. Put a cron job in “/etc/cron.d/” folder.

The command that you use to build the Debian package is -

$ dpkg-deb --build <folder-name>

The contents of that folder are where the magic is.

Let's say that your folder is package. Inside package you need to have a folder named DEBIAN. Then, depending on where you want your files to end up in the filesystem, you create the corresponding folder structure. So in my case, it looked something like this -

$ tree -L 3 package/
package/
├── DEBIAN
│   ├── control
│   └── postinst
├── etc
│   ├── cron.d
│   │   └── cron-file
│   └── init
│       └── project_name.conf
└── opt
    └── <project-name>
        ├── main.js
        ├── folder1
        ├── node_modules
        ├── package.json
        ├── folder2
        └── helper.js

Consider the package folder to be the root (/). Don't worry about the contents of the DEBIAN folder; we'll come to that later.

After this, just run the command -

$ dpkg-deb --build package

Voila! You have a Debian package ready!

If you see any errors now, it's probably related to the contents of the DEBIAN folder. So let's discuss them one by one.

  • control

If you just want to build the package and be done with it, you only need to have the control file. It's a kind of package-descriptor file with some fields that you need to fill in. Each field begins with a tag, followed by a colon and then the body of the field. The compulsory fields are Package, Version, Maintainer and Description.

Here’s how my control file looks -

Package: myPackage
Version: 1.0.0-1
Architecture: amd64
Depends: libcairo2-dev, libpango1.0-dev, libssl-dev, libjpeg62-dev, libgif-dev
Maintainer: Agniva De Sarker <agniva.quicksilver@gmail.com>
Description: Node js worker process to consume from the Meteor job queue
 The myPackage package consumes jobs submitted by users to the Meteor
 web application.

The Depends field lets you specify the dependencies that need to be pre-installed for your package to work. Architecture is self-explanatory. (Small note on this - Debian uses amd64 for 64-bit systems, not x86_64.)

For further info, see man 5 deb-control

  • preinst

If you want to run some sanity checks before the installation begins, you can put a shell script here. The important thing to note is that dpkg decides whether to continue with the installation based on the exit codes of these scripts, so you should write "set -e" at the top of your script so that it aborts on the first failure. Don't forget to make it executable. A minimal example is sketched below.
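A minimal sketch of what a preinst could look like (the check here is just an example; put whatever sanity checks your package needs) -

#!/bin/bash
set -e

# Abort the installation if the user we deploy as does not exist.
# 'myuser' is a made-up name for illustration.
if ! id -u myuser > /dev/null 2>&1; then
    echo "user myuser does not exist, aborting" >&2
    exit 1
fi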

  • postinst

This is executed after the package is installed. Same rules apply as before. This is how my postinst looks -

#!/bin/bash
set -e

#Move the bootstrap file to proper location
mv /opt/myPackage/packaging/bootstrap.prod /opt/myPackage/.bootstraprc

#Clear the DEBIAN folder
rm -rf /opt/myPackage/packaging/DEBIAN

  • prerm

Gets executed before removing the package.

  • postrm

Gets executed after removing the package. You usually want to run clean-up tasks in this script, as in the sketch below.
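Something along these lines (a sketch; the .bootstraprc path comes from the postinst above, and the logs folder is purely hypothetical) -

#!/bin/bash
set -e

# Remove files that were created outside of dpkg's knowledge,
# e.g. the config file that postinst moved into place and any runtime data.
rm -f /opt/myPackage/.bootstraprc
rm -rf /opt/myPackage/logs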

Taking a step further

As you can imagine, this entire process can easily be automated and made part of your build system. Just create the required parent folders, put the source code and config files in the right places, and keep the DEBIAN folder's files somewhere in your repo so you can copy them to the target folder at build time.

Since I had a Node project, I mapped it to the "scripts": {"build": "<command_to_run>"} entry in the package.json file. You can apply the same approach to projects in other programming languages too.
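For illustration, such a build script could look roughly like this (a sketch; I am assuming the upstart conf, cron file and DEBIAN files live in a packaging/ directory in the repo, as hinted at by the postinst above) -

#!/bin/bash
set -e

# Assemble the package folder from scratch on every build.
rm -rf package
mkdir -p package/opt/myPackage package/etc/init package/etc/cron.d

# Copy the application files into place (the package folder maps to /).
cp -r main.js helper.js folder1 folder2 node_modules package.json package/opt/myPackage/
cp packaging/project_name.conf package/etc/init/
cp packaging/cron-file package/etc/cron.d/

# Copy the maintainer scripts and build the .deb.
cp -r packaging/DEBIAN package/
dpkg-deb --build package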

TLDR

Just to recap quickly -

  1. Create a folder you will use to build the package.
  2. Put a DEBIAN folder inside it with the control file. Add more files depending on your need.
  3. Put the other files that you want to be placed in the filesystem after installation considering the folder as the root.
  4. Run dpkg-deb --build <folder-name>

Keep in mind, this is the bare minimum you need to create a Debian package. Ideally, you would also want to add a copyright file, a changelog and a man page. There is a tool called lintian that checks your package against Debian best practices.
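Running it against the freshly built package is as simple as -

$ lintian package.deb

and it will point out missing changelogs, copyright files and other policy violations.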

Hope this intro was helpful. As usual, comments and feedback are always appreciated!

Passing array buffer to Meteor gridFS

MongoDB has a document size limit of 16MB. To store larger files, it is recommended to use GridFS.

Now, if you are a Meteor user, you can very easily use the Meteor Collection-FS package to store and upload files. But things are slightly different when the object larger than 16MB that you want to store is not a file. Usually this scenario comes up on the server side, when you generate some large content and want to store it.

I was doing something like this -

//Collection initialisation
var Store = new FS.Store.GridFS("fileuploads");

FileUploads = new FS.Collection("fileuploads", {
  stores: [Store]
});

var buffer = new Buffer(JSON.stringify(jsonObj));
FileUploads.insert(buffer);

I found myself stuck with this error when I tried to use the insert function with the generated data -

DataMan constructor requires a type argument when passed a Buffer

This is actually a mistake in the documentation here - https://github.com/CollectionFS/Meteor-CollectionFS#initiate-the-upload - which says that the insert function accepts a Buffer object on the server side. It doesn't. (Issue raised here.) It accepts a file object with its data set to a Buffer, along with a MIME type.

Here is how to get it done -

var buffer = new Buffer(JSON.stringify(jsonObj));
var newFile = new FS.File();
newFile.attachData(buffer, {type: 'application/javascript'});
FileUploads.insert(newFile)

Now this will work :)

But we are not done yet!

How are we going to read the data back if we are doing it on the client side?

var fs = FileUploads.findOne({_id: fileId});
$.ajax({
  url: fs.url(),
  type: "GET",
  dataType: "binary",
  processData: false,
  success: function(data){
    var reader = new FileReader();
    reader.onload = function (event) {
      // event.target.result contains your data .. TADA!
      // console.log(event.target.result)
    };
    reader.onerror = function (event) {
      console.error(event.target.error);
    };
    reader.readAsBinaryString(new Blob([ data ],
      { type: 'application/octet-stream' }));
  }
});

Any comments and feedback are most appreciated.

Update (May 25th, 2016): I just saw that the author of the repo has stopped maintaining the project. Sorry to hear that. It's still a great library, and I hope this will help users who might still be using it.