Experiments with image manipulation in WASM using Go

The Go master branch recently landed a working prototype implementation of WebAssembly. And being a WASM enthusiast, I naturally wanted to take it out for a spin.

In this post, I will be writing down my thoughts on a weekend experiment I did with manipulating images in Go. The demo takes an input image from the browser, applies various image transformations like brightness, contrast, hue, saturation etc., and then dumps it back to the browser. This tests 2 things - plain CPU-bound execution, which is what the image transformations do, and moving data back and forth between JS and Go land.

Callbacks

It should be clarified how to communicate with Go from JS land. It is not the usual way we do it in emscripten, which is to expose a function and call that function from JS. In Go, interop with JS is done through callbacks. In your Go code, you set up callbacks which can be invoked from JS. These are mainly handlers for DOM events, in response to which you want your Go code to run.

It looks something like this -

js.NewEventCallback(js.PreventDefault, func(ev js.Value) {
	// handle event
})

There is a pattern here - as your application grows, it becomes a list of callback handlers attached to DOM events. I look at it like the url handlers of a REST app.

To organize this, I declare all of my callbacks as methods of my main struct and attach them in a single place. It is kind of similar to how you would declare your url handlers in different files and set up all of your routes in a single place.

// Setup callbacks
s.setupOnImgLoadCb()
js.Global.Get("document").
	Call("getElementById", "sourceImg").
	Call("addEventListener", "load", s.onImgLoadCb)

s.setupBrightnessCb()
js.Global.Get("document").
	Call("getElementById", "brightness").
	Call("addEventListener", "change", s.brightnessCb)

s.setupContrastCb()
js.Global.Get("document").
	Call("getElementById", "contrast").
	Call("addEventListener", "change", s.contrastCb)

And then in a separate file, write your callback code -

func (s *Shimmer) setupHueCb() {
	s.hueCb = js.NewEventCallback(js.PreventDefault, func(ev js.Value) {
		// quick return if no source image is yet uploaded
		if s.sourceImg == nil {
			return
		}
		delta := ev.Get("target").Get("value").Int()
		start := time.Now()
		res := adjust.Hue(s.sourceImg, delta)
		s.updateImage(res, start)
	})
}

Implementation

My primary gripe is the way image data is passed around from Go land to browser land.

While uploading the image, I set the src attribute to the base64 encoded form of the entire image. That value goes to the Go code, which decodes it back to binary, applies the transformation, and then encodes it back to base64 and sets the src attribute of the target image.
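
Roughly, that round trip looks like this (a simplified sketch, not the actual shimmer code; the transform function fn is just a placeholder) -

import (
	"bytes"
	"encoding/base64"
	"image"
	"image/jpeg" // jpeg.Encode, and registers the JPEG decoder for image.Decode
	"strings"
)

// transformDataURL decodes a "data:image/jpeg;base64,..." URL, applies fn,
// and re-encodes the result as a data URL to set on the target img's src.
func transformDataURL(dataURL string, fn func(image.Image) image.Image) (string, error) {
	// Strip the "data:image/jpeg;base64," prefix.
	idx := strings.Index(dataURL, ",")
	raw, err := base64.StdEncoding.DecodeString(dataURL[idx+1:])
	if err != nil {
		return "", err
	}
	img, _, err := image.Decode(bytes.NewReader(raw))
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := jpeg.Encode(&buf, fn(img), nil); err != nil {
		return "", err
	}
	return "data:image/jpeg;base64," + base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}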

This makes the DOM incredibly heavy and requires passing a huge string from Go to JS. Possibly, if SharedArrayBuffer support lands in WASM, this might improve. I am also looking into setting pixels directly on a canvas to see if that gives any benefit. Even shaving off the base64 conversion should buy us some time. (Other ideas will be very appreciated :grin:)

Performance

For a JPEG image of size 100KB, the time it takes to apply a transformation is around 180-190ms. The time increases with the size of the image. This is using Chrome 65. (FF has been giving me some errors which I didn't have time to investigate :sweat_smile:).

(Screenshot: transformation timings)

Performance snapshots show something similar.

(Screenshot: performance profile)

The heap can get quite huge. A heap snapshot came in at about 1GB.

Finishing thoughts

The complete repo is here - github.com/agnivade/shimmer. Feel free to poke around it. Just a reminder that I wrote it in one day, so obviously there are things that can be improved. I will be looking into those next.

P.S. - Note that image transformations are not applied on top of one another; i.e. if you change the brightness and then change the hue, the resulting image will only have the hue changed from the original base image. This is a TODO item for now.

Learn Web Assembly the hard way

I had experimented with web assembly before, but only up to running the “hello world” example. After reading a recent post on how to load wasm modules efficiently, I decided to jump into the gory details of web assembly and learn it the hard way.

What follows is a recount of that adventure.

For our demo, we will have the simplest possible function, which just returns the number 42. Then we will go from the easiest to the hardest level of running it. As a prerequisite, you need to have the emscripten toolchain up and running. Please refer to http://kripken.github.io/emscripten-site/docs/getting_started/downloads.html for instructions.

Level 0 :sunglasses:

Create a file hello.c:

#include <emscripten.h>

EMSCRIPTEN_KEEPALIVE
int fib() {
  return 42;
}

Compile it with emcc hello.c -s WASM=1 -o hello.js

The WASM=1 flag signals emscripten to generate wasm code. Otherwise, it generates asm.js code by default. Note that even though the output is set to hello.js, it will generate both hello.wasm and hello.js. The .js file loads the .wasm file and sets up the runtime environment.

Then load this in an HTML file like:

<html>
<head>
<script src="hello.js"></script>
<script>
Module.onRuntimeInitialized = function() {
  console.log(Module._fib())
}
</script>
</head>
</html>

Put all of these files in a folder and run a local web server.

Great, this completes level 0. But the js file is just a shim which sets up some stuff which we don’t want. We want to load the .wasm file by ourselves and run that. Let’s do that.

Level 1 :godmode:

Let’s try with the one mentioned here - https://developers.google.com/web/updates/2018/03/emscripting-a-c-library. Modify the HTML file to -

<html>
<head>
<script>
(async function() {
  const imports = {
    env: {
      memory: new WebAssembly.Memory({initial: 1}),
      STACKTOP: 0,
    }
  };
  const {instance} = await WebAssembly.instantiateStreaming(fetch('hello.wasm'), imports);
  console.log(instance.exports._fib());
})();
</script>
</head>
</html>

We have a wonderfully cryptic error: WebAssembly Instantiation: Import #5 module="global" error: module is not an object or function

Some digging around on SO (here and here) led me to find that compiling with the -s WASM=1 flag normally adds some glue code along with the wasm code to interact with the javascript runtime. However, in our case that is not needed at all. We can remove it with -s SIDE_MODULE=1.

Alright, so let’s try - emcc hello.c -s WASM=1 -s SIDE_MODULE=1 -o hello.js and modify the code as mentioned in the links.

(async () => {
  const config = {
    env: {
        memoryBase: 0,
        tableBase: 0,
        memory: new WebAssembly.Memory({
            initial: 256,
        }),
        table: new WebAssembly.Table({
            initial: 0,
            element: 'anyfunc',
        }),
    }
  }
  const fetchPromise = fetch('hello.wasm');
  const {instance} = await WebAssembly.instantiateStreaming(fetchPromise, config);
  const result = instance.exports._fib();
  console.log(result);
})();

Still no luck. Same error.

Finally, after a couple of frustrating hours, a breakthrough came via this post - https://stackoverflow.com/questions/44346670/webassembly-link-error-import-object-field-dynamictop-ptr-is-not-a-number.

So it seems that an optimization level greater than 0 is required. Otherwise, even if you specify SIDE_MODULE, it does not remove the runtime.

Let’s add that flag and run the command - emcc hello.c -Os -s WASM=1 -s SIDE_MODULE=1 -o hello.wasm

Note that in this case, we directly generate the .wasm file without any js shim.

This works !

Level 2 :goberserk:

But we need to go deeper. Is there no way to compile to normal web assembly and still load the wasm file without the js shim ? Of course there is.

Digging a bit further, I got some more clarity from this page - https://github.com/kripken/emscripten/wiki/WebAssembly-Standalone. So either we use -s SIDE_MODULE=1 to create a dynamic library, or we pass -Os to remove the runtime. But in the latter case, we need to write our own loading code to use it. Strap in, this adventure is going to get bumpy.

Let’s use the same code and compile without the -s SIDE_MODULE=1 flag and see what error we get.

Import #0 module="env" function="DYNAMICTOP_PTR" error: global import must be a number.

Just making a guess, I figured that the env object needs a DYNAMICTOP_PTR field whose value is a number. Let’s add DYNAMICTOP_PTR as 0 in the env object and see what happens.

We have a new error - WebAssembly Instantiation: Import #1 module="env" function="STACKTOP" error: global import must be a number.

Ok, it looks like there are still more imports that need to be added. This was turning into a whack-a-mole game. I remembered that there is the WebAssembly Binary Toolkit, which comprises a suite of tools used to translate between the wasm and wat formats.

Let’s try to convert our wasm file to wat and take a peek inside.

$wasm2wat hello.wasm  | head -30
(module
  (type (;0;) (func (param i32 i32 i32) (result i32)))
  (type (;1;) (func (param i32) (result i32)))
  (type (;2;) (func (param i32)))
  (type (;3;) (func (result i32)))
  (type (;4;) (func (param i32 i32) (result i32)))
  (type (;5;) (func (param i32 i32)))
  (type (;6;) (func))
  (type (;7;) (func (param i32 i32 i32 i32) (result i32)))
  (import "env" "DYNAMICTOP_PTR" (global (;0;) i32))
  (import "env" "STACKTOP" (global (;1;) i32))
  (import "env" "STACK_MAX" (global (;2;) i32))
  (import "env" "abort" (func (;0;) (type 2)))
  (import "env" "enlargeMemory" (func (;1;) (type 3)))
  (import "env" "getTotalMemory" (func (;2;) (type 3)))
  (import "env" "abortOnCannotGrowMemory" (func (;3;) (type 3)))
  (import "env" "___lock" (func (;4;) (type 2)))
  (import "env" "___syscall6" (func (;5;) (type 4)))
  (import "env" "___setErrNo" (func (;6;) (type 2)))
  (import "env" "___syscall140" (func (;7;) (type 4)))
  (import "env" "_emscripten_memcpy_big" (func (;8;) (type 0)))
  (import "env" "___syscall54" (func (;9;) (type 4)))
  (import "env" "___unlock" (func (;10;) (type 2)))
  (import "env" "___syscall146" (func (;11;) (type 4)))
  (import "env" "memory" (memory (;0;) 256 256))
  (import "env" "table" (table (;0;) 6 6 anyfunc))
  (import "env" "memoryBase" (global (;3;) i32))
  (import "env" "tableBase" (global (;4;) i32))
  (func (;12;) (type 1) (param i32) (result i32)
    (local i32)

Ah, so now we have a better picture. We can see that apart from memory, table, memoryBase and tableBase which we had added earlier, we have to include a whole lot of functions for this to work. Let’s do that.

(async () => {
  const config = {
    env: {
        DYNAMICTOP_PTR: 0,
        STACKTOP: 0,
        STACK_MAX: 0,
        abort: function() {},
        enlargeMemory: function() {},
        getTotalMemory: function() {},
        abortOnCannotGrowMemory: function() {},
        ___lock: function() {},
        ___syscall6: function() {},
        ___setErrNo: function() {},
        ___syscall140: function() {},
        _emscripten_memcpy_big: function() {},
        ___syscall54: function() {},
        ___unlock: function() {},
        ___syscall146: function() {},

        memory: new WebAssembly.Memory({initial: 256, maximum: 256}),
        table: new WebAssembly.Table({initial: 6, element: 'anyfunc', maximum: 6}),
        memoryBase: 0,
        tableBase: 0,
    }
  }
  const fetchPromise = fetch('hello.wasm');
  const {instance} = await WebAssembly.instantiateStreaming(fetchPromise, config);
  const result = instance.exports._fib();
  console.log(result);
})();

And voila ! This code works.

Level 3 :trollface:

Now that I have come so far, I wanted to write the code in the wat (web assembly text) format itself to get the full experience. Turns out, the wat format is quite readable and easy to understand.

Decompiling the current hello.wasm with the same wasm2wat command as before, and scrolling to our fib function shows this -

(func (;19;) (type 3) (result i32)
  i32.const 42)

Not completely readable, but not very cryptic either. Web Assembly uses a stack architecture where values are put on the stack. When a function finishes execution, a single value is left on the stack, which becomes the return value of the function.

So this code seems like it is putting a constant 42 on the stack, which is finally returned.

Let’s write a .wat file like -

(module
   (func $fib (result i32)
      i32.const 42
   )
   (export "fib" (func $fib))
)

And then compile it to .wasm with wat2wasm hello.wat

Now, our wasm file does not have any dependencies. So we can get rid of our import object altogether !

(async () => {
  const fetchPromise = fetch('hello.wasm');
  const {instance} = await WebAssembly.instantiateStreaming(fetchPromise);
  const result = instance.exports.fib();
  console.log(result);
})();

Finally, we have the code which we want :relieved:. Since we are hand-writing our wasm code, we have full control of everything, and therefore we don’t need to jump through the extra hoops of js glue. This is certainly not something you would want to do for production applications, but it is an interesting adventure to open the hood of web assembly and take a peek inside.

Quick guide to JSON operators and functions in Postgres

Postgres introduced JSON support in 9.2. And with 9.4, it introduced JSONB, which improved querying and indexing of json fields by another notch. In this post, I would like to give a quick tour of some of the most common json operators and functions I have encountered, along with some gotchas which tripped me up. I have tested them on 9.6. If you have an earlier version, please refer to the documentation for any changes.

Querying json fields

Let’s start off with getting data from json keys. There are 2 operators for doing this: -> and ->>. The difference is very subtle and something which had tripped me up when I started writing postgres queries with json.

-> returns the value of a field as another json object. Whereas ->> returns the value of a field as text.

Let’s understand that with an example. Suppose you have a json object like {"a": "hobbit", "b": "elf"}.

To get the value of “a”, you can do:

test=> select '{"a": "hobbit", "b": "elf"}'::jsonb->'a';
 ?column?
----------
 "hobbit"
(1 row)

But, if you use the ->> operator, then:

test=> select '{"a": "hobbit", "b": "elf"}'::jsonb->>'a';
 ?column?
----------
 hobbit
(1 row)

Notice the "" in the output of the -> operator. -> treats the return value as a json object, and hence quotes the result. Its usefulness becomes apparent when you have a nested json object.

test=> select '{"a": {"internal": 45}, "b": "elf"}'::jsonb->>'a'->>'internal';
ERROR:  operator does not exist: text ->> unknown
LINE 1: ...'{"a": {"internal": 45}, "b": "elf"}'::jsonb->>'a'->>'intern...
                                                             ^
HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.


test=> select '{"a": {"internal": 45}, "b": "elf"}'::jsonb->'a'->>'internal';
 ?column?
----------
 45
(1 row)

Here, the difference is clear. If you use the ->> operator and try to access fields from its result, it doesn’t work. You need to use the -> operator for that. Bottom line is - if you want to get the value of a json field, use ->>, but if you need to access nested fields, use ->.

Key existence operator

You can also check whether a json field exists or not. Use the ? operator for that.

test=> select '{"a": "hobbit"}'::jsonb?'hello';
 ?column?
----------
 f
(1 row)

test=> select '{"a": "hobbit"}'::jsonb?'a';
 ?column?
----------
 t
(1 row)

Delete a key

To delete a json field, use the - operator.

test=> select '{"a": "hobbit", "b": "elf"}'::jsonb-'a';
   ?column?
--------------
 {"b": "elf"}
(1 row)

Update a key

To update a json field, you need to use the jsonb_set function.

Let’s say you have a table like this:

CREATE TABLE IF NOT EXISTS users (
	id serial PRIMARY KEY,
	full_name text NOT NULL,
	metadata jsonb
);

To update a field in the metadata column, you can do:

UPDATE USERS SET metadata=jsonb_set(metadata, '{category}', '"hobbit"') where id=1;

If the field does not exist, it will be created by default. You can also choose to disable that behavior by passing an additional flag.

UPDATE USERS SET metadata=jsonb_set(metadata, '{category}', '"hobbit"', false) where id=1;

There is a catch here. Note that the metadata column is nullable. What if you try to set a field when the value is NULL ? It fails silently !

test=> select metadata from users where id=1;
 metadata
----------

(1 row)

test=> update users set metadata=jsonb_set(metadata, '{category}', '""') where id=1;
UPDATE 1

test=> select metadata from users where id=1;
 metadata
----------

(1 row)

Either set the field to NOT NULL. Or if that is not possible, use the coalesce function.

test=> update users set metadata=jsonb_set(coalesce(metadata, '{}'), '{category}', '""') where id=1;
UPDATE 1
test=> select metadata from users where id=1;
     metadata
------------------
 {"category": ""}
(1 row)

This covers the most common use-cases of json queries that I have encountered. If you spot a mistake, or if there is something else you feel needs to be added, please feel free to point it out !

Hidden goodies inside lib/pq

It has happened to all of us. You get into a habit, accept a few inconveniences, and move on. It bothers you, but you procrastinate, putting it on the back burner with that mental TODO note. Yet surprisingly, sometimes the solution is right in front of you.

Take my case. I have always done _ "github.com/lib/pq" in my code to use the postgres driver. The _ is to register the driver with the standard library interface. Since we usually do not use the pq package directly, the _ lets us import the library without exposing the package in the code. Life went on and I didn’t even bother to look for better ways of doing things. Until the time came when I screamed “There has to be a better way !”.
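
For reference, the usual pattern looks something like this (a minimal sketch; the connection string is made up) -

import (
	"database/sql"

	_ "github.com/lib/pq" // blank import just to register the "postgres" driver
)

func openDB() (*sql.DB, error) {
	// "postgres" is the driver name that lib/pq registers in its init().
	return sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
}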

Indeed there was. It was the actual pq package, which I was already using but never actually imported ! Yes, I am shaking my head too :sweat:. Stupidly, I had always looked at database/sql and never bothered to look at the underlying lib/pq package. Oh well, dumb mistakes are bound to happen. I learn from them and move on.

Let’s take a look at some of the goodies that I found inside the package, and how it made my postgres queries look much leaner and elegant. :tada:

Arrays

Let’s say that you have a table like this -

CREATE TABLE IF NOT EXISTS users (
	id serial PRIMARY KEY,
	comments text[]
);

Believe it or not, for the longest time, I did this to scan a postgres array -

id := 1
var rawComments string
err := db.QueryRow(`SELECT comments from users WHERE id=$1`, id).Scan(&rawComments)
if err != nil {
	return err
}
comments := strings.Split(rawComments[1:len(rawComments)-1], ",")
log.Println(id, comments)

It was ugly. But life has deadlines and I moved on. Here is the better way -

var comments []string
err := db.QueryRow(`SELECT comments from users WHERE id=$1`, id).Scan(pq.Array(&comments))
if err != nil {
	return err
}
log.Println(id, comments)

Similarly, to insert a row with an array -

id := 3
comments := []string{"marvel", "dc"}
_, err := db.Exec(`INSERT INTO users VALUES ($1, $2)`, id, pq.Array(comments))
if err != nil {
	return err
}

Null Time

Consider a table like this -

CREATE TABLE IF NOT EXISTS last_updated (
	id serial PRIMARY KEY,
	ts timestamp
);

Now if you have an entry where ts is NULL, it is extremely painful to scan it in one shot. You can use coalesce or a CTE or something of that sort. This is how I would have done it earlier -

id := 1
var ts time.Time
err := db.QueryRow(`SELECT coalesce(ts, to_timestamp(0)) from last_updated WHERE id=$1`, id).Scan(&ts)
if err != nil {
	return err
}
log.Println(id, ts, ts.IsZero()) // ts.IsZero will still be false btw !

This is far better :+1: -

id := 1
var ts pq.NullTime
err := db.QueryRow(`SELECT ts from last_updated WHERE id=$1`, id).Scan(&ts)
if err != nil {
	return err
}
if ts.Valid {
	// do something
}
log.Println(id, ts.Time, ts.Time.IsZero()) // This is true !

Errors

Structured errors are great. But the only error type check that I used to have in my tests was for ErrNoRows, since that is the only useful error type exported by the database/sql package. It frustrated me to no end, because there are so many types of DB errors - syntax errors, constraint errors, not_null errors etc. Am I forced to do the dreadful string matching ?

I made the discovery when I learnt about the %#v format specifier. Doing a t.Logf("%+v", err) versus a t.Logf("%#v", err) makes a world of difference.

If you have a key constraint error, the first would print

pq: duplicate key value violates unique constraint "last_updated_pkey"

whereas the latter would print

&pq.Error{Severity:"ERROR", Code:"23505", Message:"duplicate key value violates unique constraint \"last_updated_pkey\"", Detail:"Key (id)=(1) already exists.", Hint:"", Position:"", InternalPosition:"", InternalQuery:"", Where:"", Schema:"public", Table:"last_updated", Column:"", DataTypeName:"", Constraint:"last_updated_pkey", File:"nbtinsert.c", Line:"433", Routine:"_bt_check_unique"}

Aha. So there is an underlying pq.Error type. And it has error codes ! Wohoo ! Better tests !

So in this case, the way to go would be -

pqe, ok := err.(*pq.Error)
if !ok {
	t.Fatal("unexpected type")
}
if string(pqe.Code) != "23505" {
	t.Error("unexpected error code.")
}

And that’s it ! For a more detailed look, head over to the package documentation.

Feel free to post a comment if you spot a mistake. Or if you know of some other hidden gems, let me know !

How to shrink an AWS EBS volume

Recently, I had a requirement to shrink the disk space of a machine I had set up. We had overestimated, and decided to use less space until the need arose. I had set up a 1TB disk initially, and we wanted it to be 100GB.

I thought it would be as simple as detaching the volume, setting the new size, and being done with it. Turns out you can increase the disk space, but not decrease it. Bummer - now I needed to do the shrinking manually.

Disclaimer:

This is taken nearly verbatim from Matt Berther’s post https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/, combined with @sinnardem’s suggestion. But I have shown the actual command outputs and updated some steps based on my experience of following the process.

Note: This worked for me on an Ubuntu 16.04 OS. YMMV. Proceed with caution. Take a snapshot of your volume before you do anything.

Basic idea:

We have a 1TB filesystem. Our target is to make it 100GB.

AWS stores all your data in EBS (Elastic Block Store), which allows detaching volumes from one machine and attaching them to another. We will use this to our advantage. We will create a 100GB volume, and attach both this newly created volume and the original volume to a temporary machine. From inside that machine, we will copy the data over from the original volume to the new one. Then we detach both volumes and attach the new volume to our original machine. Easy peasy. :tada:

Here we go !

  1. Note the hostname of the current machine. It should be something like ip-a-b-c-d.

  2. Shutdown the current machine. (Don’t forget to take the snapshot !).

  3. Detach the volume, name it as original-volume to avoid confusion.

  4. Create a new ec2 instance with the same OS as the current machine, with 100GB of storage. Note that it has to be in the same availability zone.

  5. Shutdown that machine

  6. Detach the volume from the machine, name it as new-volume to avoid confusion.

  7. Now create another new ec2 machine, t2.micro is fine. Again, this has to be in the same availability zone.

  8. Boot up the machine. Log in.

  9. Attach original-volume to this machine at /dev/sdf which will become /dev/xvdf1.

    Attach new-volume to this machine at /dev/sdg which will become /dev/xvdg1.

    It will take some time to attach because the machines are running. Do NOT attach while the machine is shut down, because then it will take the original-volume as the root partition and boot into it. We do not want that. (This happened to me.)

    We want the root partition to be the separate 8G disk of the t2.micro machine, and have 2 separate partitions to work with.

    After the attachment is complete (you will see so in the aws ec2 console), do a lsblk. Check that you can see the partitions.

     $lsblk
     NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
     xvda    202:0    0    8G  0 disk
     └─xvda1 202:1    0    8G  0 part /
     xvdf    202:80   0 1000G  0 disk  --> original-volume
     └─xvdf1 202:81   0 1000G  0 part
     xvdg    202:96   0  100G  0 disk  --> new-volume
     └─xvdg1 202:97   0  100G  0 part
    

    We are now all set to do the data transfer.

  10. First, check filesystem integrity of the original volume.

    ubuntu@ip-172-31-12-57:~$ sudo e2fsck -f /dev/xvdf1
    e2fsck 1.42.13 (17-May-2015)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    cloudimg-rootfs: 175463/128000000 files (0.1% non-contiguous), 9080032/262143739 blocks
    
  11. Shrink the filesystem to its minimum size (that is what the -M flag does).

    ubuntu@ip-172-31-12-57:~$ sudo resize2fs -M -p /dev/xvdf1
    resize2fs 1.42.13 (17-May-2015)
    Resizing the filesystem on /dev/xvdf1 to 1445002 (4k) blocks.
    Begin pass 2 (max = 492123)
    Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Begin pass 3 (max = 8000)
    Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Begin pass 4 (max = 31610)
    Updating inode references     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    The filesystem on /dev/xvdf1 is now 1445002 (4k) blocks long.
    
  12. Take the number from the previous step and calculate how many 16MB blocks would be required.
    ubuntu@ip-172-31-12-57:~$ echo $((1445002*4/(16*1024)))
    352
    

    Let’s round it up to 355 to leave a little margin.

  13. Start the copy.
    ubuntu@ip-172-31-12-57:~$ sudo dd bs=16M if=/dev/xvdf1 of=/dev/xvdg1 count=355
    355+0 records in
    355+0 records out
    5955911680 bytes (6.0 GB, 5.5 GiB) copied, 892.549 s, 6.7 MB/s
    
  14. Double check that all changes are synced to disk.
    ubuntu@ip-172-31-12-57:~$ sync
    
  15. Resize the new volume.
    ubuntu@ip-172-31-12-57:~$ sudo resize2fs -p /dev/xvdg1
    resize2fs 1.42.13 (17-May-2015)
    Resizing the filesystem on /dev/xvdg1 to 26214139 (4k) blocks.
    The filesystem on /dev/xvdg1 is now 26214139 (4k) blocks long.
    
  16. Check for filesystem integrity.
    ubuntu@ip-172-31-12-57:~$ sudo e2fsck -f /dev/xvdg1
    e2fsck 1.42.13 (17-May-2015)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    cloudimg-rootfs: 175463/12800000 files (0.1% non-contiguous), 1865145/26214139 blocks
    
  17. Shutdown the machine.

  18. Detach both volumes.

  19. Attach the new-volume to your original machine as its root device (/dev/sda1).

  20. Log in to the machine. You will see that the hostname is still set to that of the machine from which you created the volume. We need to set it back to the original hostname.

    sudo hostnamectl set-hostname ip-a-b-c-d
    
  21. Reboot.

That should be it. If you find anything that has not worked for you or you have a better method, please feel free to let me know in the comments !