Hell Oh Entropy!

Life, Code and everything in between

Hello FOSSASIA: Revisiting the event *and* the first program we write in C

I was at FOSSAsia this weekend to deliver a workshop on the very basics of programming. It capped a pretty rough couple of weeks for me, with travel to Budapest (for Linaro Connect) followed immediately by travel to Singapore. It seems I don’t travel east across time zones very well, and the effects were visible in me napping at odd hours and generally looking groggy through the weekend in Singapore. It was all worth it though, because despite a number of glitches, I had some real positives to take back from the conference.

The conference

FOSSAsia had been on my list of conferences to visit due to Kushal Das telling me time and again that I’d meet interesting people there. I had proposed a talk (since I can’t justify the travel just to attend) a couple of years ago but dropped out since I could not find sponsors for my talk and FOSSAsia was not interested in sponsoring me either. Last year I met Hong at SHD Belgaum and she invited me to speak at FOSSAsia. I gladly accepted since Nisha was going to volunteer anyway. However as things turned out in the end, my talk got accepted and I found sponsorship for travel and stay (courtesy Linaro), but Nisha could not attend.

I came (I’m still in SG, waiting for my flight) half-heartedly since Nisha did not accompany me, but the travel seemed worth it in the end. I met some very interesting people and was able to deliver a workshop that I was satisfied with.

Speaking of the workshop…

I was scheduled to speak first thing in the morning on the last day (Sunday) and I was pretty sure I was going to be the only person standing, with nobody in their right mind waking up early on a Sunday for a workshop. A Sunday workshop also meant that I knew the venue and its deficiencies - the “Scientist for a Day” part of the Science Center was a disaster since it was completely open and noisy, with lunch being served right next to the room on the first day. I was wary of that, but the Sunday morning slot protected me from it and my workshop went on without any such glitches.

The workshop content itself was based on an impromptu ‘workshop’ I did at FUDCon Pune 2015, but a little more organized. Here’s a blow by blow account of the talk for those who missed it, and also a reference for those who attended and would like a reference to go back to in future.

Hell Oh World

It all starts with this program. Hello World is what we all say when we are looking to learn a new language. However, after Hello World, we move up to learn the syntax of the language and then try to solve more complex user problems, ignoring the wonderful things that happened underneath Hello World to make it all happen. This session is an attempt to take a brief look into these depths. Since I am a bit of a cynic, my Hello World program is slightly different:

#include <stdio.h>

int
main (void)
{
  printf ("Hell Oh World!\n");
  return 0;
}

We compile this program:

$ gcc -o helloworld helloworld.c

We can see that the program prints the result just fine:

$ ./helloworld 
Hell Oh World!

But then there is so much that went into making that program work. Let’s take a look at the binary using a process called disassembly, which prints the binary program in a human-readable format - well, at least readable to humans who know assembly language programming.

$ objdump -d helloworld

We wrote only one function: main, so we should see only that. Instead however, we see so many other functions present in the binary. In fact, you were lied to when they told you back in college that main() is the entry point of the program! The entry point is a function called _start, which calls a function in the GNU C Library called __libc_start_main, which in turn calls the main function. When you invoke the compiler to build the helloworld program, you’re actually running a number of commands in sequence. In general, you do the following steps:

  • Preprocess the source code to expand macros and includes
  • Compile the source to assembly code
  • Assemble the assembly source to binary object code
  • Link the code against its dependencies to produce the final binary program

Let us look at these steps one by one.

Preprocessing the source

$ gcc -E -o helloworld.i helloworld.c

Run this command instead of the first one to produce the preprocessed file. You’ll see that the resultant file has hundreds of lines of code, and among those hundreds of lines is the one line that we need: the prototype for printf, so that the compiler can identify the call to printf:

extern int printf (const char *__restrict __format, ...);

It is possible to just use this extern decl and avoid including the entire header file, but it is not good practice. The overhead of maintaining something like this is unnecessary, especially when the compiler can do the job of eliminating the unused bits anyway. We are better off just including a couple of headers and getting all declarations.

Compiling the preprocessed source

Contrary to popular belief, the compiler does not compile directly into a binary .o - it only generates assembly code. It then calls the assembler from the binutils project to convert the assembly into object code.

$ gcc -S -o helloworld.s helloworld.i

The assembly code is now just this:

    .file   "helloworld.i"
    .section    .rodata
.LC0:
    .string "Hell Oh World!"
    .text
    .globl  main
    .type   main, @function
main:
    .cfi_startproc
    pushq   %rbp
    .cfi_def_cfa_offset 16
    .cfi_offset 6, -16
    movq    %rsp, %rbp
    .cfi_def_cfa_register 6
    movl    $.LC0, %edi
    call    puts
    movl    $0, %eax
    popq    %rbp
    .cfi_def_cfa 7, 8
    ret
    .cfi_endproc
    .size   main, .-main
    .ident  "GCC: (GNU) 6.3.1 20161221 (Red Hat 6.3.1-1)"
    .section    .note.GNU-stack,"",@progbits

which is just the main function and nothing else. The interesting thing there, though, is that the printf function call has been replaced with puts, because the input to printf is just a string without any format specifiers and puts is much faster than printf in such cases. This is an optimization by gcc to make code run faster. In fact, gcc runs close to 200 optimization passes to attempt to improve the quality of the generated assembly code. However, it does not add any of those additional functions we saw in the binary.

So does the assembler add the rest of the gunk?

Assembling the assembly

$ gcc -c -o helloworld.o helloworld.s

Here is how we assemble the generated assembly source into an object file. The generated assembly can again be disassembled using objdump and we see this:

helloworld.o:     file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <main>:
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   bf 00 00 00 00          mov    $0x0,%edi
   9:   e8 00 00 00 00          callq  e <main+0xe>
   e:   b8 00 00 00 00          mov    $0x0,%eax
  13:   5d                      pop    %rbp
  14:   c3                      retq

which is no more than what we saw with the compiler, just in binary format. So it surely is the linker adding all of the gunk.

Putting it all together

Now that we know that it is the linker adding all of the additional stuff into helloworld, let’s look at how gcc invokes the linker. To do this, we need to add -v to the gcc command. You’ll get a lot of output, but the relevant bit is this:

$ gcc -v -o helloworld helloworld.c

/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/collect2 -plugin /usr/libexec/gcc/x86_64-redhat-linux/6.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccEdWzG5.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o helloworld /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1 -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../.. /tmp/cc3m0We9.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/x86_64-redhat-linux/6.3.1/crtend.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crtn.o
COLLECT_GCC_OPTIONS='-v' '-o' 'helloworld' '-mtune=generic' '-march=x86-64'

This is a long command, but the main points of interest are all of the object files (*.o) that get linked in. The linker concatenates those and then resolves the unresolved references to functions (only puts in this case) among them, using all of the libraries (libc.so via -lc, libgcc.so via -lgcc, etc.). To find out which of the object files has the definition of a specific function, say _start, disassemble each of them. You’ll find that crt1.o has the definition.

Dynamic and static linking

Another interesting thing to note in the disassembly of the final binary is that the call is to puts@plt, which is not exactly puts. It is in reality a construct called a trampoline, which helps the code jump to the actual puts function at runtime. We need this because puts is actually present in libc.so.6, which the binary simply claims to need by encoding the library name in the binary. To see this, inspect the binary using the -x flag:

$ objdump -x helloworld

helloworld:     file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000112:
start address 0x0000000000400430
Dynamic Section:
  NEEDED               libc.so.6

This is dynamic linking. When a program is executed, what is actually called first is the dynamic linker (ld.so), which then opens all dependent libraries, maps them into memory, and then calls the _start function in the program. During mapping, it also fills in a table of data called the Global Offset Table with offsets of all of the external references (puts in our case) to help the trampoline jump to the correct location.

If you want to be independent of the dynamic linker, then you can link the program statically:

$ gcc -static -o helloworld helloworld.c

This will however result in bloating of the program and also has a number of other disadvantages, like having to rebuild for every update of its dependent libraries and sub-optimal performance since the kernel can no longer share pages among processes for common code.

BONUS: Writing the smallest program

The basics were done with about 10 minutes to spare, so I showed how one could write the smallest program ever. In principle, the smallest program in C is:

int
main (void)
{
  return 42;
}

As is evident though, this still pulls in everything from the C and gcc libraries, so it is clearly hard to go smaller in C; let’s try it in assembly instead. We already know that _start is the real entry point of the program, so we need to implement that function. To exit the program, we need to tell the kernel to exit by invoking the exit_group syscall, which has syscall number 231 (0xe7). Here is what the function looks like:

.globl _start
_start:
    mov $0xe7, %rax    # exit_group syscall number (231)
    mov $42, %rdi      # exit status
    syscall

We can build this with gcc to get a very small binary but to do this, we need to specify that we don’t want to use the standard libraries:

$ gcc -o min -nostdlib min.s

The resultant file is 864 bytes, as opposed to the 8.5K binary from the C program. We can reduce this further by invoking the assembler and linker directly:

$ as -o min.o min.s
$ ld -o min min.o

This results in an even smaller binary, at 664 bytes! This is because gcc puts some extra meta information in the binary to identify its builds.


At this point we ran out of time and we had to cut things short. It was a fun interaction because there were even a couple of people with Macbooks and we spotted a couple of differences in the way the linker ran due to differences in the libc, despite having the same gcc installed. I wasn’t able to focus too much on the specifics of these differences and I hope they weren’t a problem for the attendees using Macs. In all it was a satisfying session because the audience seemed happy to learn about all of this. It looked like many of them had more questions (and wonderment, as I had when I learned these things for the first time) in their mind than they came in with and I hope they follow up and eventually participate in Open Source projects to fulfill their curiosity and learn further.


Science Hack Day, Belgaum

We almost did not go, and then we almost cancelled. It was a good thing though that we ended up going because this ended up being one of our more memorable weekends out and definitely the most memorable tech event I have been to.

It started quite early with Kushal telling me that Praveen Patil was organizing a Science Hack Day with Hong Phuc’s help and that it might be an interesting place to come to. He mentioned that there were many interesting people coming in and that Nisha and I would have a good time. I wasn’t very keen though because of my usual reluctance to get out and meet people. This was especially an issue for me with Cauldron and Connect happening back to back in September, draining most of my ‘extrovert energy’. So we were definitely not going.

That is until Praveen pinged me and asked me if I could come.

That was when I posed the question to Nisha, asking if she wanted to do something there. She was interested (she is usually much more enthusiastic about these things than I am anyway) and decided to propose a hack based on an idea that she had already had. She was also fresh from Pycon Delhi where she enjoyed meeting some very interesting people and she was hoping for a similar experience in Belgaum. She proposed a hack to replace a proprietary microcontroller board in one of Ira’s toys with a Raspberry Pi to do some interesting things on pressing many of its buttons, like reading from a list of TODO items and playing songs from the internet. A couple of days before we were to drive down to Belgaum though, we had some issues which led to us almost cancelling the trip. Thankfully we were able to resolve that and off we went to Belgaum.

Poyarekar ladies watching the inauguration

The first impression of the event was the resort where it was hosted. The Sankalp Bhumi Resort at Belgaum was outside the city and was suitably isolated to give us a calm location. It felt like we were on holiday and that helped me relax a bit. The first day started with an informal inauguration ceremony with all of the mentors (including Nisha) giving a brief description of what they were attempting during the weekend. I found out then that there were workshops for school students going on at the same time, teaching them a variety of science hacks like making toys out of paper and straws, soldering and so on. It seemed like it would be total chaos with kids running around all over the place, but it was anything but that. The workshops seemed very well managed and more importantly, almost every child there was the quintessential wide-eyed curious student marvelling at all of the ‘magic’ they were learning.

An organic map of the venue that Arun Ganesh and his team created by mapping the area using OSM.

The hacks themselves were quite interesting, with ideas ranging from using weather sensors on various boards to various solar applications like a sun-tracking solar panel, solar lamps, motion detectors, etc. My plan to remain aloof during the conference and just relax with Ira was foiled and I was promptly sucked into the engaging ideas. The fact that we had a bit of firefighting to do on the first morning (we forgot the password to the Pi and had to hunt for a microsd adapter to reset it) also helped me get more involved and appreciate the very interesting people that I found myself with.

The wall of people between me and the biomass burner

There were so many high points during the event that I am pretty sure I’ll miss quite a few. The most memorable one was the lightning talk that Prof. Prabhu gave on a biomass burner they had developed that could completely and cleanly burn a variety of bio-fuels, especially compacted dry organic rubbish. Then there was the spontaneous moment on Sunday when Arun Ganesh came up with a microscope with a broken mirror and wondered if we could add an LED under it with a firm pivot of some sort to provide light. It was a pretty simple hack, but we thoroughly enjoyed it, burning a couple of LEDs along the way and hunting for parts in everybody’s toolkits.

Oh, and did I mention that Praveen did a Laser show to demonstrate some physics and mathematics concepts?

The hacked microscope

After a wonderful two days, it was finally time to go and we did not depart without getting an assurance from Praveen that we will do this again next year. Like I said, this was the most memorable event I have been to and more importantly, it is an event that I would like to take my daughter to every year to show her the wonders of science from an early age, to let her interact with some very interesting people (they were her ‘other friends’ over the weekend) and expand her horizons beyond the learnings she will get from school.


Going to FUDCon: Phnom Penh Edition

A little over a year ago, FUDCon APAC happened in Pune. I know because I lost many nights’ sleep over it. The event also marked a turning point in my life because it coincided with my decision to move on from Red Hat and accept an offer with Linaro, a decision that I can now say was among the best I have taken in my life, despite the very difficult choice I had to make to leave arguably the best team one could ever work with. FUDCon also brought me in touch with many volunteers from across Asia and it was interesting to see the kinds of challenges they faced when talking about Fedora and Open Source in general. That was also when I got to know Nisa and Somvannda from Cambodia better, especially when I had the chance to go over to Phnom Penh for APAC budget discussions. They had wanted to do a FUDCon in Phnom Penh in 2015 and we simply put out a better bid then.

I was not as frustrated as I look in this picture. Photo by Kushal Das.

We started a new trend at last year’s FUDCon, where we held a discussion to decide and announce next year’s FUDCon date and location. We did not actually make an announcement to that effect, but we did have lots of discussions and in the end agreed to support Cambodia in its bid for 2016, especially with Sirko Kemter moving there and taking care of some of the logistics that we had concerns about.

So here we are a little over a year later and it looks like the Phnom Penh FUDCon is happening as planned, alongside Barcamp Phnom Penh. I was ambivalent about going, primarily because I was going to be away for a long time in September and I did not want to be without my family any longer - I love to travel, but there’s only so much time I can spend without my family. That problem is now solved since Nisha has been increasingly getting involved in Pyladies and python programming and wanted to be part of FUDCon as well. It’s going to be expensive, but hey, we deserve a vacation right?

Photo-op with the youngest participant at FUDCon. Photo by Kushal Das.

And an interesting vacation this promises to be with a planned visit to Siem Reap and then back to Phnom Penh to attend what is claimed to be the largest gathering in Phnom Penh. Nisha will be doing a talk on spatial mapping. I had done an impromptu workshop at FUDCon Pune in 2015, where an idle discussion on C programming turned into me showing a group of very interested students how a simple Hello World program gets translated into something that the machine understands and executes. I will attempt to make that into a more formal workshop at Phnom Penh.

So if you’re coming to FUDCon, I’ll see you there!


GNU Tools Cauldron 2016, ARMv8 multi-arch edition

Worst planned trip ever.

That is what my England trip for the GNU Tools Cauldron was, but that only seemed to add to the pleasure of meeting friends again. I flew in to Heathrow and started on a rather long train journey to Halifax, with two train changes from Reading. I forgot my phone on the train, but the friendly station manager at Halifax helped track it down and got it back to me. That was the first of the many times I forgot stuff in a variety of places during this trip. Like when I discovered that I had forgotten to carry a jacket or an umbrella. Or shorts. Or full-length pants for that matter. Like when I purchased an umbrella from Sainsbury’s but forgot to carry it out. I guess you get the drift of it.

All that mess aside, the conference itself was wonderful as usual. My main point of interest at the Cauldron this time was to try and make progress on discussions around multi-arch support for ARMv8. I have never talked about this on my blog in the past, so a brief introduction is in order.

What is multi-arch?

Processors evolve over time and introduce features that can be exploited by the C library to do work faster, like using the vector/SIMD unit to do memory copies and manipulation faster. However, this is at odds with the goal of the C library to be able to run on all hardware, including processors that may not have a vector unit or may not have that specific type of vector unit (e.g. those that have SSE4 but not AVX512 on x86). To solve this problem, we exploit the concepts of the PLT and dynamic linking.

I thought we were talking about multiarch, what’s a PLT now?

When a program calls a function in a library that it links to dynamically (i.e. only the reference to the library and the function are present in the binary, not the function implementation), it makes the call via an indirect reference (aka a trampoline) within the binary, because it cannot know where the function entry point in another library resides in memory. The trampoline uses a table (called the Procedure Linkage Table, PLT for short) to then jump to the final location, which is the entry point of the function.

In the beginning, the entry point is set to a function in the dynamic linker (let’s call it the resolver function), which looks for the function name in the libraries that the program links to and then updates the table with the result. The dynamic linker’s resolver function can do more than just look for the exact function name in the libraries the program links to, and that is where the concept of Indirect Functions, or IFUNCs, comes into the picture.

Further down the rabbit hole - what’s an IFUNC?

When the resolver function finds the function symbol in a library, it looks at the type of the symbol before simply patching the PLT with its address. If it finds that the function is of the IFUNC type (let’s call it the IFUNC resolver), it knows that executing that function will give the actual address of the function it should patch into the PLT. This is a very powerful idea because it now allows us to have multiple implementations of the same function built into the library for different features and then have the IFUNC resolver study its execution environment and return the address of the most appropriate function. This is fundamentally how multi-arch is implemented in glibc, where we have multiple implementations of functions like memcpy, each utilizing different features, like AVX, AVX2, SSE4 and so on. The IFUNC resolver for memcpy then queries the CPU to find the features it supports and returns the address of the implementation best suited to the processor.

… and we’re back! Multi-arch for ARMv8

ARMv8 has been making good progress in terms of adoption and it is clear that ARM servers are going to form a significant portion of the datacenters of the future. That said, major vendors of such servers with architecture licenses are trying to differentiate by innovating at the microarchitecture level. This means that a sequence of instructions may not necessarily have the same execution cost on all processors. This gives vendors an opportunity to write optimal code sequences for key function implementations (string functions for example) for their processors and have them included in the C library. They can then use the IFUNC mechanism to identify their processors and launch the routine best suited to their processor implementation.

This is all great, except that they can’t identify their processors reliably with the current state of the kernel and glibc. The way to identify a vendor processor is to read the MIDR_EL1 and REVIDR_EL1 registers using the MRS instruction. As the register names suggest, they are readable only in exception level 1, i.e. by the kernel, which makes it impossible for glibc to read them directly, unlike on Intel processors where the CPUID instruction is executable in userspace and is sufficient to identify the processor and its features.

… and this is only the beginning of the problem. ARM processors have a very interesting (and hence painful) feature called big.LITTLE, which allows for different processor configurations on a single die. Even if we had a way to read the two registers, you could end up reading MIDR_EL1 from one CPU and REVIDR_EL1 from another, so you need a way to ensure that both values are read from the same core.

This led to the initial proposal for kernel support to expose the information in a sysfs directory structure in addition to a trap into the kernel for the MRS instruction. This meant that for any IFUNC implementation to find out the vendor IDs of the cores on the system, it would have to traverse a whole directory structure, which is not the most optimal thing to do in an IFUNC, even if it happens only once in the lifetime of a process. As a result, we wanted to look for a better alternative.


The number of system calls in a directory traversal would be staggering for, say, a 128-core processor, and things will undoubtedly get worse as we scale. Another way for the kernel to share this (mostly static) information with userspace is via a vDSO, with an opaque structure in userspace pages in the vDSO and helper functions to traverse that structure. This however (or FS traversal for that matter) exposed a deeper problem: the extent of things we can do in an IFUNC.

An IFUNC runs very early in a dynamically linked program and even earlier in a statically linked program. As a result, there is very little that it can do because most of the complex features are not even initialized at that point. What’s more, the things you can do in a dynamic program are different from the things you can do in a static program (pretty much nothing right now in the latter), so that’s an inconsistency that is hard to reconcile. This makes the IFUNC resolvers very limited in their power and applicability, at least in their current state.

What were we talking about again?

The brief introduction turned out to be not so brief after all, but I hope it was clear. All of this fine analysis was done by Szabolcs Nagy from ARM when we talked about multi-arch first and the conclusion was that we needed to fix and enhance IFUNC support first if we had any hope of doing micro-architecture detection for ARM. However, there is another way for now…


A (not so) famous person (me) once said that glibc tunables are the answer to all problems including world hunger and of course, the ARMv8 multi-arch problem. This was a long term idea I had shared at the Linaro Connect in Bangkok earlier this year, but it looks like it might become a reality sooner. What’s more, it seems like Intel is looking for something like that as well, so I am not alone in making this potentially insane suggestion.

The basic idea here would be to have environment variable(s) to do/override IFUNC selection via tunables until the multi-arch situation is resolved. Tunables initialization is much more lightweight and only really relies on what the kernel provides on the stack and in the auxiliary vector, and on what the CPU provides directly. It seems easier to delay IFUNC resolution at least until tunables are initialized, and then look harder at how much further it can be delayed so that resolvers can use other things like the vDSO and/or files.

So here is yet another idea that has culminated into a “just finish tunables already!” suggestion. The glibc community has agreed on setting the 2.25 release as the deadline to get this support in, so hopefully we will see some real code in this time.


In search of the tiger

The pretext was Nisha’s cousin’s wedding in Bangalore. We were already high from our wonderful wildlife experience in Thailand and when the chance to travel to Bangalore came, we were in no doubt that a safari in one of Karnataka’s national parks will grace one of the weekends. It did not take us long to settle on the national park - we were going to Bandipur! I booked us in Bandipur Safari Lodge for 3 nights, which gave us 6 safaris to look for wildlife. A tiger would be amazing but I was more interested in spotting leopards and sloth bears.

We landed at the Bangalore airport early in the morning and waited for our Zoomcar to arrive. I found out at the airport that the car I had booked had an accident the previous night so I was getting a Ford Figo instead, not the start I was looking for. In any case, we picked up our car and drove on to Gundlupet. The drive was thankfully uneventful and we reached the lodge just in time for our evening safari. I informed the staff that our 2 year old would be accompanying us for the safari and he was not very happy. He warned us that if she got scared or bored or cried, there was no way to return before the end of the safari. I assured him that our kid was an angel and he let us board.

Ira was actually quite amazing on the safari, especially for a 2 year old. She excitedly looked for animals and birds (she already has the ability to spot birds somehow!) and shouted out when she saw something interesting. Therein lay the problem, unfortunately. She was often too excited and that was a bit disturbing for the other occupants of the jeep. She was not very disruptive though, and she fell asleep for the last third of the trip.

It rained for a while during the safari and that freshened the forest up a bit. The light looked divine and I was really excited about seeing an exotic animal or bird at that point. We saw some deer, gaur, peacock and langur, a mongoose on one side, a black-naped hare on the other. No tigers, no leopards, no sloth bears. I did not miss them either, because the entire experience was just wonderful - the lighting was great, the weather was pleasant and the birds and animals that we did see looked beautiful.

For the remaining trip we decided to alternate safaris to avoid disturbing our fellow holidayers. Nisha would do the safari next morning, I would do the following evening and morning and then Nisha would do the last one in the evening. We had already decided to skip the Monday morning safari in the interest of getting to Bangalore in time on Monday.

Nisha’s morning safari was a success - she saw a big male leopard ambling across, about 100 meters or so from her jeep. In her excitement she forgot to zoom in on the cat and managed to get some interesting habitat shots instead. Either way, we had our first sighting! I was excited at the prospect of seeing the leopard that evening. Something else was in store for me though.

My evening safari jeep had three families with young children and at first I did not think much of it. Once we entered the forest however, some of the children and adults were quite annoying and were constantly making noise. There were discussions of cricket as we trudged along and a lot of the shushing from the naturalist went unheeded. The most annoying bit was when we were waiting for a leopard to cross our track and at that precise moment one of the kids wanted to go pee. A parent stood up and demanded that the driver take them to a place where she could pee. As it turned out, the leopard did cross that path and later that night also brought a kill to that spot. Such was my luck that evening. Despite that, I did manage to get my first sighting of a Crested Serpent Eagle, so the evening was not completely wasted.

I requested that I be put on a different jeep the next day and that set me up for the most memorable safari yet. No, I did not see a tiger, nor a leopard nor a sloth bear.

We saw elephants, but that was not the highlight either, even though it was really exciting.

I saw the first Indian Nightjar of my life! I had never imagined seeing a nightjar in my lifetime because I consider myself an average (or maybe a bit below) birder. Thanks to the wonderful company I had in the jeep, we were able to spot the beauty just as it flew from the front of our jeep to a tree nearby. But then, even if I had not seen a nightjar, this would have been the best safari of the trip because of the company I had. In addition to a good driver (but then all of the driver/guides at the safari lodge are terrific) we had a couple of keen wildlife photographers who were great at spotting and tracking and best of all, nobody was talking, let alone about cricket. I now wanted to do the Monday morning safari too and not give it up. I spent the afternoon trying to convince Nisha.

It was again Nisha’s turn and as expected, she came back with dozens of shots of a popular male tiger called Prince. The big male was lazing in a waterhole and all of the jeeps had converged on him, everyone firing away furiously on their cameras. Since I had not seen any big cats, Nisha let me do the Monday safari.

We started the morning looking for tiger tracks. We saw tracks of a male in the area where Prince was seen the previous evening and were following them. The driver got a phone call and was told of a sighting in a different sector of the forest. He apologized to us and started speeding away to the other section of the forest. We held on for dear life!

After about 15 minutes of a very bumpy ride, we reached a spot where a couple of other jeeps were already waiting. After another 10 minutes or so, the big male crossed over, just beyond our sight! Our driver made a desperate last attempt to drive closer so that we could get one shot, but it was too late. He was not one to give up though and quickly guessed that the tiger was headed to the waterhole nearby. We sped to that place and waited. In no time, we saw the huge male amble down to the edge of the pond. He took a drink and then hind legs first, settled into the mossy water. The Basavanagatta male (that is what he was called, although I am sure the spelling is grossly wrong) stayed there for a long time, glancing at us now and then. He was a little over 100 meters away from us, so he did not have any reason to feel nervous. He finally got bored of sitting in there and swam over to the other end of the pool and walked off.

Our driver instantly knew which way he was going and started driving around to the other side of the huge thicket. We waited there and finally the cat stepped out from the thicket. The huge cat looked at us, gave a snarl and ambled into another thicket. This time he was not more than 20 meters away. We spent some more time waiting to see if it would come out from another side of the thicket, but he did not, or maybe he escaped from some other spot, we don’t know. What I did know was that I wanted to do another safari!

I had run out of time though, so we had to check out and drive back to Bangalore. The drive back had a hint of melancholy as both of us wanted to stay longer. The lodge itself did not exactly ooze luxury (it was quite basic) but the people were warm and the forests were enchanting. Stories of people staying there for weeks at a time did not help as I wanted to do that too. Maybe some day I will go there without prior plan to return…


Fixing my eyes: The theoretical 10/10

The Whole Story

  1. Taking the plunge.
  2. There’s a hole in my eye
  3. The theoretical 10/10

I could read the tiny sign on the right of the line of letters that said 10/10. A similar tiny sign on the left said 6/6. The letters in the middle were also more or less perfect except for the B which seemed a bit muddy.

“Congratulations, you’re testing 6/6!”, she said.

“It is sharp but there is a slight haze, like when I get oily fingers on my glasses and try to wipe it off.”

“That will go away, don’t worry :)”

“And what’s with the dilating eye drops every night? They waste my morning because I see halos for almost the entire morning until their effect wears off.”

“Those are to relax your focussing muscles and help you heal faster. They’re only for 5 days, so you don’t have to live with that forever.”

And there you are, the end of a life changing episode that began under a month ago. I know I had promised to write it ‘live’, but the sequence of events went such that I did not have any time until today. It is not that late though, my left lens was implanted on Tuesday, the 29th of March and the right lens on Wednesday the 30th. So consider this deferred live :)

The Delay

My original appointment was on 21st to repeat the iridotomy in my right eye since even lasers were unable to pierce my eye of steel! That appointment was honoured and I had my iridotomy just like I had ordered, a little less painful than the last time. The bad news though was that my lenses had not arrived and hence we could not do the implants on 22nd and 23rd as planned. The lenses finally arrived over the weekend and we narrowed down on 29th and 30th for the surgeries.

The Left Eye

We started out early on Tuesday. I was strangely not very nervous, just wondering what it would be like without glasses. We found out on the drive to the hospital that they needed me to give them a blood sample before the surgery to screen for HIV, diabetes and some other common conditions. It should have been done previously but they failed to notify us and it meant a delay of a couple of hours for us. We were slightly annoyed but I am patient with such things - we’re human and minor oversight is OK as long as it does not have serious consequences.

We reached and I gave my blood sample and the nurse put eye dilating drops into my left eye and the wait began. I had to watch people go in ahead of me as my blood sample was being tested but it was OK since I had Nisha to harass with my silly jokes and theories. When they were finally ready for me, I was led into the OT along with an old lady who was about to have her cataract surgery.

We were greeted in the pre-Op room by a horrific sight. An old lady was lying on a bed and an anaesthetist was piercing a long needle into her eye as she screamed about how it hurt. In the room was another bed and a couple of chairs where more people sat and watched the scene in horror. One of the girls seemed to be weeping.

I wasn’t sure what to make of it. I am not a great fan of poking things into my eyes and seeing that definitely made my stomach churn. The anaesthetist calmed us by telling us that we did not have to go through that and topical anaesthesia was sufficient. Big relief, but it would have been better if we were not treated to that sight.

The rest of the wait was relatively uneventful and my turn was the last because they had to mark the axis of my toric lens in my eye. This involved putting a clamp to hold my eye open and then marking the axis with a device smeared with marker ink. High tech stuff!

Once the marking was in place, I was led to the OT bed and I was in an inexplicably chatty mood, asking silly questions to the doctor. The doc asked me to shut up and in hindsight, I realize that I may have been very nervous. As the operation progressed, he told me whenever he was doing something important, like inserting the lens or cleaning the eye or adjusting the lens. I felt some pressure throughout the operation, but no real pain. At the end, they gave me a pair of tacky eye shield glasses and led me out into the pre-Op room.

I had expected to be able to see from my left eye walking out of the operating room, but that did not happen because there was a light shining into my eye for the entire duration of the operation. Within minutes however the glare cleared and by the time I met Nisha outside the OT, I could see her clearly! We then spent a couple of hours with me chattering away in excitement, relating the anaesthesia story to her and her coaxing me to take a short nap. There were eye drops to be administered every 15 minutes so there wasn’t much chance of me sleeping anyway. The doc checked my eye on the way out, declared it to be ‘perfect’ and told me to administer the drops regularly so that we could implant the lens into my other eye the next day.

I spent the evening napping and drowning my eye in drops, eagerly anticipating the operation the following day and the resultant clear vision.

The Right Eye

I got up the next day and found that while the vision in my left eye was sharp, it was hazy, like when one smears oil on one’s glasses and tries in vain to clean them with a cloth. I mentioned that to the doc when we reached and he said it would clear in a few days. To his credit, it cleared by evening. My left eye inflammation had reduced significantly so he gave a go-ahead for the right eye implant on the same day. So I was back upstairs to flood my right eye with dilating drops. The drill right up to the operation was the same as the left eye (without the anaesthesia scare this time) and soon enough it was my turn to be operated on.

This time though, the procedure hurt a bit more than it did for my left eye. I mentioned it to the doc and he said we were almost done. Sure enough, we were and after it was done, I could see clearly! The human brain is amazing; it seems able to take two images at different exposures and produce one with acceptable resolution and exposure. I was very happy stepping out into the pre-Op area and finally out into the recovery area. In the recovery area I was even more incapable of relaxing than the previous day because now I had an almost perfect set of eyes to experiment with. The result was that I was much more tired by the end of the day and was glad to hit the sack.


The test the following day showed that my lenses were measured correctly and I could have perfect vision once my eyes healed fully. My next visit is in about a week. I will probably not write about it unless there is something interesting to share. I have already caught myself reaching for my ghost glasses, only to grab the side of my face. The biggest difference however is that there is no longer a shield between my eyes and the air outside. My eyes can feel the air freely and it is really unnerving! It will take me a while to get used to.

Now off to rest my newly acquired eyes…


Fixing my eyes: There's a hole in my eye

The Whole Story

  1. Taking the plunge.
  2. There’s a hole in my eye
  3. The theoretical 10/10

The fact that I can write this means that I have not lost my eyesight after being punched by a laser! There are a lot of things to be aware of though, so let me start from the beginning.

The day started with the realization that mom had an appointment with her doctor and we would have to take Ira along with us. The counsellor at Vasan told me that I could drive in so I was not very concerned about the procedure despite the scary stories online. Nisha however was not taking any chances and we ended up taking a cab. In hindsight, that was a great decision.

We reached right on time and my eyes were flooded with drops the moment I sat in the waiting area. The nurse topped up the drops some 3-4 times and through the hour and a half of waiting, all I could do was listen to Nisha chasing Ira around as the little monster made the hospital her playground. After a little less than an hour a dull headache began to creep in; the doctor said it was expected and in fact an indication that the constricting drops were working.

Once he was satisfied with the state of my eyes, I was directed to the YAG laser room. The doc entered with a smug grin and asked me if I was ready. I had forgotten the horror stories by then and just shrugged and smiled. He reminded me that it was going to hurt a bit. That wasn’t enough of a warning; I had to actually experience it to realize how bad it would be. The doc poured some liquid into what looked like a small suction cup with a lens and stuck that to my right eye. After a lot of looking around my eye, he identified a spot and said, “Ready!”. There was a click and with it a hard flick to my eye. “That hurt a bit”, I told him and he only smiled. The first shot did not quite punch a hole in my iris and he had to take another shot. He told me the tissue of the iris of my right eye was pretty thick. “Is that good or bad?”, I asked. “In this case, not good”, he replied with a light snigger.

His sense of humour was a bit dark but I didn’t mind, maybe because I have a similar sense of humour. The second shot hurt just as much, but I knew what to expect so it was kinda OK. That did not work either so he decided to move on to the left eye. After a lot of searching, he made one shot on the left eye and we had a hole. “There was a nice spot on the left iris with thinner tissue so I knew the moment the laser fired that we had a good hole”, he said. He decided against making a third shot on the right eye and we decided to do it later.

Within minutes I started feeling a headache that grew worse by the minute. We went to his consulting room to discuss the schedule for the implants and the second iridotomy. I am scheduled to fly to Bangkok for Linaro Connect this weekend, so it had to be after I returned on 19th. The tentative schedule now is that we’ll repeat the iridotomy on the right eye on 21st, implant the lens in my left eye on 22nd and then the right eye on 25th, which is a Good Friday and hence one less working day sacrificed.

With that out of the way, I was prescribed 2 eye drops for 5 days and sent home. As we stepped out to have lunch, my head had started splitting with a headache, with a mild nausea setting in, the kind one gets with a bad migraine. I could barely taste the food I had, such was the intensity of the headache at times. Ira’s constant flitting around (she’s approaching her terrible twos now) did not help things a lot. We finally got lunch over with and got into the cab back home. The nap in the cab worked wonders and that, followed by an hour’s nap at home, got rid of the headache. I am still seeing things a little darker than usual (the pupil is constricted to limit light entering the eye) but I can see sharply with my glasses, unlike the hazy overexposure due to the retina scan dilation yesterday.

The holes now mean that there is no turning back. Provided that the lens measurements don’t need to be repeated, it looks like I will be rid of my glasses before the end of the month.


Fixing my eyes: Taking the plunge

The Whole Story

  1. Taking the plunge
  2. There’s a hole in my eye
  3. The theoretical 10/10

I usually don’t write about my very personal affairs like my eyesight (which is really poor) but I decided to make an exception this time. After a lot of mulling over it, I have decided to ‘go under the knife’ to fix my almost blind vision. I read a lot of blog posts about personal experiences and I decided to document my own experience because there aren’t any posts about the new lenses I will be getting, viz. the EyePCL by Care Group India. Most blog posts seem to be about the Visian ICL.

I am -10 diopters in both eyes with -2.75 astigmatism. This made me ineligible for the supposedly simpler (and definitely cheaper) LASIK procedure since that would leave me with little or no cornea for further corrections. The doctor advised that I do an ICL instead, which would cost four times as much (about ₹70,000 per eye as opposed to ₹35,000 for both eyes for LASIK) but would be a reversible procedure.

This was a little over a year ago and I finally took the plunge today. I went for a fresh work up today (at Vasan Eye Care, Kothrud, Pune since they did a decent LASIK job with Nisha and Siddhi’s eyes) and came back with blurry vision due to the retina check up. The highlight today was the white-to-white measurement which involved putting a clamp around my eye to prevent me from blinking while the doctor measured my eye with a vernier calliper! They had put numbing drops so that it didn’t hurt when the calliper touched the eye but it was a bit uncomfortable nevertheless.

Next in the process is an iridotomy, which involves punching one or more holes in the periphery of my iris to ease intraocular pressure. This is done because the most common side effect of an IPCL is increased intraocular pressure, which could result in glaucoma. This happens tomorrow, so I hope to write another post about it then.


FUDCon, where friends meet

The madness is over. FUDCon Pune 2015 happened between 26-28 June 2015, and we successfully hosted a large number of people at MIT College of Engineering. This was not without challenges though and we met yesterday to understand what went well for us (i.e. the FUDCon volunteer team) and what could have been better. This post however is not just a summary of that discussion, since it is heavily coloured by my own impression of how we planned and executed the event.

The bid

Our bid was pretty easy to get together because we had a pretty strong organizer group at the outset and we more or less knew exactly what we wanted to do. We wanted to do a developer focussed conference that users could attend and hopefully become contributors to the Fedora project. The definition of developer is a bit liberal here, to mean any contributor who can pitch in to the Fedora project in any capacity. The only competing bid was from Phnom Penh and it wasn’t a serious competition by any stretch of the imagination since its only argument against our bid was “India has had many FUDCons before”. That combined with some serious problems with their bid (primarily cash management related) meant that Pune was the obvious choice. We had trouble getting an official verdict on the bid due to Christmas vacations in the West, but we finally had a positive verdict in January.

The CfP

The call for proposals went out almost immediately after the bid verdict was announced. We gave about a month for people to submit their proposals and once we did that, a lot of us set out pinging individuals and organizations within the Open Source community. This worked because we got 142 proposals, many more than we had imagined.

We had set out with the idea of doing just 3 parallel tracks because some of us were of the opinion that more tracks would simply reduce what an individual could take away from the conference. This also meant that we had at most 40 slots with workshops taking up 2 slots instead of 1.


The website

The website took up most of my time and in hindsight, it was time that I could have put elsewhere. We struggled with Drupal as none of us knew how to wrangle it. I took on the brave (foolhardy?) task of upgrading the Drupal instance and migrating all of the content, only to find out that the schedule view was terrible and incredibly non-intuitive. I don’t blame Drupal or COD for it though; I am pretty sure I missed something obvious. SaniSoft came to the rescue, however, and we were able to host our schedule at shdlr.com.

The content

After the amazing response in the CfP, we were tempted to increase the number of tracks since a lot of submissions looked very promising. However, we held on tight and went about making a short list. After a lot of discussions, we finally gave in to the idea of making a separate workshop track and after even more discussions, we separated out a Container track, a Distributed Storage track and an OpenStack track. So all of a sudden, we now had 5 tracks in a day instead of 3!

Sankarshan continually reminded me to reach out to speakers before the event to make sure that their talks fit in with our goals. I could not do that, mainly because we did not have the bandwidth, but also because, in hindsight, our goal wasn’t refined beyond the fact that we wanted a more technical event. The result was that we made a couple of poor choices, the most notable being the opening keynote of the conference. The talk about Delivering Fedora for everyone was an excellent submission, but all of us misunderstood the content of the talk. The talk was a lot more focussed than we had thought it would be and it ended up being the wrong beginning for the conference since it seemed to scare away a lot of students.

The content profile overall however was pretty strong and most individual talks had almost full rooms. The auditorium looked empty for a lot of talks, but that was because each row of the massive auditorium could house 26 people, so even a hundred people in the auditorium filled in only the first few rows. The kernel talks had full houses and the Container, OpenStack and Storage tracks were packed. It was heartening to see some talks where many in the audience followed the speaker out to discuss the topic further with them.

One clear failure on the content front was the Barcamp idea. We did a poor job of planning it and an even poorer job of executing it.

Travel, Accommodation and Commute

We did a great job on travel and accommodation planning and execution. Travel subsidy arrangements were well planned and announced and we had regular meetings to decide on them. Accommodation was negotiated and booked well in advance and we had few issues on that front except an occasionally overloaded network at the hotel. We had excellent support for visa applications as well as making sure that speakers were picked up and dropped to the airport on time. The venue was far from the hotel, so we had buses to ferry everyone across. Although that was tiring, it was done with perfect precision and we had no unpleasant surprises in the end.

Materials, Goodies and SWAG

We had over 2 months from the close of the CfP to conference day, and we wasted a lot of that time when we should have been ordering and readying swag. This is probably the biggest mistake we made in planning and it bit us quite hard in the closing weeks. We had a vendor bail on us near the end, leading to a scramble to Raviwar Peth to try and get people to make us stuff in just over a week. We were lucky to find such vendors, but we ended up making some compromises in quality. Not in t-shirts though, since that was an old reliable vendor that we had forgotten about during the original quote-collection. He worked night and day and delivered the t-shirts and socks despite the heavy Mumbai rains.

The design team was amazing with their quick responses to our requests and made sure we had the artwork we needed. They worked with some unreasonable deadlines and demands and came out on top on all of them. The best part was getting the opportunity to host all of them together on the final day of the conference and doing a Design track where they did sessions on Inkscape, Blender and GIMP.

We struggled with some basic things with the print vendor like sizes and colours, but we were able to fix most of those problems in time.


The venue

We settled on MIT College of Engineering as the venue after considering 2 other colleges. We did not want to do the event at COEP again since they hosted the event in 2011. They had done really well, but we wanted to give another college the opportunity to host the event. I had been to MIT weeks earlier as a speaker at their technical event called Teknothon and found their students to be pretty involved in Open Source and technology in general, so it seemed natural to refer them as potential hosts. MITCOE were very positive and were willing to become hosts. With a large auditorium and acceptably good facilities, we finalized MITCOE as our venue of choice.

One of the major issues with the venue though was the layout of the session rooms. We had an auditorium, classrooms on the second floor of another building and classrooms on the 4th floor of the same building. The biggest trouble was getting from the auditorium to that other building and back. The passages were confusing and a lot of people struggled to get from one section to the other. We had put up signs, but they clearly weren’t good enough and some people just gave up and sat wherever they were. I don’t know if people left out of frustration; I hope they didn’t.

The facilities were pretty basic, but the volunteers and staff did their best to work around that. WiFi did not work on the first two days, but the internet connection for streaming talks from the main tracks worked and there were a number of people following the conference remotely.

HasGeek pitched in with videography for the main tracks and they were amazing throughout the 3 days. There were some issues on the first day in the auditorium, but they were fixed and the remainder of the conference went pretty smoothly. We also had a couple of laptops to record (but not stream) talks in other tracks. We haven’t reviewed their quality yet, so the jury is still out on how useful they were.

Volunteers and Outreach

While our CfP outreach was active and got good results, our outreach in general left a lot to be desired. Our efforts to engage student volunteers and the college were more or less non-existent until the last days of the conference. We spoke to our volunteers the first time only a couple of days before the conference and as expected, many of the volunteers did not even know what to expect from us or the conference. This meant that there was barely any connect between us.

Likewise, our media efforts were very weak. Our presence in social media was not worth talking about and we only reached out to other colleges and organizations in the last weeks of the conference. Again, we did not invest any efforts in engaging organizations to try and form a community around us. We did have a twitter outreach campaign in the last weeks, but the content of the tweets actually ended up annoying more people than making a positive difference. We failed to engage speakers to talk about their content or share teasers to build interest for their sessions.


Best. FUDPub. Ever.

After looking at some conventional venues (i.e. typical dinner and drinks places) for dinner and FUDPub, we finally settled on the idea of having the social event at a bowling arcade. Our hosts were Blu’O at the Phoenix Market City mall. The venue had everything from bowling to pool tables, from karaoke rooms to a dance floor. It had everything for everyone and everyone seemed to enjoy it immensely. I know I did, despite my arm almost falling off the next day :)


The budget

We had an approval for up to $15,000 from the Fedora budget and we got support from a couple of other Red Hat departments for $5,000 each, giving us total room of $25,000. The final picture on budget consumption is still a work in progress as we sort out all of the bills and make reimbursements in the coming weeks. I will write another blog post describing that in detail, and also how we managed and monitored the budget over the course of the event.

Overall Impressions

We did a pretty decent event this time and it seemed like a lot of attendees enjoyed the content a lot. We could have done a lot better on the venue front, but the efforts from the staff and volunteers were commendable. Would I do this again? Maybe not, but that has more to do with wanting to get back to programming again than with the event organization itself. Setting up such a major conference is a lot of work and things only get better with practice. Occasional organizers like yours truly cannot do justice to a conference of this size if they were to do it just once every five years. This probably calls for a dedicated team that does such events.

There were also questions of whether such large conferences were relevant anymore. Some stated their preference for micro-conferences that focussed on a specific subset of the technology landscape, but others argued that having 10 conferences for 10 different technologies was taxing for budgets since it is not uncommon for an individual to be interested in more than 1 technology. In any case, this will shape the future of FUDCon and maybe even Flock, since with such a concentration of focus, Flock could end up becoming a meetup where contributors talk only about governance issues and matters specific to the Fedora project and not the broader technology spectrum that makes Fedora products.

In the end though, FUDCon is where I made friends in 2011 and again, it was the same in 2015. The conference brought people from different projects together and I got to know a lot of very interesting people. But most of all, the friends I made within our volunteer team were the biggest takeaway from the event. We did everything together, we fought and we supported each other when it mattered. There may be things I would have done differently if I did this again, but I would not have asked for a different set of people to work with.


The new fudcon.in: Why and How

We had a major change earlier this week, with the new fudcon.in website going live. This was a major task I was involved in over the last couple of weeks, and also one of the major reasons why we did not have a lot of visible action on the website. Hopefully you’ll see more action in the coming weeks as we come closer to the big day with just over a month to go.

Why did we do it?

The old fudcon.in website was based on Drupal 6.x with the COD module. Technically, this is a supported version of Drupal, but that is a pointless detail because every security or bug fix update was painful. The primary reason, it seemed to us, was COD. The 6.x version seemed more or less dead. We still stuck to it however, since the 7.x upgrade was far more painful than doing these updates and hacking at settings to get things working again.

That was until we decided to add the Speaker bio field to our sessions.

The COD module is very versatile and can let you ask for arbitrary information about a session. However, when you add a field, you can capture data from users, but cannot actually show it. The problem seemed to be in the way COD stored its additional data - Drupal seemed unable to query it when displaying the session node and hence would not show any of the additional fields, like FAS username, Twitter handle and speaker bio. Praveen and I hacked at the settings for days and couldn’t get it to work. We went live with the missing speaker bio, which apparently nobody else seemed to notice.

However, when we put out the talk list, the absence of speaker bio was evident, so I decided to take a crack at fixing it in code. I gave up because I was quickly overwhelmed by the Drupal maze of dependencies - I have spent way too long away from the web app world - and decided that I may have an easier time upgrading all of Drupal and COD to 7.x than peering at the Drupal/COD code and then maintaining a patch for it. I also felt that the upgrade would serve us better in the longer run, when we have to use the website to host a future FUDCon - upgrading from 7.x ought to be easier than upgrading from 6.x.

How we did it

I sat back one weekend to upgrade the Drupal instance. The instructions make it sound so easy - retain the sites directory and your modules and change the rest of the code, call the Drupal update.php script and wait for it to do the magic. It is that easy, if your website does not use anything more than the popular modules. With COD, it is basically impossible to go from 6.x to 7.x, especially if you have added custom fields like we did.

Data definitions for COD seemed to have changed completely between 6.x and 7.x, making it near impossible to write a sensible migration script, especially when the migrator (yours truly) has no idea what the schema is. So I went about it the neanderthal way - remove all content, retain all users and then upgrade to Drupal 7.x from COD 6.x. That thankfully worked like a charm. This was a useful first step because it meant that at least we did not have to ask users to sign up again or add hundreds of accounts manually.
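A rough sketch of that “remove all content, retain all users” step on a stock Drupal 6 database would look something like this. This is illustrative only - the table names are stock Drupal 6, and a real site has many more content tables (comments, taxonomy terms, CCK field tables) that need the same treatment:

```sql
-- Hypothetical sketch of "remove all content, retain all users"
-- on a stock Drupal 6 database, before running the 7.x upgrade.
DELETE FROM node_revisions;  -- all revisions of all content
DELETE FROM node;            -- the content nodes themselves
-- The users table is deliberately left untouched, so that accounts
-- survive the 6.x -> 7.x upgrade and nobody has to sign up again.
```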

Once our user schema was on 7.x, the next task was to get COD 7.x. This again worked out quite easily since COD did not complain at all. Why would it - there was no conference content to migrate! Creating a new event and basic pages for the event was pretty straightforward and in fact, nicer since the new COD puts conference content in its own namespace. This would mean shared links being broken, but I didn’t want to bother with trying to fix that because there were only a few links that were shared out there. If this is too big a problem, we could write a .htaccess rule to do a redirect.
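Such a rule could look something like the following. The path patterns here are just an assumption for illustration - not the actual fudcon.in URL structure - but the idea is a permanent redirect from the old session paths into the new event namespace:

```apacheconf
# Hypothetical .htaccess rule: redirect old (pre-upgrade) session URLs
# into the new COD 7.x event namespace. Path patterns are illustrative.
RewriteEngine On
RewriteRule ^sessions/(.*)$ /fudcon-pune-2015/sessions/$1 [R=301,L]
```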

Adding sessions back was a challenge. It took me a while to figure out all of the data that gets added for each session and in the end I gave up due to exhaustion. Since there were just about 140 session entries to make, Praveen and I split that work and entered them ourselves. Amita and Suprith then compared content with the old fudcon.in to verify that it was all the same and then finally Praveen pushed the button to go live.

Like everything else, this upgrade taught me a few things. Web apps in general don’t think a lot about backward compatibility, which is probably justified since keeping backward compatibility often results in future designs being constrained - not something a lot of developers are comfortable with. I also had to refresh a lot of my database foo - it’s been more than 6 years since the last time I wrote any serious SQL queries.

The biggest lesson I got though was the realization that I am no longer young enough to pull an all-nighter to do a job and then come back fresh the next day.