Hell Oh Entropy!

Life, Code and everything in between

Back to the OS class: Memory Allocation

A lot of us in India learn OS concepts from textbooks. The concepts really go directly from the textbooks into examination papers for most, including me. We would grab on to key phrases like "semaphores", "paging", "segmentation", "stack" and so on, and never really stop to wonder how this all is *really* implemented. Some of us aren't interested since all we care about is getting a well paying job while others find goofing around to be a more attractive alternative. Either way, very few really get it.

Four years since I finished college, six since I last took an OS class, and I can say now that I finally got it. Somewhat.

Recently there was a very interesting case I hit upon, which got me wondering how memory allocation was managed by the workhorse of memory allocation, the malloc() function. On Linux, malloc is implemented in the glibc library (which can be built for other Unix systems too). Its major responsibility (along with its friends, free, mallopt, etc.) is to do accounting of memory allocated to a process. The actual *giving* and *taking* of memory is done by the kernel.

Now the fun part is that on a typical *nix system, there are two ways to request memory from the OS. One is using the brk() system call and the other is by using the mmap() system call. So which one should glibc use to allocate memory? The answer is, both. What malloc does is that it uses brk() to allocate memory for small requests and mmap() for large requests.

So what is the difference between mmap and brk you ask? Well, every process has a block of contiguous memory called the data area. The brk() system call simply moves the end of the data area (the program break) upward, allocating memory to the process. To free, all it does is move the same end back down. This operation is quite fast most of the time. The mmap system call, on the other hand, picks a completely different portion of memory and maps it into the address space of the process, so that the process can see it. Additionally, the kernel has to zero out the entire memory area it is about to map in, so that it does not end up leaking the data of some old process to this one. This makes mmap quite a bit slower.

So why have mmap at all if it is so slow? The reason is the inherent limitation that brk() has due to the fact that it only grows one way and is always contiguous. Take a case where I allocate 10 objects using brk and then free the one that I had allocated first. Despite the fact that the location is now free, it cannot be given back to the OS since it is locked in by the other 9 objects. One way that malloc works around this is by trying to reuse these free spaces. But what if the size of the object I am about to allocate next is larger than any of these freed "holes"? Those holes remain and the process ends up using more memory than it really needs. This is "external fragmentation".

So to minimize the effect of this external fragmentation, glibc uses brk() only for small objects. Larger objects are allocated with mmap(). A threshold was set at 128 KB, so objects smaller than that are allocated using brk and anything larger is allocated using mmap. The assumption is that smaller requests come more often, so the little fragmentation is worth the improvement in speed. Oh, and as for the reuse of the memory holes, it uses the "best fit algorithm" -- remember that phrase? ;)

But with more recent versions of glibc (around 2006 actually, so not *very* recent), this threshold limit is dynamic. glibc now tries to adjust to the memory allocation pattern of your program. If it finds that you are allocating larger objects and freeing them soon, it will then increase the threshold, expecting you to allocate larger objects and free them more often. There is of course an upper limit of 32 MB to this. So anything larger than 32 MB will *always* be allocated using mmap. This is quite awesome since it speeds up malloc quite a bit. But it obviously comes with the price of potentially larger memory holes.

There is so much more to this, like the actual details of how accounting of the brk'ed memory is done, obstacks, arenas, and more. The fun seems to be only beginning.

Comments

Yahoo chat room support now in libyahoo2 trunk!

Yahoo chat room support is now in trunk. Many thanks to Kai (Kay) Zhang for this effort, which he made as part of his Fedora Summer Coding project. The code still needs more testing, so I also need to start working on the chat room implementation in ayttm.

I expected git-svn to commit the patchset along with the history in Kay's git repository, but unfortunately that did not happen. So those who want to see the history of commits may take a look at his libyahoo2 github repository.

Comments

libyahoo2 to get chat rooms soon

Kay has been working on the libyahoo2 project as part of the Fedora Summer Coding initiative. He's been working on chat room support and things are looking quite good so far. Kay has finished working on the core functionality of logging in, joining and leaving rooms; only the chat room list functionality is remaining. He was a little shy of interacting with libyahoo2 upstream earlier, but he has been working on it since.

Looking at the pace of the project so far, we could have Kay's code merged into upstream very soon. Following this, either Ray van Dolson or I will do a release in Fedora.

Comments

GCC Workshop at GRC, IIT Bombay

Mustafa (my manager) knew about my current fascination with learning the _real_ basics of computers (electronics, kernel, compilers, etc.), so he arranged for me to attend the GCC workshop held by IIT Bombay this week. My first reaction was that I would be wasting my time there since I didn't know the first thing about compiler theory and was a novice at best in assembly language programming. He said it wouldn't hurt to try. So I had to try.

I got a chance on the day before the workshop to read up a bit on things like IRs, RTL, etc. It was enough that I would not be completely lost from day 1. But I did not attend day 1 at all, thanks to the countrywide strike that crippled all public transport (and some people too). So I spent that day working and also trying to cover what would otherwise have been covered on day 1 -- the various phases of compilation, passes, gray box probing to find out more about intermediate outputs, etc. I got hold of last year's slides, so it was a little easier.

So I finally made it to days 2, 3 and 4. The first thing I noticed during the lecture sessions was that the professor really knew his stuff. He was well acquainted with the internal layout of gcc and was able to explain it well enough that I really _got_ it. Overall, I came out of the sessions with much more knowledge about gcc than I could have gained in 4 days on my own. Here are some observations I made during the course of the workshop.

The professor really knew his stuff. I say this again so that it does not look like I am ignoring that. There are also a lot of really talented individuals at the GRC, who are doing some pretty interesting research based on gcc. The trouble though is that there seem to have been no efforts whatsoever to share these ideas upstream.

One such idea is the Generic Data Flow Analyzer (GDFA). It is a patch to gcc that provides a data flow analyzer, which can be used to find and eliminate dead code or unused variables. It adds a gimple pass to the compilation sequence and intends to replace the current dead code elimination and unused variable elimination passes with the same code called with different parameters. While the idea is pretty interesting, the sad thing is that there are no signs of an attempt to push this idea upstream. All I could find was an announcement to the gcc mailing list, but no request for comments or for inclusion of the patch.

This is only one of many more ideas brewing at the GRC in the minds of some very talented people. But I felt that these ideas were being used only to get degrees, and nothing was being done to actually test their feasibility in live production-level code. It would be nice to see some of these ideas actually presented upstream with a genuine interest in getting them incorporated.

To conclude, it was a pretty good workshop for those who want to get started with learning compilers and gcc internals.

Comments

Yahoo chat future and libyahoo2

Philip had recently posted on libyahoo2-users that Yahoo is planning to open its instant messaging platform API to the public. It has been delayed a bit since then, but it is surely due.

So how does this change things for libyahoo2 or any other FOSS implementations? For one, the fun of digging through binary data and trying to make sense of it will be gone ;) But on a serious note, we can hope for some more consistency in behaviour, and support will definitely improve. I'm not very keen on the fact that the support will be over HTTP, but I guess it works well for them. For now we can only wait for their announcement before we know what the entire thing looks like. If it is anything like the messages that the current official yahoo! messengers send, then it's only really a wrapper around their old pain of a protocol. But those messages do not really use JSON, so it is likely that they're writing a fresh implementation. In any case, there is still time for it and in that time, we have some decent work going on in the libyahoo2 code base.

In other news, Kai Zhang has been working on implementing chat room support for libyahoo2 as his Fedora Summer Coding project. His code can be found here. Other than the brief comments in the git logs, everything seems to be quite ok. A bulk of the feature set is already in, so that is pretty good progress. Once the entire feature set is completed and tested, I will have them included in the main libyahoo2 source tree. Following that will be a release and a rebase on Fedora. This will be a good rebase compared to the ugly one the last time around, where I broke all API compatibility in an effort to revamp the authentication support.

Comments

Hacking on assembly code: Dynamic memory allocation on stack?

So I started dabbling with assembly language programming a couple of days ago. This was the next logical step in the "going lower down" move I have been doing ever since I started writing programs in Visual Basic some years ago (there, I admitted it). Since then I went through C#, Java, C++, C and now finally assembly. And it is fun to watch a program die in so many innovative ways. It is helping me understand the internals of a program much better.

One of the first things I learnt about assembly programming was that I needed to use completely different syscall numbers and instructions for x86_64 as compared to i386. For example, the syscall number for exit on i386 is 1 while on x86_64 it is 60. Same goes for write -- 4 on i386 and 1 on x86_64. I spent half an hour trying to figure out why my program was calling fstat on x86_64 while a similar program built with --32 would work fine.

Crossing all these hurdles, I finally wrote a slightly more complicated (but still useless) program than a hello world. This is a program that takes in an integer string through the command line, converts it to an integer, converts it back to string and prints it back out. Pretty useful huh :)

Now for the interesting part in the code. I always thought of dynamic memory allocation as something you can only do through the OS using the brk() and/or mmap() syscalls. Generally we do this indirectly through malloc() and friends. But what I ended up doing in my program is allocating memory on the stack on the fly. Here's the code snippet:

    movb $0x0a, (%rsp)
    decq %rsp
next_digit:
    movq $0, %rdx
    divq %rdi
    addq $0x30, %rdx

    # Hack since we cannot 'push' a byte
    movb %dl, (%rsp)
    decq %rsp

The complete code along with the makefile is at the end of this post. You can build it if you have an x86_64 installation. What I do above is simply:

  1. Read a digit from the number
  2. Store the ASCII representation of that digit in the byte at the current stack pointer
  3. Decrement %rsp to make room for the next byte

I could not use the push instruction itself, since in 64-bit mode it can only push 16 or 64 bit values on to the stack (pushw, pushq; the 32-bit pushl exists only on i386). If you push a single byte value, it will be stored in one of those sizes, not in just 1 byte. What I wanted was to create a string on the fly without limiting myself to a fixed size array, so this seemed to be the only approach. While this works, I still need to find out a few more things about this:

  1. Is it safe?
  2. If it is safe, then is there a similar way to do this in C without embedding assembly code? This would be really cool, especially in usage scenarios such as the above. Admitted that the above scenario is pretty useless in itself, but I'm sure there must be similar examples out there that are at least a little more useful.

The code:

.section .data
usage:
    .ascii "Usage: printnum-64 <the number>\n"
    usagelen = . - usage

.section .text
.globl _start

# Convert a string representation of an integer into an int
.type _get_num, @function
_get_num:
    push %rbp
    movq %rsp, %rbp

    movq 0x10(%rbp), %rdx
    mov $0x0, %rcx
    mov $0x0, %rax

nextchar:
    # Iterate through the string
    movb (%rdx), %cl
    cmp $0x0, %rcx
    je call_done

    subq $0x30, %rcx
    imulq $0xa, %rax
    addq %rcx, %rax
    incq %rdx
    jmp nextchar

# Convert a number into a printable string
.type _print_num, @function
_print_num:
    push %rbp
    movq %rsp, %rbp

    movq 0x10(%rbp), %rax
    movq $0x0a, %rdi

    # Hack since we cannot 'push' a byte and increment
    # %rsp by only 1. push will push whatever it has as
    # a 16, 32 or 64 bit value (pushw, pushl, pushq)
    movb $0x0a, (%rsp)
    decq %rsp

next_digit:
    movq $0, %rdx
    divq %rdi
    addq $0x30, %rdx

    # Hack since we cannot 'push' a byte
    movb %dl, (%rsp)
    decq %rsp

    cmp $0x0, %rax
    jne next_digit

    movq %rsp, %rbx
    addq $0x1, %rbx
    movq %rbp, %rcx
    subq %rsp, %rcx

    push %rcx
    push %rbx
    push $0x01
    call _write
    jmp call_done

# Wrap around the write system call
.type _write, @function
_write:
    push %rbp
    movq %rsp, %rbp

    movq 0x10(%rbp), %rdi
    movq 0x18(%rbp), %rsi
    movq 0x20(%rbp), %rdx
    movq $0x01, %rax
    syscall
    jmp call_done

# I always do this when I am done with a function call
call_done:
    movq %rbp, %rsp
    pop %rbp
    ret

# Program entry point
_start:
    # Command line arguments on the stack:
    # argc: The number of arguments
    # argv: The addresses of all arguments one after the other
    # They can be popped out one by one
    pop %rax
    cmp $0x2, %rax
    jne error

    # Pop out the first arg since it is the program name, but
    # keep the second so that it can be fed into the next function
    pop %rax

    call _get_num
    push %rax
    call _print_num
    jmp exit

error:
    push $usagelen
    push $usage
    push $0x2
    call _write
    movq $0xff, %rax

exit:
    movq %rax, %rdi
    movq $60, %rax
    syscall

The makefile:

32:
    as --32     $(target).s -o $(target).o
    ld -melf_i386   $(target).o -o $(target)

64:
    as      $(target).s -o $(target).o
    ld      $(target).o -o $(target)

If you save the source as foo.s, you can build it with:

make target=foo 64

Comments

Lots and lots of work

The past week has been quite hectic, with a lot of juggling between different things I have been wanting to do. So here's what I had on my mind:

  • I have been looking to learn more about compilers. I goofed off in college and missed out on the same course even though it was taught twice. I always understood enough to fool my teachers into thinking I knew it all, but not enough to really know it all. Or even some of it, for that matter. So now I want to make up for it.
  • I had not touched ayttm and libyahoo2 for quite a while, so I wanted to do something there.
  • Kushal had asked me if I could package libraw for Fedora because some random app needed it. He asked me because I knew autotools and could autotoolize the project before packaging it.
  • Rahul pointed out this cool little command line audio player called gst123. I had been looking to write something like this for some time now but I just could not wrap my head around gstreamer. I tried it and immediately fell in love. I just had to package it for Fedora.
  • Work at my day job. Lots and lots of work.
  • Work at my day job. Lots and lots of work. Yes, it is worth mentioning twice.

And so here's what I actually ended up doing over the week:

  • I had bought 3 books to study compilers. They're just lying there since I haven't had enough time to actually start studying.
  • Nothing on ayttm and libyahoo2. Not enough time
  • Packaged libraw and submitted for a package review. There is no activity on that bug report yet, but there was some action before it. Libraw upstream does not like autotools, so I had to hand-write a configure script to detect stuff. I also looked up and tried out the app for which Kushal wanted me to package libraw. The app is Shotwell, a photo management program. And it is good; I'm starting to use it for my photographs now. I'm glad I decided to package libraw for it.
  • I packaged gst123. The package has been approved and I have already submitted an update for F-13. I did this while on a bus from Pune to Mumbai :D

    gst123 is a really cool app, try it out. It might not play internet radio streams right out of the box (my use case), but you can easily pipe/grep/cut your way to getting it to work. Here's how I play the radio stream from Absolute Radio:

    gst123 `curl -s http://network.absoluteradio.co.uk/core/audio/ogg/live.pls?service=vcbb | grep File1 | cut -d '=' -f 2`

    See, it's so easy!

Oh yeah, work at my day job. Lots and lots of work.

Comments

Fedora Activity (half) Day

The FAD (Fedora Activity Day) was announced over a month ago with an intention to get some real work done during an event. I really only had a chance to participate in 1/4th of the FAD (1/2 day on Saturday), since I had to fly to Bangalore on Saturday evening to spend the weekend (or whatever was left of it) with family. But that was enough to get whatever I wanted out of the event.

Being pretty much a newcomer to the Fedora community, there wasn't much I could think of to directly contribute, but I wanted to do something. I really only maintain one package, which does not get much traffic either, so I wasn't exactly brimming with ideas. Rahul helped me there by asking me to do an Autotools workshop. I was also looking forward to meeting some of the guys I had met at FOSS.in last year: Susmit, Hiemanshu and Sayamindu. I could not meet Hiemanshu (did he come at all?), but it was good to meet Susmit and Sayamindu after quite a long time.

We started the day with my autotools workshop; I hope at least someone found it useful. I demonstrated the process of autotoolizing a simple C program using the same example I used during my Fedora classroom session earlier this month: linkc. The main reason I keep choosing this program is that I am too lazy to find or write anything on my own. The other reason is that the program helps to cover quite a few things at one go -- it is small, it has an external dependency, a subdirectory and some distributable files. So all those things win over the fact that the app just doesn't work as advertised. Oh well...

Once the only "session" of the day was over, everyone announced their aims for the two days while Sankarshan distributed some swag (t-shirts, stickers and buttons). After that it was pretty much everyone working on their own stuff. Me too.

Only a couple of days before FAD, Ray van Dolson added me as a co-maintainer for libyahoo2 in Fedora so that we could share the workload of doing releases/bug fixes. After discussion with him, I decided to do a libyahoo2 release into rawhide during the event. So I finally had something that I could do, which was much closer to Fedora.

I knew that the release would break freehoo, a console messenger for yahoo, since libyahoo2 1.0.0 broke all backward compatibility, so I set about fixing that. The result was a bug report with a patch to fix freehoo to build with the latest libyahoo2. Finally, I also changed ayttm to dynamically link against libyahoo2 instead of cloning the code base all the time. There was absolutely no incentive in maintaining two copies of the code, so it finally had to go.

By the time the ayttm change was done, it was time to leave. But before that, Kushal asked me to take a look at libraw to see if I could pitch in with something there. So I will be looking at autotoolizing it and packaging it for Fedora. I was supposed to do it today, but all of my day was spent in playing catch-up with work at my day job. Maybe I'll have more time tomorrow for it.

Comments

Finding connections to a specific port with SystemTap

Earlier in the week I was asked by someone if I knew of a way to monitor applications trying to connect to the telnet port. The obvious answer was netstat/lsof, but the problem was that this application would be up for merely seconds and he did not know when/where it started up. All he had was his telnetd log telling him about the connections. So I decided to go the SystemTap way and came up with this:

/* snoop.stp */
function sockaddr_to_port:long(sck:long)
%{
        short ret = 0;
        struct sockaddr *sock = (struct sockaddr *) (long) THIS->sck;

        memcpy(&ret, sock->sa_data+1, 1);
        memcpy(((char *)&ret)+1, sock->sa_data, 1);

        THIS->__retvalue = ret;
%}

global testport = 0

probe begin(0) {
        if (testport == 0) {
                printf("Usage staprun snoop.ko testport=<portnum>\n");
                exit();
        }
}

probe syscall.connect {
        port = sockaddr_to_port($uservaddr)

        if (port == testport) {
                printf ("%d: %s trying to connect to port %d\n", gettimeofday_s(), execname(), testport);
                printf ("\tPID: %d\n", pid());
                printf ("\tUID: %d\n", uid());
                printf ("\tEUID: %d\n", euid());
                printf ("\tParent: %s\n", pexecname());
                printf ("\tPPID: %d\n", ppid());
        }
}

So this really is a very simple tap on the connect syscall, intercepting all connect requests on the system. The sockaddr_to_port function is a little more interesting. It takes the first two bytes of sockaddr->sa_data and swaps them to get the port number into the correct byte order. The reason for the swap is that network byte order is big endian (most significant byte first) while x86 computers are little endian. So the port number 23 (0x17), stored as a 16-bit value, sits in your computer's memory as the byte sequence:

0x17 0x00

while in network byte order the same two bytes are:

0x00 0x17

Once we have the port number, the rest is easy pickings with SystemTap giving us easy functions to collect the executable name, pid, parent process name, etc. As for the date, one may easily convert it to human readable form using:

date -d @<my date in seconds>

Now that we have all the pieces in place, we can build the above script into a kernel module using:

stap -vvvg -r `uname -r` snoop.stp -m snoop

The output of this is a kernel module named snoop.ko. The -g flag signifies "guru mode". We need it since our function sockaddr_to_port is embedded C, and we need to tell stap that we really know what we're doing. I like putting in a lot of v's so that any build errors show up right away -- this step is, after all, building a regular kernel module. The -r and -m flags specify the kernel version and the module name respectively. Once built, you can deploy the module using:

staprun snoop.ko testport=23

This loads the module into the kernel and waits to print messages (whenever the program decides to connect to telnet) to standard output. If you look into the output of lsmod, you will find that snoop is a loaded module.

Packages you will need to implement all of this:

  • systemtap
  • systemtap-runtime
  • kernel-debuginfo
  • kernel-devel
  • kernel-debuginfo-common

If you're using the PAE kernel then you need kernel-PAE-debuginfo and kernel-PAE-devel instead of kernel-debuginfo and kernel-devel. If you're only looking to deploy a pre-built systemtap module, you will only need systemtap-runtime. Yes, you can run a module built on one system on another system, provided they have the same kernel version and architecture. But be careful of what you run; always inspect the source to make sure you're not running anything malicious.

Comments

Checking Reliance Netconnect prepaid account usage

So I've been using the Reliance Netconnect USB dongle since I shifted to Magarpatta City. Since work is so close to home now, I really only need internet access at home to check emails when I wake up (yes, I am an addict). Being a prepaid account, I wanted to know how I could monitor my usage. I asked the vendor and he told me to go to the Usage menu. I told him I use Linux. He insisted that I ought to be going to the Usage menu.

I gave up asking him.

And it was a good thing I did, because the information was quite easily available online. All you had to do was go to this URL and enter your MDN, i.e. the number of your netconnect dongle. But that was very cumbersome, so I hacked up this little script so that usage monitoring is now just a command away:

#!/bin/sh

cleanup_temp () {
    rm -f $tempfile
    exit 2
}

if [ $# -lt 1 ]; then
    echo "Usage: $0 <reliance netconnect number>"
    exit 1
fi

trap cleanup_temp SIGINT
trap cleanup_temp SIGTERM
trap cleanup_temp SIGHUP

tempfile=`mktemp`

# Note: the Content-Length of 14 assumes "MDN=" plus a 10-digit number
nc reliancenetconnect.co.in 80 > $tempfile <<ECHO
POST /RNetconnect/RNC/Netconnect_Authentication.jsp HTTP/1.1
Host:reliancenetconnect.co.in
Content-type: application/x-www-form-urlencoded
Content-Length: 14

MDN=$1
ECHO

grep "and your Netconnect" $tempfile | links -dump

rm -f $tempfile

And if you want to make things even simpler, modify the above to read the number from an environment variable and export that variable in your .bashrc.

Comments