Kaushik Ghose

All your genome are belong to us


(time (apply max a))

A while ago I started up with Haskell in an effort to figure out what this functional programming was all about. One of the simple tests I did was to try to find the maximum of a list of numbers and time it. This led me down an interesting rabbit hole, documented here.

I’ve now started with Racket in earnest (and am documenting it here). The corresponding test for Racket – the maximum of a list of a million numbers – is as follows:

#lang racket
(define a (build-list 1000000 values))
(time (apply max a))

This gives a respectable 92ms running time, 70ms of which is garbage collection. To recap, the corresponding figure for Python was 20ms, and 2ms for C. Haskell’s performance was, well, confusing.
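For reference, the Python figure can be reproduced along these lines (a sketch only – the exact script from the earlier post isn’t shown here, and the numbers will vary with hardware and interpreter version):

import timeit

setup = "a = list(range(1000000))"
per_call = timeit.timeit("max(a)", setup=setup, number=10) / 10
print("max of a million ints: %.1f ms" % (per_call * 1000))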


While x:

This is a cautionary tale about not misusing Python’s kindness. Or perhaps a cautionary tale about not trusting even widely used libraries to mirror what Python does. It is a little technical …

I have a program that, for the purposes of this post, operates on a list of pairs of numbers, going through each pair one by one until it exhausts the list. For certain reasons I’m using an iterator on the list and using Python’s ‘None’ to indicate that the list has run out, like so:

x = [(0, 0), (1, 2), (3,1)]
itr = x.__iter__()
v = next(itr, None)
while v:
  print v
  v = next(itr, None)

The pair of numbers represents particular data. The first number can range from 0 onwards and is always increasing monotonically while the second number is one of {0, 1, 2}. I have several tests for this function and have checked for different edge cases.

As you can see, in the snippet above, the list x is a list of Python tuples. After the bulk of the development for this (and related) functions had been completed, I had decided that a better way to store my list of tuples would be as a numpy structured array. That way I would have labels for my first and second numbers in the pair and wouldn’t get confused which was which six months from now.

And Python, being the great dynamic language it is, let me make the switch without a murmur. Python doesn’t care that the object I was initially passing it was a list of tuples and the new one is a numpy structured array! Python sees both objects yield iterators as needed and both objects are spitting out pairs of numbers at each iteration. So Python just works!

Things ran just fine for a while.

One day my colleague, who has used my program many times before, came across some very strange results in their experiments. I knew right away that this was going to be an interesting bug, just like any bug that is found after a program has been used for a while. The simple/easy bugs have all been squashed. What remains are the cockroaches. Evil, secretive errors in logic that scurry away from the daylight and come out only in the shadows of the night.

So I took the input data my colleague was using and ran my program. Indeed something bizarre was going on. For some of the data the program was acting exactly as it should. But for some of the data the program claimed that the list of pairs of numbers was empty, skipped it right away, and did the rest of the processing without any errors.

My first guess was that something was wrong with the input files. The input files were being generated with some hastily written code that I had not fully tested, so it was the logical choice, though this would make the bug less interesting.

I reviewed the input data and found that it was just fine. So the input to the program was correct. The program was just borking, seemingly at random.

I then started to do things the old-fashioned way. I started to put in print statements at various parts of the code and simply print the length of my array of pairs of numbers. This was being done on a remote machine and I hadn’t yet learned how to hook my debugger into it. Also, sometimes, print statements are just a more pragmatic way to debug, regardless of what the eggheads will tell you.

The program loads the list. A million elements in the list. Good. The program does a transformation on the list. A million elements. Good. Another transformation. Still a million little pairs of numbers, hanging on for dear life. Then they enter this piece of code I sketched out above. And BAM! the program skips the loop, grinning as it whizzes by, zero little pairs of numbers.

I run the program again, this time with a different input set. A hundred thousand elements. Passes first transform, second transform, and then my loop. Round and round it goes in the loop, a hundred thousand times before shooting out, all dizzy at the end, having done exactly what it was supposed to do.

Very interesting.

What is different between the two data sets that would make the code DO this? So I opened up the data and started to look closely at the ones that failed and the ones that succeeded, and then it struck me. All the ones that failed had (0, 0) as their first element.

And then it was clear.

When I wrote the loop:

itr = x.__iter__()
v = next(itr, None)
while v:
  print v
  v = next(itr, None)

I was a bit lazy. What I really meant was, “exit the loop if v is None”. Python is very kind and None evaluates to False in this test, so it is equivalent. The problem is that the loop will also terminate if v == 0. But wait, this is actually not a problem, because my v is not actually a scalar number. It is a pair of numbers, and during my testing I had repeatedly verified that (0, 0) != 0. One is a scalar value – zero – while the other is a Python object – a tuple. That can’t possibly be the problem!
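A quick check of the truth values involved bears this out:

print(bool(None))     # False - the sentinel the loop is meant to stop on
print(bool(0))        # False - a bare zero would also stop the loop
print(bool((0, 0)))   # True  - a plain tuple of zeros is truthy, so a list of tuples is safe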

But wait! As I told you, halfway through development I switched from Python lists of tuples to numpy structured arrays.

So I looked a bit into that, and BAM, sure enough, that’s where the behavior differs. For some reason the numpy library has been written to evaluate an element of a structured array as False if all of its fields are zero. This is different from taking a slice of a multi-dimensional array, where numpy will complain: “ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()”.

This is what still throws me. When I ask numpy for an element of my structured array of pairs of numbers, I get back a pair of numbers. This is not simply another numpy array of two elements – which is what you would get from slicing a 2D numpy array – and it does not act as a Python tuple either. It’s a different beast altogether (a numpy record, known as a “void” scalar).

Well, folks, the moral of the story is, don’t be lazy. If you want to test for None, test it explicitly. You never know what evaluates to True or False, even if you think you do.

If you would like to see this bug for yourself, here is a minimal example:

import numpy as np

def looper(x):
  itr = x.__iter__()
  v = next(itr, None)
  while v:
    print v
    v = next(itr, None)

def python_list():
  print 'Python list'
  l = [(0, 0), (1, 0), (2, 2), (3, 0)]
  looper(l)

def numpy_struct_array():
  print 'Numpy array'
  # The first element is (0, 0); as a structured-array record it evaluates
  # as False, so looper() exits immediately and prints nothing.
  l = np.array([(0, 0), (1, 0), (2, 2), (3, 0)], dtype=[('x', 'int32'), ('y', 'int8')])
  looper(l)

if __name__ == '__main__':
  python_list()
  numpy_struct_array()
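And the fix, for the record, is to make the sentinel test explicit – a minimal sketch, which behaves the same for the list of tuples and the structured array:

def looper_fixed(x):
  itr = x.__iter__()
  v = next(itr, None)
  while v is not None:   # only the sentinel ends the loop, never a falsy value
    print(v)
    v = next(itr, None)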

The three switches problem

The light bulb problem is a class of problem often presented at interviews as a test of lateral thinking. Here, in a mischievous spirit taken from the best subversive traditions, we will attack this question by thinking a little too laterally …

There are different versions of the light bulb problem, but they rely on the same trick. Here is one version:

Your grand uncle lives in a rambling old house. He had an electrician over to wire a light in the attic but the electrician messed with the wiring and set up three switches instead of one. Your grand uncle wants to figure out which of the switches turns on the light in the attic. Problem is, you can’t tell from below whether the attic light is on or off. You can fiddle with the switches and then go up to the attic to inspect the light, but you can only do this once. The old coot says you can’t keep going back down and up the attic stairs fiddling with the light – apparently it disturbs the cat and he scratches the furniture.

The conventional answer to the question is to flip one switch on and leave it for a bit. Then turn it off, turn on another, and go up to the attic. If the light is on, the last switch you flipped was the one. If the light is warm (Aha!) then the first switch you flipped is the one. If the light is off and cold, well, it’s the remaining one. Let’s try and subvert the conventional answer to this problem by engaging in some mild subterfuge in the spirit of Calandra‘s “Angels-on-a-Pin“.

  1. Rapidly flip one of the switches on and off many times. Finally set this switch to the off position and flip another switch to on. Now go to the attic. If the light is on, it’s the currently-on switch. If the light is burnt out, it’s the first switch.

  2. Call in your grand aunt, explain the problem to her, crawl up to the attic and have her flip the switches one by one. Yell when the light comes on.

  3. Flip a switch and inspect all the fixtures in the house, repeat and eliminate two of the switches. The third one is the attic switch. If none of the switches seem to be doing anything, where’s the problem? Just hit all three switches when you need the attic light on.

  4. Wait for a moonless night. Turn off all the lights in the house and draw all the curtains and blinds. Now flip the switches one by one. There will be a difference in the light level, however slight, from the attic light going on.

  5. Fly a drone up into the attic. Now you have eyes on target. Screw the cat.

  6. Take all bulbs out of their sockets and unplug all appliances. Now flip the switches one by one and watch the electric meter. This may take some patience.

  7. Open up the cover plate. Take a voltmeter and measure the drop in voltage as the switches are flipped. If only one has a voltage drop, that’s the one wired to the attic light. If more than one shows a drop, throw stones into the attic until the light bulb is broken. Redo the measurements. The switch that now has no voltage drop (but did previously) is the one. Take a new bulb, go up into the attic and replace it.

  8. Flip each switch in turn and let the bulb warm up. Throw a bucket of water into the attic. If you hear a loud “POP” the bulb was on. Leave adequate cool down periods between tests. Take a new bulb, go up into the attic and replace it.

  9. Tie three strings to the switches. Go up into the attic with the strings. (Some will say this is subverting the question. Yes it is. Yes it is)

  10. Get all the shiny pots and pans your grand uncle has. Arrange them like mirrors so that you have a view of the attic from the switches.

As an aside, here is a problem I like better, as solving it actually teaches you some mathematical concepts:

Your grand uncle has passed on. In his will he’s bequeathed you an antique coin of immense value (so he says). He’s also left you with eight accurate forgeries of that coin that are indistinguishable from the original, except that they are just a fraction lighter. He’s also left you a balance, which can only be used twice before it breaks. So he says.

You are only allowed to take one of the coins with you. The original coin is priceless, the forgeries are worthless. So he says. You suspect all the coins are duds and the balance won’t break, but you take your Grand uncle’s will in the spirit it was meant, as a last fun puzzle to keep you busy so you don’t get too sad at the old codger’s passing.


Seeing chromosomes

Did you know that chromosomes can be seen under a light microscope, and they get their name from the fact that they are strongly stained by certain colored dyes?

I was reading through our government’s collection of educational materials on genomics. Going through the history of the gene I was startled to learn that people knew about chromosomes by the late 1800s. I had assumed they were a later discovery, requiring more sophisticated instruments than a simple light microscope.

Walther Flemming was the first to report seeing these structures while studying cell division. The dye he was using, aniline, was heavily absorbed by these mysterious granules in the nucleus, which he called chromatin. Though human chromosomes can be quite long (chromosome 1, the longest, is about 85mm when stretched out), each is a single molecule (!), so you can imagine it’s quite invisible under a microscope in that state.

It turns out, however, that during cell division the chromosomes condense and become dense enough that a stain will make them visible under a light microscope. Since the job of the DNA in the chromosomes is to serve instructions to create proteins, it is probably important for the DNA to be “unfurled” so it has a large surface area for ready transcription. When a cell needs to divide, however, the nuclear material needs to be transported and shuffled around, and it is best to have it in a tidy ball to do so.

The study of the structure of chromosomes using microscopes is called cytogenetics. One could imagine that the physical structure of the chromosome is not important for the understanding of the genetic code. We mostly study DNA as a one-dimensional, linear sequence of characters we call the genetic code. It turns out, however, that the physical three-dimensional organization of chromosomes can affect which proteins are produced and in what quantity (“gene expression”).

I’m also told that chromosomes have their own domains in the nucleus – they are not just tendrils floating around willy-nilly.

Yup, that was your small, sticky ball of biological facts from me today …


Why would I ever write in C?

I’ve made computers do tricks in both C/C++ and Python for many years now. For the past few years I’ve written exclusively in Python for both work and play. I recently went back to a personal project to rewrite it in C++ and, for the first time in my life, thought, “Why would I ever write in C?”

I have two personal projects (A note taker/reference manager and a photo organizer) that I’ve been working on for over a decade now. I started both in C++ using the QT widget library. I moved the note taker to Ruby (on-rails, I had my browser-as-GUI phase) then to Python (bottle is an amazing and lightweight web framework, you should try it) and then I abandoned it in favor of paper and pencil (but I don’t do much manuscript writing anymore, which accounts for me not needing a reference manager and making do with paper notes).

The photo organizer I made two versions of in C++ using QT, and then I moved it to Python. I was quite happy with the Python version, but I decided I wanted some changes, and it seemed to be a new decade, so I started to itch to move it back into C++. QT had changed a lot since 2012 (which I think was the last time I messed with the C++/QT version of my photo organizer). There were a lot of new bells and whistles and I got excited and I wanted to sit down and see what I could do.

I wanted to start off with a library of core functions and an associated test suite. In Python, this kind of code is very easy to set up: I put the library in a source directory and the tests in a directory marked ‘tests’, and use nosetests to automatically discover and run them. The tests come about organically as short pieces of code I write in the REPL to test newly added features as they smoothly build up the module.
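For contrast with what comes next, this is roughly all the ceremony nosetests asks for – a sketch with made-up names, where the tested function would normally be imported from the package itself:

# tests/test_core.py -- any test_*.py file under tests/ is discovered automatically

def rotate(tags, n):
  # stand-in for a real library function; in the actual project this would
  # be imported from the photo organizer's own package
  return tags[n:] + tags[:n]

def test_rotate():
  assert rotate(['a', 'b', 'c'], 1) == ['b', 'c', 'a']

def test_rotate_noop():
  assert rotate(['a'], 0) == ['a']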

The first thing I realized is that I missed the REPL. For twenty years I had done without a REPL. I would code up a skeleton of an application, craft the makefile, have function stubs and then just make sure it compiled. I would then add functionality by fleshing out the stubs and adding new functions, refactoring into new files and so on. But now, just the thought of spending hours, perhaps days, setting everything up without any feedback, raised a barrier. It made things not fun. It was WORK! This is not a good thing for a project done in spare time, for enjoyment.

The next thing I missed was nosetests. The scene for unit tests is now much better for C++, and QT has its own little framework for writing and running tests. But they all look so CLUNKY! You have to declare a class, then include a header file, then make an application out of that. You have to tediously add every test separately, then add every function separately, then add every test case.

I guess, even with IDEs like QT Creator, there was too much boilerplate, too much gap between me writing some code and seeing what it did. The compile cycle didn’t help. Doxygen sounded OK, but there was no smile on my face when I went to download it and see how I would be using it in my project.


My first introduction to computers was via BASIC, then Pascal. I then went to C and C++. I had a class in 8085 assembly (We used kits packaged in neat wooden boxes. I enjoyed keying in the program and the data and then hitting go. My favorite project had been to write an infinite precision division program. I had seen Apollo 13 a few years back, and those kits and their interfaces reminded me of the DSKY).

My introduction to proper C++ was in a data structures and algorithms class, and the height of my C++ programming adventures was in a computer graphics class where I made animations out of particles in OpenGL and a kind of fly-through of a complex scene. These were done on SGI O2 machines at the Indian Institute of Science’s Super Computer Center, where I would retire early each evening after dinner, because the O2 machines were limited in number and the cubicles would be quickly occupied.

I say all this, perhaps in a subconscious attempt to convince you that I’m not totally inexperienced in programming and do indeed intrinsically enjoy programming computers and understanding the art of programming computers, and don’t view it merely as a means to an end, which is to make these amazing calculating machines do clever tricks.


But I think Python has spoiled me for ever in terms of the manner in which I enjoy computers. Writing C/C++ has become work. It’s something I would do wearing a suit and tie (not that I wear a suit and tie at work – which should tell you something). It’s something that I would do with a serious demeanor and a feeling, I imagine, mental patients have when wearing a straightjacket.

Boilerplate, boilerplate everywhere I look. Don’t smile. The doctors will take you away to the padded room. Better to look serious and keep your head down.

Performance you say? Well, to be honest, there are very few parts of my code where I need “C like” speed. And where I do, well, I can use CFFI, or more likely, I use Cython. It looks a little ugly, but hey, I have to write so little of it. The only annoying thing is inspecting the generated C code to make sure it’s getting rid of the dynamic Python calls.

Oh, yeah, pip install has spoiled me too. I ain’t chasing down your exotic library to read EXIF tag data in photo files. I haven’t got that kind of time. But I do have time to type pip install.

So, perhaps the only caveat I would add, in closing, is to say, the full title of this post should be “Why would I ever write in C for fun?”



Cython __cinit__

cdef classes are awesome if you want lightweight data structures, for example, when you need millions of them. These Cython classes have some special methods, the most basic of which is the __cinit__ method, which is the analog of the __init__ method of regular Python classes.

The __cinit__ method is practical because it lets you initialize your Cython class transparently from straight Python. The downside is that we are now back to type checking and converting when we initialize, since we will accept Python variables. This can add perceptible overhead to object creation.

Consider the following class definition

cdef class A:
  cdef public:
    int a, b, c, d, e

  def __cinit__(self, int a, int b, int c, int d, int e):
    self.a, self.b, self.c, self.d, self.e = a, b, c, d, e

  def __repr__(self):
    return '({:d}, {:d})'.format(self.a, self.b)

If we run the following function

def one_million_objects():
  cdef:
    int n
    list x = []
  for n in xrange(1000000):
    a = A(n, n + 1, n + 2, n + 3, n + 4)
    x += [a]
  return x

and profile it, we obtain:

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.235    0.235    0.235    0.235 {v1.one_million_objects}
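(If you want to reproduce this kind of measurement, here is a sketch of one way to do it: build the extension in place, e.g. with “cythonize -i v1.pyx”, and drive it from cProfile.)

# profile_v1.py -- assumes v1.pyx has been compiled into an importable module
import cProfile
import pstats

import v1

pr = cProfile.Profile()
pr.runcall(v1.one_million_objects)
pstats.Stats(pr).sort_stats('tottime').print_stats()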

If you peek at the translated C code (which, admittedly, is pretty ugly) you will find that the relevant part of the code goes:

 for (__pyx_t_2 = 0; __pyx_t_2 < 1000000; __pyx_t_2+=1) {
    __pyx_v_n = __pyx_t_2;

    /* "v1.pyx":17
 *     list x = []
 *   for n in xrange(1000000):
 *     a = A(n, n + 1, n + 2, n + 3, n + 4)             # <<<<<<<<<<<<<<
 *     x += [a]
 * 
 */
    __pyx_t_1 = __Pyx_PyInt_From_unsigned_long(__pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_1);
    __pyx_t_3 = __Pyx_PyInt_From_unsigned_long((__pyx_v_n + 1)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_3);
    __pyx_t_4 = __Pyx_PyInt_From_unsigned_long((__pyx_v_n + 2)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_4);
    __pyx_t_5 = __Pyx_PyInt_From_unsigned_long((__pyx_v_n + 3)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_5);
    __pyx_t_6 = __Pyx_PyInt_From_unsigned_long((__pyx_v_n + 4)); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_6);
    __pyx_t_7 = PyTuple_New(5); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_7);
    PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_1);
    __Pyx_GIVEREF(__pyx_t_1);
    PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_3);
    __Pyx_GIVEREF(__pyx_t_3);
    PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_4);
    __Pyx_GIVEREF(__pyx_t_4);
    PyTuple_SET_ITEM(__pyx_t_7, 3, __pyx_t_5);
    __Pyx_GIVEREF(__pyx_t_5);
    PyTuple_SET_ITEM(__pyx_t_7, 4, __pyx_t_6);
    __Pyx_GIVEREF(__pyx_t_6);
    __pyx_t_1 = 0;
    __pyx_t_3 = 0;
    __pyx_t_4 = 0;
    __pyx_t_5 = 0;
    __pyx_t_6 = 0;
    __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_2v1_A)), __pyx_t_7, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_6);
    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
    __Pyx_XDECREF_SET(__pyx_v_a, ((struct __pyx_obj_2v1_A *)__pyx_t_6));
    __pyx_t_6 = 0;

Our simple integers are being converted into Python objects and then back again.

If we omit the __cinit__ definition and manually initialize the elements of the structure:

cdef class A:
  cdef public:
    int a, b, c, d, e

  def __repr__(self):
    return '({:d}, {:d})'.format(self.a, self.b)

With our million objects function being written as:

def one_million_objects():
  cdef:
    int n
    list x = []
    A a
  for n in xrange(1000000):
    a = A()
    a.a, a.b, a.c, a.d, a.e = n, n + 1, n + 2, n + 3, n + 4
    x += [a]
  return x

and profile it, we obtain:

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.146    0.146    0.146    0.146 {v2.one_million_objects}

Which is a pretty big saving. You will have guessed that this saving comes from the fact that we no longer do an expensive round trip through Python objects:

for (__pyx_t_2 = 0; __pyx_t_2 < 1000000; __pyx_t_2+=1) {
    __pyx_v_n = __pyx_t_2;

    /* "v2.pyx":15
 *     A a
 *   for n in xrange(1000000):
 *     a = A()             # <<<<<<<<<<<<<<
 *     a.a, a.b, a.c, a.d, a.e = n, n + 1, n + 2, n + 3, n + 4
 *     x += [a]
 */
    __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_2v2_A)), __pyx_empty_tuple, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_1);
    __Pyx_XDECREF_SET(__pyx_v_a, ((struct __pyx_obj_2v2_A *)__pyx_t_1));
    __pyx_t_1 = 0;

    /* "v2.pyx":16
 *   for n in xrange(1000000):
 *     a = A()
 *     a.a, a.b, a.c, a.d, a.e = n, n + 1, n + 2, n + 3, n + 4             # <<<<<<<<<<<<<<
 *     x += [a]
 * 
 */
    __pyx_t_3 = __pyx_v_n;
    __pyx_t_4 = (__pyx_v_n + 1);
    __pyx_t_5 = (__pyx_v_n + 2);
    __pyx_t_6 = (__pyx_v_n + 3);
    __pyx_t_7 = (__pyx_v_n + 4);
    __pyx_v_a->a = __pyx_t_3;
    __pyx_v_a->b = __pyx_t_4;
    __pyx_v_a->c = __pyx_t_5;
    __pyx_v_a->d = __pyx_t_6;
    __pyx_v_a->e = __pyx_t_7;

One final wrinkle: we cannot declare the __cinit__ function as a cpdef function – Cython requires the special methods of cdef classes to be declared with def.


How to shoot yourself in the foot with Cython

If you’ve grown fat and complacent using Python, try a little Cython. It’ll put hair on your chest and take some off your scalp.

Take the following Cython code I was writing (heavily paraphrased)

cdef:
  char *s = seq  # seq is passed in to the function
  unsigned char i, j
  unsigned char sub_mat[85]

for i, j in zip([65, 67, 71, 84], [84, 65, 67, 71]):
  sub_mat[i] = j

return [ sub_mat[s[i]]  for i in range(len(seq))]

This code compiles fine. It even runs fine for a few test cases I set up for it. But when I put it into a standard testing pipeline I use, it caused a different test to freeze up.

My first reaction was, I’ve lost a pointer somewhere. So I started to dig around. (That reminds me, there has got to be an easier way to hook a debugger up to Cython. The prescribed ways are so complicated.) I started to sprinkle print statements in the code. Nothing.

I then broke the final list comprehension out into a loop and put print statements in there. Suddenly I saw that my program wasn’t going off the reservation by losing a pointer. It was actually looping over and over again at that point. And then the lightbulb went off. The loop variable i had been typed as an unsigned char (max value 255), and if len(seq) was ever greater than that, the loop would go on forever as i overflowed and reset to 0 repeatedly.

Of course, those of you programming in C will have immediately noticed this issue and asked – “Does len(seq) ever get larger than 255?” But I’ve grown fat and lazy on Python, which lets me do things like 1<<1024 without missing a beat (yes, that creates a 1025-bit integer; suck it up, C programmer), so, on occasion, I have to lose some hair from the top of my head when I slum it with all you C-coders.
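If you want to see the wraparound without touching Cython, here is a pure-Python toy that mimics an 8-bit loop counter (an illustration, not the original code):

def reaches(bound, bits=8):
  # emulate a loop counter stored in a `bits`-wide unsigned integer
  i, steps, mask = 0, 0, (1 << bits) - 1
  while i < bound:
    i = (i + 1) & mask      # unsigned overflow wraps back to 0
    steps += 1
    if steps > 10 * bound:  # clearly never going to get there
      return False
  return True

print(reaches(200))   # True: 200 fits in 8 bits
print(reaches(300))   # False: the counter wraps at 256 and never reaches 300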
