Kaushik Ghose

I like to build things and watch them work

__repr__()

I think repr() is Python’s genius touch. What?! I’m sure you’re yelling at the screen right now, incoherent as items from the long list of things you love about Python fight each other to gain command of your voice. But it’s what I think. And as you know, Lisp got there first.

Now, of course, you’re really mad at me. But listen. What do we use Python a lot for? We use it for anything that requires our language to be nimble, fluid and user friendly, like data analysis and algorithm prototyping. And why do we use Python? Because it has a REPL. We’ll think of something to do, we’ll write a set of functions and classes to do it (sometimes never leaving the REPL) and then run it on our data.

The key feature of the REPL is being able to see the results of a computation immediately printed below what we did. This is a user comfort feature, and user comfort has a disproportionate impact on our motivation to keep trying out new ideas.

Python’s built-in types have decent printed representations, but these can get crowded (dicts with many keys get hard to read quickly, for example). And what about user-defined classes? Often, the results of a computation are not a few strings or simple, single numbers. The result of adding two arrays or two tables can be visually complex, for instance. Representing these as unreadable combinations of Python’s built-in types, or as equally unreadable pointer ids, starts to defeat the purpose of the interpreted, REPL workflow.
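For instance, with no __repr__ defined, the pointer id is all the REPL can show us. A tiny illustration (the class is my own toy example, and the address will vary from run to run):

class Reading:
  def __init__(self, t, v):
    self.t, self.v = t, v

Reading(0.5, 3.2)   # -> <__main__.Reading object at 0x10a3f2e80>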

To mitigate this, Python supplies us with the special __repr__() method. When we create a class we don’t have to define this method, but if we do define it, making it return a succinct string that summarizes the object, the REPL rewards us with these succinct, readable representations.

For example, say we are doing some work with vectors and are interested in how the magnitude of the resultant vector changes as we add vectors together. We could do this as follows:

class Vect:
  def __init__(self, x, y):
    self.x, self.y = x, y

  def __radd__(self, other):
    # sum() starts from 0, so accept a non-Vect on the left
    if not isinstance(other, Vect):
      return Vect(self.x, self.y)
    return Vect(self.x + other.x, self.y + other.y)

  def __add__(self, other):
    return Vect(self.x + other.x, self.y + other.y)

  def __repr__(self):
    return "({}, {}):{:0.3f}".format(self.x, self.y, (self.x ** 2 + self.y ** 2) ** 0.5)

There are some things here in addition to __repr__() – the __add__ and __radd__ methods – which enable us to do vector additions (sum() starts from 0, hence the isinstance check). We will ignore those here.

So now when we create some vectors, we get a nice printout:

Vect(3,4)   # -> (3, 4):5.000
Vect(5,12)  # -> (5, 12):13.000

This feature plays wonderfully with Python’s built-in types, so a set of these objects, a dict with them as keys or values, or a simple list of them all work as you would hope.

[Vect(3,4), Vect(5,12)]       # -> [(3, 4):5.000, (5, 12):13.000]
{Vect(3, 4), Vect(5, 12)}     # -> {(3, 4):5.000, (5, 12):13.000}
{Vect(3, 4): 'A', Vect(5, 12): 'B'}  # -> {(3, 4):5.000: 'A', (5, 12):13.000: 'B'}
sum([Vect(3,4), Vect(5,12)])  # -> (8, 16):17.889

(if (too (many parentheses)) (use parinfer) '())

When I first started learning Common Lisp I was a little annoyed by the syntax. After practicing writing Lisp code I have grown to like the simplicity of the basic language. However, as with C/C++, even though you don’t have to format the code (i.e. add whitespace), it helps readability immensely if you do. In such cases a decently smart editor helps a lot, and the parinfer plugin is really, really neat.

For me, formatting Lisp code as if it were Python improves its readability by a huge amount. Syntax highlighting can’t do much for Lisp, since Lisp syntax is minimal, but indenting things like meaty if statements, nested function calls and lengthy ‘cond’ constructs makes things easy to follow.

Some editor features tailored to Lisps, paredit for example, are very helpful, but, as the plugin manual itself notes:

ParEdit helps **keep parentheses balanced** and adds many keys for moving S-expressions and moving around in S-expressions. Its behavior can be jarring for those who may want transient periods of unbalanced parentheses, such as when typing parentheses directly or commenting out code line by line.

Jarring is right. I use Atom and had paredit running, and I once had a session where I thought my keyboard was broken because I couldn’t type parens. And not being able to type parens is pretty traumatic when you’re learning Lisp.

Parinfer brings a very useful concept to the domain of Lisp editing. Like most editor assists, it will close parentheses for you when you open them, and place the cursor in the right place. However, it does a bit of sophisticated parsing to handle the following situation:

(defun func (x)
  (setf x (+ x 3))
  (if (< x 3) t nil))
(defun func (x)
  (setf x (+ x 3)))      <--- note we now need a close parens here
  ; (if (< x 3) t nil))  <--- parinfer automatically generates it
                         <--- when we comment out the last line

(As a side-note, WordPress’ syntax highlighting for code blocks does not have Lisp as an option, but I got the desired effect by claiming this was Python!)

Parinfer does take a little getting used to if you’ve been managing parens and indentation by yourself. I had to learn in which part of my S-exp to hit the “return” key to get the desired effect – the two outcomes being advancing to a new level of indentation (nesting) under the parent expression, or dropping out to a new, separate expression.

Don’t hang up!

I started using multiuser Unix systems in graduate school and I quickly learned to love nohup. It let you do the impossible thing: you logged into a machine, started a long computation and then logged out! You didn’t have to hang around guarding the terminal till 2am. You could go get some sleep, hang out with friends, whatever. This post is a short survey of nohup-like things that I’m still using and learning about, a decade and a half later.

Screen: For the longest time nohup <some command> & was my staple. I’d package all my code so that it was non-interactive – reading instructions from a parameter file and writing everything out to data and log files – fire it off and come back the next day. Then someone introduced me to screen. This was amazing. It let me reattach to a terminal session and carry on where I left off, allowing me to keep sessions open as I shuttled between work and home. I like doing screen -S <a name> to start sessions with distinct names.

Mosh: The one thing about screen is that I have to ssh back into the machine and start it up again. Mosh is a client-server system that allows roaming, which means, in practice, that I can open a terminal, work on it, send my computer to sleep, work on the train with no WiFi, get into the office, and when my computer finds WiFi again, Mosh picks up where it left off, seamlessly – except for a little glitching on the display. Mosh doesn’t quite replace screen, though – if I reboot my laptop or shut down the terminal app, I lose the Mosh client. The session keeps running, but I can no longer interact with it.

reptyr: Ok, this is a fun one. Now suppose you have a running process that you started in a normal ssh session. Then you realize this is actually a long-running process. Oh great, the flashbacks to the late-night terminal-guarding sessions in grad school come back. There’s a program called reptyr that lets you hop a running process from one terminal to another. What you want to do in this case is start up a screen session and then invoke reptyr from the screen session.

Unfortunately I’ve not been able to get it to work properly – I always get

Unable to attach to pid 11635: Operation not permitted
The kernel denied permission while attaching. If your uid matches
the target's, check the value of /proc/sys/kernel/yama/ptrace_scope.
For more information, see /etc/sysctl.d/10-ptrace.conf

In my original terminal the program just stops and hands me back my command prompt. Nothing happens in my new terminal, but I can see that the program is still running. It’s a bit like doing disown, which in turn is like doing nohup retroactively – your process keeps running, but there is no terminal. Fortunately you redirected stderr to a file by doing 2> err.txt, right? Right?

Python, global state, multiprocessing and other ways to hang yourself

C and C++ were supposed to be the “dangerous” languages. There’s Stroustrup’s quote, for example, widely circulated on the internet. However, Stroustrup clarifies that his statement applies to all powerful languages. I find Python to be a powerful language that, by design, protects you from “simple” dangers, but lets you wander into more complex dangers without much warning.

Stroustrup’s statement in more detail is (from his website):

“C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off”. Yes, I said something like that (in 1986 or so). What people tend to miss, is that what I said there about C++ is to a varying extent true for all powerful languages. As you protect people from simple dangers, they get themselves into new and less obvious problems.

The surprising thing about Python I’d like to briefly discuss here is how the multiprocessing module can silently fail when dealing with shared state.

What I effectively did was declare a variable in the parent process and pass it to the child process, which then modified it. Python happily lets you do this, but changes to the variable are not seen by the parent process. As this answer on Stack Overflow explains:

When you use multiprocessing to open a second process, an entirely new instance of Python, with its own global state, is created. That global state is not shared, so changes made by child processes to global variables will be invisible to the parent process.
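Here is a minimal sketch of exactly that failure mode, distilled from the longer code at the end of this post (the function and variable names here are mine, invented for illustration):

from multiprocessing import Process

counter = 0  # global state, created in the parent

def bump():
  global counter
  counter += 1  # mutates the child process's copy only
  print('child sees: {}'.format(counter))   # -> child sees: 1

if __name__ == '__main__':
  p = Process(target=bump)
  p.start()
  p.join()
  print('parent sees: {}'.format(counter))  # -> parent sees: 0

The child happily increments its own copy, the parent’s counter never budges, and no error is raised anywhere.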

I was thrown, though, by two additional layers of complexity. Firstly, as the example code below shows, I shared the state in a somewhat hidden manner – I passed an instance method to the new process. I should have realized that this implicitly shares the original object – via self – with the new process.

Secondly, when I printed out the ids of the relevant object in the parent and child processes, the ids came out the same. As the docs explain:

CPython implementation detail: This is the address of the object in memory.

The same id (= memory address) business threw me for a bit. Then I heard a faint voice in my head mumbling things about ‘virtual addressing’ and ‘copy-on-write’ and ‘os.fork()’. So what’s going on here? A little perusing of Stack Overflow allows us to collect the appropriate details.

As mentioned above, Python, for implementation reasons (keyword: Global Interpreter Lock, or GIL), uses os.fork() (on Unix) to achieve true multiprocessing via the multiprocessing module. fork() creates an exact copy of the old process and starts it as a new one. This means that, for everything to be consistent, the original pointers in the program need to keep pointing to the same things; otherwise the new process would fall apart. But WAIT! This is chaos! Now we have two identical running processes writing and reading the same memory locations! Not quite.

Modern OSes use virtual addressing. Basically, the address values (pointers) you see inside your program are not actual physical memory locations, but virtual addresses: entries in a per-process table that in turn points to the actual physical memory locations. Because of this indirection, the same virtual address can point to different physical addresses IF the virtual addresses belong to the tables of separate processes.

In our case, this explains the identical id() values and the fact that, when the child process modified the object with the same id, it was actually writing to a different physical object – which is why its parent doppelganger now diverges.

For completeness, we should mention copy-on-write. What this means is that the OS cleverly manages things such that, initially, the virtual addresses actually point to the original physical memory – as if you copied the address table from the original process. This allows the two processes to share memory while reading (and saves a bunch of copying). Once either of the processes writes to a memory location, however, the relevant values are copied to a new physical location and one of the virtual tables is updated to reflect this.
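You can watch the whole story – same id, diverging contents – with os.fork() directly. A minimal, Unix-only sketch (my own illustration, not from the code at the end of this post):

import os

x = ['original']
print('before fork: {}'.format(id(x)))

pid = os.fork()
if pid == 0:
  # Child: id(x) prints the same value as in the parent (same virtual
  # address), but this write triggers copy-on-write, so the child now
  # owns a separate physical copy of the list.
  x.append('child was here')
  print('child : {} {}'.format(id(x), x))
  os._exit(0)

os.wait()
print('parent: {} {}'.format(id(x), x))  # same id, contents unchanged

Both processes print the same id, yet the parent’s list is untouched – exactly the same-id, different-object effect described above.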

Which brings us to the question: what the heck am I doing worrying about addresses in Python?! Shouldn’t this stuff just work? Isn’t that why I’m incurring the performance penalty, so I can have neater code and not worry about the low level details? Well, nothing’s perfect and keep in mind Stroustrup’s saw, I guess.

Also, never pass up a learning opportunity – much of this stuff you only learn in the school of hard knocks. It also gives a dopamine rush you’ll not believe. I wonder what kind of shenanigans you can get into doing concurrent programming in Lisp, hmm …

So, what about the other ways to hang yourself, as promised in the title? Oh, that was just clickbait, sorry. This is all I got.

Code follows:

try:
  import queue as Q  # Python 3
except ImportError:
  import Queue as Q  # Python 2 (as originally written)
from multiprocessing import Process, Queue
import time

def describe_set(s):
  return 'id: {}, contents: {}'.format(id(s), s)

class SetManager(object):
  def __init__(self):
    self.my_set = set()
    print('Child process: {}'.format(describe_set(self.my_set)))

  def add_to_set(self, item):
    print('Adding {}'.format(item))
    self.my_set.add(item)
    print('Child process: {}'.format(describe_set(self.my_set)))

def test1():
  print('\n\nBasic test, no multiprocessing')

  sm = SetManager()
  print('Parent process: {}'.format(describe_set(sm.my_set)))

  sm.add_to_set(1)
  print('Parent process: {}'.format(describe_set(sm.my_set)))

class SetManager2(SetManager):
  def __init__(self, q, q_reply):
    super(SetManager2, self).__init__()
    # SetManager.__init__(self)
    self.keep_running = True

    self.q, self.q_reply = q, q_reply
    # Passing the bound method self.loop to Process implicitly ships
    # self - and hence my_set - to the child process.
    self.sp = Process(target=self.loop, args=())
    self.sp.start()

  def loop(self):
    # This runs in the child process, so my_set here is the child's copy.
    while self.keep_running:
      try:
        msg = self.q.get(timeout=1)
        if msg == 'quit':
          self.keep_running = False
        elif msg == 'peek':
          self.q_reply.put(list(self.my_set))
        else:
          self.add_to_set(msg)
      except Q.Empty:
        pass
      time.sleep(.1)

def test2():
  print('\n\nMultiprocessing with method set off in new process')

  q, q_reply = Queue(), Queue()

  sm = SetManager2(q, q_reply)
  print('Parent process: {}'.format(describe_set(sm.my_set)))

  sm.q.put(1)
  time.sleep(1)
  print('Parent process: {}'.format(describe_set(sm.my_set)))

  sm.q.put(2)
  time.sleep(1)
  print('Parent process: {}'.format(describe_set(sm.my_set)))

  q.put('peek')
  time.sleep(1)
  print('Reply from child process: {}'.format(describe_set(q_reply.get())))

  sm.q.put('quit')
  sm.sp.join()

class SetManager3(SetManager2):
  def __init__(self, q, q_reply):
    # Note: super(SetManager2, ...) deliberately skips SetManager2.__init__,
    # so no Process is started here; the child runs loop() itself
    # (see start_set_manager3_in_process below).
    super(SetManager2, self).__init__()
    self.keep_running = True
    self.q = q
    self.q_reply = q_reply

def start_set_manager3_in_process(q, q_reply):
  sm = SetManager3(q, q_reply)
  sm.loop()

def test3():
  print('\n\nMultiprocessing with object created in new process')

  q, q_reply = Queue(), Queue()

  sp = Process(target=start_set_manager3_in_process, args=(q, q_reply))
  sp.start()
  # print('Parent process: items are: {}'.format(sm.my_set))

  q.put(1)
  time.sleep(1)

  q.put(2)
  time.sleep(2)

  q.put('peek')
  time.sleep(1)
  print('Reply from child process: {}'.format(describe_set(q_reply.get())))

  q.put('quit')
  sp.join()

test1()
test2()
test3()

Disassembly

Adventures in functional programming

Reading through Let over Lambda I ran into several instances where the author showed the disassembly of a bit of Lisp code. This struck me as awesome for several reasons and I wanted to do it myself. I initially thought I had to dig up a disassembler for my system (as I would have done for C/C++) and I was blown away when I learnt that there was a Lisp command for this!

I found getting the disassembly amazing because I didn’t think of Lisp – a dynamically typed, high-level language – as generating succinct machine code. I knew it could be compiled, but I expected the compilation to be something very messy, a bit like the C++ code that Cython generates for Python without the types put in – full of instruments to infer types at run time. Instead, what I saw was tight machine code that one…


Olympus E-M10: A keeper

This is the second part of my post about my experiences with the OM-D E-M10 camera. Though my initial reaction was negative, I’ve found many things to love about this tiny but powerful camera. Most importantly, it makes me want to take pictures and I’m back to filling up my hard drive with images.

Electronic View Finder: On the whole, pretty cool.

This was my first shock when I got this camera, and possibly the biggest change I had to adapt to. I am used to the optical viewfinders found in Nikon SLRs and DSLRs, and the EVF struck me as a cruel joke. Though part of it was simply adjusting to this new idea of looking at a computer screen rather than the actual scene, there are real issues with the EVF, mostly noticeable in low light: it blurs when you pan, you can sense the refresh rate, and at low enough light it simply stops working.

However, in most shooting conditions, once I got used to it, I stopped thinking about it and just shot naturally. And then the advantages of the EVF over an optical view finder began to dawn on me.

When I got my first SLR (a Nikon F65) I was really excited about the depth-of-field preview button. Not only could I see the framing of the scene exactly, I could now check what the focus slice was! Well, the EVF is depth-of-field preview on steroids. It’s a preview of almost the exact image you will capture!

This realization first struck me while I was taking photos of my daughter indoors at night. I hit the white balance wheel and switched to incandescent, and the viewfinder updated to reflect this! Then I realized that I had noticed, but had not really remarked on, the fact that the effects of exposure compensation were similarly visible in real time. This is so much better than making these adjustments, shooting a few frames, and only then finding that the white balance is all wrong and your subject has a ghastly blue color cast.

The OM-D also has a live histogram display (I’ll write more about this later, but this is one of the features that make me think the OM-D is a camera designed by engineers who love their work and a management that keeps out of their way), and you can see it through the EVF and use it to guide fine tuning of exposure.

Saving the best for last: the E-M10 was my first introduction to focus peaking. I had read wistfully about focus peaking as I scoured eBay for a cheap split-prism focusing screen for my D40/D5100, because I was sucking at using my Nikkor 50mm f1.8 and I wanted it to be like the SLRs of old. With the EVF you can focus manual lenses just as you would have in the old days, with focus peaking replacing the split prism and ground glass.

Can you tell I’m a convert? You need to take my effusiveness with a grain of salt. This is my first and only experience with EVFs. I’ve read reviews that say this EVF is small, low-resolution and dim compared to others. Whatever. I like the concept of the EVF and I am satisfied with its implementation on this camera.

Touch screen shooting: focus point selection and focus/recompose made obsolete

When I was comparing cameras, the E-M10’s touch screen did not factor into my decision. I considered it one of those things, like art filters, that were useless gewgaws added on to please the masses. The touchscreen, though, is a game changer.

The traditional way to get an off-center target in focus is, of course, focus and recompose. There are people who will tell you that this causes problems because the focal plane of a lens is not flat, and an object in focus at the center of view will not be in focus when moved to the edge of view. Though this is a physical fact, its importance has been artificially inflated by camera manufacturers eager to get people to upgrade their perfectly good cameras by dangling ever more focus points in front of their noses.

Let me tell you a bit about focus points. By the time you have used your dinky little cursor keys to hop your little red rectangle ten focus-point squares across your viewfinder to sit on top of your subject, the moment has passed and the subject has left. The only real solution is to have the camera focus where you look – and that, surprisingly, has been tried, though, even more surprisingly, it has been discontinued.

The next best thing is this newfangled live view + touch screen shooting. You view the image on your touch screen, tap on the screen where your subject is, and Click! The camera focuses and shoots. We live in the future, my friends.

I removed the Sony A5100 from my shortlist partly because it did not have an EVF. I’m glad I insisted on an EVF, but I’m no longer opposed to just having a screen, as long as it is a touch screen. On the negative side, the LCD is indeed hard to see (washed out) even in moderate light, and I prefer the D5100-type fully articulating screen to this semi-articulating one.


Face detection: A mixed bag

I’d seen face detection in point-and-shoots and, again, did not think too deeply about its advantages. The reason for this is that I invariably got to see face detection when someone handed me their high-end compact for a group photo; I would look at the display, see a few faces outlined, and think: “Great, I already know where the faces are, thanks”. The D5100 also had face detection in live view mode. I never really used live view on the D5100, because of its poor contrast-based focusing system, so, again, I did not really see the use for it.

On the E-M10 (they really need snappier nomenclature) face detection – when it works – is awesome and invaluable. Many scenes involve a person facing the camera and a busy background. The face is often – for a nice composition – NOT in the center of the frame. Face detection works marvelously, allowing me to take the shot without thinking.

The problem is that this is making me lazy. I’m losing the instinct to focus/recompose and the deftness to nudge focus points (and this camera has so many), and when the detector fails – e.g. when the subject is looking a little away, or there are two faces – it gets very frustrating. And, for a person who takes a lot of pictures of cats, I have to point out that there is no face detection for cats, which is a solved problem …

Twin control wheels: a double win

Another major reason for picking the E-M10 was the twin control wheels, and they do not disappoint. My initial thought was that they would be great in M mode for shutter + aperture adjustments, but in A and S modes one of the dials gives exposure comp. With the func button they give rapid access to ISO and WB adjustment. This makes the camera very powerful to operate. On the D5100 I was forever fiddling with the menu to get these four parameters right.

The placement of the two dials looks awkward visually – the body is so small that they had to stack the dials on different levels to maintain a good dial size. I’m happy to report that the engineers made the correct decision. The index finger works nicely on the front dial and the thumb on the rear. The camera strap does interfere a little, and I’ve taken to putting my right index finger over the strap anchor point rather than below it. The rear dial is also deceptively far away from the front one. I would be shooting, then reach for the rear dial and invariably not reach far enough with my thumb.

Super control panel. Gripe: why can’t I change settings by touching the super control panel directly?

The super control panel is very aptly named. I thought the Nikons had a nice summary panel showing you all the important camera settings, but Olympus has them beat. A lot of thought has gone into the panel layout – the controls are grouped such that in some modes a cluster of panels merges into a block, because they are controlled by one parameter. The only usability issue was that it took me a while to figure out that you have to press “OK” to activate the touch mode, where you can select parameters to change by touching the appropriate panel. Yet another win for the touch screen. Only gripe: sometimes a stray finger will activate the EVF eye detector and blank out the touch screen as I’m selecting.

Startup delay: not really an issue


This was another aspect of the whole EVF/mirrorless world that I wasn’t sure I would be comfortable with. I’m completely used to leaving my D5100 on all the time. I only switch it off to take out the card or change batteries. So, when I see something I’d like to shoot, I grab the camera, pop off the lens cap, raise it to my eyes (and not always in that order…) and squeeze the trigger. Photo taken!

With the mirrorless, I wasn’t quite sure until I actually got the camera how I would work this. Some posts I had read online reassured me that the slight lag could be handled by half-pressing the shutter – or pressing any button, actually – while raising the camera, to wake it from sleep mode. This way the EVF is on and the camera is ready to shoot by the time you have it in position. And this truly works out well. It feels a little awkward to someone used to an optical finder, but it works, thanks to the fact that the camera has a sleep mode and does not need to be switched completely off.

Shutter sound


Pressing the shutter is followed almost instantaneously by a very crisp shutter sound (once I had turned off the annoying beep that accompanied focusing) and a slight vibration of the camera. It’s a very satisfying auditory and tactile response to the shutter press. I think this is because there is only the soft kerchunk of the shutter and not the slightly bouncy thunk of a mirror. This is something that, because it is purely incidental and psychological, should not count, but it does.

Battery life: the downside of needing a screen to shoot

At the end of the day, I had taken around 200 photos when the camera declared that the battery was done and stopped shooting. This is a very big difference from the D5100, where I could go for days shooting like this, even reviewing photos and videos, before the battery gave out. I will be needing a spare battery. Perhaps two, to be safe.

Shutter count: an amusing aside. So, like many new camera owners, I asked the question: “I know this is new, but how many miles does it actually already have on it?” Checking the EXIF info for the photos I took, I found to my surprise that the EXIF did not contain the shutter count, like it does for Nikons. It turns out that the shutter count is actually quite hidden, and only really meant for camera technicians to see as part of a larger suite of diagnostics. You have to enter a sequence of arcane keypresses to get to the relevant menu.

A great little camera

I could go on and on: about how light it is, so that I don’t feel the weight on my neck even after a whole day of toting it around; about how configurable it is; how the menu structure is actually quite logical; how high ISO, up to 6400, is eminently usable for my purposes; how the kit lens is neat and tidy and does its job; how in-body image stabilization is such a step up for me; and how, in many such ways, it feels like a camera designed by happy engineers who love their job. In short, it is a neat, well designed, tiny camera that does its job very well.

Oh, and here is a picture of a cat. I must now go and order some extra batteries.


Olympus E-M10: First impressions

I will not lie. My first reaction after unboxing the E-M10 and shooting a few frames was to return it. However, after poking round the menu a bit and trying out things for a while I think I will keep it – maybe.

(Update: I will keep it)

I guess I had oversold the smallness of this camera in my mind, because when I got it I was like, “Huh, it’s not THAT small”. But actually, it is. It’s larger than the A510, and with the kit lens it won’t go in your regular pants pocket, but it would probably fit in a jacket or cargo-pants pocket. With a pancake lens you could fit it in a slacks pocket.

But what did blow me away was the size of the lens. It really looked like a scale model of a lens. I held it in my hands for a while and marveled at it. You could fit two of those inside the standard Nikkor 18-55 DX kit lens, and it’s not even the “pancake” lens.

I liked the build immediately. The body is metal and feels like it, making the camera satisfyingly dense.  The dials click nicely and all the buttons are well placed.  I was a little disappointed by the battery door and the bulkiness (and ghastly color) of the charger.

I’m ok with the battery and card sharing the same door – especially since it looks like the battery needs to be changed often – but the door is a little clumsy. It has a little latch that you need to push shut to lock, and it’s a little difficult to do this while maintaining pressure, since the door is spring loaded. I have gotten used to Nikon’s slim, all-black chargers; the peculiar gray of the Olympus charger and its ungainly thickness stand in stark contrast to the elegant design of the camera body.

I charged the battery, keeping my impatience at bay by reading the manual. I loaded the camera, switched it on, lifted the view finder to my eye and had my first disappointment.

I’ve never had a “professional grade” camera. I went from a Nikon F65 to a D40 to a D5100. I think only the F65 had an actual pentaprism. The others have pentamirrors, which I believe are dimmer. I would read posts by people complaining how small and dim these optical viewfinders were compared to their professional-grade cameras, but I never really felt the difference. The optical viewfinder of the SLR was, to me, an indispensable tool. You could see what the film was going to capture! Amazing! 95% coverage? Dim? Whatever! The EVF, at least this EVF, is no optical viewfinder.

I was playing with this indoors and the impression I got was that I was peering at the world through an ancient CCTV system. The colors seemed off, and there was blurring and lagging when I panned the camera. “I can’t shoot with this! It sucks!”

(Update: I quickly got used to the resolution of the view finder. The lag is imperceptible outdoors, even at dusk and there is a setting to up the refresh rate of the EVF, though I suspect it chews up more battery. See the next post.)

I squeezed off the shutter at a few subjects in the fading light. My biggest worry about this camera was the shutter lag – really, the delay between pressing the shutter, capturing focus and taking the picture. Depending on the lens and light conditions, even SLRs can take a while, but the dedicated phase-detect focus system of the Nikon cameras allows the lens to spin towards focus in a deterministic and fast manner. The E-M10 has a contrast-detect system. This is the same type of system the D5100 uses in live view mode, and Nikon’s system sucks.

All the reviews, measurements and posts one finds online about the speed of the E-M10’s autofocus are not mistaken. It truly is an effective AF system, despite not being one of the fancy new hybrid AF systems that incorporate phase detect on the sensor. The pictures were a letdown, however. I’ve mentioned elsewhere that I can stand grain but not blur in pictures. Well, these pictures were BLURRY! It was the over-aggressive smoothing present in the factory settings – something reviews have remarked on.

I went into the menu and switched it off. MUCH BETTER! Especially if you overexpose a little bit. I would say that images at ISO 6400 with no smoothing are eminently usable for web/computer viewing, perhaps even for regular-sized prints.

Oh, dpreview regularly complains that the Oly menu system is over-complicated. Personally, I found it better organized and richer than the Nikon D5100’s menu. I didn’t need to use the manual, and the on-hover tips are great – though they can get annoying when they obscure other text/menu options below them.

You can see a set of test shots in this album. The subjects are not interesting and it’s not very systematic. I was just playing round with high ISO and exposure compensation.

The live bulb mode is awesome, though, as you can see from the super-blurred and overexposed photo of the Dora the Explorer doll, you need a tripod for this kind of experiment, of course. This brings me to the joys of in-body image stabilization. Stabilization is kind of like magic to me. I was shooting 1/30, even 1/10, handheld and getting crisp photos (again of the Dora doll).

At night, I was discussing the camera with my wife, making the same sort of summary as I have made here. At the end she said, “Yes, just sleep on it, before making a final decision”. I nodded as I picked the strap out of the box and started to thread it into the hooks. The instructions call for a slightly intricate loop for the strap – not for those with thick fingers. My wife watched me do this for a second and remarked dryly, “Well, that looks like kind of a decision”.

I guess it is. I guess it is.

Picking a camera

I needed to replace my DSLR and decided that I would get a mirrorless camera instead of another SLR. I wrote this post as a way of organizing my thoughts and research on my way to buying the replacement.

I would say I’m a practical photographer now. I started out a long, long time ago doing things like shooting water drops falling into buckets, but now I shoot for the memories – to capture and freeze time, as much as that is possible – and my subjects are mostly friends and family doing ordinary things in ordinary places.

I’m a staunch supporter of the maxim that the best camera is the one you have with you. I can stand grainy/noisy photos (in some circumstances, I actually like them), but not blurred or visibly smoothed ones. I hate using flash, and I hate missing the moment (I rarely have people pose). I don’t earn money from the pictures, and I don’t want a camera I am so afraid to lose/break/damage that I don’t take it with me everywhere.

All things considered, my main criteria for a daily use camera now are that it should:

  • be light in the hand (not a burden to bring with me all the time),
  • be expendable (cheap),
  • focus fast, and
  • produce usable low-light shots/video (good for web, maybe 5×7 prints).

(Actually, while we are at it, what I would really like is a still camera and image format that allow you to embed a short (say < 1 min) audio clip into the image. The camera would let you select a photo and then record a memo to go with it. It would be easy to store this audio in an EXIF tag and have extensions to operating systems that would allow you to play back the recording as you preview the photo. That is neither here nor there, but it should serve as prior art, in case some company wants to patent it.)


My first digital camera was a Canon point-and-shoot (A510), which I still have somewhere and which we kept using until the lens cover started to malfunction. It was small and went everywhere with me – I kept it in my pants pocket. The only complaint was the shutter lag. Movies were grainy and tiny – BUT IT TOOK MOVIES! This made me lust after DSLRs – which were rumored to have instant-on and no shutter lag, just like my film SLR – but they were too pricey.

Until I got a refurbished D40 for a very decent price. I used the D40 daily – discovering DSLRs were all that they promised to be – until I found a refurbished D5100, which I bought because of the video and better high ISO. Both the D40 and the D5100 are small for DSLRs, but for the kind of things I wanted to do, I wanted even more portability, hankering back to the A510 which I carried unobtrusively with me all the time.

After my D5100 got lost/stolen, I started a search for a camera that would combine the speed and effectiveness of those DSLRs with the small, compact size of the A510. Surely the 10 years that had elapsed since I got my Canon A510 were enough for those creative engineers to come up with something that answered this description?

I had heard rumors that there was a new category of camera, called mirrorless cameras, that used the same principle as compacts but with upgraded sensors and optics, many of them supporting interchangeable lenses. I hit dpreview (which I read for their detailed descriptions and sometimes personal write-ups of usability) and imaging-resource (which I read mainly for their “Timing and performance” section) to see what was available.

At one point I was down to a mere 7 candidates, many of which were well out of my budget. From there, looking at price, features and usability, I ended up oscillating between the Olympus OM-D E-M10 and the Sony A6000. In reality the Sony was way out of my budget, but it is such a tempting camera. Phase detect ON THE CHIP. Wowza, that puts it in DSLR class! I was also very surprised that the price ($700 with kit lens) remained so high despite the camera having been out for a while, and with a replacement (the A6300) just out.

I considered the Sony A5100 but discarded it because of the lack of an EVF. I thought I would need an EVF. The lack of additional controls, while forgivable on the A6000, was going to be too annoying on the A5100. Basically, the E-M10 seemed like an awesome deal at $425 on Amazon. What made me hesitate was the smaller sensor and the contrast-detect AF.

I was worried that this was going to be a compact-class camera and that I would get flashbacks to my A510 days, when I would have the camera with me but would miss my shot because, between pressing the shutter and the picture being taken, the world had changed and the moment had gone. I also worried that indoor and night shots would come out blurry, or just missed. There were also unflattering things said about the video.

Reading the specs on imaging-resource (the “Timing and performance” section), as well as the narrative on dpreview (section 7, “Experience”) and the sample videos, gave me some confidence that this wouldn’t be so bad. An interesting, very personal opinion with a bunch of low-light shots (Robin Wong’s blog) suggested that the high ISO performance was enough for my taste.

What clinched it was this direct comparison between the A6000 and the E-M10, from a CameraLabs A6000 review, which stated:

So the A6000 is the better camera, right? Only in some respects. In its favour, the Olympus EM10 features built-in stabilization that works with any lens you attach, and while its sensor has 50% fewer Megapixels, the real-life resolving power is similar if you’re using the kit lenses. The A6000 may have far superior continuous AF, but the EM10 is quicker for Single AF and it continues to work in much lower light levels, while also offering better face detection too. The EM10 has a touch-screen which lets you simply tap to reposition the AF area instead of forcing you to press multiple buttons.

That comparison was interesting enough for me to stop vacillating and go forward with the Olympus. (Even though the A6000 was out of my price range, if it had looked like the A6000 was THAT much better a camera, I might have waited for a price drop or a deal, or gone and bought second hand – which I never do for cameras, because of the risk: camera repair is expensive, and I can’t do it myself, not for these electronic ones.)

I guess we’ll see in a month or so whether I made the right decision in stepping away from my Nikon DSLR and into the mirrorless world. Interestingly, I will be able to use my Nikon lenses, albeit only in full manual, with a fairly cheap and clean (no optics) adapter.

In case you wondered, the featured image is a full-size crop of a shot from a water-drop shooting session in 2013. It was taken with my late, lamented D5100 and the 50mm f1.8, on manual focus on this body. The D5100, which I LOVE, along with the 18-55mm kit lens, was lost and then stolen (because no one returned it to the lost and found) at the Ft. Lauderdale International Airport security checkpoint. Its serial number is 3580262. More valuable than the camera, though, is a set of precious family photos of our daughter and her grandpa, stored on the SD card in the camera.

It was this incident that prompted me to look into mirrorless cameras. Not only because I needed a new camera, but because I wanted to be able to throw the camera into my pocket, or at least into a stuffed backpack – we lost the camera primarily because we couldn’t consolidate all our bits and bobs, and lost track of that one bag in the confusion of the security check.


Rip Van C++

After a long hiatus, probably about a decade, I once again began coding in C++. It wasn’t hard to get back into the stride, but I am surprised at how different modern C++ is, and how pleasurable it is to write. In no particular order, here are the things that surprised me.

No more “this->” in class functions. I discovered this by accident when I forgot to add the “this->” qualifier in front of an instance variable and the code still compiled. I’m not sure if I like this – but it sure as hell is convenient.

CATCH. Man! One of the big draws of Python for me is how rapidly I can set up tests and run them. It basically made parallel testing and development (I’m sure the kids have some kind of cool name for it nowadays) possible for me. I remember with agony what it was like to set up a test suite for C++ code. CATCH makes it almost as easy to write tests for C++. And you can break into the debugger. I think this settled the matter for me.

std::random – or rather the <random> header. No more hanging out in dark alleys, setting up rendezvous with shady characters to get your random numbers. It’s right there, in the standard library! And you can seed it and everything!

CMAKE! Felix sent me a small self-contained example, sat with me for a few minutes, and that was enough to get me up and running. I do NOT miss creating makefiles by hand. CMake appears to be very logical.

std::thread – I haven’t actually used this yet, but after looking through several examples, its simplicity rivals that of Python’s multiprocessing pool. We’ve come a long way, baby.

Contrary to what you might think, I’m not that excited by the auto keyword or by the (optional) funky new way you can write loops. I still have to get used to the lambda syntax – I think Python gets it just right – straight and simple.

Update: OK, I really like the:

for(auto x : <some collection>) {
   // process x
}

syntax. It’s pretty slick.

I was also amused by the difference in – I guess you would call it culture – between C++ and Python. I was looking for a logo for this post and, firstly, did not really find an official one, and secondly, landed on the standards page:

[Screenshot: the C++ standards page]

which stands in contrast to the first hit for “Python”, which is the official one-stop shop for all your Python needs:

[Screenshot: python.org]

Interestingly, not six months ago, I took a stab at getting back into C++ for a personal project, and I had an adverse reaction. Nothing has really changed with C++ since then, so I guess this time round I actually stopped to ask for help, and my kind colleagues pointed me in the right direction.


Ex Machina

A movie review with some fun discussion of the Turing test.

Spoiler Alert!

The thing I like the most about Ex Machina is that one of the protagonists not only gives us a proper definition of the Turing test, but he also describes a delicious modification of the Turing test that took me a second to savor fully. The plot also twists quite nicely in the end, though we kind of see it coming.

Movies about machines that replicate human thoughts and emotions pop up periodically. In most of these movies the machines win. Ex Machina is one of the better ones of this genre. Particularly satisfying is that the techno babble in the script touches on some advanced topics in machine intelligence.

To start with, there is a good definition of the Turing test. People make a lot of fuss about the Turing test and take it quite seriously and literally. The Turing test, to me, is basically an admission that, when people…

