Why can’t I print from Google photos?

I love Google Photos as a means of backing up and sharing photos. On the Mac it requires minimal configuration, works without supervision, and makes it easy to share albums and photos. So I’m really puzzled that there is no way to print photos.

Google Photos’ byline is “Free storage and automatic organization for all your memories,” and the software works! It appears to be professionally written – so perhaps a team from outside Google made it originally – I kid, I kid.

The auto-uploader is easy to configure and non-intrusive. I tell it where my photos are and it silently looks for new ones, de-duplicates them, and streams all my personal photos into Google’s servers. Wait. God! I just re-read that last sentence slowly. It’s too late now. … Anyway.

Google’s statistical learning algorithms do some semi-useful things like image categorization, and some cute things like animations with music, which are nice but neither essential nor something I use often. I haven’t looked, but I assume there is a way to bulk download if I ever need to recover the photos.

Update: Google Photos is pretty much just a web-only photo sharing service. The quality of the stored photos is OK for web viewing but does not stand up to closer scrutiny. I would only use this as a “backup” of last resort, a kind of cache in case all other real backups have failed. And I guess that’s why there is no print option – the quality is just too poor to print.

In the example above, the left image is the Google Photos copy at 1:1 and the right is the original photo, also at 1:1. You can clearly see Google Photos’ compression artifacts and poorer underlying resolution. There are also software glitches when viewing photos: the web viewer often gets stuck at a very low resolution, and you have to reload or otherwise ‘jiggle’ the software to get it working again.

So, imagine my surprise and frustration when I went to print my photos and started to feel like Marcel Marceau stuck in that glass box. I tried to find the print button for two days, searching forums and Stack Overflow, convinced that it was just hidden and that if I was diligent enough I would find it, perhaps earning $100 in free prints at the end of it.

Once, I ran into a post that said I just needed to log into the Picasa web service: I’d be able to see the photos I’d uploaded and then select them for print. I went to picasaweb and, indeed, found my albums and the print option. I was overjoyed. I started to collect photos to print. Then I navigated away. A few days later I came back and discovered that the design had changed and I no longer had the “Print” button. I realized I was part of a giant psychological experiment that made the events in Gaslight look like kindness.

It was then that a bigger mystery began to occupy my mind. Why do this? Why fuck with your users like this? Why take a course of action that both leaves money on the table and angers users at the same time? I couldn’t stop thinking about it and this post is a form of therapy. I hope it works. So hear me out.

Now, Google is desperate to make money from their services.

Whenever I do a search I see a string of ads above my search results that are either identical to my search results or considerably less informative.

Google is sacrificing search result accuracy and user convenience for revenue. Google was earning a healthy ad revenue before it started to advertise so luridly, and so it’s not clear to me why they’ve become so desperate.


So, in this context the absence of any way to print photos from Google photos strikes me as particularly odd.

I’m not very experienced in product commercialization, but I imagine that if you create an online photo storage and management service, it’s a net plus to either offer a printing service yourself or, if that takes you too far outside your traditional domain of expertise, have an arrangement with an established photo printing service. Not letting your users print, and being ambiguous about it, is, on the other hand, a net negative.

So, is this lack of functionality malice or stupidity? Let’s take malice first.

When we upload our photos to Google’s servers we are giving them intimate personal data. The images are processed through statistical learning algorithms which can cluster faces and probably recognize backgrounds. We also give Google our personal and professional email. These data streams are a marketer’s dream. It’s the kind of information that allows Google to insert ads for baby clothes once you use the word ‘pregnancy’ in an email. In the future one can imagine Google inserting such ads once you upload photos of your pregnancy to share with family.

Perhaps, though, that fear is overdone, as we can see from the clumsy state of targeted marketing. The brightest minds of our generation, thankfully and contrary to popular perception, have not been occupied with trying to serve ads to us (they have, of course, been occupied with borking our encryption algorithms and back-dooring our router hardware, but that is a matter for a different post), but an army of second-rate minds has certainly been trying to productize our personal information.

So, from this point of view, as far as Google is concerned, we are the product. In exchange for some free storage we are giving Google an even more complete peek into our personal lives, so they can build a better psychological profile of us and judiciously prey on our deepest insecurities to sell us disposable razors. They don’t care that we can’t print, and they want this fact to be hard to discover. What they really want is for us to upload our photos for their analysis.


What about stupidity? Google is a big company with many, many failed products. Most of the products failed not because of buggy software but because of a lack of imagination – a basic misunderstanding of what people want their computers to do for them. Like, say, print a bunch of photos into a photo book to give as a gift. The lack of a print facility is, under this hypothesis, just another example of product management asleep at the helm.

There is of course another option – strategic insight.

Perhaps Google has decided for us that the vast majority of people no longer print photos. Perhaps they have seen into the future and it’s all digital, from the screens on our phones to the screens on our fridges. There will be no more eight-by-ten color glossy pictures of children and of wives and of parents and of Halloween parties hanging on our walls, or inserted into albums (real albums, made of cardboard paper and cellophane) to be shown to relatives on Thanksgiving. Perhaps we’ll be offering a guest a drink and, instead of pulling out an album from our bookcase, we’ll swipe on our refrigerator and say ‘Hey, did I show you our wedding photos?’

Well, that’s the future, and it ain’t here yet. I have relatives, here and now, who want photos of Mom and Dad, and I can’t waste half an hour downloading them and then uploading them to some other service EVERY TIME.


Olympus E-M10: A keeper

This is the second part of my post about my experiences with the OM-D E-M10 camera. Though my initial reaction was negative, I’ve found many things to love about this tiny but powerful camera. Most importantly, it makes me want to take pictures and I’m back to filling up my hard drive with images.

Electronic View Finder: On the whole, pretty cool.

This was my first shock when I got this camera, and possibly the biggest change I had to adapt to. I am used to the optical viewfinders found in Nikon SLRs and DSLRs, and the EVF struck me as a cruel joke. Though part of it was simply adjusting to the new idea of looking at a computer screen rather than the actual scene, there are real issues with the EVF, mostly noticeable in low light: it blurs when you pan, you can sense the refresh rate, and in low enough light it simply stops working.

However, in most shooting conditions, once I got used to it, I stopped thinking about it and just shot naturally. And then the advantages of the EVF over an optical view finder began to dawn on me.

When I got my first SLR (a Nikon F65) I was really excited about the depth-of-field preview button. Not only could I see the framing of the scene exactly, I could now check what the focus slice was! Well, the EVF is depth-of-field preview on steroids. It’s a preview of almost the exact image you will capture!

This realization first struck me while I was taking photos of my daughter indoors at night. I hit the white balance wheel and switched to incandescent, and the viewfinder updated to reflect this! Then I realized that I had noticed, but not really remarked on, the fact that the effects of exposure compensation were similarly visible in real time. This is so much better than making these adjustments, shooting a few frames, and only then finding that the white balance is all wrong and your subject has a ghastly blue color cast.

The OM-D also has a live histogram display (I’ll write more about this later, but it is one of the features that make me think the OM-D is a camera designed by engineers who love their work and a management that keeps out of their way). You can see it through the EVF and use it to guide fine-tuning of exposure.

Saving the best for last: the E-M10 was my first introduction to focus peaking. I had read wistfully about focus peaking as I scoured eBay for a cheap split-prism focusing screen for my D40/D5100, because I sucked at using my Nikkor 50mm f1.8 and I wanted it to be like the SLRs of old. With the EVF you can focus manual lenses just as you would have in the old days, with focus peaking replacing the split prism and ground glass.

Can you tell I’m a convert? You need to take my effusiveness with a grain of salt: this is my first and only experience with EVFs. I’ve read reviews that say this EVF is small, low-resolution, and dim compared to others. Whatever. I like the concept of the EVF and I am satisfied with its implementation on this camera.

Touch screen shooting: focus point selection and focus/recompose made obsolete

When I was comparing cameras, the E-M10’s touch screen did not factor into my decision. I considered it one of those things, like art filters, that are useless gewgaws added to please the masses. The touchscreen, though, is a game changer.

The traditional way to get an off-center target in focus is, of course, focus and recompose. There are people who will tell you that this causes problems because the focal plane of a lens is not flat, and an object in focus at the center of the frame will not be in focus when moved to the edge. Though this is a physical fact, its importance has been artificially inflated by camera manufacturers eager to get people to upgrade their perfectly good cameras by dangling ever more focus points in front of their noses.

Let me tell you a bit about focus points. By the time you have used your dinky little cursor keys to hop your little red rectangle ten focus-point squares across your viewfinder to sit on top of your subject, the moment has passed and the subject has left. The only real solution is to have the camera focus where you look, and that, surprisingly, has been tried – though, even more surprisingly, it has been discontinued.

The next best thing is this newfangled live view + touch screen shooting. You view the image on the touch screen, tap where your subject is, and click! The camera focuses and shoots. We live in the future, my friends.

I removed the Sony A5100 from my shortlist partly because it did not have an EVF. I’m glad I insisted on an EVF, but I’m no longer opposed to just having a screen, as long as it is a touch screen. On the negative side, the LCD is indeed hard to see (washed out) even in moderate light, and I prefer the D5100-type fully articulating screen to this semi-articulating one.


Face detection: A mixed bag

I’d seen face detection in point-and-shoots and, again, did not think too deeply about its advantages. Invariably, I got to see face detection when someone handed me their high-end compact for a group photo; I would look at the display, see a few faces outlined, and think, “Great, I already know where the faces are, thanks.” The D5100 also had face detection in live view mode. I never really used live view on the D5100, because of its poor contrast-based focusing system, so again I did not really see the use for it.

On the E-M10 (they really need snappier nomenclature), face detection – when it works – is awesome and invaluable. Many scenes involve a person facing the camera against a busy background. The face is often – for a nice composition – NOT in the center of the frame. Face detection works marvelously, letting me take the shot without thinking.

The problem is that this is making me lazy. I’m losing the instinct to focus/recompose and the deftness to nudge focus points (and this camera has so many), and when the detector fails – e.g., when the subject is looking a little away, or there are two faces – it gets very frustrating. And, as a person who takes a lot of pictures of cats, I have to point out that there is no face detection for cats, which is a solved problem …

Twin control wheels: a double win

Another major reason for picking the E-M10 was the twin control wheels, and they do not disappoint. My initial thought was that they would be great for shutter + aperture adjustments in M mode, but in A and S modes one of the dials gives exposure compensation. With the Fn button they give rapid access to ISO and WB adjustment. This makes the camera very efficient to operate. On the D5100 I was forever fiddling with the menu to get these four parameters right.

The placement of the two dials looks awkward visually – the body is so small that they had to stack the dials on different levels to maintain a good dial size – but I’m happy to report that the engineers made the correct decision. The index finger works nicely on the front dial and the thumb on the rear. The camera strap does interfere a little, and I’ve taken to putting my right index finger over the strap anchor point, rather than below it. The rear dial is also deceptively far away from the front one. I would be shooting, then reach for the rear dial and invariably not reach far enough with my thumb.

Super control panel: you can change settings by touch, once you find out how

The super control panel is very aptly named. I thought the Nikons had a nice summary panel showing all the important camera settings, but Olympus has them beat. A lot of thought has gone into the panel layout: the controls are grouped so that in some modes a cluster of panels merges into a block, because they are controlled by one parameter. The only usability issue was that it took me a while to figure out that you have to press “OK” to activate the touch mode, where you select parameters to change by touching the appropriate panel. Yet another win for the touch screen. Only gripe: sometimes a stray finger will activate the EVF eye detector and blank out the touch screen as I’m selecting.

Startup delay: not really an issue


This was another aspect of the whole EVF/mirrorless world that I wasn’t sure I would be comfortable with. I’m completely used to leaving my D5100 on all the time; I only switch it off to take out the card or change batteries. So when I see something I’d like to shoot, I grab the camera, pop off the lens cap, raise it to my eye (and not always in that order ..) and squeeze the trigger. Photo taken!

With the mirrorless, I wasn’t quite sure how this would work until I actually got the camera. Some posts I read online reassured me that the slight lag could be handled by half-pressing the shutter – or pressing any button, actually – while raising the camera, to wake it from sleep mode. This way the EVF is on and the camera is ready to shoot when you have it in position. And this truly works out well. It feels a little awkward to someone used to an optical finder, but it works well enough, because the camera has a sleep mode and does not need to be switched completely off.

Shutter sound


Pressing the shutter is followed almost instantaneously by a very crisp shutter sound (once I had turned off the annoying beep that accompanied focusing) and a slight vibration of the camera. It’s a very satisfying auditory and tactile response to the shutter press. I think this is because there is only the soft kerchunk of the shutter and not the slightly bouncy thunk of a mirror. This is something that, because it is purely incidental and psychological, should not count – but it does.

Battery life: the downside of needing a screen to shoot

At the end of the day, I had taken around 200 photos when the camera declared the battery done and stopped shooting. This is a very big difference from the D5100, where I could go for days shooting like this, even reviewing photos and videos, before the battery gave out. I will be needing a spare battery. Perhaps two, to be safe.

Shutter count: an amusing aside. Like many new camera owners, I asked, “I know this is new, but how many miles does it actually have on it?” Checking the EXIF info for the photos I took, I found to my surprise that the EXIF did not contain the shutter count, as it does for Nikons. It turns out the shutter count is quite hidden, really meant for camera technicians as part of a larger suite of diagnostics. You have to enter a sequence of arcane keypresses to get to the relevant menu.

A great little camera

I could go on and on: about how light it is, so that I don’t feel the weight on my neck even after a whole day of toting it around; how configurable it is; how the menu structure is actually quite logical; how high ISO, up to 6400, is eminently usable for my purposes; how the kit lens is neat and tidy and does its job; how in-body image stabilization is such a step up for me; and how, in many such ways, it feels like a camera designed by happy engineers who love their job. In short, it is a neat, well-designed, tiny camera that does its job very well.

Oh, and here is a picture of a cat. I must now go and order some extra batteries.


Olympus E-M10: First impressions

I will not lie: my first reaction after unboxing the E-M10 and shooting a few frames was to return it. However, after poking around the menus and trying things out for a while, I think I will keep it – maybe.

(Update: I will keep it)

I guess I had oversold the smallness of this camera in my mind, because when I got it I went, “Huh, it’s not THAT small.” But actually, it is. It’s larger than the A510, and with the kit lens it won’t go in a regular pants pocket, but it could probably fit in a jacket or cargo-pants pocket. With a pancake lens it would fit in a slacks pocket.

But what did blow me away was the size of the lens. It really looks like a scale model of a lens. I held it in my hands for a while and marveled at it. You could fit two of these inside the standard Nikkor 18-55 DX kit lens – and it’s not even the “pancake” lens.

I liked the build immediately. The body is metal and feels like it, making the camera satisfyingly dense. The dials click nicely and all the buttons are well placed. I was a little disappointed by the battery door and by the bulkiness (and ghastly color) of the charger.

I’m OK with the battery and card sharing the same door – especially since it looks like the battery needs to be changed often – but the door is a little clumsy. It has a little latch that you need to push shut to lock, and it’s a little difficult to do this while maintaining pressure, since the door is spring loaded. I have gotten used to Nikon’s slim, all-black chargers; the peculiar gray of the Olympus charger and its ungainly thickness stand in stark contrast to the elegant design of the camera body.

I charged the battery, keeping my impatience at bay by reading the manual. I loaded the camera, switched it on, raised the viewfinder to my eye, and had my first disappointment.

I’ve never had a “professional grade” camera. I went from a Nikon F65 to a D40 to a D5100. I think only the F65 had an actual pentaprism; the others have pentamirrors, which I believe are dimmer. I would read posts by people complaining how small and dim these optical viewfinders were compared to their professional-grade cameras, but I never really felt the difference. The optical viewfinder of the SLR was, to me, an indispensable tool. You could see what the film was going to capture! Amazing! 95% coverage? Dim? Whatever! The EVF, at least this EVF, is no optical viewfinder.

I was playing with it indoors, and the impression I got was of peering at the world through an ancient CCTV system. The colors seemed off, and there was blurring and lag when I panned the camera. “I can’t shoot with this! It sucks!”

(Update: I quickly got used to the resolution of the viewfinder. The lag is imperceptible outdoors, even at dusk, and there is a setting to raise the refresh rate of the EVF, though I suspect it chews up more battery. See the next post.)

I squeezed off the shutter at a few subjects in the fading light. My biggest worry about this camera was shutter lag, which really means the delay between pressing the shutter, acquiring focus, and taking the picture. Depending on the lens and light conditions even SLRs can take a while, but the dedicated phase-detect focus system of Nikon cameras lets the lens spin toward focus in a deterministic and fast manner. The E-M10 has a contrast-detect system. This is the same kind of system the D5100 uses in live view mode – and Nikon’s sucks.

All the reviews, measurements, and posts one finds online about the speed of the E-M10’s autofocus are not mistaken. It truly is an effective AF system, despite not being one of the fancy new hybrid AF systems that incorporate phase detect on the sensor. The pictures were a letdown, however. I’ve mentioned elsewhere that I can stand grain but not blur, and these pictures were BLURRY! The culprit was the over-aggressive noise smoothing in the factory settings – something reviews have remarked on.

I went into the menu and switched it off. MUCH BETTER! Especially if you overexpose a little. I would say that images at ISO 6400 with no smoothing are eminently usable for web/computer viewing, perhaps even for regular-sized prints.

Oh, and dpreview regularly complains that the Olympus menu system is over-complicated. Personally, I found it better organized and richer than the Nikon D5100’s. I didn’t need the manual, and the on-hover tips are great – though they can get annoying when they obscure other text/menu options below them.

You can see a set of test shots in this album. The subjects are not interesting and it’s not very systematic; I was just playing around with high ISO and exposure compensation.

The live bulb mode is awesome, though, as you can see from the super-blurred and over-exposed photo of a Dora the Explorer doll, you need a tripod for this kind of experiment. This brings me to the joys of in-body image stabilization, which is kind of like magic to me. I was shooting at 1/30, even 1/10, hand-held, and getting crisp photos (again of the Dora doll).

At night I was discussing the camera with my wife, making the same sort of summary as I have made here. At the end she said, “Yes, just sleep on it before making a final decision.” I nodded as I picked the strap out of the box and started to thread it into the hooks. The instructions call for a slightly intricate loop, not for those with thick fingers. My wife watched me do this for a second and remarked dryly, “Well, that looks like kind of a decision.”

I guess it is. I guess it is.

Picking a camera

I needed to replace my dSLR and decided that I would get a mirrorless camera instead of another SLR. I wrote this post as a way of organizing my thoughts and research on my way to buying the replacement.

I would say I’m a practical photographer now. I started out a long, long time ago doing things like shooting water drops falling into buckets, but now I shoot for the memories – to capture and freeze time, as much as that is possible – and my subjects are mostly friends and family doing ordinary things in ordinary places.

I’m a staunch supporter of the maxim that the best camera is the one you have with you. I can stand grainy/noisy photos (in some circumstances I actually like them), but not blurred or visibly smoothed ones. I hate using flash, and I hate missing the moment (I rarely have people pose). I don’t earn money from the pictures, and I don’t want a camera I am so afraid to lose/break/damage that I don’t take it with me everywhere.

All things considered, my main criteria for a daily use camera now are that it should:

  • be light in the hand (not a burden to bring with me all the time),
  • be expendable (cheap),
  • focus fast, and
  • produce usable low-light shots/video (good for web, maybe 5×7 prints).

(Actually, while we are at it, what I would really like is a still camera and image format that allows you to embed a short (say < 1 min) audio clip into the image. The camera would let you select a photo and then record a memo to go with it. It would be easy to store this audio in an EXIF tag and have operating system extensions that play back the recording as you preview the photo. This is neither here nor there, but it should serve as prior art in case some company wants to patent it.)
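As a proof that the idea above is technically trivial, here is a minimal sketch. Rather than a real EXIF tag, it tucks the memo into a custom JPEG application segment (the APP15 marker and the “MEMO” label are my own inventions for illustration; a production version would use a proper EXIF or XMP field and an EXIF library):

```python
import struct

def embed_memo(jpeg_bytes, audio_bytes):
    """Tuck a short audio memo into a JPEG as a custom APP15 segment,
    inserted right after the SOI marker. Illustration only."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    payload = b"MEMO\x00" + audio_bytes
    assert len(payload) + 2 <= 0xFFFF, "memo too long for one segment"
    segment = b"\xff\xef" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

def extract_memo(jpeg_bytes):
    """Walk the marker segments after SOI and pull out our memo, if any.
    Not a general JPEG parser; it stops at the first non-segment byte."""
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xEF and jpeg_bytes[i + 4:i + 9] == b"MEMO\x00":
            return jpeg_bytes[i + 9:i + 2 + length]
        i += 2 + length
    return None
```

Decoders are required to skip application segments they don’t understand, so a JPEG carrying such a memo still opens everywhere; only a memo-aware viewer would play it back.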


My first digital camera was a Canon point-and-shoot (the A510), which I still have somewhere and which we kept using until the lens cover started to malfunction. It was small and went everywhere with me – I kept it in my pants pocket. The only complaint was the shutter lag. Movies were grainy and tiny – BUT IT TOOK MOVIES! This made me lust after DSLRs – rumored to have instant-on and no shutter lag, just like my film SLR – but they were too pricey.

Until I got a refurbished D40 for a very decent price. I used the D40 daily – discovering DSLRs were all they promised to be – until I found a refurbished D5100, which I bought for the video and better high ISO. Both the D40 and the D5100 are small for DSLRs, but for the kind of things I wanted to do, I wanted even more portability, harking back to the A510 which I carried unobtrusively with me all the time.

After my D5100 was lost/stolen, I started searching for a camera that would combine the speed and effectiveness of those DSLRs with the compact size of the A510. Surely the ten years that had elapsed since I got my Canon A510 were enough for those creative engineers to come up with something that answered this description?

I had heard rumors of a new category of camera, called mirrorless cameras, that used the same principle as compacts but with upgraded sensors and optics, many supporting interchangeable lenses. I hit dpreview (which I read for their detailed descriptions and sometimes personal write-ups of usability) and imaging-resource (which I read mainly for their “Timing and performance” section) to see what was available.

At one point I was down to a mere seven candidates, many well out of my budget. From there, looking at price, features, and usability, I ended up oscillating between the Olympus OM-D E-M10 and the Sony A6000. In reality the Sony was way out of my budget, but it is such a tempting camera. Phase detect ON THE CHIP. Wowza, that puts it in DSLR class! I was also very surprised that the price ($700 with kit lens) stayed so high despite the camera having come out three years ago, with its replacement (the A6300) just out.

I considered the Sony A5100 but discarded it because it lacks an EVF – I think I would need one. The lack of additional controls, while forgivable on the A6000, would be too annoying on the A5100. Basically, the E-M10 seemed like an awesome deal at $425 on Amazon. What made me hesitate were the smaller sensor and the contrast-detect AF.

I was worried that this was going to be a compact-class camera and I would get flashbacks to my A510 days, when I would have the camera with me but miss my shot because, between pressing the shutter and the picture being taken, the world had changed and the moment had gone. I also worried that indoor and night shots would come out blurry, or just missed. There were also unflattering things said about the video.

Reading the specs on imaging-resource (“Timing and performance” section), the narrative on dpreview (section 7, “Experience”), and the sample videos gave me some confidence that this wouldn’t be so bad. An interesting, very personal opinion with a bunch of low-light shots (Robin Wong’s blog) suggested that the high-ISO performance was good enough for my taste.

What clinched it was this direct comparison from CameraLabs between the A6000 and the E-M10, in an A6000 review, which stated:

So the A6000 is the better camera, right? Only in some respects. In its favour, the Olympus EM10 features built-in stabilization that works with any lens you attach, and while its sensor has 50% fewer Megapixels, the real-life resolving power is similar if you’re using the kit lenses. The A6000 may have far superior continuous AF, but the EM10 is quicker for Single AF and it continues to work in much lower light levels, while also offering better face detection too. The EM10 has a touch-screen which lets you simply tap to reposition the AF area instead of forcing you to press multiple buttons.

That comparison was interesting enough for me to stop vacillating and go with the Olympus. (Even though the A6000 was out of my price range, if it had looked THAT much better I might have waited for a price drop or a deal, or bought second-hand – which I never do for cameras, because of the risk: camera repair is expensive, and I can’t do it myself, not with these electronic ones.)

I guess we’ll see in a month or so whether I made the right decision in stepping away from my Nikon DSLR and into the mirrorless world. Interestingly, I will be able to use my Nikon lenses, albeit only in full manual, with a fairly cheap and optics-free adapter.

In case you wondered, the featured image is a full-size crop from a water-drop shooting session in 2013. It was taken with my late, lamented D5100 and the 50mm f1.8, manually focused on this body. The D5100, which I LOVE, along with the 18-55mm kit lens, was lost and then stolen (because no one returned it to the lost and found) at the Ft. Lauderdale International Airport security checkpoint. Its serial number is 3580262. More valuable than the camera, though, is the set of precious family photos of our daughter and her grandpa stored on the SD card in the camera.

It was this incident that prompted me to look into mirrorless cameras – not only because I needed a new camera, but because I wanted to be able to throw the camera into my pocket, or at least into a stuffed backpack. We lost the camera primarily because we couldn’t consolidate all our bits and bobs, and lost track of that one bag in the confusion of the security check.


Depth-of-field

Depth-of-field (DoF) is one of the most fun things about photography, enjoyable on both the technical and artistic levels. Depth-of-field is the extent (“depth”) of a scene that is in focus (“field”) in a photograph. Artistically, it is usually used to isolate a subject from the surroundings, and it can be used to indicate depth. The subjective quality of the out-of-focus elements (the blur) is called bokeh, a marketing term invented by the Japanese to sell insanely expensive lenses with larger apertures (I’m kidding, I’m kidding).

There are many, many good technical articles on DoF and many, many religious wars over bokeh. What I want to do here is focus on the DoF values you get at different focal lengths and apertures, and try to relate them to everyday portraits.

A good place to get started on the geometry of DoF is Wikipedia’s article, which also has a nice derivation of the DoF formulae I will be using.

Intuitively, DoF exists because a lens brings only an infinitely thin slice of the three-dimensional scene into focus on the two-dimensional sensor. The parts of the scene in front of and behind this slice become progressively more defocused. Depending on the size at which you view the image, you will notice this blur sooner (large image) or later (small image). Aperture plays an important role, with larger apertures giving a shallower DoF.

Like most things in photography, DoF is the one-dimensional shadow of a multi-dimensional creature, depending on focal length (f), aperture f-number (a), subject distance (s) and sensor size. This creature is well expressed by the equation:

D = \frac{sf^2}{f^2 \pm ac(s-f)}

Here c is the circle of confusion: the amount of blur you will tolerate before you say “this is out of focus”. Typically c is given a value based on the resolving power of the medium: film with larger grain has a larger c, and smaller sensors have a smaller c. I’m doing the calculations with c = 0.02mm, the accepted value for APS-C sized sensors (entry-level DSLRs).
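To make the formula concrete, here is a quick numeric check in Python. The lens, aperture and subject distance are sample values of my own choosing: a 50mm lens at f/1.8 focused at 1m, with the APS-C circle of confusion above.

```python
# Near and far limits of focus, from D = s*f^2 / (f^2 +/- a*c*(s - f))
# All lengths in metres; a is the f-number, c the circle of confusion.
f = 0.050    # 50mm lens
a = 1.8      # shooting wide open
s = 1.0      # subject 1 m away
c = 0.00002  # 0.02mm, the APS-C value used above

d_near = s * f**2 / (f**2 + a * c * (s - f))
d_far = s * f**2 / (f**2 - a * c * (s - f))
print(d_near, d_far, d_far - d_near)
```

The total depth comes out to under 3cm, which is a good hint of why wide-open close focusing is so unforgiving.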

As I mentioned, here I’m interested in figuring out the DoF at common focal lengths for portraits. The general advice is that for a proper portrait you need fast glass, so you can open the aperture up wide and isolate your subject by blurring the background. I wanted some concrete values for what range of apertures to use at different focal lengths to create good portraits.

For reasons that will become clear shortly, I need to know how far back I should stand to get the framing I want with each lens. As you can guess, the longer the focal length of the lens, the further back I need to stand. I can work out this distance by computing the magnification factor m for the portrait: the ratio of the size of the image on the sensor to the actual size of the subject.

I’m working this out for APS-C sensors, which are 24×16 mm. A person’s head is about 26cm high × 15cm deep (nose to ears) × 18cm wide (I know, it really looks like a clueless nerd trying to do ‘art’, but bear with me). I’m going to split portraits into two classes I like:

Close-up: The face fills the frame, every part is in focus (especially the eyes) and any background is blurred as much as possible (generally oriented tall). Here m = \frac{24}{260} \simeq 0.1.

Environmental: The head and upper body take up about 50% or less of the frame area, and the rest comes from the subject’s surroundings, which are blurred but retain enough structure that you can tell what they are, giving the subject a context (generally oriented wide). Here m \simeq \frac{10}{260} \simeq 0.04.
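The two magnifications are just ratios of image size on the sensor to subject size; a trivial sketch with the numbers above (the 10mm image height for the environmental framing is my own rough figure):

```python
# Magnification m = image size on sensor / actual subject size (both in mm)
sensor_h = 24.0  # APS-C sensor height
head_h = 260.0   # height of a head, roughly

m_closeup = sensor_h / head_h    # face fills the frame vertically
m_environmental = 10.0 / head_h  # head occupies ~10mm of the sensor
print(m_closeup, m_environmental)  # ~0.09 and ~0.04, rounded to 0.1 and 0.04 above
```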

I can compute the subject distance (how far I need to stand) from the magnification relation m = \frac{f}{s-f}, which gives s = \frac{f(1+m)}{m}; substituting this into the DoF equation ultimately results in

\displaystyle D = \frac{f^3 (\frac{1+m}{m})}{f^2 \pm ac\frac{f}{m}}
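As a sanity check on the algebra, substituting s = f(1+m)/m into the original formula should agree with the combined expression. A quick numeric check, with sample values of my own (50mm lens, m = 0.1, f/2.8):

```python
# Verify the substitution s = f*(1+m)/m numerically (all lengths in metres)
m, f, a, c = 0.1, 0.050, 2.8, 0.00002

s = f * (1.0 + m) / m  # subject distance: 0.55 m for a 50mm lens at m = 0.1
d_orig = s * f**2 / (f**2 + a * c * (s - f))            # original form, near limit
d_comb = (f**3 * (1 + m) / m) / (f**2 + a * c * f / m)  # combined form, near limit
print(s, d_orig - d_comb)  # the two forms agree
```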

Now we can plot the DoF for a range of focal lengths at different apertures for an APS-C sensor:

[Figure: close-up DoF plot]

[Figure: environmental DoF plot]

In the plots, the shaded gray area represents the nose-to-ear depth, and the black lines represent the DoF at the given aperture and subject distance for the indicated focal lengths (given in mm above the shaded area, e.g. 18, 35, 50, …).

Here are two interesting things I take away from the plots.

The first is that the DoF remains essentially constant whatever focal length you use, as long as you keep the size of the subject in the frame constant. Without working this out I would have expected longer focal lengths to have a shallower DoF. That is true at the same subject distance, but since we keep the image size constant – standing further back with longer lenses – we get this different result.
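This surprised me enough to check it directly. A small sketch using the combined formula, with m = 0.1, f/5.6 and a few focal lengths of my own choosing, shows the total depth barely moves:

```python
# Total DoF at fixed magnification m for several focal lengths (metres)
m, a, c = 0.1, 5.6, 0.00002

depths = []
for f in [0.035, 0.050, 0.085, 0.200]:
    dn = (f**3 * (1 + m) / m) / (f**2 + a * c * f / m)  # near limit
    df = (f**3 * (1 + m) / m) / (f**2 - a * c * f / m)  # far limit
    depths.append(df - dn)
    print(int(f * 1000), 'mm:', round((df - dn) * 100, 2), 'cm')
# all four depths land within a fraction of a millimetre of each other
```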

The second is that for close-ups you need to stop down to f/22 to get a DoF that covers nose to ears, and even for environmental portraits you need to stop down to at least f/5.6 to get both the nose and the ears in focus.

For the longest time I was trying portraits with lenses like the 35mm/1.8 and 50mm/1.8 wide open, and I was failing miserably (especially with the 50mm, which I focus manually). Then I started getting braver and stopping down more – f/4.0, even f/5.6 – and I got good results. These calculations show me why.

I would say you don’t necessarily need fast glass to get subject isolation. In some cases it looks cool to have one eye in focus and the other eye, nose and ears not, but I prefer to have the whole face in focus, with the environment thrown out of focus. For this, it seems, I can use pretty much any lens, since most lenses will give me at least f/5.6. In terms of aesthetics, of course, 35mm and longer is preferred for portraits, to avoid distorting the face unpleasantly.

I have studiously stayed away from the ‘artistic’ aspects of bokeh. If you do want to get into this aspect of the field in depth, all you really need to know is that the worst insult you can throw in a bokeh debate is to tell the other fellow his bokeh looks like donuts.

Code follows:

"""Depth-of-field calculations and plots.

References:
https://en.wikipedia.org/wiki/Depth_of_field#Derivation_of_the_DOF_formulas
http://www.dofmaster.com/dofjs.html
"""
import pylab

#m - subject magnification
#f - focal length
#a - aperture number
#c - circle of confusion

# Plot the depth of field by focal length and aperture

#D = \frac{f^3 (\frac{1+m}{m})}{f^2 \pm ac\frac{f}{m}}
Dn = lambda m,f,a,c: ((f**3)*(1.+m)/m)/(f**2 + a*c*f/m) # near limit of focus
Df = lambda m,f,a,c: ((f**3)*(1.+m)/m)/(f**2 - a*c*f/m) # far limit of focus

H = 0.15 #Nose to ears

F = [.018, .035, .050, .085, .105, .135, .2, .3] #Our choice of focal lengths, need to be expressed in m.
A = [22, 16, 8.0, 5.6, 3.4, 1.8, 1.4] #Our choice of f-stops

c = 0.00002 #0.02mm is circle of confusion http://www.dofmaster.com/dofjs.html for D5100 (APS-C)

dn = pylab.empty((len(F), len(A)))
df = pylab.empty((len(F), len(A)))
m = 0.1
#m = 0.04

for i,f in enumerate(F):
  for j,a in enumerate(A):
    dn[i,j] = Dn(m,f,a,c)
    df[i,j] = Df(m,f,a,c)

pylab.figure(figsize=(10,4))
pylab.subplots_adjust(bottom=.15)
for i,f in enumerate(F):
  s = (f/m)+f
  pylab.fill_between([s-H/2.0, s+H/2.0], [len(A)-.9, len(A)-.9], y2=-.1, color='gray', edgecolor='gray')
  pylab.text(s-.1,len(A)-.5,'{:02d}'.format(int(f*1000)))
  for j,a in enumerate(A):
    pylab.plot([dn[i,j], df[i,j]], [j, j], 'k',lw=2)
pylab.xlabel('Distance (m)')
pylab.ylabel('Aperture f-stop')
pylab.setp(pylab.gca(), 'ylim', [-.1,len(A)], 'yticks', range(len(A)), 'yticklabels', A)
pylab.suptitle('Close-up')
#pylab.suptitle('Environmental')
pylab.show()