The marvelous bee odometer

You probably know that not only can bees compute the vector (direction and distance) to a discovered food source relative to their hive, but they can also convey this vector to their hive mates. Here, I’ll talk a little bit about one component of this system – the bee odometer: how do bees figure out the distance to the food?

Bees are amazing. In general, insects are amazing. If you are into robotics of any kind, you should definitely be studying the insect literature. Bees can not only perform vector summation to compute the bee-line from their hive to a source of food they have discovered, but they can convey this vector to their hive mates via a form of sign-language!

Bees use a dance to instruct hive mates as to the direction and distance of a food source. They make circular and figure-eight movements on the vertical surface of the hive. They waggle during the dance, and the duration of the waggling conveys the distance; the orientation of the axis of the dance conveys the direction.
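To make the encoding concrete, here is a minimal sketch of what decoding a dance might look like. The calibration constant and the function are my own invention for illustration – nothing here is measured from real bees.

```python
# Hedged sketch: decode a waggle dance into a food-source vector.
# The calibration constant is hypothetical; real scaling varies between
# bee populations and terrain.

METERS_PER_SECOND_OF_WAGGLE = 1000.0  # invented calibration

def decode_waggle_dance(waggle_duration_s, dance_axis_deg, sun_azimuth_deg):
    """waggle_duration_s: length of the waggle phase (seconds)
    dance_axis_deg:    dance axis relative to vertical on the comb (degrees)
    sun_azimuth_deg:   compass azimuth of the sun at dance time (degrees)"""
    distance_m = waggle_duration_s * METERS_PER_SECOND_OF_WAGGLE
    # On the vertical comb, 'straight up' stands in for the sun's direction,
    # so the food bearing is the sun's azimuth plus the dance-axis angle.
    bearing_deg = (sun_azimuth_deg + dance_axis_deg) % 360
    return distance_m, bearing_deg

print(decode_waggle_dance(0.5, 30.0, 180.0))  # -> (500.0, 210.0)
```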

How they compute the direction is just as marvelous as how they compute the distance, but I will just talk about distance here. There are two competing hypotheses as to how they keep track of how far they have flown. One is that they track how much energy they have consumed while flying to the food, which correlates with the distance. This hypothesis has a nice ecological feel to it because energy (food) is the main reason they are flying out in the first place, and we can imagine that there are decisions to be made based on how much energy it takes to get to a food source.

The competing hypothesis is that bees use optic flow – the percept of visual motion from their environment – to determine how far they’ve flown. This hypothesis is way cool.

As in most such scientific debates, a small cottage industry grew around this question, sustaining several scientific families for most of their adult lives. The basic experimental method was to have bees fly from their hive to a food source and then watch their dances when they got back to the hive. The dances tell us what the bee THOUGHT the distance to the food was, and of course, we can measure the actual distance ourselves and compare. Then various tricks are played on the bees to try and figure out what cues they are using to determine flight distance.

The review by Esch and Burns [1] makes interesting reading in the context of the sociology and politics of science. One experiment supporting the energy hypothesis was done by a fella named Heran. He had bees forage either uphill or downhill from a hive to a feeder placed at a fixed distance. The energy hypothesis predicts that bees flying uphill will signal a longer distance than bees flying downhill (even though the actual distance was the same). Apparently, out of seven runs of the experiment with this setup, Heran found five showing that the bees didn’t care whether they were going uphill or downhill – they just signaled a fixed distance. In just two runs did Heran find an answer supporting the energy hypothesis. He explained away the other five results by blaming winds and reported that bees use energy (!)

A scientist called von Frisch was a pioneer in the study of the natural behavior of bees (among other insects). According to Esch and Burns [1], in one study von Frisch actually had evidence for the optic flow hypothesis. von Frisch observed that bees flying over water to get to a feeder would signal shorter distances than bees flying over land to get to a feeder at the same distance. The smooth reflective surface of the water offers far less by way of optic flow (image motion) than, say, a grassy knoll dotted with trees and cows and buttercups. When a bee flies over the water, it’s basically this dark blue featureless sheet, which tricks it into thinking it’s not really flying far.

In fact, bees are known to dive low over water – sometimes too low, ending up drowning – because they can’t sense any motion from the surface. It turns out from later experiments that this sense of motion from the ground is essential to them, and they use it to adjust how far they fly from objects.

But here comes the kicker. He had another set of similar experiments done on a windy day, when the wind happened to blow against the bees flying over the lake and with the bees flying over land. This time the bees flying over the lake signaled a longer distance. Was this because they had to work harder, or because there was a ton of optic flow from the ripples on the lake that made it look like they were flying a long way? It’s a confound.

The proper action would have been to repeat the experiment on a windless day – like the previous experiment – to avoid this confounding variable, but von Frisch averaged the two results together and concluded that bees don’t care whether they are flying over land or water – they always correctly estimate the distance.

Obviously von Frisch liked the energy hypothesis. And von Frisch was a big cheese. So the energy hypothesis came into favor. But as always the truth will out, and people kept investigating and finding things that didn’t quite jibe with the energy hypothesis.

Finally, there was a series of elegant experiments that strongly supported the optic flow hypothesis. Scientists placed relatively short tubes at the mouths of feeders. The tubes had different kinds of textures on them, some designed to elicit a lot of optic flow (i.e. very dense textures) while others were almost featureless (generating very little optic flow). When bees had to get to the feeder by passing through highly textured tubes they reported much longer distances than when they flew through sparsely textured tubes [2, 3]. The researchers also did a bunch of fun quantitative analyses to estimate how the bee converts the optic flow it is probably sensing into the distance it thinks it flew.
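The gist of those analyses, as I understand it, is that the odometer integrates image motion over the flight: total image rotation, scaled by the distance to the surface being watched, gives distance flown. Here is a minimal sketch of that idea, with all numbers invented:

```python
import math

# Sketch of an optic-flow odometer: integrate image angular velocity over
# time, then convert total image rotation into distance flown. The conversion
# assumes a fixed lateral distance to the textured surface.

def odometer_distance(angular_velocities_deg_s, dt_s, surface_distance_m):
    total_rotation_deg = sum(w * dt_s for w in angular_velocities_deg_s)
    # arc length = angle (in radians) * radius
    return math.radians(total_rotation_deg) * surface_distance_m

# A bee seeing a steady 100 deg/s of image motion for 60 s, 2 m from the ground:
print(odometer_distance([100.0] * 600, 0.1, 2.0))  # ~209 m
```

This also shows why the tube experiments work: texture inside a narrow tube is far closer to the eye than ordinary scenery, so each meter of flight generates much more image rotation than the bee’s usual calibration expects, and the odometer over-reports.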

Now, as in most biological systems, it is unlikely the honey bee is using just this one cue – it’s probably a combination of cues that gives the honey bee its final percept of distance. But the experiments suggest that vision – and optic flow specifically – is the most heavily used cue bees have for determining how far they’ve flown, as well as for adjusting things like how high they fly above a surface and how much distance they keep from obstacles.

References

  1. Distance estimation by foraging honeybees. Esch H, Burns J. Journal of Experimental Biology. 1996;199:155-162.
  2. Honeybee dances communicate distances measured by optic flow. Esch HE, Zhang S, Srinivasan MV, Tautz J. Nature. 2001 May 31;411(6837):581-3.
  3. Honeybee navigation: nature and calibration of the “odometer”. Srinivasan MV, Zhang S, Altwein M, Tautz J. Science. 2000 Feb 4;287(5454):851-3.

Millions of tiny hypotheses

I think I recall the exact moment when I began to get a little scared of math. It was an algebra class and we were being taught tricks (tools) that could be used to solve different classes of problems. Except, I don’t recall the teacher ever explicitly saying this was a toolbox. I think what he did was go through a list of equations, performing some trick on each – like adding x to both sides, multiplying by y on both sides and so on – and then proceeding with algebraic simplification. This demoralized me, and started to make me less enthusiastic about actually doing mathematics, though I remained fascinated by it.

I also, I think, recall why I got demoralized. I watched him solve an equation with one of these tricks and sat there staring glumly at the board. I fully understood how to apply the technique; what I couldn’t figure out was how I would know to apply that technique to that problem as opposed to some other technique. What didn’t help was that I had classmates who seemed to breeze through it, knowing intuitively which equation required which tool.

Unfortunately I did not have enough self-realization at that time to go and ask for help over this, or to talk it over with my mother (who was a mathematician). I just decided I didn’t have the innate talent for mathematics. Fortunately this did not cripple me and I had no hesitation diving into topics like physics and then electrical engineering (which I probably chose because it was applied physics) which had enough interesting math, with cool applications.

Many years later a stray comment by someone on some message board somewhere stuck in my head. They said that the way they did mathematics was to form hypothesis after hypothesis, testing each one. If the hypothesis results in a falsehood, discard it and start again. Another comment, on a different message board by a different person, said that it had been explicitly stated to them that there were no magical solutions to mathematical problems, and that it was a myth that there were people with innate mathematical skills and people without: you solved mathematical problems through gumption – you kept banging your head against them, trying different things, figuring it out as you went.

These two simple comments made a big impression on me. They suddenly freed me from the expectation that mathematical talent was innate. They raised the possibility that stubbornness – a trait my mother was fond of pointing out in me – was all you needed to solve mathematical problems. I was not worried about the top 99th percentile of mathematicians, who most probably had something special going on in their brains. I just wanted to enjoy math, learn more and more of it, and see if I could use it in daily life. These comments were immensely liberating. I had looked at the journey between the problem and the solution as mostly an embarrassment, as in: the longer it took, the stupider you were. These comments turned the journey into part of the process, something to take joy in.

I just wish my math teachers had said things like this. I know that when I teach my daughter mathematics this is what I’m going to emphasize. It’s about creating millions of small hypotheses – magical worlds with crazy ideas – and then applying mathematical rules to figure out if the ideas contradicted each other, or if they fit. Just like the shape puzzles she loves to solve. Each shape fits in a particular slot. Sometimes, you can see right away that the shape will fit in a slot. Sometimes, you make a guess, if it works, good. If it doesn’t, ALSO GOOD. YOU LEARNED SOMETHING! Now try a different slot!
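Incidentally, that guess-and-check loop is exactly what a programmer would call backtracking search. A toy sketch in that spirit – the shapes, slots and fit test are all made up:

```python
# Toy backtracking solver for a shape puzzle: guess a slot for a shape; if the
# partial solution can't be completed, undo the guess (you learned something!)
# and try a different slot.
def solve(shapes, slots, fits, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(shapes):
        return assignment  # every shape found a slot
    shape = next(s for s in shapes if s not in assignment)
    for slot in slots:
        if slot not in assignment.values() and fits(shape, slot):
            assignment[shape] = slot          # hypothesis: this shape goes here
            result = solve(shapes, slots, fits, assignment)
            if result:
                return result                 # the hypothesis survived
            del assignment[shape]             # contradiction: retract and retry
    return None

fits = lambda shape, slot: shape == slot      # a trivially strict fit test
print(solve(["star", "square"], ["square", "star"], fits))
```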

The year humans lost a pair of chromosomes

Scientists, supposedly, live and die by facts and data. This was a doctrine I received at a very young age, and I was extremely surprised, when I first got into this business, to realize that scientists are just people and extremely fallible. The “scientific process” is very slow, often taking decades if not centuries to correct bad knowledge. I used to think that wrong interpretations were a result of limitations in technology – if we only had THAT device, we could have corrected that earlier. The reality is more dastardly: many mistakes in science persist because the creators of wrong knowledge refuse to acknowledge their mistakes and often intimidate others (with correct information) from positions of power. This phenomenon led a once well known practitioner of the trade to remark, “Science advances one funeral at a time.”

I still get surprised when I learn about examples where a researcher had correct data and arrived at the correct conclusion, but discarded it (or was forced to discard it) because it went against dogma. I had read about the Millikan experiment in Feynman’s “Surely You’re Joking” book (it was part of a commencement address he gave). This is perhaps the most well known example of biases in science holding things back, but in a way this particular error was tiny (at least in my world view; physicists are used to arguing about things to many more decimal places).

What prompted this post was me recalling this phenomenon while reading “Essential Genetics” by Hartl and Jones. In the chapter on Gene Linkage and Genetic Mapping there is an excerpt (and editorial) from a 1956 paper by Tjio and Levan that states that up until then it was widely believed that humans had a chromosome number of 48, just like chimpanzees and gorillas. That’s not the bad part – we often have incomplete or wrong information.

Here’s the galling part: a previous researcher (Hansen-Melander) had led an experiment where the researchers repeatedly found 46 chromosomes in human samples. Instead of reporting this, they decided the experiment was borked and stopped it, because they could not find the “correct” number of human chromosomes.

A note on “The Worst Programming Interview Question”

Rod Hilton has a blog post where he argues against using puzzler-type questions during an interview. I agree with him in spirit, but feel that puzzler questions have a place in interviews, provided they are used correctly.

Hilton’s piece is very well written and I recommend it, especially if you are in a position where you interview candidates. Hilton is strongly against puzzler questions because he feels they add additional stress to an already stressful situation and only detect whether a candidate has heard the question before and memorized the answer.

I generally agree with Hilton’s sentiments, but if you form a union of the questions/strategies that different people consider unsuitable for judging job candidates, you will end up with nothing left to ask or look at. This suggests that interviewing is an art: a complex human activity that can be learned through observation and practice, but is probably hard to convey in writing.

When I was interviewing for jobs I experienced a variety of interview styles. I am lucky to report that I enjoyed every interview. I interviewed at three places, and only in one did I get a puzzler kind of question.

In one firm, in the first round I got a very easy ETL problem (ETL – extract, transform, load – is industry speak for scratching usable, neat data out of a raw data file), which I solved in Python and guessed they were using to kick out folks who could not program at all. In the next round I gave a talk and faced a panel of interviewers, most of whom asked me straight technical questions related to statistics (though one of the interviewers clearly had no interest in interviewing me and went through my CV in front of me, asking random questions from it).
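I won’t reproduce the actual problem, but for flavor, here is a toy sketch of the genre – pull the usable rows out of a messy raw file and write them out neat. The file names and the three-field format are invented:

```python
import csv

def etl(in_path="raw_readings.txt", out_path="clean_readings.csv"):
    """Toy ETL: keep only rows that parse as (name, numeric value, unit)."""
    clean = []
    with open(in_path) as f:
        for line in f:
            parts = [p.strip() for p in line.split(",")]
            if len(parts) != 3:
                continue              # malformed row: wrong number of fields
            name, value, unit = parts
            try:
                value = float(value)  # toss rows with non-numeric readings
            except ValueError:
                continue
            clean.append((name, value, unit))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(("name", "value", "unit"))
        writer.writerows(clean)
```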

At the next firm the first round was simply getting to know me and them (which I guessed they were using to kick out people who could not communicate or were socially unsuitable). The next round I found immensely enjoyable: they gave me some of their data and just let me loose on it with a very open ended question. I had a lot of fun playing with it and ended up making an IPython notebook of my results. They were very thoughtful in giving guidelines: they said they were looking at an applicant’s ability to condition data (detect and toss out bad data) and an applicant’s ability to present analysis.

This was the interview I enjoyed the most and I suspect the interview that took me and the company folks the most time to do. When I was done with my notebook I was very satisfied and happy and eager to go work for these folks. I was invited over for a site interview, but by that time I had already received and accepted a job offer. This brings me to the puzzler question.

The job offer that I finally accepted was from the company where I was presented with a puzzler question. It was a statistical puzzler, rather than a programming/algorithm one. It caught me off guard. I worked through it the best I could, but I could tell the answer wasn’t quite there. My interviewer would listen to me think, wait for my partial answer and then give me a hint. I used the hints to move forward, but I felt I was failing.

At the end, I was pretty sure I would not hear from the company again, so I was surprised when I was asked in for a second interview. This was a broader discussion of work and interests and I felt that went really well, and indeed, I received an offer shortly.

A few months into the job, I spoke about the puzzler question with the colleague who had interviewed me. His reasoning for the puzzler, which I found satisfactory, was that it was a way to test how people thought when faced with a tough problem – and whether they thought at all – since the work we do is often difficult and often does not have clear-cut answers. I also realized, on my first day on the job, that my colleagues had been reading one of my blogs (this one) and that formed part of their opinion about me.

Hilton’s points are well taken. I feel, however, that there is great value in seeing whether people are willing to think. When I was preparing for my Ph.D. defense one of my advisors told me that some questions during a defense are thrown at the candidate simply to see how they react. The panel assumes – the question being so bizarrely out of scope of the thesis – that the candidate doesn’t know the answer, and they want to see how the candidate thinks through novel situations.

When the job you are hiring for requires people to take initiative and solve problems on their own, given a minimum of guidance, it is very effective to be able to see how a candidate reacts to such situations: do they seem eager to tackle open ended situations, or are they unhappy when directions are vague and outcomes not clear? It is possible that a Puzzler question will be able to unearth some of these qualities.

If I were to use puzzler questions during an interview, I would open by being honest: I would say that this is an intentionally difficult question, it is not meant to test knowledge but rather approach. I would then pick a case from my own work that I had to think through, so I would be familiar with the topic AND it would be a relevant kind of puzzler question.

What, of course, you cannot judge from this kind of puzzler question is staying power: the person was willing to spend 10 minutes thinking about a problem, but will this person come back to the problem day after day, week after week, until they crack it?

There are also many other factors at work here: perhaps the person does not like to expose their vulnerabilities to strangers – when faced with the same question with familiar colleagues the same person that choked at the interview could be spectacular to brainstorm with.

Looking back, the interview I liked the most – the one where I got to play with some data analysis – was actually a puzzler question. It was open ended, it did not have a set answer, and it was designed to see how I thought. But it was not a cookie-cutter puzzler question that you would get out of an interviewing book – it was a real-life test that was relevant to the position they were interviewing for.

Taking this further, I think the only way firms should interview candidates is to have paid internships or trial periods where they get to see a person in their “natural” state, working through the kind of problems they would actually face, and interacting as they normally would.

The problem of course, like many other things we don’t do well, is that we don’t have that kind of time …

The engineer’s disease

Knobs are fun. I turn on the gizmo, twiddle the knobs and see what happens. What is not fun, is if I turn on the gizmo, and it just sits there and does NOTHING until I twiddle the knobs in JUST the right way. Even worse is a gizmo where I twiddle the knobs and the first thing that happens is that damn thing catches fire. There are too many gizmos out there to play with to deal with this stuff.

The disease

But, you are thinking, that’s what makes gizmos FUN! It’s fun when things catch fire. It’s FUN when it takes hours to set the knobs in just the right way before the thing works. It’s FUN to show off how you have this giant panel and how EXPERT you are in flipping all the switches and setting all the knobs so the thing works. It’s FUN to watch all those lights go on and off.

I think that way too. And it’s a disease. The engineer’s disease. When I was very young, I didn’t realize it’s a disease. I thought, because it’s fun for me, it’s gotta be fun for everyone else, right? It’s fun to fiddle with equipment, take apart appliances, assemble flat-pack furniture and read thick manuals. But then I came to realize that it’s not fun for everyone.

It’s a bit like a stamp collection, or your wedding photos. It’s a very personal pleasure to spend hours poring over a stamp collection, a pleasure which may not be shared by visitors. You only show them the stamps or the photos if they ASK for them. Otherwise, you’re just being selfish and self-centered and your guests won’t have fun at your place.

It’s kind of the same with building things. When I start building something I often start out simple. I start out with some wood, or some wire, or an empty file on a computer. After some time passes things have gotten complicated. That’s FUN. As I go on I often end up with something that is so elaborate, that has so many knobs and switches even I don’t know how it works, or what some knobs do. I have to read my own manual.

Sometimes I find that I’ve forgotten to write a manual. Sometimes, after a while, I lose interest, and I put it in a drawer, or in the basement and start working on something new. That’s OK, because the pleasure is in the building and getting lost in the complexity, and I’m just building it for me. It’s different when you build things that you want other people to use, however.

Good design: minimize controls, give feedback, don’t explode

(Image: an old-school radio with two knobs)

Old school radios are a good example of good design. There are two large knobs on the box. One knob is for volume, the other is for the station. In the beginning you have to twist the volume knob a little hard to get it to the ON position; from then on, it’s twist to make it louder or softer.

The other knob is to tune the station. It’s pretty quick to learn the controls, and you get an effect pretty quickly. You click the volume switch and you hear this hiss. You twiddle the station knob and THINGS HAPPEN.

Best of all, there is hardly any way you can twist the knobs and have the radio explode in your hands.

This illustrates to me two principles one should follow when building something for others: minimize controls and always give feedback. This reduces the barrier for users trying out and experimenting. And, of course, don’t explode. That’s rude.

Good design: do something right away, keep options, but behind a panel

Another principle, which is a little harder for me to express, is ‘immediate effect’: getting the device to do something useful should not take a lot of fiddling. Ideally just turning on the gizmo and doing the simplest thing should produce a useful result. At most, a user should have to twiddle one knob to get things going.

It’s often fairly easy to set up some defaults so that the machine works right away. However, it is important to make sure the user has choices, and knows they have choices. It is important to have a panel, clearly marked, that an experienced user can flip open to expose the GIANT array of knobs in all their glory and fiddle away to their heart’s content to fine tune what they want.

What is difficult is to make sure that the user doesn’t make the device explode with a bad combination of settings. Sometimes it is too hard to predict what will happen and all we can do is disclaim warranty, but often we can set interlocks that prevent users from blowing themselves up, and that is good engineering – often the very best.
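In code, I imagine these principles looking something like the sketch below: defaults so the simplest call does something useful, a clearly marked panel for the full set of knobs, and an interlock that refuses a known-bad combination. The gizmo and all of its settings are invented for illustration.

```python
# Sketch of 'good defaults, options behind a panel, interlocks'.
# Everything here is hypothetical.

class AdvancedPanel:
    """The clearly marked panel hiding the giant array of knobs."""
    def __init__(self, gain=1.0, filter_hz=50.0, boost=False):
        if boost and gain > 8.0:
            # Interlock: this combination is the one that 'catches fire'.
            raise ValueError("boost with gain > 8.0 is not allowed")
        self.gain, self.filter_hz, self.boost = gain, filter_hz, boost

class Gizmo:
    def __init__(self, panel=None):
        # Sensible defaults: turning it on with no arguments just works.
        self.panel = panel or AdvancedPanel()

    def process(self, signal):
        out = [s * self.panel.gain for s in signal]
        print(f"processed {len(out)} samples")  # immediate feedback
        return out

Gizmo().process([0.1, 0.2, 0.3])                           # the one-knob experience
Gizmo(AdvancedPanel(gain=4.0, boost=True)).process([0.1])  # the expert panel
```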

Perfection is achieved … with time and experience

So, the engineer’s disease is not in making complex things. Not only is that fun, it is often very necessary. The engineer’s disease consists of making things more complex than needed and then throwing the complexity in the faces of others.

Building complexity for complexity’s sake is only a disease when it is done in public. In private you can do as much obscene engineering as you want. But in public, one should respect the time and patience of others. This is where experience and exposure become important. When engineering a system, we have to learn when to say no to yet another knob, without making the device too simple and too specialized.

The other important side to the engineer’s disease is exposing too much functionality too soon. It’s fun (in a malicious, immature way, I should add) to intimidate and impress friends and relatives by showing them this GIANT wall of knobs that runs your magic machine with all the blinking lights. But it’s no way to treat colleagues and strangers.

Here, it is important to really talk to people who are not so interested in how the system is built, but rather in using it on a daily basis. Insights from such users will teach you how best to get out of their way when they use your gizmo to do their work.

There are many commonly used expressions that capture these thoughts. I have two favorites.

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. – Antoine de Saint-Exupéry

But, what is stopping us then?

Not that the story need be long, but it will take a long while to make it short. – H. D. Thoreau

(This second one is pretty popular and has many versions. I find Thoreau’s version to be the wittiest and pithiest)

Down the rabbit hole

I was putting some finishing touches to pre-processing some data in preparation for some analysis I was raring to do. The plan was to create some pretty pictures, get some insight, get this off my desk by noon and go into the weekend with no backlog and a clear conscience. But here I am, this Friday evening, rewriting the code to work around a bug in someone else’s software. I was angry for a while, but then I was terrified.

To be honest, it’s not a sudden realization, and I suspect that all of you have had this realization. It’s just that it has to happen to you personally, to bring that extra visceral element in.

Everyone who has used Windows (and now, sadly Mac OS X to an increasing and annoying degree) knows that all software has bugs. I used to think this had primarily to do with the fact that these are graphical operating systems that have to deal with asynchronous and random input.

These are difficult problems to find test cases for and the bulk of the software is a series of checks and balances to see if the user is allowed to do what they just did given what they have been doing in the past. I wrote GUIs once and I swore I would never do it again. No matter how many checks you put in, someone somewhere is going to come in and press the right combination of keys at just the right pace to lock your software and crash it, eventually.

Widely used, well tested computing packages, on the other hand, I pretty much trusted. Yes, it is true, that there are tricky algorithms such as integration and differentiation and optimization that are hard to get very right and not all implementations are equal. Some trade accuracy for speed and so on, but I don’t expect anything to collapse catastrophically if thousands of people have been using it for years.

And yet here I was sitting at my keyboard, fuming, because a module I was using presented a strange, extremely unexpected bug. To top it off, the library doesn’t do any fancy computations, doesn’t do any graphics or any user interface stuff. All it does is take tabular data and save it to disk.

The selling point of the software is that it allows you to build a file on disk, many gigabytes in size, much, much larger than your available memory, and still process data from it seamlessly. I’ve used it for a while and it works for me.

Another great thing about the library was that it had an easy way to indicate missing data. It uses something called a ‘NaN’, which expands to Not-a-Number: a fairly common value we put in our data to say “hey, don’t take this value into account when you do some computation, like summing or multiplying this table of numbers – it’s not really there.”
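For anyone who hasn’t met the convention, here is what it looks like in practice – a small numpy illustration; the post’s library stays unnamed on purpose:

```python
import numpy as np

heights = np.array([150.0, np.nan, 162.0])  # the second measurement is missing
print(np.sum(heights))     # nan -- a plain sum is poisoned by the missing value
print(np.nansum(heights))  # 312.0 -- the NaN-aware sum skips it
print(np.nan == np.nan)    # False -- NaN is not even equal to itself
```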

So, I had this large table full of numbers with a few missing data points, which I had filled with NaNs. Say the rows of the table are individual people (my actual case is completely different, but this will do) and the columns are attributes such as height, weight, eye color, zip code and so on.

I was interested in asking questions like “Give me data for all the people who are taller than 5′, have blue eyes and live in zip code 02115”. I took a little chunk of my data, loaded it into memory, asked my questions and got back a list of names. Everything checked out.
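In a pandas-like library (my guess at the ecosystem; the post never names the software), the in-memory version of that question looks roughly like this:

```python
import numpy as np
import pandas as pd

people = pd.DataFrame({
    "name":     ["Ann", "Bob", "Cal"],
    "height":   [62.0, np.nan, 74.0],   # inches; Bob's height is missing (NaN)
    "eyes":     ["blue", "brown", "blue"],
    "zip_code": ["02115", "02115", "10001"],
})

# 'Give me everyone taller than 5 feet with blue eyes in zip code 02115.'
tall_blue = people.query("height > 60 and eyes == 'blue' and zip_code == '02115'")
print(tall_blue["name"].tolist())  # ['Ann']
```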

So I saved the data to disk and then did the same exact thing, except I used the software’s special ability to pull chunks of data straight from disk. Sometimes I would get sensible answers but sometimes I would get nothing. The software would tell me nobody matched the set of conditions I was asking for, but I knew for a fact that there were people in the database matching the description I had given.

The even odder thing was that if I loaded the file back from disk in its entirety and then asked the questions I got all correct answers.

My first reaction was that I was screwing up somewhere and I had badly formatted my data and when I was saving the data to disk I was causing some strange corruption. I started to take away more and more of my code in an effort to isolate the loose valve, so to speak.

But as the hours went by, I started to decide that, however unlikely, the problem was actually in the other guy’s code. So I went the other way: I started to build a new example using as little code as I possibly could, to try to replicate the problem. Finally I found it. The problem was very, very odd and very insidious.

In the table of data, if you had a missing value (a NaN) in the first or second row of any column, the software would behave just fine when the data were all in memory. When, however, you asked the software to process the same data from disk, it would return nothing – but only when you asked about that column. If you asked about other columns the software would give the correct answer.
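If the library was pandas with its HDF5-backed tables (again, my guess), the minimal reproduction would have looked roughly like the sketch below. On an affected version, the two answers would disagree whenever the queried column started with a NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "height": [np.nan, 62.0, 74.0],   # NaN in the FIRST row of the column
    "eyes":   ["brown", "blue", "blue"],
})
df.to_hdf("people.h5", key="people", format="table", data_columns=True)

in_memory = df[df["height"] > 60]                          # correct answer
from_disk = pd.read_hdf("people.h5", "people", where="height > 60")

# On an affected version, from_disk came back empty while in_memory did not --
# silently, with no error raised: the insidious failure described above.
print(len(in_memory), len(from_disk))
```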

This took up most of my Friday. I wrote in to the person who made the software suggesting it was a bug. They got back to me saying, yep it was a bug, but they knew about it and it was actually a bug in this OTHER software from this other fella that they were using inside their own code.

By this time, I was less angry and more curious. I went over to these other folks and sniffed around a bit and read the thread where this bug was discussed. I couldn’t understand all the details but it seems, that in order to make things work fast, they used a clever algorithm to search for matching data on the data table when it was on disk. This clever algorithm, for all its speed and brilliance, would stumble and fall if the first value in the column it was searching in was not-a-number. Just like that.

Importantly, instead of raising an error, it would fail silently and lie, saying it did not find anything matching the question it was given. Just like that.

This exercise brought home a realization to me. You can make your code as watertight as possible, but you almost always rely on other people’s code somewhere. You don’t want to reinvent the wheel repeatedly. But you also inherit the other fellow’s bugs. And the bugs they inherited from yet another fellow, and so on and so forth. And you can’t test for ALL the bugs.

And then I thought: this is the stuff that launches the missiles. This is the stuff that runs the X-ray machine. This is the stuff that controls more and more of the vehicles on the road.

The most common kind of code failure we are used to is the catastrophic kind, when the computer crashes and burns: when our browser quits, when we get the blue screen and the spinning beach ball. This is when you KNOW you’ve run into a bug. But this is not the worst kind. The worst kind are the silent ones. The ones where the X-ray machine delivers double the dose of X-rays and ten years later you get cancer.

And even though I knew all this cognitively, it took me a while to calm down.

Optogenetics vs Electrics

Since very early on, we have known that electrical currents can affect the nervous system. The most well known and successful use of electrical stimulation in the nervous system is the cochlear implant. A tiny and clever signal processor converts sounds recorded in a microphone into electrical pulses that drive neurons in the patient’s auditory nerve, passing enough information for the patient to interpret speech. A second well known use of electrical stimulation in the nervous system is deep brain stimulation, where electrodes are placed in the basal ganglia and change the activity of certain motor circuits that have been damaged by Parkinson’s disease.

In the 1960s many experiments were performed that demonstrated how electrically activating neurons can elicit movements, sensations and even trigger memory retrieval and strange emotional states. A routine procedure in neurosurgery is electrical mapping, where a sedated patient is electrically stimulated to map out the functions of a part of the brain that is going to be operated on, to ensure that important cognitive and motor functions are not damaged.

Optogenetics is the emerging field of using lasers, instead of electric currents, to activate or deactivate specially treated neurons. In basic science it is used to perturb the activity of neurons to observe effects on the neural circuit and/or behavior. In translational science the goal is to supersede electrical microstimulation as a means of activating neurons to deliver information or fix broken neural circuits. Here, instead of pulsing electricity to activate neurons in a particular way, we pulse laser light to switch the neurons on and off to deliver our message.

The neurons are treated by infecting them with a virus carrying genetic material that forces the infected neuron to produce special channels. The channels are activated by laser light of particular colors. When laser light is directed at the neurons bearing these channels, the current from the opened/closed channels changes the firing of the neuron, either activating it or silencing it. The viruses are designed not to destroy the neuron or hijack its machinery for replication, but simply to encourage the neuron to produce the special light-activated channels.

In order to use this method in a living subject one must first deliver the virus to the appropriate location in the nervous system. One direct way is to simply take a syringe with a tiny bit of virus and inject it into a brain region. Another way, which is currently not possible in humans or other primates but is possible in mice, is to develop a genetically altered organism that has special ‘markers’ on the neurons of interest; when the virus is injected into the brain, it infects only those specially marked neurons.

The next step is to deliver laser light to the infected neurons. This requires either a fiber optic cable that reaches the target or small lasers (probably solid state lasers – small microchip devices) that can be implanted at the target. This still requires a harness that delivers power and signal to the laser from a remotely located signal processor.

This section is simply a crass opinion from a person who has done some research in electrical micro-stimulation and has only seen optogenetics second hand.

As a basic research tool optogenetics is the bee’s knees right now. I think, though, that the usefulness of the tool is rather oversold. In research preparations where direct access to the neurons is practical, such as slice or culture preparations, people have used intracellular and patch recordings to directly manipulate neural activity using electricity and chemicals, uncovering the hidden mechanisms of neural machinery. In preparations where behaving subjects are needed (to see what the role of a particular brain region is in shaping behavior), clever experiments have already uncovered how the activation or silencing of neurons affects processing. Such experiments have been done using electrical stimulation to activate neurons, cooling to deactivate neurons, and pharmacology (drugs) to do both, often targeting specific neuron types. Optogenetics merely replicates such findings (which isn’t a bad thing) or refines them by incremental amounts.

As a translational tool – a tool that will be used regularly in surgeries and therapies in humans – I think the probability of optogenetics replacing electrical microstimulation is very, very low. First, the viruses need to be cleared for human use. We need extensive trials, eventually in humans, indicating that the viruses will not harm the patient. Second, the infrastructure for optogenetics is cumbersome. Not only do we need to inject the virus into the brain, we then need to lower fiber optics, or the tether for a solid state laser, into the target area. This is about the same disruption we inflict when we lower electrodes for electrical stimulation in the brain.

The big sell for optogenetics is that we can activate or deactivate neurons based on the channel type. However, in terms of translational tools, the final effect of activation/deactivation is the result of complex interactions within the local circuit. It has not yet been shown that adding the ability to deactivate a group of neurons is going to offer us any added benefit over our existing crude, excitation-only electrical stimulation techniques.

Another selling point for optogenetics is that we can activate neurons of only a given type. Again, if we had detailed knowledge of local brain circuitry (and indeed, if we were convinced that local circuitry is not simply randomly wired up), we might be able to extract some extra mileage out of stimulating just one type of neuron in a local circuit. Again, this is yet to be shown to have a therapeutic benefit.

I believe that in terms of brain interfaces and therapies optogenetics will have minimal impact, both in improving our knowledge of brain function and as a way to deliver information into the brain. Electrical microstimulation is unsexy because it has been used for a long time. This does not make it an ineffective tool, as many seem to believe.

“Impact”

Well, someone has finally come out and said what we are all thinking. A professor, Marc Kirschner, has an editorial in the tabloid ‘Science’ criticizing the NIH’s emphasis on pre-judging the “impact” and translational “significance” of proposed studies.

The editorial is frank, with some of my favorite quotes being:

Thus, under the guise of an objective assessment of impact, such requirements [of judging impact of proposed and current research] invite exaggerated claims of the importance of the predictable outcomes—which are unlikely to be the most important ones. This is both misleading and dangerous.

In today’s competitive job and grant market, these demands create a strong inducement for sloppy science.

And they should reemphasize humility, banishing the words “impact” and “significance” and seeing them for what they really are: ways of asserting bias without being forced to defend it.

I’m not sure, however, that speaking our minds, even when done by senior scientists, is going to change anything. The problem is not really one of attitude, as Kirschner suggests. I think it is simply one of funds – as in, not enough of them compared to the number of scientists vying for them.

Whenever demand exceeds supply, the supplier can do any arbitrary thing they want. In many jobs, like a coder position at Google or a tenure-track research position, where there are many more applicants than positions, we would think quality would be selected for – that the cream would rise to the top.

What really happens, however, is that since imperfect humans are judging other imperfect humans in qualities they hope are predictive of future success, we pick arbitrary criteria. We think we are being clever and competitive, when we are actually asking people to jump through hoops just for the heck of it. It would be better to just have a lottery.

I think that until we have more grants or fewer scientists we will continue to apply stupid criteria, simply because we need to filter. When one thing is as good (or bad) as the other, it’s just chance. Like Kirschner, I agree we should not use “impact” and “significance” to judge the suitability of grant proposals.

I have a more radical suggestion: use a lottery.

My former adviser (John Maunsell) has said to me that he believes that once you go below the 20 percent line in grant scoring it’s all noise. The bad ones have been taken out, and now all the good ones are in the mix and no one can tell which ones will pan out and be significant.

We should be honest, filter the badly designed experiments out, and then do a lottery on the proposals that contain technically sound science.

(On the topic of grant scoring, have you noticed how NIH scores have tended towards the bimodal? It’s like reviewers only use two numbers, 1 and 5. Almost as if they know that with the tight paylines it’s a lottery, and so in order for their favorite grants to win they really need to push the distributions apart.)

The pitfall of big science

Neuroscience was in the news recently because the President of the United States is giving his name to a somewhat diffuse initiative called BRAIN. The way this is being sold is that this is a ‘challenge’ initiative, like Kennedy’s Moon shot or the genome project. A challenge initiative that will enable us to understand the functioning of the human brain in order to cure brain disorders.

I think this is the beginning of the end of what I liked about American science. What I liked about American science was that funding was spread across many scientists doing many diverse things. And I think that is the only way to fund basic science.

By its nature, each individual bit of basic science is a high-risk endeavor. Experiments, when properly thought out to maximize new knowledge, are most likely to fail, leading us into many blind alleys. The only criterion, in my mind, for a good experiment is one which tells us clearly when we’ve gone into one of those blind alleys.

The fact is, that no one can predict the blind alleys and the breakthroughs. Often scientific discoveries are not only accidental, but they are unrecognized at the time of discovery. Only later, after other discoveries are made or the socio-economic landscape has changed do these discoveries suddenly appear significant and revolutionary.

Funding science is like picking stocks. You don’t know which one is going to work out. In the stock market, you can do some things that are, at the very least, immoral, and perhaps illegal, to game the system and ensure a flow of money from the less connected, small investor to the better connected, bigger investor. In science, you cannot game the system. No one has special insight into where the next big breakthrough is coming from. You can’t time this market.

I used to think, therefore, that the NIH and NSF, by giving out a lot of grants to people doing many different things, was playing the game right. Diversify and fund as many different ideas as possible. A small fraction will bear fruit, but we will simultaneously maintain a decent mass of well trained scientists and reap the benefits of the occasional breakthrough.

But two factors seem to have broken the back of that system. The financial crunch has bled out funding from the NIH for over a decade and simultaneously the funding agencies have gone into a strange turtle mode: they only want to fund science that is ‘guaranteed’ to get ‘results’. There are no guarantees in science. Often we do not know what a ‘result’ is.

As part of this effort to only fund great science, the funding agencies are looking to fund only the ‘super star’ scientists. These are scientists who, through some objective criterion, are ‘better’ than the rest. These seemingly objective criteria (publication impact, publication volume) increasingly boil down to political influence: how many friends and supporters a scientist has on journal review staffs, on grant panels, in the funding agencies and now, in the very machinery of government itself.

President Obama’s announcement of the BRAIN initiative, to me, is like a marker in the road to the increasing politicization of scientific funding. Usually when people think about politics in science they think about political appointees stopping scientific progress to benefit religious constituents or industrial powerhouses that pull their strings.

I think of politicization as the judging of scientific endeavor not from the strict lens of scientific correctness, but from the more subjective lens of ‘sexiness’ or ‘impact’. It is my firm opinion that this is a grave mistake. As scientists we can only judge scientific correctness. We may think we can judge impact, but the truth is that these are merely very biased opinions. We may sometimes be correct, but in most situations we do not have the foresight to see what combination of future socio-economic factors and other discoveries will make which contemporary scientific finding useful or useless.

With the BRAIN initiative we are now formalizing a centrally controlled system of doing science, where a small number of politically skillful people will control an ever decreasing purse of money to fund an ever smaller and less diverse-thinking set of scientists. This is a disaster.

I would rather propose that the number of grants should be kept constant (or increasing) and the size of each grant should go down. PIs should also be restricted in how many graduate students and post docs they can hire. Scientists are creative people, they will find ways to stretch that research budget. Perhaps PIs could take a little less by way of salaries, labs using common equipment could find ways to pool resources. But the science will go on. And will be diverse and we will let the future sort out what was a good result and what was a false positive or a mere curiosity of no practical value.