The other great analog-digital debate

Some of you are old enough to remember the great analog-v-digital debate: Vinyl or CD? This post is about the OTHER great (but slightly less well known) analog-v-digital debate: do we simulate neurons on digital computers or on custom-designed analog VLSI chips?

When I was at the University of Maryland, I got hooked on Neuromorphic engineering by Timothy Horiuchi. The central tenet of Neuromorphic engineering is that transistors operating in the subthreshold (analog) zone are great mimics of the computations done by neurons, and that the way to intelligent machines is through building networks of such neuro-mimetic neurons on analog Very Large Scale Integration (aVLSI) chips. A press release about some work being done at the INI in Zurich reminded me of this field.

The writeup also reminded me of the great debate between digital and analog implementations of neural circuits. Proponents of Neuromorphic VLSI base their work on the idea that transistors working in the subthreshold zone give, for “free”, a nice non-linearity between input and output that is at the heart of neural circuits. When applying for funds from DARPA, they also remind the grant reviewers that aVLSI circuits have very low power consumption compared to digital VLSI circuits.
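The “free” non-linearity here is the exponential current–voltage relation of a MOS transistor in weak inversion, which resembles the exponential voltage dependence of ion-channel conductances in real neurons. Here is a rough sketch of that relation (the device constants below are illustrative, not taken from any particular chip):

```python
import math

def subthreshold_current(v_gs, i_0=1e-12, n=1.5, v_t=0.026):
    """Drain current of a MOS transistor in the subthreshold regime:
    exponential in the gate-source voltage v_gs.
    i_0 (leakage scale) and n (slope factor) are illustrative constants;
    v_t is the thermal voltage (~26 mV at room temperature)."""
    return i_0 * math.exp(v_gs / (n * v_t))

# Exponential sensitivity at tiny currents: raising the gate voltage
# by 60 mV multiplies the current severalfold.
ratio = subthreshold_current(0.26) / subthreshold_current(0.20)
```

This exponential behavior is what lets a handful of transistors, sipping nanoamps, approximate the sigmoidal and exponential transfer functions that digital simulations must compute explicitly.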

A well-designed and debugged aVLSI Neuromorphic chip is a great feat of engineering, often taking several fabrication rounds to weed out all the design problems, which makes iterating over designs very time-consuming and unwieldy.

The proponents of old-school digital computation, where neural behavior is encoded in an algorithm (implementing differential equation models of neurons), point to the ease of implementation (you can use your favorite programming language), the ease of debugging (you just recompile while you have a drink), and the ease of modifying and elaborating the design (comment your code!!).
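As a toy illustration of what “implementing differential equation models of neurons” looks like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron stepped with Euler’s method (all parameter values are illustrative defaults of my own, not from the post):

```python
# Leaky integrate-and-fire (LIF) neuron:
#   dV/dt = (-(V - V_rest) + R * I) / tau
# with a spike and reset whenever V crosses threshold.

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Return (voltage_trace, spike_times) for a list of input currents,
    one entry per time step of size dt (ms). Parameter values are
    illustrative."""
    v = v_rest
    trace, spikes = [], []
    for step, i_inj in enumerate(current):
        dv = (-(v - v_rest) + r_m * i_inj) / tau
        v += dv * dt                # forward-Euler update
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset             # reset membrane potential
        trace.append(v)
    return trace, spikes

# A constant drive strong enough to cross threshold makes it fire
# repeatedly; zero drive leaves it sitting at rest.
trace, spikes = simulate_lif([2.0] * 1000)
```

This is the whole appeal of the digital camp: changing the neuron model means editing a few lines and re-running, not re-fabricating a chip.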

There are some specific issues with aVLSI too. In giant digital neural networks, hooking up neurons is usually done using a connection matrix. This matrix simply tells the simulating program which neurons each neuron gets inputs from and which neurons it projects to. In aVLSI, you need to physically wire up neurons in the chip layout. This means you can no longer modify the network organization to test out ideas on the fly – you need to design a new chip layout, send it for fabrication, wait, debug, and so on. (And the moment you start changing connections you have to start moving the whole design around, because the exact routing of the wires affects the behavior of the chip: everything is so close together and the voltages are so low that the capacitance between wires matters. As I said, it is a true feat of engineering.)
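To make the connection-matrix idea concrete, here is a minimal sketch (the three-neuron wiring and the binary threshold neurons are my own illustrative choices). Rewiring the network amounts to editing a matrix, with no re-fabrication involved:

```python
import numpy as np

# Connection matrix W: W[i, j] is the weight of the synapse from
# neuron j onto neuron i; zero means "not connected".
# Rewiring the network = editing this matrix, nothing more.
W = np.array([
    [0.0, 0.8, 0.0],   # neuron 0 gets input from neuron 1
    [0.0, 0.0, 0.5],   # neuron 1 gets input from neuron 2
    [0.9, 0.0, 0.0],   # neuron 2 gets input from neuron 0
])

def step(state, threshold=0.4):
    """One synchronous update of simple binary threshold neurons:
    each neuron fires if its summed weighted input exceeds threshold."""
    return (W @ state > threshold).astype(float)

state = np.array([1.0, 0.0, 0.0])  # only neuron 0 active initially
state = step(state)                 # activity flows along the wiring
```

The contrast with aVLSI is exactly the one in the paragraph above: here the “wiring” is data, so trying a new network organization is a one-line change rather than a new mask set.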

People have come up with non-analog solutions to this ‘routing’ problem by creating hardware versions of the connection matrix: separate circuits, often on a separate chip, that are dedicated to hooking up neurons to other neurons, somewhat like a telephone switchboard. These solutions lose the low-power advantage of aVLSI and increase the complexity of the circuits.

You know that I’m going to give my two cents. I think, not being very qualified to comment on either analog or digital implementations of neural circuits, that aVLSI might have some niche applications in tiny devices tailored to a specific task where small size and low power consumption are important. However, for the vast majority of machine intelligence applications, I think simple simulations of neural circuits, performed by ever more powerful and power-efficient digital circuits, will prevail.
