Molecular combing

Molecular combing is a fun technique used to study long stretches of DNA, and it is quite different from the next-generation sequencing-by-synthesis I’ve been exposed to. In this technique, DNA is mechanically stretched out on a slide and locations of interest are labeled with fluorescent compounds. One can then image the slide and inspect the regions that light up.

DNA in solution coils up randomly. In molecular combing, a charged probe is dipped into the solution and attracts the ends of the DNA molecules. The probe is then retracted slowly from the solution, moving against a glass slide. The surface tension of the receding fluid pulls on the DNA and helps stretch it out. Fluorescent DNA probes designed to attach to particular locations on the DNA can then be applied, and the slide is observed under a fluorescence microscope.
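
To make the geometry concrete: combing stretches DNA at a roughly constant factor (the commonly cited figure is about 2 kb per micrometer), so distances measured on the image convert directly to genomic distances. Here is a minimal sketch of that arithmetic in Python; the pixel size and variable names are assumptions for illustration.

```python
# Sketch: converting distances measured on a combed-fiber image to genomic
# distance. Molecular combing stretches DNA at a roughly constant factor
# (~2 kb per micrometer is the commonly cited value); the pixel size below
# is an assumed, illustrative number.

STRETCH_KB_PER_UM = 2.0   # assumed constant stretching factor
PIXEL_SIZE_UM = 0.13      # assumed microscope pixel size

def pixels_to_kb(n_pixels: float) -> float:
    """Convert a distance in image pixels to genomic kilobases."""
    return n_pixels * PIXEL_SIZE_UM * STRETCH_KB_PER_UM

# e.g. two probe signals 400 pixels apart on the same fiber:
print(f"{pixels_to_kb(400):.1f} kb between signals")  # ~104 kb
```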

One use of this method is to detect large genetic events, like structural variations (long insertions, long deletions, inversions), that are really hard to detect with short-read technologies and even with long-read technologies. The method does not give single-base resolution – a person I talked to said they get 1 kb resolution (which is pretty great, if you consider that we are looking at DNA molecules under a microscope!) – but it is the only way to get at these classes of events. Typically this would be used when you know what you are looking for and want to test individual samples to see whether they have this feature or not.
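
As a sketch of how such a targeted test might work computationally: given two probes with a known separation in the reference, you compare the measured inter-probe distance (and probe order) on each fiber against it. Everything below – the names, the reference gap, the tolerance – is hypothetical, just to show the logic.

```python
# Sketch: a toy structural-variant check based on inter-probe distances,
# assuming ~1 kb measurement resolution as described above. A change in
# probe order suggests an inversion; a change in distance suggests an
# insertion or deletion. All thresholds are hypothetical.

REFERENCE_GAP_KB = 50.0   # expected distance between two probes in the reference
TOLERANCE_KB = 3.0        # a few times the ~1 kb resolution

def call_sv(measured_gap_kb: float, probes_in_ref_order: bool) -> str:
    """Classify one fiber's measurement relative to the reference."""
    if not probes_in_ref_order:
        return "possible inversion"
    delta = measured_gap_kb - REFERENCE_GAP_KB
    if delta > TOLERANCE_KB:
        return f"possible insertion (~{delta:.0f} kb)"
    if delta < -TOLERANCE_KB:
        return f"possible deletion (~{-delta:.0f} kb)"
    return "no variant detected"

print(call_sv(75.0, True))   # possible insertion (~25 kb)
print(call_sv(50.5, True))   # no variant detected
print(call_sv(50.0, False))  # possible inversion
```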

As a side note, this reminds me of my neuroscience days. One way we would classify techniques for studying brain activity was a two-dimensional grouping based on resolution in time and space: if you detected an event with a technique, how well could you tell at what point in time it occurred (1 ms, or 1 hour?) and where it occurred (in THIS CELL, or in this half of the brain?).

So, for example, invasive single-cell recording gives very high resolution in both time (1 ms or less) and space (this neuron); fMRI sits in the middle, with moderate resolution in both time (1 min or so) and space (1 mm-cubed chunks of the brain); and EEG has relatively high temporal resolution (a few ms) but very crummy spatial resolution (this half of the brain, or, if you use fancy statistical techniques, this region of the brain).
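
Just to make the grouping concrete, here is that two-dimensional classification as a small data structure, using the rough order-of-magnitude numbers from above.

```python
# Sketch: the time/space resolution grouping as a lookup table.
# Values are the rough order-of-magnitude figures from the text.

techniques = {
    # name: (temporal resolution in seconds, spatial resolution in meters)
    "single-cell recording": (1e-3, 1e-5),   # ~1 ms, single neuron
    "fMRI":                  (60.0, 1e-3),   # ~1 min, ~1 mm voxels
    "EEG":                   (5e-3, 1e-1),   # a few ms, hemisphere/region
}

# List techniques from fastest to slowest temporal resolution.
for name, (dt, dx) in sorted(techniques.items(), key=lambda kv: kv[1][0]):
    print(f"{name:>22}: {dt:g} s, {dx:g} m")
```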

An effective study would combine different techniques to get at a question from different levels of detail and cross-validate the findings (we can rarely do this because of resource limitations, though).

Similarly, in genomics, groups are now trying fusion-based approaches where “evidence classes” from different techniques are combined to cross-validate the existence of genetic features. (It’s fun – when you move to new fields you always find the same concepts, but with totally different names. I guess everyone wants to sound cool in different ways. In electrical engineering and robotics, the buzzword for a while was “sensor fusion”.)

The devil is in the details, of course: how do you weight the different evidence classes, and what do you do when they conflict with each other?
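
For illustration, here is one naive fusion rule – a weighted average of per-technique confidences with a decision threshold. The weights, the threshold, and the averaging rule itself are all assumptions; real pipelines are far more sophisticated about conflict.

```python
# Sketch: naively fusing "evidence classes", assuming each technique reports
# a confidence in [0, 1] for a candidate variant. Weights, threshold, and the
# weighted-average rule are all illustrative assumptions.

def fuse(evidence: dict[str, float], weights: dict[str, float],
         threshold: float = 0.5) -> bool:
    """Weighted average of per-technique confidences; conflicting
    evidence simply pulls the score toward the threshold."""
    total_w = sum(weights[k] for k in evidence)
    score = sum(weights[k] * conf for k, conf in evidence.items()) / total_w
    return score >= threshold

evidence = {"short reads": 0.2, "long reads": 0.7, "combing": 0.9}
weights = {"short reads": 1.0, "long reads": 2.0, "combing": 1.5}
print(fuse(evidence, weights))  # True: long-range evidence outweighs short reads
```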
