The Allen Brain Atlas is an online tool that combines structure, function, and gene expression data to create a comprehensive catalogue of histological sections and three-dimensional renderings of the human and mouse brains. While it was established primarily to accelerate neuroscience and neuroanatomy research, it is freely available online. The images above are taken from renderings of the mouse brain showing the innervation of the olfactory bulb (bottom and top) and the expression of the App gene (middle), which is implicated in amyloid fibril formation in Alzheimer’s disease and, interestingly, in mental retardation in Down syndrome patients (APP, the human homologue of mouse App, is encoded on chromosome 21).
Networks of neurons are not, and cannot be, wires strung like those you see along the side of a highway; they usually emanate from a point of origin and travel to connect particular regions of the brain, but they are in no way disorganised. For example, while the corpus callosum maintains extensive innervation between the two hemispheres, other areas of the brain do not. It is this sort of macro-scale compartmentation that allows different parts of the brain to perform different functions - in vertebrates like us, for example, intense lateralisation of the hemispheres gives rise to lower-level organisation, namely the specific structures that perform fundamentally different tasks (the hippocampus, the cerebrum, and the cerebellum, for example). Before in situ hybridisation, the characteristic pattern of neuronal spread throughout the brain was probed by creating lesions in different areas and noting the resulting phenotype; this worked because without excitation, neurons die. This same reality makes it highly suboptimal for the brain to organise itself as a tangled mess of fibres if many of the pathways are likely to become redundant; it both necessitates and produces an organised structure built upon the frequency of signal transduction to particular areas.
This walk-through, macro-scale sculpture entitled “Branching Morphogenesis” was created by Peter Lloyd Jones, Andrew Lucia and Jenny Sabin. The piece comprises five curtains created from 75,000 cable ties; each tie represents a force exerted by the lung endothelial cells on the protein matrix that surrounds them as they form capillaries, and each curtain represents a slice in time, connected at the densest areas of force distribution.
“The issue is basically that there are traits where patterns of inheritance within the population strongly imply that most of the variation is due to genes, but attempts to ascertain which specific genetic variants are responsible for this variation have failed to yield much. For example, with height you have a trait which is ~80-90% heritable in Western populations, which means that a substantial majority of the population-wide variation is attributable to genes. But geneticists feel very lucky if they can detect a variant which can account for 1% of the variance.” - Razib Khan
Many common, chronic diseases have a significant genetic component, including type 2 diabetes, Crohn’s disease, rheumatoid arthritis and even obesity. After these diseases were predicted to be strongly heritable, they became targets for genetic research. However, this research has had far more failures than successes when it comes to discerning the underlying genetic architecture of complex traits. What is missing heritability, where has it gone, and how can we find it?
Starting from Somewhere
The human genome is big, about 3 billion base pairs. In terms of genetic research, that means it’s not feasible to say, “I’m going to study cancer!”, pick a place, and start looking. Instead, for the past decade or so scientists have been correlating diseases and regions of the genome, building a map using just two primary tools: Genetic markers from a wide range of genomic regions, and a tool to measure the correlation of a particular region with a disease of interest. That correlation tool has most commonly been a Genome-Wide Association Study, or GWAS, although other methods have recently been gaining traction.
All a GWAS does is provide a correlation hypothesis. When a GWAS is run, thousands of samples - in the study of diseases, usually cases and controls - are genotyped on an SNP array. After the genotyping, statistical tests (like the Pearson chi-squared test) give the probability of seeing genotype counts at least that skewed if there were no association between a particular SNP and the disease state. If that probability is very low, that particular genetic region is added to a list of associated SNPs.
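To make that concrete, here is a minimal sketch of the per-SNP test a GWAS repeats across the genome - a Pearson chi-squared test on the table of genotype counts in cases versus controls. The counts below are invented purely for illustration.

```python
# Hypothetical genotype counts at one SNP; the chi-squared test asks how likely
# counts this skewed would be if genotype and disease status were independent.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [220, 510, 270],   # cases:    AA, Aa, aa
    [310, 500, 190],   # controls: AA, Aa, aa
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")

# In a real GWAS this is repeated for hundreds of thousands of SNPs, so only
# p-values below a genome-wide threshold (commonly 5e-8) are reported.
```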
What’s the problem? First, GWAS provides no evidence of causation. That means it’s possible that the correlation observed is not actually with the disease we’re interested in, but with some other variable that happens to align well with it - a confounding factor, if you will. Although this can be avoided somewhat through massive sample sizes (which in themselves pose problems), all GWAS can ever provide is a measure of how well something correlates with something else. That brings me to the second - and probably more important - problem: the list of associated SNPs for most diseases can only explain a fraction of predicted heritability.
Figure: a classic Manhattan plot from a GWAS on pancreatic cancer, published in Nature Genetics.
That, fundamentally, is the crux of the problem - why isn’t GWAS picking up more heritability than it is? Is it “missing heritability” in that we don’t know enough about our genomes to deduce where it comes from - or do we simply know of too few variants to explain complex disease?
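To make the bookkeeping explicit, here is roughly what those terms mean - these are standard textbook definitions rather than anything from a specific study:

```latex
% Narrow-sense heritability: additive genetic variance over total phenotypic variance
\[ h^2 = \frac{V_A}{V_P} \]

% "Missing" heritability: the gap between the pedigree- or twin-based estimate
% and the variance explained by all associated SNPs found so far
\[ h^2_{\text{missing}} = h^2_{\text{pedigree}} - \sum_{i \in \text{GWAS hits}} h^2_i \]
```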
Maybe the Model’s Wrong: The Profound Impact of Considering Epistasis
The easiest conclusion to draw, of course, is the latter - that GWAS hasn’t been done enough yet, and we haven’t found all the variants that explain heritability. While that could easily be true, I’m skeptical. I see genes as the starting point - a massive book of code, if you will, that makes our whole system run. Belabouring the book analogy, some people can read books better than others; those with learning difficulties, for example, will read more slowly and make more mistakes, and those with dyslexia may misread whole words, which can change the meaning of an entire sentence or paragraph. Of course, the words on the page haven’t changed - but the ability to read them effectively has. The same is true, I think, of genetics in the human system; just because the underlying code is “right” doesn’t mean that the body can read it, interpret it, and act on it correctly. The pathway from genes to disease is long and complex, and I think that too much emphasis is often placed on genetics and not on the multitude of steps that happen afterward to elicit the diseased phenotype. Remember that disease is not the result of genetics directly - it’s (usually) the result of the proteins encoded by those genes. Reading about things like epistasis, RNA editing, and epigenetics has only deepened my skepticism.
In a recent PNAS paper, Eric Lander and colleagues discuss epistasis in the context of missing heritability. They argue that the additive model adopted in the study of complex disease genetics can’t be exactly right - that the additive effects of associated SNPs aren’t explaining everything because an additive model doesn’t take complex interactions into account. They postulate that if the interactions between genes are considered, the true heritability is much smaller than currently estimated, and thus the percentage explained by already-identified variants becomes larger.
However, completely discarding the additive risk model poses its own problems. That’s why Lander and his fellow authors haven’t discarded it; they’ve extended it by introducing the limiting pathway (LP) model, which reduces to the original model for genuinely additive traits but allows complex interactions among the genes underlying others. The paper itself says it best:
“In short, genetic interactions may greatly inflate the apparent heritability without being readily detectable by standard methods. Thus, current estimates of missing heritability are not meaningful, because they ignore genetic interactions.
The results show that mistakenly assuming that a trait is additive can seriously distort inferences about missing heritability. From a biological standpoint, there is no a priori reason to expect traits to be additive. Biology is filled with nonlinearity: The saturation of enzymes with substrate concentration and receptors with ligand concentration yield sigmoid response curves; cooperative binding of proteins gives rise to sharp transitions; the outputs of pathways are constrained by rate-limiting inputs; and genetic networks exhibit bistable states.”
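To get a feel for how this works, here is a toy simulation of my own (it is not the analysis from the PNAS paper): a trait set by the weaker of two additive pathways. The classic twin-study (Falconer) estimate of heritability, which assumes additivity, comes out noticeably larger than the variance that summing per-SNP effects could ever explain - heritability that looks “missing” but was never there.

```python
# Toy illustration of "phantom" heritability under a limiting-pathway trait.
# All parameter choices (SNP count, effect sizes, noise) are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_snps, p = 20000, 40, 0.5           # twin pairs, SNPs, allele frequency
beta = np.full(n_snps, 0.15)                  # per-allele effects within each pathway
half = n_snps // 2

def trait(geno):
    """Trait = the smaller of two additive pathway scores, plus environmental noise."""
    path1 = geno[:, :half] @ beta[:half]
    path2 = geno[:, half:] @ beta[half:]
    return np.minimum(path1, path2) + rng.normal(0, 0.5, len(geno))

def child(parent_a, parent_b):
    """Mendelian transmission: one allele drawn from each parental genotype (0/1/2)."""
    def allele(g):
        return np.where(g == 1, rng.integers(0, 2, g.shape), g // 2)
    return allele(parent_a) + allele(parent_b)

# Monozygotic twins: identical genotypes, independent environments
g_mz = rng.binomial(2, p, (n_pairs, n_snps))
y_mz1, y_mz2 = trait(g_mz), trait(g_mz)

# Dizygotic twins: two offspring of the same simulated parents
pa = rng.binomial(2, p, (n_pairs, n_snps))
pb = rng.binomial(2, p, (n_pairs, n_snps))
y_dz1, y_dz2 = trait(child(pa, pb)), trait(child(pa, pb))

r_mz = np.corrcoef(y_mz1, y_mz2)[0, 1]
r_dz = np.corrcoef(y_dz1, y_dz2)[0, 1]
falconer_h2 = 2 * (r_mz - r_dz)               # twin-study estimate, assumes additivity

# GWAS-style additive estimate: sum of per-SNP r^2 in unrelated individuals
g_pop = rng.binomial(2, p, (n_pairs, n_snps))
y_pop = trait(g_pop)
additive_h2 = sum(np.corrcoef(g_pop[:, j], y_pop)[0, 1] ** 2 for j in range(n_snps))

print(f"Falconer (twin) heritability estimate: {falconer_h2:.2f}")
print(f"Summed per-SNP additive variance:      {additive_h2:.2f}")
```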
So Why Not Epigenetics?
The regulation of DNA has also been shown to have environmentally driven plasticity. Epigenetics - the study of heritable changes in gene expression or cellular phenotype that don’t involve changes in the underlying DNA sequence - is another crucial place where heritability could be missing. It’s quite easy, in theory, to conceive of a modification like DNA methylation or histone modification that could alter gene expression without affecting the underlying nucleotide sequence of the genome - however, many scientists doubt that’s where missing heritability is hiding.
The amazing thing about genetics, to me, is how much gray area there still is. It’s not every subject in which a single paper can change the entire research paradigm of a subset of scientists. Do we know where missing heritability comes from? Of course not, otherwise it wouldn’t still be missing. Do we know where to start looking? Well, after the PNAS paper and other research, yes - I think revising the current model to incorporate nonlinear interactions between genes and seriously looking at epigenetics’ implications for recurrence risk is a good place to start. I also think that computation - and particularly the translation of experiments into computationally tractable form - will play a huge part in modern genetics, because the two are inherently linked through the generation and analysis of massive amounts of data. Can we ever find the heritability we’re missing? Maybe. But I think to do that we would need to know the ins and outs of inheritance, and the stuff that makes up our genome, much better than we currently do.
I’ll leave you with this thought: the PNAS paper mentioned here is a highly controversial one, but I think it’s important to emphasise it because, so often in mathematical modeling, the data are forced to fit a pre-existing model rather than the model being adapted to fit the data. Mathematical modeling can be extraordinarily useful in biology, especially genetics, provided it doesn’t hold us back. If our models stop fitting the data properly, it’s time to adapt them - and so, having tried the simplest additive explanation, I think considering nonlinear epistatic interactions in our genome is a step forward in the search for missing heritability - or perhaps towards realising that heritability isn’t missing at all.
All references used are cited in-line, including the PNAS paper mentioned and an excellent epigenetic mathematical model that made for an interesting read.
Researchers have discovered a novel type of communication between bacteria mediated by “bacterial nanotubes” that bridge over to neighbouring cells, providing an ideal platform for the exchange of cellular molecules and signals within and between species. In the image above, Bacillus subtilis is pictured visualised by high-resolution electron microscopy after growth to mid-exponential phase; intercellular nanotubes connecting neighbouring cells are easily visible.
“The synthesis of life, should it ever occur, will not be the sensational discovery which we usually associate with the idea. If we accept the theory of evolution, then the first dawn of the synthesis of life must consist in the production of forms intermediate between the inorganic and the organic world, forms which possess only some of the rudimentary attributes of life, to which other attributes will be slowly added in the course of development by the evolutionary action of the environment.” - Stephane Leduc, 1911
In July 2007, a group of scientists convened by the US National Research Council issued a report about something they termed “weird life.” Weird life, they said, could be life in a form that we have never seen before - an organism may not depend on water, for example, or it may have a completely different, non-nucleic-acid-based system of heredity and still be alive. Their definition of weird life was vague, and not by accident: one of the primary challenges in the discussion of life, both on earth and elsewhere in the universe, is that life itself is a very difficult thing to parameterise. As David Greer, a professor of physics at New York University, says, “There is no mathematically rigorous definition of life.” Our determination of life is based entirely on our own human experience, and thus its working definition is less a set of functional rules for classification and more a set of somewhat ambiguous statements designed to organise the unknown. The precise problem with trying to organise the unknown, of course, is that nothing is known about it; but without a reconcilable definition of life - or “weird life”, as the case may be - we don’t even know where to start looking.
The key, I think, to this almost certainly inaccurate (and definitely not mathematically rigorous) but working definition is to explore how life came about in the first place. This serves two purposes: first, the definition of life could arguably be based on the most basic conditions necessary for it to occur, and second, life in its most rudimentary forms is more likely to be homogeneous across biological systems (however more complex or different from our own) than the large-scale plants and animals we traditionally associate with life. In addition, the makeshift definition should be written as a set of provable postulates, and should be sufficiently inclusive to potentially apply to all forms of aptly labelled “weird life” without being overly promiscuous, so to speak.
The Primordial Soup’s Gone Off
Ever since Stanley Miller’s famous experiment in 1953, the long-time leading hypothesis for the origin of life was built around the reducing atmospheric gases of the early earth and electric charge passing through them in the form of lightning. Miller’s experiment, which has since been replicated, showed that shooting a spark through reducing gases in a laboratory vessel produces biomolecules - in Miller’s case, approximately 10 amino acids and several nucleic acid precursors, although others who have repeated the experiment have had rather more success. The experiment illustrates clearly that life could have begun this way.
Of course, the origin of life is still a black box; in reality any number of plausible hypotheses could be correct. However, for me there are several unaddressed issues in Miller’s experiment that make me skeptical that it is the whole story behind the evolution of us. The primary issue is simply time; the earth is only 4.5 billion years old, and the oldest microfossils of early cell-like structures that have been found date back 3.5 billion years. While a billion years seems like - well, a billion years to us, it’s actually quite quick on an evolutionary timescale. To me, this means that life didn’t simply come down to a lucky lightning strike - it indicates that there was a driving force behind its development that pushed it forward faster.
In 1993, a different theory for the origin of life - termed the hydrothermal vent idea - came to prominence. It suggested that instead of emerging from a dilute mix of molecules in the early ocean, life came out of deep-sea hydrothermal vents. There is much evidence for this idea; two of the most compelling pieces, I think, are the existence of an energy disequilibrium and the interconnected micropores found on the vents’ surfaces.
The ocean, even on the early earth, was a fairly stagnant place in terms of energy gradients; lightning strikes could perhaps have caused them sporadically, but in different locations and to varying degrees with very little continuity. Hydrothermal vents, on the other hand, are rich in energy disequilibrium, boasting temperature, pH, and redox gradients.
So why are energy gradients so important? Because for cells, harnessing energy as ion gradients is about as universal as the genetic code. A paper recently published in Cell postulates that tiny micropores found on the surface of deep-sea vents - conveniently about the diameter of a cell - could have been the starting point of life on earth. In modern cells, about 75% of a cell’s ATP budget - or biological energy - goes into making proteins; conversely, ATP is replenished by proteins that harness chemiosmotic gradients. The paper postulates that the energy disequilibrium provided by hydrothermal vents - specifically, the sustained disequilibrium at a submarine hydrothermal vent interfacing with ocean water - generates conditions that thermodynamically favour the formation of life’s building blocks, particularly amino acids, in the presence of hydrogen gas, carbon dioxide, and ammonium. If a leaky membrane built of lipid precursors accumulated near a vent, the budding system would have a ready-made metabolism by exploiting the pre-existing chemiosmotic gradient. Once enough precursors accumulated, and the “metabolism tap” was shut off by the newly formed membrane’s impermeability, natural selection would strongly favour cells with simple antiporters that could continue to exploit the ion gradient.
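For a sense of the numbers involved, the energy stored in such a gradient is usually expressed as a proton-motive force - a textbook relation rather than anything specific to the Cell paper:

```latex
% Proton-motive force: an electrical term plus a chemical (pH) term
\[ \Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \]
```

At around 25 °C the chemical term is worth roughly 59 mV per unit of pH difference, so a several-unit gap between alkaline vent fluid and a more acidic early ocean would, on its own, be comparable to the membrane potentials modern cells maintain.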
Defining Life from Vents
If, for the sake of argument, the hydrothermal vent hypothesis is found to be the way things actually were, what then? What about life? Defining life by the characteristics of the first cell does not appeal to me; this leads to a definition built on characteristics that are shared because they originate from a common ancestor, not because they are actually fundamental to life. However, the hydrothermal vent hypothesis does, I think, enhance our understanding of what is needed for life, at least on this planet. Based upon the need for a biochemical gradient for protein production and the necessity of a lineage to exploit progress made in the previous generation, I would define life as:
A physical compartment across the walls of which energy can be generated and utilised for biochemical reactions; and
one that possesses a material of heredity that may be passed to the next generation.
It’s not a particularly restrictive definition, nor is it likely to be entirely accurate. However, the fact remains that there are many definitions of life; few are widely agreed upon, and certainly none is accepted in its entirety without special cases. Considering what was necessary for the first cell to form is as valid a method of organising the unknown as any other, and perhaps, one day, we’ll be able to find a distinctly new organism somewhere in the universe, one that shifts our entire paradigm on biochemistry, heredity, and what it means to fundamentally be alive. Until then, I think, formal and constructive definitions will elude us, and “weird life” will continue to be - well, weird.
An Afterthought: The Interesting Case of Protocells
Martin Hanczyc works extensively with oil and water systems, designing in vitro protocells. He also works with tar systems to simulate the stuff of the early universe, like those in the images at the top of this post; his protocells are composed of only a handful of chemicals, and yet are able to locate food, respond to one another within an environment, and even divide and hybridise into wholly new entities with new functional characteristics.
So are these protocells alive? Martin Hanczyc believes that nothing can be considered “alive” in a black-and-white way; rather, these protocells fall somewhere on a continuum between the inorganic and organic worlds, and while they possess some attributes necessary for life, they simply sit on that continuum along with humankind and this desk. A video of his TEDx talk, in which he explains further, can be found here.
Due to the length of this post and the number of references, they are cited using links where they are most relevant. Most of the information used comes from a recent paper in Cell on the origin of membrane bioenergetics (Lane and Martin, 2012) and Martin Hanczyc’s TEDx talk. For another take on Martin Hanczyc’s work, see this post here.
Stem cell differentiation is an important part of the body’s development and repair. Through complex genetic regulation and epigenetic reprogramming, pluripotent or totipotent stem cells receive signals that cause them to differentiate into a particular cell type. There are many lineages leading to the many different cell types that make up complex organisms, and some of these lineages are highly relevant to novel disease treatments.
Recently, scientists in the Neuroregeneration Laboratory at McLean Hospital (an affiliate of Harvard Medical School) have found that different types of neurons can be grown from human stem cells, including patient-specific induced pluripotent stem (iPS) cells. This has profound clinical consequences for the treatment of neurodegenerative diseases, including Alzheimer’s disease and synucleinopathies such as Parkinson’s - particularly because the types of neurons that can be grown from stem cells are relevant to those diseases. New research has also shown that dissociated primordial neurons and stem cells implanted into the adult central nervous system can grow to reconnect neuronal pathways, forming physiological and molecular links with pre-existing tissue. Stem cells have also recently been reprogrammed in another landmark breakthrough - it seems whichever way you look at it, stem cells are the future!
“No man-made structure is designed like a heart. Considering the highly sophisticated engineering evidenced in the heart, it is not surprising that our understanding of it comes so slowly.” - Daniel D. Streeter Jr.
Throughout history, scientists have wondered about the heart. We have dissected it and probed it, shocked it and sliced it into thin sections to peer at under a microscope. Everything we have tried has taught us something new, but nothing has taught us enough to fully understand the whole-scale functioning of the organ itself. Now, cardiovascular diseases account for ~30% of all human deaths worldwide, edging out parasites and infectious disease by ~7%; sedentary lifestyles and massive meals have certainly left developed countries worse for wear in the heart department. In the face of cardiovascular crisis, scientists are using wholly new approaches to try and tease something new out of the heart again - something that could be radically useful in clinical practice.
Somewhere To Start
Successful physiological analysis of any organ system requires understanding of the key functional relationships between the components: Cells (the parts), organs (the whole) and networks (signaling cascades, charge propagation, etc. - the method of communication between the parts that helps form the whole functioning organ structure), and how these components change in a disease state. Some of this information may reside in the genome, but GWAS have explained a relatively low percentage of the heritability of cardiovascular disease; some may also reside in the proteins those genes encode, but to me that’s too simplistic an explanation. Rather, many scientists believe that it is the interactions between proteins - within the context of subcellular, cellular, tissue, organ, and whole-body structures - that drive the disease state.
In order to understand these complex interactions, and to attempt to make sense of the vast amount of data being generated through experimentation, physiologists have begun to explore these protein interactions in the heart in a quantitative and computational manner, with the long-term goal of achieving patient-by-patient personalised medicine with their algorithms. Starting from tiny ions and scaling up to model the entire human heart with startling accuracy across thousands of processors, systems physiology is translatable, innovative, and exciting.
Building From the Bottom Up
In reductionist biology, to model an organ means to first model its constituent parts. The cells of the heart - cardiomyocytes - are highly specialised, and are able to sustain electrical propagation from the sinoatrial (SA) node to the atrioventricular (AV) node and onward through the Purkinje fibres to the ventricles, thus causing the timed contraction necessary to pump blood through the system. Scientists have been able to precisely model both the action potential in each individual myocardial cell and that action potential’s propagation between cells, forming the foundation for further research into cardiac arrhythmia and providing a wholly different “dissection of the heart.”
Cardiac excitation, very generally, involves both the generation of an action potential by individual cardiac cells and the propagation of that action potential through intercellular gap junctions. In the equation below, the rate of change of the transmembrane potential with respect to time is related, through the membrane capacitance, to the total transmembrane ionic current:
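Since the equation image itself isn’t reproduced here, this is a reconstruction of the standard single-cell form being described (I_stim is an optional applied stimulus current):

```latex
% Single-cell membrane equation: capacitive current balances the ionic currents
\[ C_m \frac{\mathrm{d}V_m}{\mathrm{d}t} = -\,I_{\mathrm{ion}}(V_m, t) + I_{\mathrm{stim}} \]
```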
The simplest model for the propagation of the action potential relies upon a continuous chain of excitable elements. In this chain - imagine a series of cardiomyocytes - current flows from a depolarised cell to its less depolarised neighbours via resistive intercellular gap junctions. Cells coupled in tissue are more complicated than a single clamped cell in which current simply changes the charge on the membrane capacitance, and therefore a slightly more complicated equation is needed to model the propagation of the action potential between excitable heart cells.
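A common form of that second equation - the one-dimensional cable equation - looks like this, where a is the fibre radius and R_i the intracellular resistivity (the exact symbols in the original figure may differ):

```latex
% 1D cable equation: local membrane kinetics plus axial current from neighbours
\[ C_m \frac{\partial V_m}{\partial t} = \frac{a}{2 R_i}\,\frac{\partial^2 V_m}{\partial x^2} - I_{\mathrm{ion}}(V_m, t) \]
```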
Those of you keener at maths might realise that the second equation presented can be very simply thought of as an extension of the first equation. As shown above, the transmembrane potential is dependent upon both time and space, but in our friendly single-cell example, the transmembrane potential is independent of space and thus if only one cell is present in the chain the second propagation equation reduces to the first.
And thus, we have a propagating current and our building blocks of the heart.
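For anyone who likes to tinker, here is a toy numerical sketch of exactly this idea - a chain of excitable elements coupled by a diffusion-like gap-junction term. FitzHugh-Nagumo kinetics stand in for a detailed cardiac ionic model, and every parameter value is illustrative rather than physiological.

```python
# A wave of excitation travelling along a 1D chain of coupled excitable cells.
import numpy as np

nx, dx, dt, steps = 100, 0.1, 0.01, 4000    # grid cells, spacing, time step, steps
D = 0.1                                     # effective gap-junction coupling (diffusion)
a, b, eps = 0.7, 0.8, 0.08                  # FitzHugh-Nagumo parameters

v = np.full(nx, -1.2)                       # dimensionless "transmembrane potential"
w = np.full(nx, -0.62)                      # recovery variable
v[:15] = 1.5                                # depolarising stimulus at one end

probe_near, probe_far = 30, 80              # cells where activation times are recorded
t_near = t_far = None

for step in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2   # coupling to neighbours
    v_new = v + dt * (v - v**3 / 3 - w + D * lap)        # fast excitation + diffusion
    w_new = w + dt * (eps * (v + a - b * w))             # slow recovery
    v, w = v_new, w_new
    if t_near is None and v[probe_near] > 0:
        t_near = step * dt
    if t_far is None and v[probe_far] > 0:
        t_far = step * dt

# The distal probe depolarises later than the proximal one: the "action potential"
# has propagated along the chain rather than firing everywhere at once.
print(f"activation times: near = {t_near}, far = {t_far}")
```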
Creating the Heartbeat
Of course, the heart is not a perfect sphere - it has an architecture, a shape. The activation sequence of different parts of the heart has been found to be strongly influenced by the fibrous-sheet architecture of the myocardium, causing non-uniform excitation of those connected excitable heart cells. Simply put, different parts of the heart are activated at different times, and it is this coordinated sequence of contraction (and the valve closures it drives) that produces the familiar “lub-dub” you hear when your heart beats.
Supercomputing the Heart
Modeling the whole-organ system is the most complicated task, requiring data input from multiple levels of complexity and multiple scales, from the ions present inside each myocardial cell to the whole organism. The Alya Red project is trying to use 10,000 processors to model a whole heart using similar* mathematics to those I have outlined briefly here.
Most striking in the video above are the precise geometry and organisation of the muscle fibres designed to propagate the axial current, as well as the diffusion tensor imaging used to validate the model. The scientists behind Alya Red hope that it will help medical professionals understand the human body better, diagnose pathologies and even plan surgeries.
Impact on Modern Medicine: Does All This Maths Add Up?
Well, does it? That’s the big question, of course - do all these hours of simulation and 10,000 processors translate into cold, hard life years for patients? While these systems are beautiful and fascinating, the primary goal in the study of human organs must always be clinical. The euHeart Project thinks that these models can be adapted into ‘personalised algorithms’ by taking precise biophysical measurements from patients and running code to simulate their individual hearts. Models of the heart can connect the dots surrounding pathophysiology; they can allow doctors to peer into underlying causes in a way they couldn’t before without extensive surgery, and assist treatment decisions. However, there’s currently a “significant translational barrier” - models are based too much on data from animal models and controlled conditions, and not enough on real-life human data, which is significantly harder to come by.
The euHeart Project will, over the next four years, attempt to modify existing frameworks so they’re useful in a clinical setting; the hope is that the models can rely on non-invasive measurements and still simulate an individual human heart accurately. Will that ever happen, or is all this modeling a waste of time? Personally, I think it’s feasible; we have a long way to go, but never say never.
Science is all about seeing something that hasn’t been seen before, or figuring something out that connects the dots. While this new methodology in physiology isn’t perfect, it’s certainly a different way of looking at the human heart and valuable research about arrhythmia and cardiomyopathies has already come out of it.
*Similar is a strong word. Try vaguely connected; they’re at the organ level and I primarily spoke about the cellular level. Plus their mathematics will be loads more complex than my simple example.
All figures were taken from references cited in-line via links or made myself. Referencing is done in-line as well via links; special mentions go to Science, vol. 295, no. 5560, “Modeling the heart - from genes to cells to the whole organ” and Physiological Reviews, vol. 84, “Basic Mechanisms of Cardiac Impulse Propagation and Associated Arrhythmias.”
The cerebral vasculature is a complex network that allows only 18% of the total blood volume of the body through the delicate tissues of the brain. This allows the transport of the oxygen and nutrients that are essential to brain function. In the wide-field, plane-projection confocal image above, the superficial cerebral vasculature of a mouse is shown, with actin, α-N-acetylgalactosamine residues, and DNA in cell nuclei labelled in situ.
FYMB has a new name. I thought a lot about whether or not I wanted to change the name at all, because I think ‘Fuck Yeah Molecular Biology’ really does adequately express how I feel about molecular biology. After I finally decided I probably needed to change it, I set about choosing a name - which was not easy. I thought up a lot of names and put them in a massive list, but many were unavailable or already taken by another science blogger somewhere else on the Internet (I wouldn’t want to tread on anyone’s toes). After a winnowing based on availability, only a few names were left, and thus A Molecular Matter was decided on (partially because it had an unclaimed Twitter handle…obviously most important).
‘A Molecular Matter’ came out of the combination of molecules being a specific organisation of matter, and matter being the term used to mean a state of affairs; molecules are made of matter, a matter of molecules. It’s not particularly witty and is probably a bit of a stretch, but I like it and that’s what’s important. As with all things, I know there’s going to be some disagreement about the name choice - that’s unavoidable, really, but I would like to hear your thoughts (however soul-crushing). Most importantly, the content and theme of this blog will remain the same, and so if you liked the posts I made before you will probably continue to like the ones I make from now on.
I also did some major theme editing. In fact, I pretty much completely rewrote my theme, so it should load a lot more quickly now. However, it’s still in the ‘buggy beta version’ stage so if you have any problems with it please do let me know.
Last, I will be posting again! During the first term at university it was really difficult to find time to breathe, let alone think about anything else, but I’m hoping to post as much as I can in the second term. (And in the next couple of days - first post due tonight!)
Behold, a screenshot of a tiny, tiny part of the nominations list for Open Lab 2013. If you look closely, you may notice one particular blog name (or perhaps several, if you’re tuned into internet science or just really like to read) that looks a little bit familiar. Profanity and all, FYMB has worked its way onto the nominations page - something I’m both super excited about (Scientific American staff and a panel of judges will be forced to read my blog!) and also super nervous about, because, well, Scientific American staff and a panel of judges will be forced to read my blog. Nearly-second-year-student worries about looking hopelessly incompetent next to slick science bloggers aside, Open Lab 2013 is a great project, and nominations close on the 1st of October. So if you see a post anywhere on Tumblr that you think has some really awesome science in it, submit it!
In other news, the eagle-eyed among you may have noticed that I haven’t been posting recently. This has been a combination of me moving down to university and forgetting to call Virgin Media before I went, and the new next-door neighbours password-protecting their WiFi. Never fear, though - FYMB will be returning to your dash on or before the 29th September between 1 and 6pm. However, FYMB, well…may not continue to be called FYMB. While having the word ‘fuck’ in the name of this blog accurately represents how I personally feel about molecular biology (fuck yeah!), it’s a bit harder to explain on a CV - and given the following this blog has gathered unbelievably quickly, the fact that it’s beginning to appear on sites other than Tumblr, and that I’ll begin applying for Ph.D.s next year, the profane blog name, while cool, has probably got to go. I’m developing my own list of potential name candidates, but if you have a suggestion, feel free to pop it in my ask box and I’ll definitely consider it.
In the meantime, have a great rest of the holidays, and I’ll see you a bit later in September! (Or whenever the magic that is WiFi makes an appearance at my student residence).