Sunday, 28 October 2007

Operons breaking the rules

For every rule in biology there is an exception. This causes problems in teaching: do you give the rule and leave out the exception, effectively lying to the people you are teaching, or do you give them both the rule and the exception, making the rule mean much less? This is a real problem. Well, I have just discovered that eukaryotes have broken a golden rule. They have operons. I was always taught eukaryotes don’t have operons, only prokaryotes do.

For anyone needing a reminder, an operon is several genes carried on the same mRNA molecule. One name for this is polycistronic mRNA (cistron is an old name for what we now call a protein-coding gene). The genes share a single promoter and a single transcription terminator. On this polycistronic mRNA there are several Shine-Dalgarno sequences (bacterial ribosome binding sites), so a ribosome binds in front of each gene and translates it until it reaches a stop codon and falls off. The classic example is the lac operon, which makes three proteins used by E. coli to break down the sugar lactose. The reason you would want a single promoter to control the expression of several proteins is that, if they have related functions as in the lac operon, you want them all turned on at the same time and at the same level of expression, making the same number of each protein.
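The logic of the paragraph above can be sketched as a toy model: one promoter controls all the genes on the transcript, so they switch on together and are expressed at the same level. The gene names are the real lac operon genes; the function names and numbers are purely illustrative assumptions, not real biology software.

```python
# Toy model of an operon: one promoter controls several genes, so they are
# all transcribed together onto one polycistronic mRNA and expressed at the
# same level. Only the gene names are real; the rest is illustrative.

def transcribe_operon(genes, promoter_active):
    """Return the polycistronic mRNA (a list of cistrons), or nothing at all."""
    return list(genes) if promoter_active else []

def translate(mrna, ribosomes_per_site=100):
    """Each cistron has its own ribosome binding site, so every gene on the
    transcript yields the same number of protein copies."""
    return {gene: ribosomes_per_site for gene in mrna}

lac_genes = ["lacZ", "lacY", "lacA"]  # beta-galactosidase, permease, transacetylase

# No lactose: the repressor keeps the promoter off, so no proteins at all.
print(translate(transcribe_operon(lac_genes, promoter_active=False)))  # {}

# Lactose present: one promoter firing turns all three genes on together.
print(translate(transcribe_operon(lac_genes, promoter_active=True)))
```

The point of the sketch is that a single on/off decision (the promoter) propagates to every gene on the transcript, which is exactly why related genes are grouped this way.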

Classically, eukaryotes have monocistronic mRNA (one gene per mRNA), but there is more to this story. Strange transcripts have been found in animals ranging from simple nematodes to complex mammals such as ourselves. One type of mRNA in C. elegans, which some people have called an operon, involves something called trans-splicing. A gene is expressed from one promoter but contains two protein-coding regions. It does not mature into a polycistronic mRNA; instead, during RNA processing the introns are removed and the two coding regions are separated. But how does the downstream RNA resist degradation by RNases? An SL exon from another transcript is trans-spliced onto it. In this way one ‘gene’ makes two mRNAs (see diagram). This is not a conventional operon like those found in bacteria, but it is not a minor feature of the nematode genome: 15% of all genes in C. elegans are found in operons of this type. We still have monocistronic mRNAs here, so eukaryotes are behaving fairly well so far.

But in Drosophila and higher organisms, dicistronic mRNAs have been found with no trans-splicing involved whatsoever. This definitely breaks one of the golden rules of biology. Here a single promoter drives the transcription of two genes onto the same transcript, and each one gets translated. It is unclear at the moment whether both genes have their own ribosome initiation site, or whether the ribosome that translates the first gene also translates the second by not falling off at the first gene’s stop codon. One example (conserved between man and mouse) is the dicistronic mRNA encoding growth and differentiation factor 1 (GDF-1) and a membrane protein of unknown function (UOG-1). Whether they are co-transcribed because they have related functions is, of course, unknown.

Living creatures are strange. We think we understand a bit about them, and create rules that we think life follows so we can understand it better. But life doesn’t always follow the rules we think it does. So the best we can do is make up rules or models and test them, because that is how science works. I just hate it when something I thought I knew turns out to be wrong.


Blumenthal, T. (2004) Operons in eukaryotes. Briefings in Functional Genomics and Proteomics 3(3): 199-211. DOI: 10.1093/bfgp/3.3.199

Monday, 8 October 2007

The Fanconi anaemia DNA repair pathway

Fanconi anaemia (FA) is an autosomal recessive chromosomal instability disorder, characterised by congenital abnormalities, defective haemopoiesis, a high risk of leukaemia and the development of solid tumours. FA patients have a 10% incidence of leukaemia and are 50 times more likely to develop solid tumours (particularly hepatic, oesophageal, oropharyngeal and vulval) by early adulthood. The disease comprises 13 complementation groups, each corresponding to mutations in one of 13 FANC genes (A, B, C, D1, D2, E, F, G, I, J, L, M and N), which make up the FA pathway. Two thirds of FA cases are caused by inherited mutations in FANCA.

Interstrand crosslink damage

FA cells are characterised by hypersensitivity to damage by DNA interstrand crosslinking (ICL) agents such as mitomycin C and diepoxybutane. The cells fail to repair ICL damage, resulting in polysomies, radial chromosomal structures and unrepaired breaks (Figure 1). ICL agents tether the two DNA strands together, creating a physical barrier to replication and transcription and causing the arrest of DNA replication. In normal cells, several mechanisms come into play to minimise damage and keep replication going: the physical barrier must be removed, the gap repaired and the replication fork re-established. This complex repair mechanism involves the Fanconi anaemia proteins, translesion synthesis (TLS) polymerases and the homologous recombination (HR) machinery. The FA pathway has also been implicated in the repair of spontaneous DNA damage, despite early speculation that it was specific to ICL damage. In addition, there is some uncertainty as to whether the FA pathway also functions in intrastrand crosslink repair.


The FA pathway

The FA proteins are part of a complex DNA repair pathway that is not yet fully understood. They are commonly arranged in three distinct groups according to their function. The first group is the FA core complex, which consists of FANC A, B, C, E, F, G, L and M. This complex stabilises stalled replication forks, processes the ICL site and activates a second group, consisting of FANCD2 and FANCI. Finally, group 3 consists of the remaining FANC proteins, BRCA2 (which is the same gene as FANCD1), FANCJ and FANCN, which have accessory or uncertain functions in the pathway (Figure 2). The most recent model of this pathway is described in Wang (2007), Kennedy & D’Andrea (2005) and Niedernhofer et al. (2005), and a detailed description of the model follows.

An ICL physically blocks replication by preventing separation of the strands by the advancing DNA helicase, causing the enzyme to arrest. The replication fork arrest activates the ATR kinase, which then phosphorylates FANCD2. A double-strand break (DSB) is introduced upstream of the ICL by XPF-ERCC1, MUS81-EME1 or MUS81-EME2. Simultaneously, the monoubiquitin ligase FANCL, part of the FA core complex, monoubiquitinates FANCD2 and FANCI. This stage is regulated by the deubiquitinating enzyme USP1, and the monoubiquitin acts as a chromatin localisation signal. USP1 is itself regulated by cell cycle controls and self-cleaves in response to DNA damage. The integrity of the whole core complex is essential for this stage, as shown by the lack of chromatin localisation in FA core complex mutants. The downstream function of FANCI-Ub is uncertain, but FANCD2-Ub is arguably the most important component of this pathway, acting as a “fire captain” to recruit the proteins needed for the rest of the pathway (Figure 3).
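The all-or-nothing requirement for core complex integrity can be captured in a minimal sketch: monoubiquitination of FANCD2 only proceeds when every member of the core complex is present, mirroring the failure of chromatin localisation in core complex mutants. The protein names come from the text; the function and return values are an illustrative toy model, not real pathway software.

```python
# Minimal sketch of one checkpoint in the FA pathway: FANCL can only
# monoubiquitinate FANCD2 when the whole core complex is intact.

CORE_COMPLEX = {"FANCA", "FANCB", "FANCC", "FANCE", "FANCF", "FANCG", "FANCL", "FANCM"}

def monoubiquitinate_fancd2(present_proteins):
    """FANCD2-Ub (the chromatin localisation signal) is produced only if
    every member of the core complex is present, mirroring the observation
    that core-complex mutants fail to localise FANCD2 to chromatin."""
    if CORE_COMPLEX <= set(present_proteins):
        return "FANCD2-Ub"   # recruited to chromatin, pathway proceeds
    return "FANCD2"          # unmodified: pathway stalls

print(monoubiquitinate_fancd2(CORE_COMPLEX))              # FANCD2-Ub
print(monoubiquitinate_fancd2(CORE_COMPLEX - {"FANCA"}))  # FANCD2
```

Losing any single subunit (here FANCA, the most commonly mutated gene in FA patients) blocks the modification entirely, which is why mutations in any core complex gene produce the same cellular phenotype.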


As group 2 is activated, the FA core complex also carries out DNA processing in preparation for the next step in the pathway. FANCM acts as a DNA translocase: it anchors the rest of the FA core complex onto DNA, which stabilises the broken replication fork and facilitates removal of the ICL. A nucleotide excision repair (NER) endonuclease “unhooks” the ICL (which remains attached to one strand), and translesion synthesis (TLS) polymerases recruited by the FA core complex fill in the gap. The TLS polymerases’ ability to pass through the damaged site comes at the cost of being error-prone, introducing random mutations at the repair site. The ICL adduct is then completely removed by an NER exonuclease, and the result is a replication fork broken by a DSB, with the ICL removed and the gap filled in (Figure 4). The next step is to repair the DSB, which is carried out by homologous recombination (HR) proteins.


HR-mediated replication fork re-establishment

FANCD2-Ub forms a complex with BRCA2 and chromatin. All the FANC proteins, along with BRCA1, NBS1, PCNA, RAD51 and other related proteins are recruited by the FANCD2-Ub/BRCA2-chromatin complex into nuclear foci at the damage site. BRCA1 forms a complex with BRCA2 and is essential for the formation of nuclear foci, even though its exact function is unknown. FANCJ, also identified as the helicase BRIP1 (BRCA1-interacting protein), depends on BRCA1 to translocate to the DNA damage site and unwind the replication fork in order to promote HR. The MRN complex, which consists of MRE11, RAD50 and NBS1 and has 3’-5’ exonuclease activity, processes either side of the DSB to prepare it for HR.

DSB repair can occur in two ways. The first is single-strand annealing, an error-prone end-joining pathway in which the two strands are trimmed back until homologous repetitive sequences are reached, and the strands are then annealed at that region of homology. This is mutagenic, since the non-homologous intermediate region is deleted, but it is preferable to the arrest of replication. The second is strand-invasion HR, in which RAD51 is loaded onto chromatin by BRCA2 to form a nucleoprotein filament, within which sister chromatid exchange (SCE) occurs and a Holliday junction forms between the strand being repaired and a homologous strand (Figure 5). Holliday junctions are intersections of four DNA strands, appearing most commonly during meiosis when “crossing-over” occurs, and are resolved by endonucleases such as MUS81-EME1. FA cells show high levels of SCE in FANCC mutants, while other FANC mutant cells have normal SCE levels, suggesting a regulatory role for FANCC over SCE formation and a generally significant role in HR. HR thus re-establishes the replication fork, and at the end of S phase USP1 deubiquitinates FANCD2 to turn the pathway off.

In summary, ICL damage causes replication arrest, one of the strands is separated by a DSB, the FA core complex stabilises the broken fork and TLS repairs the ICL. At the same time, the structural integrity of the FA core complex is essential for the monoubiquitination of the “fire captain” FANCD2, which forms a complex with BRCA2 and chromatin and recruits HR and accessory proteins into nuclear foci. Finally, the MRN complex processes the ends of the DSB for HR to repair it and allow replication to restart.

There has been increased interest in the FA pathway in the past decade or so due to the discovery of its connection with cancer and complex relationship with the HR pathway. The pathway model is still in its early stages but already there are important questions raised about cell cycle regulation and its connection with DNA repair.

References

Kennedy, R.D. and D'Andrea, A.D. (2005) The Fanconi Anemia/BRCA pathway: new faces in the crowd. Genes Dev. 19: 2925-2940.

Niedernhofer, L.J., Lalai, A.S. and Hoeijmakers, J.H.J. (2005) Fanconi Anemia (Cross)linked to DNA Repair. Cell 123: 1191-1198.

Patel, K.J. and Joenje, H. (2007) Fanconi anemia and DNA replication repair. DNA Repair 6: 885-890.

Tischkowitz, M. and Dokal, I. (2004) Fanconi anaemia and leukaemia - clinical and molecular aspects. British Journal of Haematology 126: 176-191.

Wang, W. (2007) Emergence of a DNA-damage response network consisting of Fanconi anaemia and BRCA proteins. Nat Rev Genet 8: 735-748.

Friday, 28 September 2007

Bioluminescence

In many biological studies it is essential to translate the specific process under analysis into an observable and measurable signal. This is normally done with assay systems based on radioactivity, photon absorption or photon emission. Radioactivity and photon absorption are less and less used nowadays: the first because of the restrictions and care associated with the use of hazardous materials, the second because of its low sensitivity.

Photon emission can occur by two different processes: fluorescence or chemiluminescence. Both involve the emission of photons during transitions between energy states, but the way the excited state is generated differs. In fluorescence, the absorption of light (i.e. of photons) produces the excited state, while in chemiluminescence it is produced by an exothermic chemical reaction (see diagram: fluorescence is on the right, luminescence on the left).


Both of these processes have their advantages and disadvantages. Fluorescence (of which the most famous example is probably GFP) normally gives a much stronger signal, as the excited state is produced by photons, which can be introduced into the sample at a very high rate. However, this flood of photons tends to create a high background signal, which limits the sensitivity of the assay. Chemiluminescence does create lower-intensity signals, but as no photons need to be supplied there is hardly any background signal. Which of the two systems is used depends on the machinery required by each assay. If the efficiency of light collection is limited, fluorescence is the better option, which is why until recently fluorescence was the main assay system used. The development of more sensitive machinery, however, has made background noise the most important problem to solve, justifying the increased interest in chemiluminescence systems.
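The trade-off above is really about signal-to-background ratio rather than raw signal. A quick sketch with invented numbers (only the shape of the argument comes from the text; every figure here is an assumption for illustration):

```python
# Illustrative arithmetic for the fluorescence vs chemiluminescence trade-off.
# All counts are made up; the point is the ratio, not the absolute values.

def signal_to_background(signal, background):
    return signal / background

# Fluorescence: strong signal, but the excitation light creates a large background.
fluor = signal_to_background(signal=1_000_000, background=50_000)

# Chemiluminescence: much weaker signal, but almost no background photons.
chemi = signal_to_background(signal=10_000, background=10)

print(f"fluorescence S/B:      {fluor:.0f}")   # 20
print(f"chemiluminescence S/B: {chemi:.0f}")   # 1000
```

With these (hypothetical) numbers the chemiluminescent assay resolves a signal 50 times better despite emitting 100 times fewer photons, which is why more sensitive detectors shift the balance towards chemiluminescence.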

Bioluminescence is a form of chemiluminescence that occurs naturally in certain living organisms. The enzymes that catalyse the reaction are called luciferases and their substrates luciferins. Note, however, that these are very general terms, as bioluminescence seems to have evolved several times independently; this is why the various existing luciferases have such different molecular structures. Luciferases have been cloned from several different types of organism, namely jellyfish (Aequorea), a sea copepod (Gaussia princeps), sea pansies (Renilla), the click beetle (Pyrophorus plagiophthalamus) and several bacterial species. The most successful luciferase, however, is the widely used firefly luciferase, from the firefly Photinus pyralis. It is commonly used in its humanised variant (codons optimised for mammalian expression); it requires ATP and magnesium to oxidise its substrate luciferin, yielding yellow-green light with an emission maximum at 560 nm.

Firefly luciferase (FLuc) is normally used as a reporter gene, which can mean very different systems of reporting. The most obvious is, of course, the insertion of the luciferase gene into the same plasmid and under the same regulatory promoter as the gene of interest. In my current department (Gene Therapy department, NHLI), for example, firefly luciferase is the main reporter gene used in CFTR transfections. However, FLuc can report more than the expression levels of transfected genes. As it depends on ATP for its activity, it is ideal in assays of ATP concentration. It can also, for example, indicate how well G-protein coupled receptors (GPCRs) work. This is possible by placing a cAMP response element (CRE) upstream of the luciferase gene. GPCR activation causes an increase in intracellular cAMP concentration. This increase activates protein kinase A, which in turn phosphorylates CRE binding protein. The CRE binding protein then binds to the CRE upstream of the luciferase gene, increasing its transcription. Overall, an increase in GPCR activity leads to an increased luminescence signal. This type of reporter system is useful, for example, in the study of GPCR agonists. The complexity of the system, however, makes it prone to interference, which can lead to false positive results. To prevent this, a dual-reporter system can be used involving a second luciferase, Renilla luciferase (RLuc), as an internal control to detect aberrant data. A dual-luciferase system can also be used to detect two different processes or expressed genes simultaneously, as FLuc and RLuc have different spectral maxima (560 and 480 nm respectively).
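A sketch of how the dual-reporter control is typically used at the analysis stage: the RLuc channel normalises the FLuc signal, so well-to-well differences in cell number or transfection efficiency cancel out. The readings below are invented for illustration; this is a sketch of the normalisation idea, not any particular instrument's software.

```python
# Dual-luciferase normalisation: report FLuc activity relative to the
# RLuc internal control, so systematic well-to-well variation cancels.
# All counts are hypothetical.

def normalise(fluc_counts, rluc_counts):
    """FLuc activity expressed relative to the RLuc internal control."""
    return fluc_counts / rluc_counts

# Two wells: the second received twice as many cells, inflating both signals,
# but the normalised values agree.
well_a = normalise(fluc_counts=40_000, rluc_counts=8_000)
well_b = normalise(fluc_counts=80_000, rluc_counts=16_000)

print(well_a, well_b)  # 5.0 5.0
```

An artefact that affected only one channel (interference of the kind mentioned above) would show up as a normalised value out of line with its replicates, flagging the data point as aberrant.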

As a reporter of transfected gene expression, FLuc has a few limitations, notably the fact that it is an intracellular protein. This means that measuring transfected gene expression requires lysing the transfected cells in in vitro culture, and killing the animal in in vivo studies in order to access the transfected tissue. In patients in clinical trials, luciferase assays imply invasive sampling of tissues (I am still trying to find out how exactly this is done). This is why the development of new luciferases, namely secreted luciferases, is important. So far around four types of secreted luciferase have been found, but the only one commercially available for cell supernatant assays is that produced by the copepod Gaussia princeps (see figure). Gaussia luciferase (GLuc) is naturally secreted from cells and is used by the animal as a defence mechanism: as Gaussia princeps lives at a depth of between 350 and 1,000 m, a sudden flash of light is a good distraction against dark-adapted predators. As a secreted luciferase, Gaussia is reported to give very good signals when medium supernatants are assayed. The important question right now (at least for me) is whether it can be assayed in something more… well, you will have to wait for my year-away talk next year for more information!

References

Fan F., Wood K. (2007). Bioluminescence Assays for High-Throughput screening, Assay and Drug Development Technologies, 5. pp 127-136

Markova S., Golz S., Frank L., Kalthof B., Vysotski E. (2004). Cloning and Expression of cDNA for a luciferase from the marine copepod Metridia longa, The Journal of Biological Chemistry, 5. pp 3212-3217

Roda A., Pasini P., Mirasoli M., Michelini E., Guardigli M. (2004). Biotechnological applications of bioluminescence and chemiluminescence, Trends in Biotechnology, 22. pp 295-303

Serganova I., Moroz E., Moroz M., Pillarsetty N., Blasberg R. (2006). Non-invasive molecular imaging and reporter genes, Central European Journal of Biology, 1. pp 88-123

Tannous B., Kim D., Fernandez J., Weissleder R., Breakfield X. (2004). Codon-optimized Gaussia Luciferase cDNA for Mammalian Gene expression in culture and in vivo, Molecular Therapy, 11. pp 435-443

Wiles S., Ferguson K., Stefanidou M., Young D., Robertson B. (2005). Alternative Luciferase for monitoring bacterial cells under adverse conditions, Applied and Environmental Microbiology, 71. pp 3427-3432

Sunday, 23 September 2007

Reaction or catalyst? Which started life?

If only the question were as simple as ‘which came first, the chicken or the egg?’ (answer: the egg). The question here is which came first: DNA or proteins. As we all know, DNA stores the code to make proteins, but proteins are needed to read this code and make more DNA. Obviously you can’t have one without the other. So the RNA world hypothesis was born: RNA can both store information and catalyse reactions, perhaps even catalyse its own replication. I believe this is a beautiful theory. However, beauty is only skin deep. There are real problems with this explanation of the origins of life, apart from the obvious difficulties in testing it (which is true of any explanation of the origin of life). RNA is famous for being chemically unstable; it can break itself down quite easily (especially in alkaline conditions). Also, where did the RNA come from? I don’t mean did it come from outer space (radiation levels in space are so high that RNA or cells could not have survived); I mean how the molecules were created on Earth. Like it or not, RNA is a complex molecule, and no experiment trying to recreate the primordial soup has ever produced RNA nucleotides, or any really complex molecules; only the simplest amino acids have been made.

This has led to another explanation for the origins of life. Perhaps thinking of life as a bunch of replicators has blinded us to it. Living organisms can also be thought of as chemical factories; genes and their products are simply there to control the reactions. Some believe it was chemical reactions that came before the proteins or RNA that catalyse them. This is called the metabolism-first theory. Its supporters have described what is needed for a chemical system to be the beginning of life. 1) A boundary or form of membrane, to keep life and non-life apart. The second law of thermodynamics states that the entropy of the universe must increase, but life increases in order; inside the boundary entropy decreases, but this generates heat that increases entropy outside it. 2) An energy source must have existed, perhaps some sort of redox reaction to power the chemical reactions of the metabolism-first model; radiation may also have been used. 3) The energy source must be coupled to the other chemical reactions. I find it difficult to see how this could happen without proteins to help things along, but perhaps if I knew more chemistry it would be clearer. We use ATP as our energy currency, but how could redox reactions or radiation be linked to these ancient chemical reactions? 4) The chemical reactions must be able to change and evolve. If a cycle of reactions existed where A became B, B became C, C became D and D became A again, we would have something to expand upon. With a carbon input such as E, compounds could be taken off the cycle and the cycle expanded (see diagram). These reactions could be powered by a redox reaction of X to Y. Eventually, complex molecules could be created. 5) One final requirement is the ability to replicate. It is hard to imagine how this is possible before a lipid membrane existed for the system to divide into two. If possible, this would have allowed Darwinian evolution through competition for resources.
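The cycle in point 4 (A to B to C to D and back to A, fed by a carbon input E) can be simulated in a few lines. Everything here is an illustrative sketch with made-up rate constants, not a model of any real chemistry.

```python
# Toy simulation of an autocatalytic cycle A -> B -> C -> D -> A, fed by a
# carbon input E that tops up pool A at every step. Rates are invented.

def run_cycle(steps, feed=1.0):
    pools = {"A": 1.0, "B": 0.0, "C": 0.0, "D": 0.0}
    order = ["A", "B", "C", "D"]
    for _ in range(steps):
        # each step, half of every pool converts into the next compound
        moved = {c: pools[c] * 0.5 for c in order}
        for i, c in enumerate(order):
            nxt = order[(i + 1) % 4]
            pools[c] -= moved[c]
            pools[nxt] += moved[c]
        pools["A"] += feed  # carbon input E keeps the cycle growing
    return pools

final = run_cycle(20)
print(final)
# The internal conversions conserve material; only the feed adds to it,
# so the total stock is the initial 1.0 plus 20 feeds of 1.0.
print(sum(final.values()))
```

The qualitative behaviour is the interesting part: a steady input of one simple compound ends up distributed across all four members of the cycle, which is the sense in which such a cycle can "expand" and build up stock to be drawn off for other reactions.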


The RNA-first approach has some support. Minerals containing boron in ‘containers’ or ‘bowls’ have been found in Death Valley, and boron could help create the ribose sugar in RNA; if such pores existed billions of years ago, RNA could have been created. Laboratory experiments have shown that some randomly generated RNA molecules can catalyse the addition of an ATP molecule to themselves. This is tested by using an ATP molecule with a sulphur atom in place of an oxygen atom, and a column that pulls out only the sulphur-containing RNA molecules. RNA can carry out many reactions, such as making and breaking DNA and RNA links and amide bonds, and even making links with sugars. So there is diversity in RNA’s ability to catalyse reactions, but its limit appears to be speed. It is possible that one reason proteins took over from RNA as life’s catalysts is that proteins are faster, as well as more stable. Metabolism-first has the great weakness of being supported by few laboratory experiments, only computer simulations.


I believe these two theories could work together. Perhaps it is only because I find the RNA world aesthetically pleasing that I want to save it, but these reactions could eventually become so complex that they make RNA. Over time RNA molecules would build up and start to take control, then make proteins to do much of their job; DNA would then take over as the information store, leaving RNA as simply the messenger, helping out in only a few reactions today. This nicely explains why the formation of the peptide bond in the ribosome is still carried out by RNA. We could go even further into theory and suggest there was a polymer before RNA that could act as both catalyst and self-replicator. PNA has been suggested: instead of having the sugar-phosphate backbone of RNA and DNA, it has bases attached to a peptide backbone. Sadly, such a molecule does not exist in our cells today or leave fossils in the ground for us to examine, so we cannot test whether PNA really was the first molecule that led to life. If PNA did exist, it all became RNA and then DNA.


At the moment we have many ideas about what may have been involved in the start of life on Earth, but it is difficult to prove anything. What we can do is explore the potential of these molecules and chemical systems. Theories about the origins of life cannot be tested directly, but they do make predictions, and by testing those predictions we can hopefully learn a lot.



References

Alberts et al. (2002) Molecular Biology of the Cell. 4th edition.

Shapiro (2007) A simpler origin for life. Scientific American 296: 24-31.

Thursday, 23 August 2007

The Lives Of Stars

I am currently reading Carl Sagan’s classic popular science book Cosmos, and felt inspired to write about a chapter I particularly enjoyed. Chapter IX, ‘The Lives of Stars’, takes us on a journey through space and time, looking at the Sun as well as some of its distant cousin stars, all of which behave in strange and wonderful ways.

Probably the most surprising thing about stars is that it all boils down to simple nuclear physics. Four hydrogen nuclei combine, under very high gravitational pressure and temperature, to form a helium nucleus, emitting light as gamma-ray photons. This is the almost disappointingly simple answer to a question that has tormented humankind since we first realised that there was actually a huge hot bright disc up there, a question that has led millions to invent all sorts of far-fetched hypotheses and religions to explain it away. It gets really interesting when you travel backwards and forwards in time to see where it all comes from and where it will all end up. Let’s take it from the top, then.
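The energy released by that reaction is a quick back-of-the-envelope calculation: four hydrogen nuclei are slightly heavier than one helium nucleus, and the missing mass comes out as energy (E = mc²). The sketch below ignores the positrons and neutrinos of the real proton-proton chain, so it is a simplification rather than the full stellar accounting; the masses and conversion factor are standard textbook values.

```python
# Back-of-the-envelope fusion energetics: 4 H -> He-4, with the mass
# defect converted to energy. Masses in unified atomic mass units (u).

M_PROTON = 1.007276   # u
M_HELIUM4 = 4.001506  # u (He-4 nucleus)
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

mass_in = 4 * M_PROTON
mass_defect = mass_in - M_HELIUM4
energy_mev = mass_defect * U_TO_MEV

print(f"mass lost: {mass_defect:.4f} u ({mass_defect / mass_in:.2%} of the input)")
print(f"energy released per helium nucleus: {energy_mev:.1f} MeV")
```

Less than one percent of the mass is converted, yet repeated across the Sun's core that tiny fraction has kept it shining for billions of years.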

The big bang was an explosion and rapid expansion of the fabric of spacetime, which consisted of some matter in the form of protons, neutrons and electrons, as well as a huge amount of nothing. The rapid cooling that followed due to this expansion caused these elementary particles to form hydrogen and helium gas clouds. The explosion itself was uneven, so clouds began to form clusters of various sizes, collapsing into themselves under the force of gravity. These massive clouds of gas are the birthplaces of millions of stars, eventually forming the galaxies we see and live in today, such as the Andromeda galaxy pictured on the right. Stars consist of that same gas having collapsed into itself at various points in space.

Stars are essentially massive engines that burn hydrogen. When temperatures in a star’s core are high enough (over 10 million degrees), the collapse stops, as the outer layers are held up by the fusion taking place in the core. The photons emitted by the reaction take a million years to reach the outer layer. The Sun has been a simultaneously exploding and collapsing hydrogen bomb for about 5 billion years, and it will continue to behave that way for about as long. Eventually, all engines run out of fuel, and so do all stars, but that does not always mean their death.

As the hydrogen runs out, the reaction will begin to cool and the star will expand outwards, engulfing the inner solar system. However, it will soon begin collapsing again under its own gravitational force, this time until temperatures get high enough to burn helium. Sagan compares this beautifully to a Phoenix rising out of its ashes, except this is not just an ancient myth but a real event that is constantly happening throughout the universe. The remaining hydrogen left over in the expanded region of the star will burn while helium burns at the core at higher temperatures. This is a red giant, with a hot carbon and oxygen-producing helium reactor in its core and a planet-engulfing hydrogen-burning outer region.

When the helium runs out, it does mean the end for most stars. A new expansion will take place, and the star will shoot out concentric shells of gas that form a planetary nebula (pictured below). At this stage, the Sun would engulf Pluto. A few more massive stars can recollapse and burn carbon and oxygen for a while, but this is not very common. After the Sun expands for the last time, the solar system will become a blue and red-fluorescent dead world. Billions of years later, the exposed core will become a white dwarf, and eventually a cold, dead black dwarf.

A planetary nebula

There are so many different aspects to this story that rival any storyteller’s wildest imagination. The poetic elegance of the lives of stars masks their terrible and devastating effect on the observing civilizations of their orbiting planets, but the universe is of course entirely indifferent and apathetic. I strongly recommend Cosmos to anyone who wants to catch a glimpse of the amazing things astronomy has discovered, especially since the invention of the radio telescope which can take us right to the edge of the universe.

Thursday, 2 August 2007

Someone else in the universe already posted this but in a galaxy far far away

I would like to start off by saying that I know nothing about physics apart from the excess of sci-fi I watched when I was younger and the random conversations I have with my physicist friend Bob. Nevertheless, I do like it, despite it never making any sense. As a biologist, there is a reason behind just about everything (i.e. evolution has shaped it all), and it unsettles me when I don’t know why something is the way it is (so, most of immunology then). Before reading the rest of this, you need to assume a couple of things about the universe for what I am going to say to make sense; they are apparently correct, but what do I know. Firstly, space is infinite or extremely large (according to Prof. Max Tegmark, the evidence for a small or doughnut-shaped universe is weak, and he believes the universe is infinite), and secondly, matter is evenly spread throughout space (not just clumped around us).

For the first part, we will say our universe is the part of space we can see (its outer edge is determined by the age of the universe and the speed of light; as the universe gets older we can see more, because more light has had time to travel to us). This is also called our Hubble volume. A Level I multiverse of parallel universes is simply other Hubble volumes out there that are like ours (and many more not like ours). If space is infinite, all arrangements of matter that are possible exist! I like to think Star Wars and LOTR obey the laws of physics, so they could exist. Everything in Level I has the same laws of physics as our universe, because it is only an extension of ours. Out there are more Hubble volumes just like ours, but very far away: one estimate says a volume just like ours in every way could be 10^(10^118) metres away! Closer will probably be Hubble volumes that are very similar to ours but slightly different. There will be an infinite number of all these volumes, because space is infinite (as I understand it). I think these estimates are far too low and don’t take some things into account, but they give you an idea that even a low estimate is amazingly far away.
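It is worth pausing on how big 10^(10^118) actually is: the number has about 10^118 digits, so no computer could ever store it directly, but its logarithm is easy to work with. A small demonstration (the observable-universe radius of roughly 8.8 × 10^26 m is an assumed round figure):

```python
# The Tegmark-style estimate 10^(10^118) m is too large to store directly,
# but trivial to handle in log space.

import math

log10_distance_m = 1e118  # log10 of the distance in metres

# Radius of our observable universe, roughly 8.8e26 m (assumed figure).
log10_hubble_m = math.log10(8.8e26)  # about 26.9

# Expressing the distance in units of whole observable universes just
# subtracts ~27 from an exponent of 10^118, a change too small for a
# double-precision float even to register.
log10_in_hubble_radii = log10_distance_m - log10_hubble_m

print(log10_in_hubble_radii == log10_distance_m)  # True
```

In other words, measuring this distance in metres or in whole observable universes makes no detectable difference to the exponent, which is a good intuition for why "amazingly far away" is an understatement.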

That was the easy stuff. A Level II multiverse (or set of parallel universes) is less accepted than Level I, but apparently still explains a lot in physics, and it helps biologists in their fight against pro-intelligent-design arguments. It is where many multiverses (each like the Level I one) exist, but each is separate and has different physical constants or numbers of dimensions. This explains why our universe just so happens to be able to support life: it isn’t custom-built for life, but is just one possibility of how a universe can work. We have three spatial dimensions and one time dimension, but with more time dimensions events would be completely unpredictable, and with more spatial dimensions atoms would be unstable. If the mass of the proton were slightly larger, it would decay too fast for molecules to be made, so there would obviously be nothing for evolution to act on and no life to form. Each multiverse is still infinite, within a sea of inflating infinite empty space (don’t ask me how you can have infinite space inside infinite space, because I just don’t understand). We can never travel between these multiverses, even at the speed of light, because they are moving away from each other faster! That sounds like science fiction to me.

Level III is the one that interested me most as a child (yes, I have always been this way; Egon Spengler was my favourite Ghostbuster... need I say more). Every choice means you have to pick a path to follow. In level III all outcomes exist, but the other choices play out in another universe, not in space as we know it but 'elsewhere'. However, the choices I am talking about take place on the quantum scale! This goes back to Schrödinger's cat (if you don't know what that is, it would be best to look it up before continuing). From what I understand, all it means is that all states or positions exist until someone observes which state the system is in. So something is both on and off, or dead and alive, until someone checks. This is called a superposition. An alternative interpretation (the 'many worlds' view) is that both outcomes do exist, but new universes are created to accommodate the other possibilities. It is like rolling a die governed only by the rules of the quantum world and not the everyday rules of the universe. Because it happens on the quantum scale, the outcome is completely random (from what I understand). According to the level III multiverse the die will not land on a 1 or a 2 or a 3... but on all six values at once. How? Each one exists in a different universe. Easy! And we thought biology was screwed up; at least it usually makes some sense. The outcome of level III is the same as for levels I and II: more universes, most only slightly different from ours. The difference is how they are made.

Level IV is the one I feel least confident about explaining. It is there to explain why our universe works under one specific set of mathematics and not under other models. Universes built on different mathematical structures may exist outside our spacetime and work in completely different ways, with laws of physics even more different from ours than those of the level II multiverses. Levels I, II and III were created by the same big bang. Level IV universes exist outside our spacetime and will have had their own starting events, though more level I and II (and III) multiverses could have been created by other big bangs as well. In levels I and III you will have a doppelgänger, but in II and IV space will be so radically different that you will not. Well, I think you might, if a parallel universe created in level II or IV happens to be very similar and to work off the same rules as ours.

All of these theories make some predictions, so they no longer lie in the realm of metaphysics but of real science. So in the next few years we may see some of them confirmed or rejected. Whether they exist will have little impact on our lives. Will it comfort you to know that someone out there in the infiniteness of space is in exactly the same situation as you, or better off, or in a worse situation? I don't think it will keep me awake at night. Nothing we can do will have an effect on them. I only discuss this because I find it interesting. But then I find most things about our world and its place in the universe interesting. We didn't evolve to understand things like this, but we did evolve to ask questions about our environment.

For more information see the Scientific American special report on Parallel Universes by Max Tegmark, or his website:

http://space.mit.edu/home/tegmark/multiverse.html

which has more references.

Please ask me questions. No promises I can even start to answer them, but I have thought about the topic for a while and would like to hear your thoughts and see if you have the same questions as me.

Tuesday, 3 July 2007

Colds and the cold


My mum always told me that if I didn't wear socks and shoes and had cold feet I would get a cold! Some evidence suggests this old wives' tale may be true. Adults have about 3-5 cold infections every year (unless you are a student, in which case it is considerably more), and the symptoms are so well known and easily recognised that people self-diagnose; there are no special tests doctors perform to say you have a cold, they go off the same old symptoms as the public. Distinguishing a cold from flu is not easy: flu is much worse than most common colds, but a bad cold could easily be confused with a mild case of flu. Someone's reaction to an upper respiratory infection depends more on the person (such as their stress level) than on the virus that infects them. There are more than 200 serotypes of virus (viruses with different antigens) that cause the common cold. Once infected by one of them you have antibodies to stop it causing another cold, but there are plenty more waiting in the wings to hit you the next time exams are approaching. Rhinoviruses are the most common cause of the common cold. The symptoms we experience when we get a cold are not the virus damaging us but our body reacting to the virus. Histamine triggers nerves in the nose to fire and tell the brain to make you sneeze, and a sore throat comes from a small peptide (bradykinin) signalling to nerves that something is wrong. The colour of your lovely mucus changes from clear to yellow to green as more leukocytes (such as neutrophils) are recruited to fight the infection (Eccles, 2005). The majority of infected people are believed to have no symptoms or only very mild ones. These are called sub-clinical infections, and they can spread to others who will develop a full-blown cold (Eccles, 2002).

The question is whether cooling of the body's surfaces increases your chances of getting a cold. The name 'cold' suggests a link to me. The usual answer to why we get more colds in winter and cold weather is that we all crowd together indoors and breathe the same air. However, I disagree. I do not change my habits between winter and summer: I live in the same house with the same people, who stay the same no matter what the weather, and go to school/uni and sit in the same classes with the same number of people whatever the weather. So how do you explain why I get more colds in the winter? I guess my mum is right: my feet aren't warm enough. Well, there is now evidence to support my mum's theory. Eccles and Johnson (2005) found that when people put their feet into cold water and were exposed to a virus, they were more likely to develop cold symptoms than those who did not have cold feet. But what are the mechanisms by which cooling of the body's extremities lets the virus get the upper hand? Vasoconstriction happens when you get cold, so less blood flows to the upper airways. This restricts the supply of heat and nutrients to the leukocytes that eliminate viruses in a non-specific manner, and reduces phagocytosis. Virus replication may also increase: rhinoviruses replicate better at 33 °C than at 37 °C. All of this could turn a sub-clinical infection into a full-blown cold, runny nose and all! Vasoconstriction helping cause a cold may also explain why some people get more colds than others: people who get many colds a year have been shown to have a greater vasoconstriction response than those who get only a couple per year (Eccles, 2002).

There are a lot of questions about colds: how they cause disease and how we catch them. But these infections do not cause many deaths; they mainly reduce work output, and the symptoms can be treated directly. I am just interested in what happens when I am ill.

Monday, 25 June 2007

Guest article: Economics...

As we struggle through the last exams of term, I thought that leaving the 'biology bubble' for a while could be a good idea... so I have invited Miguel Faria e Castro, a promising economics student at the faculty of Economics, Universidade Nova de Lisboa (New University of Lisbon) to bring a bit of fresh air to the blog...

When the invitation to write an article for this blog first came, I felt that anything written by an Economics student would feel a bit out of place. It happens, however (and most fortunately, in my own personal opinion) that Economic Science is not restricted to, as some people think, heavy statistical paperwork and impossible mathematical models with two thousand variables. A wide, varied field, it incorporates a lot of knowledge from other scientific subjects, namely psychology, sociology, physics – even biology!

It is not my intention to bore you with sophisticated technical terms or theories which are able to explain whether you opt for a cup of tea or a chocolate bar under the right conditions. That is why I have decided, in this brief article, to give you a first glance at what really is economic science.

Curiously, the term “Economics” comes from a very picturesque Greek expression, oikos nomos or, in plain English, “how to take care of your house”. The modern name, coined somewhere in the ideological turmoil of the 18th century, came as a brief, yet clear, reference to the founding pillar of the entire science, the concept of scarcity: how can I satisfy my endless needs with the limited resources available to me? This is commonly known as the fundamental problem of economics, and it is the problem that more or less 6 billion people face every day, at every moment.

Everything gets more complicated when we notice that people have not only different interests and, therefore, different needs, but also different qualities and quantities of resources available to them. This is what Adam Smith, a late 18th-century Scottish philosopher commonly recognised as the first modern economist, attempted to analyse in his great (quite literally: three volumes of 1,000 pages each) work, The Wealth of Nations. Basically, the entire work was a defence of two fundamental economic ideas, and I am almost sure you have already heard of at least one of them somewhere: the Invisible Hand and the Division of Labour. While the former states that each individual, in pursuing his own self-interest, will ultimately contribute to society’s greater good (or, “after all, being greedy is not that bad”), the latter is based on the principle that certain countries appear to have a natural advantage over others in performing certain tasks, and that the entire world would gain if each country focused on what it is good at. Another founding father of economics, the Briton David Ricardo, would develop this theory further, proving with a simple mathematical example that it would be better for both countries if Portugal produced only wine and England only textiles, rather than having each produce both. Look up “Comparative Advantage” on Wikipedia and you’ll learn a neat trick with which to impress your friends (this last part sounds quite nerdy, but we were actually told it by a professor).
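Ricardo's point can be checked with a few lines of arithmetic. The sketch below uses his textbook labour-hour figures (80/90 hours per unit for Portugal, 120/100 for England), purely as an illustration; any numbers with the same pattern of opportunity costs would do.

```python
# Ricardo's classic labour-hour figures (hours to make one unit); illustrative only.
hours = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

# Give each country exactly the labour needed to make one unit of each good alone,
# so without trade the world produces 2 units of wine and 2 of cloth.
labour = {country: sum(goods.values()) for country, goods in hours.items()}

# Portugal's opportunity cost of wine (in cloth forgone) is lower than England's,
# so Portugal specialises in wine and England in cloth.
wine_total = labour["Portugal"] / hours["Portugal"]["wine"]    # 170 / 80  = 2.125
cloth_total = labour["England"] / hours["England"]["cloth"]    # 220 / 100 = 2.2

print(wine_total, cloth_total)  # 2.125 2.2, beating the 2 + 2 made without trade
```

Note that Portugal is absolutely better at producing both goods here, yet total world output of both goods still rises when each country specialises; that is the counterintuitive part of the theory.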

As with every science, new theories on how to better satisfy people’s needs appeared. The so-called Classics, whom I have just mentioned, tended to follow a very liberal orientation. Throughout the 19th and 20th centuries, economics, initially seen as a weird mixture of politics, law, philosophy and mathematics, increasingly became a matter of interest for politicians: the rise of Marxism as a political ideology only happened because Karl Marx had laid the theoretical basis for his own economic principles (usually called Marxian economics, to avoid any embarrassing confusions). Until the 1920s, economic science focused almost exclusively on the behaviour of those who produced everyday goods and those who bought them: Consumer and Producer Theory, the basis for what is today known as Microeconomics (the study of the individual behaviour of economic agents). In 1929, however, with the Great Depression, the Western “civilised” world was hit, for the first time in history, with widespread unemployment and collapsing prices: two occurrences which, as the economists of the time conceded, could barely be explained by the tools they were using.

This is where another great mind enters: an Englishman named John Maynard Keynes, a brilliant thinker (the philosopher Bertrand Russell once said that Keynes was the most intelligent man he had ever met) and keen investor, who published his thoughts on what had happened during the Great Depression in the United States of America and the rest of the world. His work led to the creation of an entirely new field: macroeconomics, the study of the aggregate behaviour of all economic agents (and now the State plays a special role…) when faced with certain circumstances and conditions. Keynes was the first to point out that a phenomenon such as inflation (which is, by the way, the continuous and generalised rise in the price level or, in plain English, the reason things are much more expensive today than they were when your parents were eighteen) could never be explained by studying what an individual consumer does or a single firm does not do. Only by studying the economy as a whole can we grasp the real magnitude of such an event. A recurring joke says that one of the advantages of being an economist is that, when you are in the unemployment line, you will at least understand why you are there. Thank Mr Keynes for enlightening you on that. But enough of history.

A common misconception is that economics only deals with money. Money is, as surprising as it may seem, a very small part of this grand show. Do you eat money? Do you drink money? Do you drive money? The answer is no: money is merely the means for you to get whatever you need. This brings us to another important concept, the Classical Dichotomy, which basically states that real, not nominal, variables are what matter. What is the difference? Imagine you are a happy German with a monthly income of €5,000. Then there’s a Polish guy who makes only €5 a day, around €150 a month. It happens, however, that an apple in Germany costs €1,000, while the Poles can eat them for €0.50 each. Nominally, you earn over thirty times as much. But your real wage is 5 apples a month, while the Pole earns 300 apples a month: his real income is sixty times yours. Interesting, eh?
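The classical dichotomy is easy to sketch in code. The figures below are hypothetical, in the spirit of the apple example above; the whole point is just that real income is nominal income divided by the price level.

```python
def real_wage(nominal_income: float, price: float) -> float:
    """Units of the good an income actually buys: the real variable behind the nominal one."""
    return nominal_income / price

# Hypothetical monthly figures in the spirit of the example above.
german_apples = real_wage(5000, 1000)  # expensive apples: 5 a month
polish_apples = real_wage(150, 0.50)   # cheap apples: 300 a month

print(german_apples, polish_apples)  # 5.0 300.0
```

A far lower nominal income buys sixty times more apples once prices are taken into account, which is exactly why economists insist on deflating nominal quantities before comparing them.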

Sunday, 24 June 2007

Guest article: Decision-making

Our guest writer this week (well, maybe this will become weekly) is Christina Ieronymides, student and aspiring thylacine hunter. Hailing from Cyprus, she has just completed her BSc in Zoology at Imperial College, and will be doing her MSc at the Institute of Zoology (London Zoo) starting this autumn. Without further ado...


As a guest writer on this blog I feel the need to point out that my grasp of genetics does not extend much further than the basics, and molecular biology, in fact, is one of the subjects I tend to steer clear of. This does not mean I dislike the field; it merely indicates that I may find myself in need of a molecular geneticist friend at some point in my career.

In this article I’d like to introduce a couple of my pet subjects. The nature-nurture debate is well known to most people involved in the biological sciences, and despite all the controversy, “a bit of both” is the answer. This gives rise to the phenotypic gambit: that there is a genetic element in animal behaviour, and as such what you see is behaviour that is adaptive and under selection. This is the basis of behavioural ecology, a field to which I was first introduced by Richard Dawkins in his classic work The Selfish Gene.

In his book, Dawkins uses game theory to look at the evolutionary outcome of social interactions between individuals adopting different behaviours, and more specifically to explain how cooperation can evolve despite social conflict. He explains how cooperation results in the best overall outcome for the individuals involved when a win-win outcome is possible. Cooperation is, however, a fragile state, dependent on small population size, repeated interactions over time and communication. These conditions are vulnerable to outside influences, such as immigration, population growth and changes in social circumstances.

Externalities imposed on such a system of reciprocal altruism inevitably lead to the breakdown of cooperation. This is the root of the Tragedy of the Commons. Open-access resources are doomed to overexploitation because people only think in the short term, and this is what we see happening in the world’s fisheries today.

Achieving sustainable resource use is only possible in the light of the incentives that individual resource users face. In the case of the bushmeat trade, conservation, development and animal welfare collide. Overexploitation of tropical forest fauna for food has both biological consequences and consequences for people. This bio-economic system operates at a number of scales (from the decisions made by an individual hunter to consumer demands and the wider economy) and is dynamic and heterogeneous (species abundances and population dynamics within the forest), and is therefore particularly difficult to tackle.

Resource management is in general complex, and policies put in place to ensure sustainability are usually unsuccessful, as conservation clashes with the human element. In many cases it is necessary to convince the public that management is worthwhile. Scientists are generally quite poor at communicating their knowledge to the people they are serving, and this lack of communication between science and the public carries much of the blame for the distrust of science that we see growing among lay people. Laurie Marker’s work on conservation of the cheetah in Namibia is an excellent example of understanding the social, economic and biological issues and cooperating with the locals so as to introduce acceptable and effective solutions. Marker’s work highlights the fact that accessible information, education and the fostering of trust are paramount for the successful implementation of any resource management scheme or policy.

Friday, 22 June 2007

Why dream?

We all know what dreaming is: experiencing sensations such as images and sounds during sleep that we usually cannot control. Lucid dreaming is when the dreamer is aware they are dreaming and can sometimes alter their actions and the dream world. Anxiety is the most commonly felt emotion during dreams. Dreaming is associated with the REM (Rapid Eye Movement) stage of sleep, which we go through around four times a night. All the muscles in the body are paralysed apart from those in the eyes, hence the name.

But why do we dream at all? Nearly all mammals and birds appear to do it, suggesting it must be evolutionarily beneficial, but how? Some people have thought dreams were messages from god or predictions of the future. Early psychologists like Freud were interested in dreams and their meanings. He believed dreams were wish fulfilment: your unconscious mind doing things your conscious mind does not allow when you are awake, with your ‘superego’ censoring your dreams. Part of this is symbolism; items and events represent what you really want. Freud did say that not everything is a symbol: ‘sometimes a cigar is just a cigar’. He thought that if you understood what your dreams meant and resolved the deep arguments within yourself, then all would be well. I have never thought this made evolutionary sense… how many psychoanalysts were there in caveman times to help, charging £60+ to listen to people speak? Not many, I would assume, but I have no data on the matter.

Neuroscientists have also tried to explain why we dream. The great neurobiologist Crick (yes, I do mean the one and only Crick) put forward the theory of reverse learning as a reason for dreaming. This theory says we dream to forget! We take in too much information during the day, so at night we erase these memories; otherwise they would become damaging. This theory has little experimental support. It doesn’t explain why newborns sleep and spend so much time in REM (assuming they are dreaming). Also, why do dreams have a story if they are simply a deleting mechanism? Why not simple flashes of sound and images? Or has our brain come to make sense of this noise, and what function could that have? Lloyd et al. (unpublished) criticise it by showing that more dreams are experienced when revising and yet much of the information stays. They suggest dreams are either a way of rehearsing or organising information, but fail to explain why dreams usually have little to do with what has been learnt. It is clear, though, that the information is not being forgotten.

Dreams have also been seen as a way of solving problems. Are you in fact thinking during dreams and coming up with ideas? This seems hard to grasp: you are actually thinking but have no control. Kekulé figured out the ring structure of benzene after dreaming of a snake biting its own tail. Lloyd et al. also provide evidence of this: a subject woke up from a dream with the answer to chemistry homework they had been unable to solve beforehand. However, there is no evidence that people who pay no attention to their dreams have more difficulties in life than those who do.

It is hard to figure out what is going on with dreams; we can’t even explain what memories are at the molecular level, let alone how dreams arise and what function they have.

Just remember: you do not know you are dreaming in non-lucid dreams, so how can you prove you are not dreaming right now? (Warning: the devil is in me.)

Friday, 15 June 2007

Hypothermia and paradoxical undressing

The temperature of our body is normally around 37 °C… however, a decrease of only 2 °C is enough for hypothermia to set in… As body temperature drops further the consequences become more and more severe, so that between 25 and 28 °C the heart simply stops and death is the logical outcome...

Considering that hypothermia is caused by a decrease in body temperature, how do we explain that so many people are found dead of this condition with no clothes on? As this scenario was very often observed among poor people in cities, the logical explanation was rape and theft. Yet this doesn’t explain why victims of hypothermia refuse warm clothes when rescued… This phenomenon is commonly referred to as ‘paradoxical undressing’…

How to explain this apparently contradictory behaviour? A quite logical theory offers an explanation. When hypothermia sets in, the organism tries to prevent the vital organs from cooling too much, and as a consequence the ‘available’ heat is concentrated in the central areas of the body. This is achieved by reducing peripheral circulation, namely by constriction of blood vessels. Vasoconstriction, produced by contraction of the muscles around blood vessels, requires energy, namely glucose provided by the circulation itself.

Vasodilation, on the other hand, doesn’t require energy. Therefore, after a period of stress such as hypothermia, with a reduced supply of energy and accumulated fatigue, the muscles lining the vessels tend to relax and once again allow blood to flow into the restricted areas of the body. The sudden flow of blood, kept warm by its concentration elsewhere, is thought to be responsible for the sudden sensation of warmth that leads hypothermia victims to undress. Obviously, the undressing only accelerates the hypothermia, and victims eventually die. In fact, there is no record so far of any hypothermia victim surviving without help after reaching the ‘paradoxical undressing’ state.

To finish off, it is interesting to note that in 20% of lethal cases of hypothermia another, perhaps not so strange, situation is recorded: the so-called ‘hide-and-die’ syndrome, in which hypothermia victims are found hiding in the most illogical places, namely under beds or behind closets. This is probably the remnant of an old instinct that seems to be present in a variety of animals and can basically be translated into the following piece of wisdom: ‘when things get really bad, find somewhere to hide’. I wonder if this works with exams too… :-)


Based on ‘Paradoxical undressing’, New Scientist, 21 April 2007, p. 50.

Yawning makes you cool

ResearchBlogging.orgYawning is a semi-voluntary action performed by pretty much all vertebrates. It consists of three phases: a long inspiratory phase, a brief acme, and finally a rapid expiration. This is accompanied by many physiological and neurological changes, such as muscular stretching, increased blood flow and a sense of pleasure linked to dopaminergic activity in certain regions of the brain (Daquin et al., 2001). Yawning does not, as such, affect arousal in a physiological sense (Guggisberg et al., 2007), but it is obviously associated with boredom, waking up and feeling sleepy. As everyone knows very well, yawning is also contagious. This is true not just for humans but for all social animals that have a sense of self-awareness (i.e. can recognize themselves in a mirror) (Perriol et al., 2006).

Why do we yawn? There are so many different changes associated with yawning that it could really be anything. Hippocrates considered it to be a mechanism for exhaustion of the fumes preceding fever. For the largest part of the 20th century, it was commonly thought that it was a mechanism for "balancing" oxygen and carbon dioxide levels, until this was shown to be untrue in 1987 (Provine et al.). There is obviously some connection with the states of sleep and wakefulness, but is that all? Also, why is it contagious? Is it just a mirror neuron driven response, or does it have a useful function?

The evidence for the connection between yawning and the sleep-wake rhythm is quite convincing. Even though this is generally common knowledge, Zilli et al. (2007) showed that yawning occurs more frequently in the early morning and late evening. Also, it was shown that evening-types (people who tend to stay up late, like myself) yawn more frequently than morning-types, showing that there is a connection between changes in the sleep-wake rhythm and yawning.

Another interesting connection is that between yawning frequency and REM sleep as observed throughout life. Humans can yawn from as early as 12 weeks after conception. Yawning frequency then very slowly declines with age. There is a very similar pattern in the amount of REM sleep, along with some physiological connections, such as the muscle stretching in yawning counteracting the muscular atonia seen during REM sleep (Walusinski, 2006).

The contagiousness of yawning is, for most people, its most interesting aspect. Anderson et al. (2004) showed that yawning is contagious in chimps, while Paukner et al. (2006) showed that the same is true in stumptail macaques. Infant chimps exhibited no yawning in response to the same experiments, which is in line with the evidence that human children under the age of 5 do not respond to seeing a person yawn like older humans do. Considering this and the fact that only animals with a sense of self-awareness exhibit contagious yawning, could contagious yawning have some connection with intelligence or social interaction?

Another aspect of contagious yawning is that it is correlated with interoception (sensitivity to stimuli arising within the body), self-awareness and empathy, as well as mental state attribution (the ability to inferentially model the mental states of others) (Platek et al., 2003). Functional MRI shows that brain activity correlated with contagious yawning is significant in the bilateral precuneus and posterior cingulate, regions highly associated with self-processing but, surprisingly, there is no activity in regions associated with mirror neurons (Platek et al., 2004). Yawning seems to circumvent the mirror neuron system, suggesting that yawning is an automatically released behavioural act and not an imitated motor pattern requiring detailed understanding (Schurmann et al., 2004).

The most surprising data comes from a 2007 study by Gallup et al. which suggests that yawning is a mechanism for thermoregulation of the brain. The response of human subjects to videos of people yawning was observed. Subjects breathing through the mouth exhibited 48% contagious yawning, while subjects instructed to breathe nasally did not yawn at all. Nasal breathing is associated with brain thermoregulation by the cooling of the vertebral artery and the forebrain. Also, subjects with a cool compress applied to their foreheads yawned significantly less than subjects with no compress or a warm compress applied. This leads to the formulation of an interesting theory with testable predictions about the thermoregulatory function of yawning based on the rapid intake of cool air. The authors predict that as ambient temperature approaches body temperature, yawning should diminish, and when ambient temperature exceeds that of the body, yawning should stop. This might also explain why subjects with a warm compress applied to their forehead did not yawn more, since the body might detect the high temperature compress and interpret it as higher ambient temperature.
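To get a feel for how strong a mouth-versus-nasal split like that would be, here is a one-sided Fisher exact test on a hypothetical 2x2 table. The group sizes (21 per group, so 10/21 ≈ 48% yawners versus 0/21) are my own guess purely for illustration; the paper itself reports only the percentages quoted above.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability of a top-left count of at least `a`, given fixed
    row and column totals (hypergeometric tail sum)."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    return sum(
        comb(row1, k) * comb(n - row1, col1 - k)
        for k in range(a, min(row1, col1) + 1)
    ) / denom

# Hypothetical group sizes: 21 mouth breathers (10 yawned) vs 21 nasal (0 yawned).
p = fisher_one_sided(10, 11, 0, 21)
print(p < 0.001)  # True: such a split is very unlikely to arise by chance
```

With those assumed group sizes the split would be highly significant (p well below 0.001); scipy.stats.fisher_exact with alternative='greater' should give the same one-sided p-value.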

We are getting quite close to a Unified Theory of Yawning. Even though the physiology and neurology behind it is very complex, we seem to be moving in the right direction. We can even make some evolutionary hypotheses about yawning without sounding entirely speculative. For example, we might say that a population in which yawning is contagious will be more synchronised and better able to maintain its function as a unit, whether that involves moving around at specific times of day, making sure no-one wanders off after bedtime or keeping safe from enemies. Hopefully all the yawning you just did while reading this article was only because just thinking about it makes you yawn.


References

Gallup, A. C. & Gallup, G. G. Jr. (2007). Yawning as a brain cooling mechanism: Nasal breathing and forehead cooling diminish the incidence of contagious yawning. Evolutionary Psychology, 5(1), 92-101.

Also, a website entirely dedicated to yawning called Baillement, which is French for "yawn". Lots of papers to read with some comments by the website authors themselves.

Monday, 11 June 2007

Cell death, is it all about apoptosis?


Where would you be without cell death? Well, dead, I reckon. Cell death is needed in development to shape your body, for instance giving you fingers by killing the webbing between them. Not only that, but cell death is needed to shape the development of the brain. These things are done by apoptosis, what I would describe as the clean way of dealing with unwanted cells. Cells are ordered to die by signals from outside, or if something goes wrong internally they order themselves to die (for example via p53, to stop the cell becoming cancerous and killing you). When they break down they do it in such an ordered way that you wouldn’t really notice: the dead parts get eaten up by nearby cells and no nasty lysosomal enzymes are released to damage the rest of the tissue. However, not all cells die in this lovely way. Necrosis is where the cell is damaged and simply bursts open, releasing lysosomal enzymes that damage the tissue; it also causes an inflammatory response, by releasing HMGB1 from the nucleus into the extracellular environment, to clear away the corpses of the cells.

However, the two may not be so distinct. The same injury can cause either, and if the apoptosis mediators are not functioning because of mutations, necrosis can take over. If ATP levels drop in a cell going through apoptosis, necrosis kindly steps in and finishes the job. Necrosis seems to have a controlled cellular pathway involving ROS, calcium signalling and protease activation. Some of the necrosis programme appears to run in the background of apoptosis, and parts of apoptosis keep the death ordered. Necrosis is also (sometimes) beneficial to an organism: in rabbits it has been reported that excess cells in the growth plates die by necrosis.

So necrosis may be damaging, and causes a lot of brain damage in strokes and Alzheimer's disease, but it is probably a backup programme for when apoptosis fails to handle the situation, rather than simply the cell bursting open!

The figure is from: Leist M. and Jäättelä M. (2001) Four deaths and a funeral: From caspases to alternative mechanisms. Nature Reviews Molecular Cell Biology 2: 1-10.