Recent surveys of educational attainment and schooling suggest that girls are outperforming boys all the way from primary school to A-levels. The reason for this discrepancy, though, remains unknown. Many explanations have been suggested, ranging from the fact that more boys simply fall into the category of "disadvantaged" to boys lacking interest, having shorter attention spans, or not being encouraged by male role models to take learning seriously.
But could there be a simpler explanation? It is no secret that a number of educational and mental disorders have a higher incidence in males than in females, usually because the genes which influence them are linked to the X-chromosome. Imagine a couple, of whom one parent has an X-chromosome gene linked to a particular disorder. If this parent passes on the "defective" gene, it is more likely to affect boys, who have only one X-chromosome (and hence no "good" copy of the gene to counterbalance the defective one), than girls, who will have one defective and one, usually dominant, normal copy of the gene. As a result, the couple's male children may display characteristics associated with the disorder, while female ones typically will not ("normal" genes are not always dominant, but this is often the case).
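To make the arithmetic concrete, here is a minimal sketch (my own illustration, not taken from any paper) that enumerates the possible children of a carrier mother and an unaffected father for an X-linked recessive condition:

```python
from itertools import product

# X-linked recessive inheritance: a carrier mother (X_d X) and an
# unaffected father (X Y). "X_d" marks the defective copy of the gene.
mother_gametes = ["X_d", "X"]
father_gametes = ["X", "Y"]

for m, f in product(mother_gametes, father_gametes):
    if f == "Y":
        sex = "boy"
        affected = (m == "X_d")  # a boy's single X has no backup copy
    else:
        sex = "girl"
        affected = False  # one normal (usually dominant) copy masks the defect
    print(f"{m} + {f}: {sex}, {'affected' if affected else 'unaffected'}")

# Half of the boys are affected and none of the girls are - assuming,
# as noted above, that the normal copy of the gene is dominant.
```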
What does this have to do with human intelligence more generally? Well, if a gene's malfunction is associated with mental disorder, its function is clearly important to mental functioning. The concentration of these genes on the X-chromosome, moreover, suggests that this chromosome might play an important role in the evolution of human intelligence, and warrants further investigation. A paper by Zechner et al. (2001), published in the journal Trends in Genetics, has done just that, and has found that, in fact, the X-chromosome has a concentration of genes associated with cognition that is over three times higher than that for any other chromosome, even when the authors corrected for possible bias in their data collection.
And a link between the sex chromosomes and intelligence raises some interesting possibilities for its evolution. Zechner et al. (2001) discuss one of these in particular. They note that a link to the X-chromosome may implicate sexual selection in the evolution of intelligence. Sexual selection occurs when individuals of one sex "select" mates on the basis of particular characteristics and/or there is competition between members of the same sex for mates or resources. Both these types of selection (called intersexual and intrasexual selection respectively) can cause the evolution of traits which are unrelated to fitness in the traditional sense, like the male peacock's tail, which is a drain on physical resources but provides its owner with a reproductive advantage.
Many sexually selected traits, moreover, differ between the sexes. In intersexually selected traits (where one sex chooses mates on the basis of their characteristics), the possession of a trait in the selected sex must be linked to a preference for it in the other (selecting) sex. Many such traits are linked to the X-chromosome (Zechner et al. 2001). Sexual selection can also occur much faster than natural selection, providing a potential explanation for the speed with which intelligence seems to have evolved in Homo sapiens - perhaps there is no adaptive explanation for our brains at all, and they are the result of selection for intelligent mates by women...?
References
Zechner, U., Wilder, M., Kehrer-Sawatzki, H., Vogel, W., Fundele, R., & Hameister, H. (2001). A high density of X-linked genes for general cognitive ability: a run-away process shaping human evolution? Trends in Genetics, 17(12), 697-701. DOI: 10.1016/S0168-9525(01)02446-5
Friday, 26 March 2010
How Shoes Can Change Your Life - And Your Skeleton
A cross-section of a foot inside a shoe. Taken by Mattes, and downloaded from the Wikimedia Commons 26/03/2010.
You might think that shoes can only change your life if you are a Sex and the City-type shoe lover, spending huge amounts of money on designer footwear. And for most of us, that kind of dedication to shoes is fairly incomprehensible - after all, they're just things to wear to keep your feet safe from broken glass and tarmac, right? Wrong....
In fact, footwear doesn't just change your life in the way that owning that perfect pair of Jimmy Choos can affect a girl. Instead, it can influence the way you walk, the shape of your foot, and even the number and type of pathologies present in your foot bones. A recent study by Zipfel and Berger (2007), for example, found that some 70% of European males and 66% of females - that's two in every three! - have some pathological condition in their big toe, compared to only about 35% of individuals from an archaeological population which habitually walked barefoot.
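For anyone curious how such a difference would be tested, here is a hedged sketch of a standard two-proportion comparison. The counts are invented - the post above gives only rough percentages, and I have not reproduced the paper's sample sizes:

```python
from scipy.stats import chi2_contingency

# Illustrative shod-vs-unshod contrast in the spirit of Zipfel and Berger.
# Sample sizes are made up; only the rough percentages (~70% vs ~35%)
# come from the figures quoted above.
shod_affected, shod_total = 70, 100
unshod_affected, unshod_total = 35, 100

table = [
    [shod_affected, shod_total - shod_affected],
    [unshod_affected, unshod_total - unshod_affected],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.5f}")
# With these toy numbers the difference is far too large to be chance.
```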
The study found similar results for all other bones (Zipfel and Berger 2007), suggesting that the habitually unshod foot is healthier than the habitually shod foot in almost all ways. In addition, to ensure that the difference was not due to population differences, the authors included two other modern (shoe-wearing) populations, Zulu and Sotho, and found similar patterns for all three. The only major anomaly, in fact, was that while all populations (including the unshod one) showed higher proportions of damage to the ends of the bones closer to the toes, the Zulu males showed a high proportion of damage to the bone shaft (Zipfel and Berger 2007). The authors concluded that this was because the Zulu population came from a mining town, where males were likely to injure themselves at work.
Perhaps most telling, though, was the fact that high levels of bone deformation or pathological changes that obscured measurement could cause an individual to be excluded from the sample. This is normal in osteological studies: you have to be certain that the measurement you are taking is the same for each individual you study. Interestingly, Zipfel and Berger note that while a number of individuals from the three habitually shod populations fell into this category - that is, their foot bones were so damaged they could not be measured - this was true for none of the archaeological, unshod population.
So next time you buy a pair of shoes, take a moment to think if they are really comfortable and properly fitted - it may save you considerable pain later.
References
Zipfel, B., & Berger, L. (2007). Shod versus unshod: the emergence of forefoot pathology in modern humans? The Foot, 17(4), 205-213. DOI: 10.1016/j.foot.2007.06.002
Friday, 19 March 2010
Linking Footballers, Fingers and Sexual Selection
Footballers, particularly those who play at national or international levels, sometimes seem to have it all: celebrity, fitness, money and success. But rather than just supposing that this is the result of football's cultural status and importance, researchers have also suggested that it is the result of natural selection - not the survival of the fittest, since modern medicine and cultural systems ensure that, in the Western world at least, most people have the chance to live, but perhaps the success of the fittest.
The argument runs like this. In prehistoric times, humans were subject to both natural and sexual selection. Sexual selection works in two ways. Firstly, there is mate choice (intersexual selection), which may be expressed by one or both sexes - i.e. either men or women or both may select their mates according to certain criteria of judgement. Secondly, within a sex (usually males) there may be competition for resources, particularly those which enable access to mates or successful child-rearing. This second form of selection, called intrasexual selection, is what produces fighting between the males of many species. Many Western societies now frown upon direct competition in terms of fighting, but the characteristics which make for good fighters likely remain - and may be expressed as prowess in sports which require high levels of spatial judgement, speed, endurance and strength. Football may be one such sport.
If this were the case, we might expect to see correlations between levels of the male hormone testosterone, often associated with strength and other typically male characteristics, and football ability. Testosterone has also been linked to the formation of an efficient cardiovascular system in men, making it potentially even more important for fighting and/or football playing, although its action upon these systems occurs before birth (pre-natally), and therefore cannot easily be measured for large samples of football players.
To test for a link between football (or sporting) abilities and testosterone levels, then, researchers have to be a little more creative. The paper I have recently read on the subject, published in 2001 by Manning and Taylor, for instance, looks for a correlation between sporting ability in football players of various standards and the ratio of the length of the second digit (the index finger) to that of the fourth digit (the ring finger). This ratio, written as 2D:4D, is typically lower in men than in women. That is, men tend to have shorter index fingers relative to the length of the ring finger, while women have the opposite. This ratio has been explicitly linked to testosterone levels during foetal development, and does not change after a child is born (barring accidents involving the fingers), making it a good proxy for prenatal testosterone levels (Manning and Taylor 2001).
Manning and Taylor, therefore, carried out three studies looking for a link between the 2D:4D ratio, sporting ability (particularly in football) and visual-spatial judgement, thought to be an indicator of high "fitness" in men. The first two of these studies used participants from sports centres and libraries, and asked them to rank their sporting abilities on a scale from 10 ("I have represented my country") down to 0 ("I do no sport"). The first of these studies found a link between 2D:4D ratios and sporting scores such that the higher the score a participant gave, the lower their digit ratio was - and hence, presumably, the higher their pre-natal testosterone exposure had been. The second found a similar link between visual-spatial judgement scores and 2D:4D ratios. Both these relationships were quite variable (with participants at a particular sporting level having varying digit ratios), but were statistically significant - that is, highly unlikely to have arisen due to chance alone.
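As a sketch of what such an analysis involves - with entirely hypothetical measurements; Manning and Taylor's data and exact statistics are in the paper itself - computing the ratio and its correlation with sporting score might look like this:

```python
from scipy.stats import pearsonr

def digit_ratio(index_length_mm: float, ring_length_mm: float) -> float:
    """2D:4D ratio - index finger length divided by ring finger length."""
    return index_length_mm / ring_length_mm

# Hypothetical measurements: (index mm, ring mm, self-reported sport score 0-10).
participants = [
    (72.1, 75.9, 9), (70.4, 73.0, 7), (74.8, 75.2, 4),
    (71.6, 71.9, 3), (69.9, 69.5, 1), (73.2, 72.4, 0),
]
ratios = [digit_ratio(i, r) for i, r, _ in participants]
scores = [s for _, _, s in participants]

r, p = pearsonr(ratios, scores)
print(f"r = {r:.2f}, p = {p:.3f}")
# Manning and Taylor's pattern would show up as a negative correlation:
# lower 2D:4D (more prenatal testosterone) with higher sporting scores.
```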
The third study, meanwhile, examined footballers specifically and ranked them according to their league and coach references. It also involved a "control" group of non-footballers, for comparative purposes. It found not only a difference in 2D:4D ratio between footballers and controls (footballers had lower average digit ratios), but also a decrease in digit ratio the higher up the sporting scale a footballer was. So international players had lower 2D:4D ratios than Premier League players, who had lower ratios than first division club players, and so on. Coaches, interestingly, fell between internationals and Premier League players, suggesting (as is indeed the case) that they would have been highly successful footballers themselves (Manning and Taylor 2001).
So, all this suggests that footballers, and sporting professionals in general, are successful because they are some of "the fittest" in an evolutionary sense: they have high pre-natal testosterone levels, and hence well-developed "fighting" skills which can be transferred into sports. Manning and Taylor also note that there are two possible explanations of this high fitness. Firstly, as study two suggested, there may be a link between visual-spatial awareness and pre-natal testosterone levels. Alternatively, or as well, the role of testosterone in the development of the cardiovascular system may be important. Both hypotheses are supported by some evidence (for example, that exposure of male foetuses to female hormones in the womb can lead to both digit anomalies and malfunctions of the cardiovascular system), but we cannot yet discriminate between them. Still, if the great social institution and economic phenomenon that is international football could have arisen as the result of selection for male fighting abilities, we may be looking too hard for direct evolutionary explanations of other modern human traits like culture and language. Perhaps they, too, as some researchers suggest, were in part the by-products of selection for other features.
References
Manning, J.T., & Taylor, R.P. (2001). Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans. Evolution and Human Behavior, 22(1), 61-69. PMID: 11182575
Thursday, 11 March 2010
It’s Official – Fathers ARE Important to their Children’s Upbringing
David Cameron’s “Broken Britain”, with its image of moral decay driven by poverty and the breakdown of family life, may be inciting a lot of debate in parliament and the public press, but to read many studies of human evolution, you might be forgiven for thinking that the human male has never actually played a meaningful role in childcare. Most evolutionary studies focus on female life history – age at first reproduction, number of offspring and interbirth interval, for example – to the exclusion of fathers. Those studies which do consider the role of male care in the evolution of human populations usually suggest that the male role is indirect: that is, that men provide food or other resources to their wives and children, but are not involved in child-rearing directly.
A recent paper in the American Anthropologist focusing on the potential importance of direct parenting by men (Gettler 2010) is therefore a refreshing novelty. It notes that modern humans are unusual among mammals in having both a long childhood (requiring more input from caregivers) and a relatively short interbirth interval. This suggests that individuals other than a child’s mother are likely involved in its upbringing, thus reducing the pressure on mothers and enabling them to have more children.
In many modern human societies, these additional caregivers are fathers (as well as other relatives), but evolutionary hypotheses largely assume that female behaviour is the most important factor in changing reproductive behaviour. The “grandmother hypothesis”, for instance, proposes that the extended post-reproductive lifespan of women caused life-history change, by ensuring mothers could rely upon their own female relatives. Another such model, the “allomother” hypothesis, suggests that other females – maybe young ones practising their childcare, or other members of the group – were the key. Care by both parents has also been suggested as an important factor, but only relatively recently (Gettler 2010).
Gettler’s hypothesis, though, is slightly different – he suggests that it was men, not women, who caused the change, by getting involved, perhaps for the first time, in the direct care of children, in particular by helping females to carry offspring during population movements. His research is based on assessment of energy expenditure in Homo erectus and later species in our genus (working on the assumption that earlier species likely reproduced with a longer interbirth interval similar to that of a chimpanzee), and builds on the idea that for large-bodied hominins, the key tactic to reduce the energetic cost of each offspring was to “stack” them, reducing interbirth intervals, weaning infants earlier and thus lactating for shorter periods. Overall energetic costs of living were relatively high in Homo erectus, especially compared to earlier species with smaller, less “expensive” brains, but reductions in gut size suggest that both trade-offs between organs and increased dietary quality were acting to counterbalance the increased cost. This led to the traditional model of life-history change, which proposes that this higher-quality diet relied upon meat, which was hunted by males and given to females by the hunters. This would lead to monogamous pair-bonding, division of labour and, thereby, to shorter interbirth intervals, as women and children were no longer subject to the same selective pressures as their earlier counterparts. More recent studies of hunter-gatherers, in contrast, suggest that gathering provides more calories in the day-to-day life of groups, and researchers are now uncertain whether pre-modern humans would have been sufficiently efficient hunters for this provisioning model to be correct.
Gettler’s model, though, recognises that carrying infants – especially in hunter-gatherer societies, where travel distances per day can be long – may in fact be more energetically expensive than lactating, and is not usually alleviated by division of labour. Instead, he proposes, when groups moved around, it was the men who carried the offspring, reducing female energetic costs dramatically and thus (indirectly) enabling them to bear more children with shorter gaps between births. Those males who thus became directly involved with their children’s upbringing would have had a fitness advantage over those who did not, producing more offspring and perpetuating the behaviour, particularly where males and females were foraging together and ranging over large areas.
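To see the shape of the argument, here is a toy energetic comparison. All the constants are my own ballpark assumptions, not Gettler's figures, so treat it as an illustration of the trade-off rather than a reproduction of his model:

```python
# Toy comparison (illustrative numbers only) of the daily energetic cost
# of carrying an infant versus the daily cost of lactation.

# Net walking cost is often quoted at roughly 0.6 kcal per kg per km;
# carrying an infant in the arms is disproportionately expensive, so a
# penalty multiplier is applied to the infant's share of the load.
WALK_KCAL_PER_KG_KM = 0.6

def carrying_cost_kcal(infant_kg, distance_km, arm_penalty=3.0):
    """Extra kcal/day attributable to carrying the infant in the arms."""
    return WALK_KCAL_PER_KG_KM * infant_kg * distance_km * arm_penalty

LACTATION_KCAL_PER_DAY = 600  # common ballpark figure for modern humans

for km in (2, 5, 10, 15):
    carry = carrying_cost_kcal(infant_kg=8.0, distance_km=km)
    share = 100 * carry / LACTATION_KCAL_PER_DAY
    print(f"{km:>2} km/day: carrying ~{carry:.0f} kcal "
          f"({share:.0f}% of the lactation cost)")
# Whether carrying rivals lactation depends heavily on distance and on
# how the infant is carried - which is exactly the kind of
# condition-dependence Gettler's model highlights.
```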
Gettler (2010) also notes that this model emphasises the potential complexity of male-child relationships. The energetic benefit to the mother of male carrying of children only pertains in certain circumstances – for example, where foraging is roughly equally efficient in both sexes and hunting is not male-dominated and frequent. This is, in my view, even more interesting than the suggestion that direct male care was important, as it suggests that life-history models are finally coming into line with other fields of palaeoanthropology, in which the complexity of evolutionary processes has received increasing recognition in recent years. Behavioural and cultural flexibility and the occupation of variable environments have been emphasised in models of human physical evolution for a few decades now, but life-history research remained focused on the savannah hypothesis until very recently.
I’m not sure what the implications are for the Tories’ social policies on Broken Britain, though....
References
Gettler, L.T. (2010). Direct male care and hominin evolution: why male-child interaction is more than just a nice social idea. American Anthropologist, 112(1), 7-21. DOI: 10.1111/j.1548-1433.2009.01193.x
Sunday, 7 March 2010
Human and Chimpanzee Handedness
Of the many mysteries surrounding human evolution, the question of why humans, alone out of all the apes, display a strong tendency towards being right-handed is perhaps less well known than uncertainties about our locomotion, brain size and cultural capacity. Yet the fact remains: over 90% of humans are right-handed, and strongly so - there are proportionally few left-handed individuals and very few ambidextrous ones. Handedness is a manifestation of laterality - having a behaviourally dominant side or limb - and may be related to the relative dominance of the two halves of the brain. In humans, who are (mostly) right-handed, the left side of the brain, which is the side associated with language, is therefore dominant.
In chimpanzees and other apes, though, the situation is different. Laterality is reduced (with many individuals being ambidextrous), and although the results of early research are inconclusive, there is no demonstrable preference for being right-handed over being left-handed (Braccini et al. 2010). In fact, these authors, publishing in the current issue of the Journal of Human Evolution, argue that much more research into handedness in the apes is needed to establish whether the human predominance of the right side of the body is an extension of a trait present in the last common ancestor or a uniquely human character.
Braccini et al. therefore set up an experiment in which a number of chimps (of whom 15 were ambidextrous, 15 right-handed and 16 left-handed according to previous research) were given sticks to access peanut butter in the middle of plastic tubes. Each chimp was tested in three different postures: in the first, they were allowed to hold the tube, and all sat down to extract the food; in the second, the tube was suspended vertically above head-height but within a short distance of a wall, so they could support themselves with one hand while standing bipedally; and in the third, the tube was suspended above head-height but away from the wall, so the chimps had to stand unsupported (Braccini et al. 2010). The research found that the degree of laterality (the strength of the preference for one hand over the other) increased significantly as the chimps moved from seated to bipedal postures, and again from supported to unsupported bipedalism, but the level of right-handedness in the group, interestingly, did not - in fact, although there was a slight increase in the proportion using their right hands to access food when standing bipedally, on the whole the change of posture merely strengthened each chimp's earlier hand preference.
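The distinction between the strength and the direction of a preference is easiest to see with a handedness index of the kind standard in this literature; the counts below are hypothetical, not taken from the paper:

```python
def handedness_index(right_uses: int, left_uses: int) -> float:
    """HI = (R - L) / (R + L): -1 is fully left-handed, +1 fully right-handed.
    The absolute value |HI| measures the *strength* of the preference
    (laterality), independent of its direction."""
    return (right_uses - left_uses) / (right_uses + left_uses)

# Hypothetical tool-use counts for one chimp across the three postures.
postures = {
    "seated":              (14, 10),
    "supported bipedal":   (19, 5),
    "unsupported bipedal": (23, 1),
}
for posture, (r, l) in postures.items():
    hi = handedness_index(r, l)
    print(f"{posture:<20} HI = {hi:+.2f}  strength = {abs(hi):.2f}")

# Braccini et al.'s pattern: |HI| (strength) grows with postural demand,
# while the sign (direction) mostly follows each chimp's prior bias.
```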
The most interesting implication of this study, of course, is that while it does not disprove the widely-held hypothesis that tool use has driven human lateralization, it does require that an additional factor be invoked to explain the high proportion of right-handed people. In fact, Braccini et al.'s paper suggests that a change to bipedal locomotion, especially if associated with tool use and manipulation of the environment, might indeed have enhanced lateralization in early hominins substantially. The subsequent change which led to 90% being right-handed might, in fact, be the enhanced lateralization of the brain which accompanied the origins of language - a skill primarily located in the left half of the brain. While Braccini et al.'s paper does not provide any direct evidence for this change, it does support the existence of a second, directional, shift in lateralization which, furthermore, must have arisen after the human-chimpanzee split. However, this problem is of a chicken-and-egg nature - we cannot know whether cerebral lateralization occurred before, or was enabled as a result of, increases in right-handedness and left-brain dominance. In addition, it may be that there is no selective advantage to being right-handed at all, because right-handedness is the result of selection for other features. It will be interesting to see how the debate turns out!
References
Braccini, S., Lambeth, S., Schapiro, S., & Fitch, W.T. (2010). Bipedal tool use strengthens chimpanzee hand preferences. Journal of Human Evolution, 58(3), 234-241. PMID: 20089294
Saturday, 6 March 2010
Fossilisation and Vegetation Patterns: Another Study of Decay and its Implications
Following on from my recent post about the decay of chordate animals, I have encountered a related paper, this time from Quaternary Research and focusing on the preservation of plants in middens (rubbish dumps) constructed by woodrats. This paper, written by Nowak et al. (2000), explores the question of how well these middens represent the vegetation surrounding them, by developing a method which calculates the probability that species that are missing from the midden are actually not present in the landscape.
To do this, Nowak et al. carry out two surveys. In one, they examine 27 woodrat middens less than 200 years old and compare them with the vegetation that currently surrounds them, and in the other, they compare middens from the same location and of the same age. From the first study, they obtain a "best case" scenario, which allows them to estimate the upper limit of the probability that midden remains accurately represent the surrounding vegetation and the lower limit of the probability that the midden is not representative of vegetation (Nowak et al. 2000). From the second experiment, which is more realistic (because the authors do not actually know the exact nature of the vegetation surrounding the midden), they can calculate the lower bound of the probability of accurate representation and the upper bound of the probability of inaccuracy - a worst case scenario.
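As a rough sketch of the logic - my own simplification with invented species lists, rather than Nowak et al.'s actual procedure - the "best case" estimate boils down to asking, species by species, how often presence in the surveyed vegetation fails to show up in the paired midden:

```python
from collections import defaultdict

# Hypothetical data: species found in each midden and in its paired
# survey of the surrounding vegetation.
middens = [{"juniper", "pinyon"}, {"juniper", "sagebrush"}, {"juniper"}]
surveys = [{"juniper", "pinyon", "grass"},
           {"juniper", "sagebrush", "grass"},
           {"juniper", "pinyon", "grass"}]

present = defaultdict(int)   # times a species occurred in the landscape
missed = defaultdict(int)    # ...but was absent from the paired midden

for midden, survey in zip(middens, surveys):
    for species in survey:
        present[species] += 1
        if species not in midden:
            missed[species] += 1

for species in sorted(present):
    p_miss = missed[species] / present[species]
    print(f"{species:<10} P(absent from midden | present) = {p_miss:.2f}")

# Grasses show the worst figures in the real study (up to ~40%) - exactly
# the kind of species-specific bias this estimate is designed to expose.
```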
Their findings are promising for palaeontologists interested in plant fossils. Overall, the probability of a false interpretation of vegetation pattern based on midden composition is between 7 and 11%, and for some species it is between 0 and 6%. The grasses are the obvious exception, however, with inaccurate representation of grass species potentially as high as 40% (Nowak et al. 2000), although for a minority of species the results were inconclusive because of difficulties identifying fossil specimens, random fluctuations in probabilities, small sample sizes or (possibly) selection against those species by the woodrats. While these results are promising for palaeontological investigations of vegetation patterns, and potentially extensible to other organisms (like animals), I cannot help but feel that for most species - particularly those which are now extinct - carrying out analyses to this level of detail will be impossible, and making assumptions based on studies of extant relatives potentially risky; there is no way to know how far we can generalise from results like these. That said, any moves forward in the study of palaeontological data quality are highly valuable, and, if these studies are continued, they may prove useful across the board, at least in providing a ballpark probability that absence from a site actually implies absence from the surrounding area. I will be watching out for further studies of this type focusing on mammals - woodrats do not range far from their middens, so the scientists only had to evaluate 100 m circles of vegetation, but I imagine surveying the surroundings of recent mammal assemblages will be much more arduous.
References
Nowak, R. (2000). Probability that a fossil absent from a sample is also absent from the paleolandscape. Quaternary Research, 54(1), 144-154. DOI: 10.1006/qres.2000.2143
Monday, 1 March 2010
Decay Processes and Chordate Phylogeny
I have just read a Nature paper reporting some experimental work studying the pattern of decay in two soft-bodied species, Lampetra and Branchiostoma, which are thought to be the best proxies of the early chordates (chordates are the group of animals that includes the vertebrates and those invertebrates that are their closest relatives).
The authors, Sansom et al. (2010), note that our understanding of the early evolution of the chordates is very sparse, in large part because the early chordates were entirely soft-bodied and are only rarely preserved, and in part because the interpretation of those soft-bodied fossils we do have is complex. It is especially hard to distinguish the earliest true chordates from their close, non-chordate relatives (called the "stem chordates"). They suggest that this might be rectified by a better understanding of the sequence in which the features of early chordates decay. In particular, we need to know whether the characters which define the true chordates decay relatively fast upon the death of their bearer, since, if this is true, partially-decayed true chordates will be misinterpreted as the stem chordates they now resemble (Sansom et al. 2010).
And, indeed, this is exactly what Sansom et al. found. In their experiments, which tracked the order in which the features of the two species decayed, those which were lost first were those which were most informative about the relationships between early chordates. As a result, the relative abundance of stem chordates in comparison with true chordates in the fossil record may be the result of the incomplete preservation of the crucial characters which would enable researchers to identify their real relationships.
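A toy simulation shows why this matters. The characters and their decay order below are hypothetical, but they capture the paper's finding that the phylogenetically informative characters sit at the front of the decay queue:

```python
# Characters listed from first-to-decay to last-to-decay. "diagnostic"
# characters identify a true chordate; "shared" ones also occur in stem
# chordates. (Hypothetical labels, for illustration only.)
decay_order = [
    ("eyes",      "diagnostic"),
    ("brain",     "diagnostic"),
    ("myomeres",  "shared"),
    ("notochord", "shared"),
]

def surviving_characters(n_lost):
    """Characters still recognisable after the first n_lost have decayed."""
    return decay_order[n_lost:]

for n in range(len(decay_order)):
    left = surviving_characters(n)
    names = [name for name, _ in left]
    diagnostic = any(kind == "diagnostic" for _, kind in left)
    verdict = ("identifiable as a true chordate" if diagnostic
               else "misread as a stem chordate")
    print(f"after losing {n} character(s): {names} -> {verdict}")

# Once the diagnostic characters have rotted away, a partly decayed true
# chordate retains only the long-lived shared characters and slips
# "stemward" in our interpretation.
```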
I think this paper is fascinating. At the same time, though, if it is true that the characteristics which are most informative about early chordate evolution are those which decay first, it is difficult to see how we will ever sort the true chordates from their stem chordate relatives, barring finds of even more exceptionally preserved fossils than we already have from the relevant period. Despite this, knowing more about taphonomy (the processes of decay and destruction that affect dead organisms) can only inform our reconstructions of the evolutionary history of life, even if some parts of that history can never be fully resolved.
References
Sansom, R., Gabbott, S., & Purnell, M. (2010). Non-random decay of chordate characters causes bias in fossil interpretation. Nature, 463(7282), 797-800. DOI: 10.1038/nature08745