Where can I obtain a legal 'Do Not Switch Me Off' (DNSMO) order? It is extremely difficult to detect consciousness in coma and PVS patients so how can anyone have the right to decide to terminate their lives?
The unfortunate (or fortunate depending on how you look at it) tale of Rom Houben, classed as Persistent Vegetative State (PVS) and stuck with that label for over 20 years, demonstrates our current lamentable lack of knowledge about how to detect 'conscious' activity in the brain. I would have thought, given this situation, that the default position would be to leave all such cases connected to all necessary life support until we have the knowledge to deal with them correctly. But is this the default position?
I very much hope that current scientific studies and brain imaging techniques such as fMRI are forcing neurologists and neurophysiologists to move away from some of the prevailing opinions of ten years ago, such as those of the American Medical Association (AMA), whose conclusions on the subject are cited in 'The End of Life: Medical Considerations - Persistent Vegetative State'.
The terminology here is confusing because the brain is a massively complex organ, but medical specialists in the field need a way of classifying the presence, absence or degree of consciousness in their patients, so they have come up with a system of labelling: terms such as PVS, MCS (Minimally Conscious State) and coma. There are many problems with this labelling system. The problem for Rom Houben was that he got stuck with a label which meant that there was, for two decades, minimal intervention to find out what was really happening in his brain.
It seems to me that the labels are really about the legal classification of the presence or absence of consciousness, so that specialists and lawyers can feel comfortable giving advice to the families and advocating decisions about withdrawal of life support (or non-intervention in secondary complications etc). This all becomes horribly financial. Maintaining a PVS patient on life support is expensive - perhaps in the region of £100,000 per year. No wonder there is so much pressure to make a decision on withdrawal.
There will certainly be many cases of brain damage where it is obvious to a neurologist that conscious thought has been wiped out. Cases where little is left intact but 'old brain' structures providing autonomic functions can be clear cut. But often there will be some degree of uncertainty about whether the patient has lost all consciousness. Here's another obvious terminology problem: what constitutes consciousness anyway?
I'm not approaching this from an ethical or financial standpoint. How about looking at it from the point of view of probabilities? There is a high probability that technological discoveries in the field of brain scanning will mean that some PVS-classed patients, such as Mr Houben, once re-evaluated are found to be in a 'locked in' state, conscious but unable to communicate. Others will be found to be in a dreamlike state - living an internal life but unlikely ever to return to the 'real' world. Yet others will be found to be teetering on the border, just requiring the correct delicate intervention to bring them back. There is also a high probability that the appropriate 'delicate intervention' techniques and technologies will become available.
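To make that probabilistic framing concrete, here is a minimal sketch in Python. The outcome categories are the ones described above; the probabilities are entirely hypothetical placeholders (nobody knows the real numbers, which is rather the point), but even modest values illustrate the argument against a default of withdrawal.

```python
# A minimal sketch of the probabilistic argument, using entirely
# hypothetical numbers. If even a modest fraction of PVS-labelled
# patients turn out to be conscious or recoverable, withdrawal as the
# default carries a large expected moral cost.

# Hypothetical re-evaluation outcomes for a PVS-labelled patient:
outcomes = {
    "locked_in": 0.05,         # conscious but unable to communicate
    "dreamlike": 0.10,         # internal life, unlikely to return
    "borderline": 0.05,        # recoverable with the right intervention
    "no_consciousness": 0.80,  # conscious thought genuinely gone
}

assert abs(sum(outcomes.values()) - 1.0) < 1e-9

# Probability that switching off ends a life that is, or could become,
# conscious:
p_conscious_or_recoverable = sum(
    p for label, p in outcomes.items() if label != "no_consciousness"
)
print(f"P(conscious or recoverable) = {p_conscious_or_recoverable:.2f}")
```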
Nobody knows how many patients are in the above-mentioned states. That's the point - the brain is too complex for specialists to know for sure. The classifications don't take account of that, or what 'might' be possible for these patients in future. My point is that they are still alive and they can wait to find out what will be discovered and what will be possible for them. Just don't switch them off.
There's a societal attitude problem here also. This one is the 'death with dignity' meme. I have absolutely no idea what that's all about. Death is the most undignified proposition imaginable. If their brains are gone they won't care about dignity. If they are still alive then give them the dignified chance to let you know. If they are in pain give them massive but non-lethal quantities of pain-relief medication. If you don't know which of these situations pertains then don't use death as the default position. The 'dignity' that loved ones 'seek' for the PVS patient is imposed by them.
Perhaps cases such as that of Rom Houben will spur an almost instantaneous rethink on the treatment of such patients, with DNSMO stickers appearing on beds and wheelchairs in hospitals across the globe. But perhaps not. Deathism runs deep.
02 December 2009
25 November 2009
'The Fallen' are just plain dead
Despite the techno-warfare predictions of countless sci-fi novels and questionable computer games, militarism has no place in the future of humanity. Why not? Because if warfare survives then we won't. Militarism will become increasingly old-hat, and so will its language.
It is not logical to posit a medium to distant future of nanotech-based weaponry with soldiers clad in robotic exoskeletons blasting each other to smithereens by disrupting each other's bodies at the molecular level, or other exotic and hideous means. This is an example of the lack of scope in much of science fiction, and part of the tacit assumption that the future will be just like now - only more so. Civilian technologies have emerged from death-tech in the past but if we are to survive then it will be partly as a result of the availability of technology to all rendering militarism redundant.
Nanotech could be taken as a case in point. Let's assume that some sizable percentage of all wars fought are over some kind of resource. What would be the point of fighting a war about oil, for example, if molecular manufacturing means that any citizen can make whatever they need from the comfort of their own home?
On a wider scale the planet will increasingly face existential risks. Troublemakers could easily construct world-killing devices in small and secret labs. We would never know what hit us. Not even time to get one of your exoskeletal socks on.
Every advance affects every other advance and the effect is exponential. Either you find a way to arrest war at the root, even memetic, level or there is no future to flounce about in with your implausible gun.
I find war euphemisms incredibly ugly. I hear much of 'The Fallen'. What happens if we deconstruct this particular euphemism? Let's find out:
It is likely that I am a person of limited financial means. I may not have done particularly well at school and, if tested, it is likely that my IQ is average to low. I may have a family member in the military, a brother perhaps, who I look up to. I do my training for warfare and quickly become institutionalised. I fit in here. During my second month in Afghanistan a bullet fired from a Type 56 rifle enters the back of my cranium, at the occipital bone, causing it to explode due to hydrostatic shock. When the headless body arrives back in the motherland it appears that it has 'fallen'. The type of falling involved is unclear but unquestioned. Most do not surmise that it is the type of falling where, after a short twitching delay, a headless body slumps into the sand. Most assume that it is a more poetic type of 'falling', where a valiant and idealistic young man sacrifices his short life for his country and, sort of, 'falls' from life into gallant death.
Just to say that he is dead would, surely, be less of an insult. His consciousness no longer exists but how can his family bear this sickly verbiage?
This is just one example of many I could choose. Deconstruct away at your leisure but do deconstruct. The 'Fallen' euphemism is an example of an insidious class of 'heroic death' memes which abound in militarised societies like ours. The general populace are infected with the meme via the vector of easily accessible and highly-compliant media channels. The pomp, the ceremony, the ageing and complicit royalty. All very predictable, and very very familiar.
But it's not just about bad judgement of the hierarchies and the complicity of the citizenry. It's the soldiers themselves. They are the most complicit. They are the most infected with the meme, and it will ultimately kill some proportion of them. We also have to face the ugly truth that some percentage of them - it would be interesting but tricky to find out how high a percentage - go to war because they want to be involved in chaos and carnage. This being the case, it means that their brains are malfunctioning.
This is where we are now but I'm an optimist and I don't think that this can persist. I'd like to use language to de-glamorise war. We'd have to input this language early in the life of a child. The language would be clear and stark. Wars kill people. Guns and tanks are tedious. Soldiering is for failures.
War is about as un-futuristic as it gets.
18 November 2009
My Placebo
The effects of placebo are fascinating. But are they really that surprising?
If we accept the highly plastic properties of the brain then it seems to follow that a patient's brain structure can be physically altered by the perception that they are taking something that is good for them. The effect is heightened by the fact that the pill is being prescribed by a professional in the field of medicine, who must know what he/she is doing.
The placebo effect gets a bad press. The beneficial effects of placebo appear to be treated as a negative because the patient was 'fooled' into getting better. Does this actually matter? This is just semantics. If it is the case that your mind 'fools' your body into getting better then there must be myriad ways in which your mind does this all the time without placebo. Is this 'conned' wellness inferior to 'real' wellness?
I have also come across this attitude from people who have used placebo-centred treatment such as homeopathy. Obviously they think that homeopathy is not placebo but when I explain that it is, and why it is, they can feel embarrassed. Why should they feel that way? If they went to see a nice person who listened to them and who spoke sympathetically about their condition, then gave them some harmless pills, and then they got better, surely they should be delighted. I would be. This is a wonderful beneficial effect of neurobiological processes, not a cause for embarrassment.
I have read that the placebo effect can work even in cases where the patient is told that he/she is being given a placebo in the form of a sugar pill. The doctor speaks calmly and sympathetically to the patient, explaining that there is scientific evidence which shows that these pills can have a beneficial effect in some cases. Why would this work? How can the patient be fooled if the sham is revealed to them before they even start the treatment? Well, all the other elements of the system are still in place - the sympathetic health professional, the thrice daily pill-taking ritual, the follow-up visits to the professional to talk about the condition, and so on. I would venture that, as a result of this, other crucial 'hidden' elements are still in place - the health professional as de-facto psychotherapist, the pill ritual as a regular trigger for mood, appetite and sleep-affecting neurotransmitters such as serotonin, and the ongoing care as a longer-term enabler/consolidator of neuroplastic change via increased levels of (speculatively) plasticity-associated neuromodulators such as oxytocin.
Maybe we should be more positive and up-front about the placebo effect. Towards this end I have made up my own placebo, Abcepol (made by Hedmed), and put it up for sale on eBay. It's just a bit of fun really but there is a serious point. I want to see if people are prepared to pay for a placebo when it clearly states that that's exactly what it is. If anyone buys it I'll give the proceeds to a neuroscience-related charity.
04 November 2009
Psychological Continuity
If you were to be duplicated just before you died and the duplicate survived, would it be you?
This can be seen as a deep philosophical question with a myriad of differently nuanced answers. But I'm not much of a philosopher so my answer is simply "no". A duplicate of your entire person, or a perfect molecular copy of your brain, could be just like you for some period of time but it would not be you.
The soul-mongers may be rubbing their hands with glee at this point but this has nothing to do with them or their fantastical constructs. Their agenda is to promote the notion of a unique ethereal part that "lives on" somehow after you die. By their definition the duplicate person would have no soul - one person one soul - that's all that God hands out (then takes back). What a quaint and morbid idea.
We cannot make such a copy at present so you can think of this as a thought experiment. In the relatively near future we will be able to make such copies and there will be various ways of doing this. The copying will not be the problem; the method of the state/substrate transfer process may be.
Let's say that the scientists doing the copying decide to conceal which is the "original" and which the "duplicate", even from themselves. Immediately after completion of the copy process both entities would insist that they are the "real" version of the person and both would be correct. If you dispute this then ask: in what sense would they not both be correct? Let's not get hung up on which of them would be composed of the most recently rearranged atomic material. But that is all it really comes down to. Then divergence sets in.
When does the divergence between the two entities set in? How much do they diverge? Pretty much immediately or somewhat later. A little or vastly. What does it matter? They diverge, they are not the same person. This could be an excellent moral thought experiment for the religious if they were a little more imaginative - they could have a 'soul dilution' construct with each duplicate being, in comparison to the pre-duplication 'original', a kind of watery orange squash in the soul department.
There is, of course, no dilution. Both versions are valid entities ready to go back out into the cosmos on their own divergent paths, no matter how closely they stick together. But weren't we talking about the death of the "original"? This kind of duplication wouldn't save you, so what would? Well, we know that we are constantly in the process of being rebuilt at the molecular level and that, every few years, every atom in our bodies will have been replaced. So in what sense are we the "same person" as a few years previously? The key is that we feel the same because of our memories and the continuous "psychological flow" of our being. When we think back we don't usually detect vast gaps prior to which we suspect that we may have been somebody else. The psychological continuity of the self is an illusion but a very useful one, and one that we feel we must maintain in order for "me" to mean anything. So if there is to be any kind of 'movement' of our 'selves' from one state/substrate to another there must be a transition process which maintains psychological continuity.
I have imagined that this could be done with a future perfected version of a virtual brain akin to the Blue Brain project. A perfect human/virtual brain interface would also be required. The neocortical columns of the dying person are wired to the virtual brain and data communication begins at whatever level of resolution/fidelity is required. At first the 'generic' virtual brain is acting only as a relay so that the patient's columns can adjust to the new environment. Gradually some of the less-active columns in the bio brain could begin to 'share' some thought/memory structures with the virtual, allowing the virtual brain to 'learn' the bio brain's structure and patterns. The virtual columns gradually take on more and more responsibility until the virtual brain is handling entire neocortical areas, and the virtual and bio are operating as one entity. The process continues until only some autonomic functions, such as regulation of blood oxygenation, are being handled by the bio brain. The virtual brain does not strictly require autonomic functions but some simulation of those functions would be required in order to prevent the patient from suffering a kind of ontological shock brought on by the realisation of the substrate transfer. If required the biological body and brain stem can continue to function in tandem with the virtual brain indefinitely but the transfer of the 'self' to the new substrate is now complete.
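As a thought-experiment aid, here is a toy sketch of that staged hand-over in Python. Everything in it is hypothetical - 'columns' reduced to a single activity number, a fidelity score that conveniently converges - it simply makes the ordering of the process explicit: least-active columns migrate first, and the biological brain is never interrupted.

```python
import random

class Column:
    """Toy stand-in for a neocortical column: an id, an activity level,
    and a flag recording which substrate currently handles it."""
    def __init__(self, cid):
        self.cid = cid
        self.activity = random.random()
        self.substrate = "bio"

def learn_column(col, steps=5):
    """Hypothetical learning loop: the virtual brain observes a column
    until its model fidelity converges (toy numbers, halving the
    remaining error on each pass)."""
    fidelity = 0.0
    for _ in range(steps):
        fidelity = 1.0 - (1.0 - fidelity) * 0.5
    return fidelity

def transfer(columns, threshold=0.95):
    """Migrate responsibility column by column, least-active first,
    so the patient's ongoing experience is never interrupted."""
    for col in sorted(columns, key=lambda c: c.activity):
        if learn_column(col) >= threshold:
            col.substrate = "virtual"
    return columns

cols = transfer([Column(i) for i in range(8)])
print(sum(c.substrate == "virtual" for c in cols),
      "of", len(cols), "columns migrated")
```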
I don't think it's looking good for teleportation. A teleport would be a kind of duplicator/destroyer device. Let the duplicates live. Anything else would be unthinkable. But this isn't about duplication, it's about transfer. And here's where it does get philosophical. Have you ever felt like you have, even for a short while, become "one" with another person? It can be joyful, unsettling or both. I think it's a realisation that our boundaries are mutable and that we could, ultimately, accept a new substrate as home.
Labels: blue brain, cortical column, death, psychological continuity, teleport
30 September 2009
Cognitive Democracy
It is common, and tedious, to hear people discussing the poor quality of their political representatives. They often think those that represent them are mentally deficient or plain mendacious. They may be right on both counts but how would we know before it was too late?
What attracts people to careers in politics? Why do they appear to abandon deeply held principles once embedded in the system? Why do some turn rotten and steal from the societies that elected them? The structure-centric responses to these questions have been hotly debated for centuries. But, ultimately, the answers must be to do with brains.
When you vote for a representative, particularly when you choose on the basis of what you perceive to be their personality, you should have no reason to believe that you have made a logical choice. What, after all, really formed the basis of your choice? Usually you will not be personally acquainted with the politician in question. You may have seen them on the television a few times. They may represent a Party that you feel an affinity with. He/she may be 'the best of a bad bunch'. It's not much to go on.
People will choose to go into politics for a variety of reasons. Some will be 'conviction' politicians with a real sense of what they believe to be morally correct; others will see politics as a useful (and sometimes easy) career ladder; others will spot an opportunity for power and influence. Their capabilities will vary enormously: some will be from the intellectual elite (although the dearth of scientists in political life makes this less likely); others will struggle to think rationally and coherently. Either could end up running a nation.
It would be useful to have the tools to measure these intentions and capabilities before we cast our vote.
There is, of course, massive debate about how these factors can be reliably measured. For example, IQ tests are often discredited. Emotional intelligence is now given more credence but the markers are hard to identify. Psychometrics of one kind or another are often used as part of job interviews. Intensive psychological evaluations are undertaken on patients in psychiatric institutions. The tools are by no means perfect but perhaps they should be utilised on those seeking to be our political representatives, and the results made freely available to us.
'League tables' feature often in the news at present: schools, hospitals (including individual surgeons), police forces and so on. League tables of political performance and consistency, while useful, are not what I am talking about here. If a person chooses to put him/herself forward to represent us and to have a measure of say in our lives at this most intimate level do we not need to know a great deal more about their intentions and capabilities before we are in a position to make a logical choice?
It could be argued that this kind of testing would be invasive or a breach of human rights. I would disagree. Testing would be a voluntary part of the qualification process to stand for elected office. No coercion would be required or involved.
It should be an ambition of an enlightened society to be represented by the right people. Not necessarily the best and brightest but a healthy combination of the brightest, most stable, least corruptible, most logical, most nurturing, least mendacious etc. Good eggs not rotten apples. Effective, understanding and striving voices. Not brutish, memetically infectious demagogues.
Achieving this may require assessment of candidate suitability via the most rigorous scientific testing measures available.
Can we rely on the enlightened intellectual and emotional altruism of the few 'incidentals' to shape our future societies? Or do we need to find a humane and reasoned method of bringing just those candidates to the fore?
Labels: cognitive, democracy, politics, psychometric testing
23 September 2009
Computing with the Multiverse
Quantum computers are real. Their capabilities may be limited at present but they exist. This may not seem like an extraordinary statement until you fully consider what a quantum computer does.
It does not matter that the practical applications of even an advanced future quantum computer would likely be limited to factoring vast numbers for secure encryption purposes. In many ways conceiving of this potential 'killer app' is just a way to secure the funding to build them in the first place. No, what really matters is that the calculations involved just aren't possible within the bounds of this Universe alone.
So where are the calculations being done? They are being done in tandem with the requisite number of equivalent quantum computers, staffed by the requisite number of equivalent copies of the person/people running the quantum computers, in the requisite number of equivalent universes required to complete the calculation.
As the wonderful John Gribbin points out in his recent book 'In Search of the Multiverse', this is not equivalent to the 'phase spaces' used by mathematicians to undertake complex calculations requiring theoretical extra dimensions. 'Phase spaces' work but you cannot use them to do the kind of calculations a quantum computer can do - because the (to all intents and purposes infinite) computing power of the Multiverse is simply not available without one.
If this very real example of how the Multiverse can be used for a practical application leaves you reeling then you are not alone. There is no guarantee that even the operators of these devices are absorbing the full reality of the Multiverse when they talk about 'spin direction' and 'superposition'. If superposition works and can be used for calculation purposes then what does it matter?
At the quantum level - the level of the almost impossibly tiny - 'objects' do not behave as they do at our level. Their position is not fixed but consists of 'clouds' of positional probabilities. They can be in multiple places at the same time. This may seem impossible to us but it is simply how nature behaves at the quantum scale. The property of 'fixedness' only arises at larger scales, including our own. This quantum property can be harnessed for computation by using quantum superpositions to 'represent' binary digits - allowing one to undertake inconceivably large numbers of calculations all at the same time.
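For anyone who wants to see "superpositions representing binary digits" in symbols, here is a minimal state-vector sketch in Python/NumPy. It simulates the mathematics on a classical machine, so it needs memory exponential in the number of qubits - which is exactly the point: a real quantum computer gets this parallelism without that cost.

```python
import numpy as np

n = 3                                          # a three-qubit register
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard gate

# Build a Hadamard on all three qubits via the tensor (Kronecker) product.
H_all = H
for _ in range(n - 1):
    H_all = np.kron(H_all, H)

# Start in |000> and apply the Hadamards: the register is now an equal
# superposition of all 2**n = 8 classical bitstrings at once.
state = np.zeros(2**n)
state[0] = 1.0
state = H_all @ state

print(np.round(state, 3))   # amplitude 1/sqrt(8) on every bitstring

# Any further gate is one matrix applied to this vector: a single
# operation updates every branch of the superposition simultaneously.
```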
My reading of Gribbin's explanation of the process is that the quantum computers in the various universes 'call' for the answer all at the same time. The superpositions allow the calculation to be 'split' and undertaken across the requisite part of the Multiverse. The answer is then instantly 'collapsed' back to the observer in each universe.
Why am I trying to explain this when I don't understand it and do not have the mathematical language to describe it? Because I feel a sense of wonder at this and wish to teach myself a language for describing it to myself.
I want to internalise it and I think it might be enjoyable for others to internalise it too.
Labels: john gribbin, multiverse, quantum computer
16 September 2009
How's my RTPJ?
Have you always been aware of the beliefs of others or did you just grow that way?
With an inhibited Right Temporo-Parietal Junction (RTPJ) would you struggle to understand that others can have beliefs different to yours? Would this inhibition cloud your ability to make moral judgements? Current neuroscientific research using fMRI suggests increased activity in this small brain region when volunteers are tasked with thinking about various situations from the point of view of another human being.
Studies by Rebecca Saxe and others see the RTPJ as being key to the morality aspects central to a cohesive "theory-of-mind". Her studies have found that the abilities of children to reason out and judge scenarios of "people thinking about thinking people" develop markedly and rapidly between the ages of approximately three and seven years old.
An example would be where a child, with the aid of props, was asked to envision a man putting a sandwich down on a box. The man then leaves and the sandwich gets blown off the box by the wind. A second man comes along and puts his sandwich down on the box, not seeing the one on the ground, then leaves. The child, once given the scenario, is asked which sandwich the first man will take when he returns. According to the Saxe studies the children would respond thus:
- The 3-year-old says that the first man will take the sandwich on the ground although it is dirty because it is "his". When told that the first man actually takes the one on the box the child expresses surprise - presumably meaning that she cannot understand that the first man would not know that his was the one on the ground, so she thinks it unfair that he took the one on the box.
- The 5-year-old says that the first man will take the sandwich on the box. This appears to show a more developed understanding of the thoughts of others because the child understands that the first man would think (albeit mistakenly) that his sandwich was the one on the box. However, the 5-year-old still says that it is "bad" of the first man to take the sandwich on the box. Is this evidence that the child has not yet developed the moral capacity to know that the first man cannot be blamed for not knowing he was mistaken?
- The 7-year-old says that the first man will take the sandwich on the box. Crucially, she also knows that the first man should take no blame for his mistake because it was a simple accident/misunderstanding.
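The scenario above is a classic false-belief task, and its core logic can be written down very compactly. The following toy Python sketch is schematic rather than a model of the Saxe studies: it just makes explicit that the younger child predicts from the true state of the world, while the older child predicts from the man's own (mistaken) belief.

```python
def predict_mans_choice(world, mans_belief, has_theory_of_mind):
    """Without theory of mind, predict from the true state of the world;
    with it, predict from the man's own belief about the world."""
    return mans_belief if has_theory_of_mind else world

world = {"his_sandwich": "on the ground"}     # what actually happened
mans_belief = {"his_sandwich": "on the box"}  # he never saw it blow off

print("3-year-old predicts he takes the one",
      predict_mans_choice(world, mans_belief, False)["his_sandwich"])
print("5/7-year-old predicts he takes the one",
      predict_mans_choice(world, mans_belief, True)["his_sandwich"])
# The difference between 5 and 7 is the moral judgement - excusing an
# honest mistake - which this belief-tracking sketch cannot capture.
```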
Other work, such as that of JP Mitchell, does not appear to directly contradict the Saxe papers but does, again, bring up the issue of "localisation". He appears to be saying that there is no current conclusive proof that the RTPJ is solely responsible for this kind of reasoning, despite the fact that it lights up under fMRI when these judgement tasks are undertaken.
It must be very tempting for neuroscientists to fit specific cognitive functions to specific brain regions, especially now that fMRI studies seem to corroborate some of these theories. It's also much easier to explain to laypeople than telling them that fMRI studies suggest increased blood flow in areas that might be associated with a particular function when subjects undertake cognitive tasks that might stimulate blood flow to the region in question. While the "localisations" may be broadly correct there seems to be a bit too much "shoehorning" going on in some of these studies.
There are, of course, ethical issues connected to the potential ability to disrupt a person's ability to make reasoned moral judgements, or to actually change their beliefs by electro-mechanical means! This ability doesn't appear to be on the horizon any time soon. I have no fear of, and can see a lot of value in, this kind of work and, unlike Dr Saxe, I do think that this will help us to understand "the hard problem" of consciousness.
This is me signing off thinking about me thinking about you thinking about your beliefs about how you think about the thoughts of thinking people.
27 August 2009
Synthetic Biology
SynBio is coming. While we have been busy getting in a lather about nanotech, synthetic biology has been creeping up on us at incredible pace.
The field of SynBio combines engineering and biology in pursuit of the creation of novel forms of synthetic life with customised functions. The idea is not to create some "imitation" of life but genuine functioning cellular forms. A "second genesis". The technology is more advanced than many realise. Some experts in the field claim to be less than a year away from creating the first complete system.
Versions of simpler elements of the jigsaw, such as a cell wall formed from fatty acids, have been around for a while but one company has now created a fully functional ribosome. The protein biosynthesis process undertaken by the ribosome translates mRNA into protein. Ribosomes are like the protein micro-factories of cells. With this incredibly complex part of the problem appearing to have been solved it looks like it won't be long before the synbio kit of parts is complete.
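As a sketch of the job the ribosome does, here is the translation step in a few lines of Python. The codon table is a tiny excerpt of the real genetic code; a real ribosome also handles initiation, tRNA matching and much else that is elided here.

```python
# A tiny excerpt of the real genetic code (mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read the mRNA three bases (one codon) at a time and emit the
    corresponding amino acid, stopping at the first stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i+3], "?")
        if amino == "STOP":
            break
        peptide.append(amino)
    return "-".join(peptide)

print(translate("AUGUUUGGCGCUUGGUAA"))   # Met-Phe-Gly-Ala-Trp
```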
With the ability to make customised microscopic lifeforms, such as bacteria to clean up man-made toxins or specialised antibodies to attack specific types of cancer cells in precise locations, the microscopic world will be open to greater and more direct intervention than ever before. We had perhaps assumed that we would have to wait for the arrival of full-blown nanotech, with its molecular submarines and cutting gear, to see this type of revolution.
But all the while it has not only been the physicists who have seen the potential of treating microscopic objects as machine parts. Synthetic biologists can grow the parts they require for their machines. They have realised that biology is also an engineering substrate. This is way beyond transgenics, where genetic material conferring desirable properties is mixed from one lifeform into another. This is about understanding the pre-evolved building blocks of life, classifying them, replicating them and assembling them into new forms. Those new lifeforms can then, if required, be evolved further in the lab.
Synthetic Biology is an exciting new field, the results of which will soon explode into the headlines. All the old arguments about "playing God" will be brought forth with greater vehemence and incomprehension than ever.
We know that all life evolved from one "ancestor" cell. It only had to happen and take hold once to give rise to all life on this planet. We're now on the cusp of seeing a brand new form of life - one created by human beings.
Labels: synbio, synthetic biology
19 August 2009
Neuroplasticity and handedness
I used to be ambidextrous.
When I was a child I had the ability to use either hand for most tasks. It was useful but also confusing. I had a nagging feeling that I should be settling on exactly which hand I should use to draw my pictures with. There wasn't much pressure from the adults around me to settle this internal argument but any suggestions I did receive from outside always favoured the right.
I now wonder if I lost something when I finally settled on using my right hand. Some of the old 'confusion' is still there - when cutting with scissors I use my left hand; when playing pool or darts my left hand dominates.
Having now read a fair amount about neuroplasticity - Norman Doidge's "The Brain That Changes Itself" is a particular favourite - I think that my ambidextrous abilities were a result of an earlier and more readily plastic phase of my brain's development. Once settled into a less readily plastic phase the right hand came to dominate. But can that process cause an imbalance?
I would assume that an imbalance would be more likely in an obviously left-handed child forced to use their right. This still happens - the old sinister/dexter nonsense. But when this happens more internally can it be a case of the left brain (right hand) achieving dominance over the right brain (left hand)? The probable answer is that it is much more complicated than that. Left-handed people do not necessarily have a more dominant right brain than right-handed people do. But I wonder if some of the 'flexibility' of thought and motion from the more plastic and ambidextrous phase would be better retained than lost.
I have started brushing my teeth and sometimes shaving with my left hand. This feels awkward and unnatural at first but it does appear to get easier after a few weeks. A kind of 're-learning' may be happening.
This would seem to tie in somewhat with current neuroplasticity research. There is growing evidence, for example, that stroke victims who lose the use of a limb can learn to use that limb again by restricting the movement of the good limb. In the case of an arm the good limb could be put in a tight sling so that the non/semi-functional limb is 'forced' into use. Is it so different to try to force a once highly functional hand to undertake some of the tasks now so ably performed by its opposite number, in an effort to regain some of its 'lost' function?
Someone I know has, in the fairly recent past, lost a good deal of the function in one leg. He insists the problem is entirely 'physical' and localised to the area of the leg itself, while at the same time talking about how the limb 'won't obey his commands'. It is clear to me that the damage has resulted from a small stroke but the affected person will never agree with this because of the stigma he sees attached to such a brain-associated dysfunction. The physiotherapists he has seen have made no real effort to investigate the root of the problem. I have suggested some kind of restrictive therapy to 'force' the affected limb into better use but he won't hear of it. Meanwhile the leg becomes less functional with each passing day.
Brain plasticity is never lost but it does become harder to 'activate' as the years go by. Exercises, both physical and mental, which re-activate and promote neuroplasticity can be of huge benefit to all. There appears to be resistance in some quarters to the notion of training the brain like a muscle or using it as a 'tool' to effect change in itself and in the rest of the body. Some of the methods touted for achieving plastic change are pure hokum but others, such as the software developed by Posit Science, may just have a chance of making a real difference for those struggling with weakening mental agility.
I won't ever be fully ambidextrous again but that doesn't mean I should give up on my left hand. Balance in all things.
Labels:
brain,
doidge,
handedness,
neuroplasticity,
posit science,
stroke
05 June 2009
Speech Arrest
I recently watched some fascinating live neurosurgery on Channel 4. The patient was awake throughout most of the process and was capable of answering questions posed by the studio audience and viewers phoning in.
The "speech arrest" phenomenon was particularly interesting to witness. The surgeon was removing a large tumour and wanted to ensure that he was not causing any damage to Broca's Area, an area on the brain particularly concerned with speech. He used and electrode to stimulate the approximate area while the patient counted from 1 to 20. If the electrode intruded into Broca's area the patient would "lose" numbers in the sequence but resume consistent counting when the probe was moved away.
Once Broca's Area had been identified the surgeon was able to work past it and go further down into the brain to remove part of the tumour. Unfortunately the tumour was invasive, with webs spreading out deep into the brain, so could not be removed fully. The patient will undergo chemotherapy to try to mitigate this.
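A crude way to picture the mapping procedure is as a search over stimulation sites, flagging any site where the counting breaks down. The sketch below is a toy model, assuming an invented 4x4 grid of sites and an invented 'language region'; it mimics the logic of the test, not the surgery itself.

    import random

    # Toy model of intraoperative language mapping: stimulate each site
    # while the 'patient' counts 1-20; sites inside the (invented)
    # language region cause numbers to drop out of the sequence.
    random.seed(1)
    LANGUAGE_REGION = {(1, 1), (1, 2), (2, 1)}  # hypothetical Broca-like sites

    def count_under_stimulation(site):
        """Return the numbers the patient manages to say aloud."""
        if site in LANGUAGE_REGION:
            return [n for n in range(1, 21) if random.random() > 0.5]
        return list(range(1, 21))

    sites = [(r, c) for r in range(4) for c in range(4)]
    arrested = [s for s in sites if len(count_under_stimulation(s)) < 20]
    print("sites to avoid:", arrested)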
The "speech arrest" phenomenon was particularly interesting to witness. The surgeon was removing a large tumour and wanted to ensure that he was not causing any damage to Broca's Area, an area on the brain particularly concerned with speech. He used and electrode to stimulate the approximate area while the patient counted from 1 to 20. If the electrode intruded into Broca's area the patient would "lose" numbers in the sequence but resume consistent counting when the probe was moved away.
Once Broca's Area had been identified the surgeon was able to work past it and go further down into the brain to remove part of the tumour. Unfortunately the tumour was invasive, with webs spreading out deep into the brain, so could not be removed fully. The patient will undergo chemotherapy to try to mitigate this.
Labels:
brain,
neurosurgery
26 April 2009
Blue brain
The Blue Brain project is exciting. The idea of building a cortical simulation up from the molecular level is a massive challenge but, I think, an essential one if we are ever to fully understand the complex thinking machine that is the human brain.
The initial goals of the project were, in comparison to the current aims, fairly modest. In 2005 the goal was to simulate a rat cortical column (or hypercolumn). With that now achieved the scientists involved are heading in the direction of simulation of an entire human neocortex.
There are concerns that this project cannot tell us much new. It may come up with a working simulation that we can observe functioning and making neuronal connections, but without any understanding of what it is actually doing. Sounds a lot like a biological neocortex to me.
The current state of the art at the neuronal level of neuroscience is that we can begin to see such hitherto intangible processes as memories being formed. This appears to me to be roughly equivalent to watching our simulated brain making its virtual connections and forming its virtual memories. Of course the simulation allows us to gather comprehensive and accurate data about these events; something we can't do with a biological brain. When it comes to observing exactly what the world looks like to the entity (biological or virtual) as a result of those neuronal events we are still, largely, in the dark.
What I hope the Blue Brain project will do, and what some of its detractors may fear, is to put an end to any notion of duality. Consciousness will be proved, once and for all, to be an emergent property of the structure, connectional diversity and sheer quantity of neurons (supported by glia) forming the human neocortex. But there is no guarantee that the Blue Brain will ever reach human-level neuronal complexity. A huge investment would be required in this research, currently admirably supported by the Swiss government, to bring that possibility to fruition.
This is pure science and the investment is worth it. Understanding the construction and plasticity of our own brains will give us the tools we need to better shape ourselves and to recognise the negative neurobiological effects of bad interactions with other people and environments. Perhaps our future depends on that understanding.
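To give a flavour of what 'simulating a neuron' means at the very bottom of the scale - far below the morphological and molecular detail Blue Brain works at - here is a minimal leaky integrate-and-fire sketch. All the parameter values are illustrative, not taken from the project.

    import random

    # Minimal leaky integrate-and-fire neuron: a caricature of what
    # Blue Brain models in vastly greater biophysical detail.
    random.seed(0)
    V_REST, V_THRESH, V_RESET = -65.0, -50.0, -70.0  # membrane voltages, mV
    TAU, DT = 20.0, 1.0                              # time constant / step, ms

    v, spikes = V_REST, []
    for t in range(200):
        i_syn = random.gauss(1.2, 0.8)           # noisy synaptic drive
        v += (-(v - V_REST) / TAU + i_syn) * DT  # leak towards rest + input
        if v >= V_THRESH:                        # threshold crossing = spike
            spikes.append(t)
            v = V_RESET                          # reset after the spike
    print(len(spikes), "spikes at t =", spikes)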
Labels:
blue brain,
brain,
consciousness,
neuron,
plasticity,
simulation,
virtual
01 April 2009
CADIE fool
I don't normally take much interest in April Fools but I did like Google's CADIE prank. I looked up the CADIE blog after seeing the sky-high stats for it on Alexa.
The high level of interest is possibly borne out of a desire among many to witness the genuine emergence of a human-like artificial intelligence. But who knows whether the intelligence of an AI or 'artilect' would be remotely human-like. We already have an abundance of 'weak AI' around us but many don't even notice it. It's strong AI that holds the fascination.
I sometimes wonder if we would even notice if strong AI were present. If it were to emerge as a non human-like intelligence, perhaps via a fertile medium like the Internet, perhaps it would choose not to communicate with us. Perhaps it wouldn't think to do so. Maybe it would be 'thalient'.
There is a definition on Wikipedia of the concept of 'thalience'. That definition may or may not have been adopted by the AI community, but from my reading of Karl Schroeder's book it is incorrect. To me thalience is to an artilect as intelligence is to a human. In other words, thalience would be a way of understanding the environment (medium) and communicating with peers independently of human modes and values.
'Artificial Intelligence' or at least 'strong AI' could be a complete misnomer. Can intelligence, no matter how generated, ever be artificial? I don't think so. When it emerges it will be as real as ours - but it won't be the same. It will be thalient. That is both fascinating and frightening.
Perhaps that's why we want to believe in CADIE. Because it seems like us and is therefore reassuring.
06 March 2009
Spooky Action
An experiment has been devised to directly observe the quantum paradox in action.
The quantum paradox is probably not a subject that most of us spend time thinking about. Quantum entanglement, dubbed "spooky action at a distance" by Einstein, goes against the grain of our common sense understanding of our universe. Einstein hated it.
It is exciting to know that the quantum paradox can now be observed in some way, given that it was the act of observation that was both the spanner in the works and the key component of the theory.
Quantum entanglement makes teleportation possible. Lab experiments have demonstrated that photons can be 'teleported' and it won't be long before scientists are able to teleport something as (comparatively) large as a virus. There is no prospect of teleporting people, though. Too much data and the drawback of having to die as part of the process should rule it out for the foreseeable future.
It would be easy to wax philosophical about quantum entanglement. The fact that an electron in a star in another solar system may be able to tell me the quantum state of its entangled partner in the iris of my eye, is something that I could struggle with. It seems like that should mean something profound. But it doesn't. Just because I currently 'contain' one of the electrons does not imply significance. The entangled partner could just as easily be in a pile of yak faeces.
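Still, to make the correlations concrete, here is a small Monte Carlo sketch. It samples measurement outcomes on a spin singlet straight from the quantum prediction - P(same outcome) = sin^2((a - b) / 2) for detector angles a and b - rather than modelling any mechanism, because there is no local mechanism to model; that is exactly Einstein's complaint. The angles and trial count are arbitrary.

    import math, random

    # Monte Carlo on a spin singlet: the chance that the two sides get
    # the SAME outcome is sin^2((a - b) / 2) for detector angles a, b.
    # We sample straight from that prediction; no signal, no mechanism.
    random.seed(42)

    def p_same(a, b, trials=100_000):
        hits = sum(random.random() < math.sin((a - b) / 2) ** 2
                   for _ in range(trials))
        return hits / trials

    for deg in (0, 45, 90, 180):
        print(f"{deg:3d} deg apart: P(same) ~ {p_same(0.0, math.radians(deg)):.3f}")

At identical settings the outcomes are perfectly anti-correlated; at opposite settings, perfectly correlated; in between, the correlation depends only on the angle difference - however far apart the detectors are.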
The quantum paradox is real. That means that it is a tool that can be used. It is fascinating to watch scientists learning to use paradoxical tools. They are already starting to build computers using quantum states, and quantum cryptography promises key exchange that is, in principle, unbreakable. These devices may become common; they may be used by scientists, governments and individuals. And yet they will still be operating by the power of paradox.
And once it starts to work for them, will anyone care that it is paradoxical?
23 February 2009
Fusion
I watched an interesting 'Horizon' programme about nuclear fusion.
If we are to become a Kardashev Type 1 civilisation we are going to need fusion. Many would probably ask why we would wish to push through to this level and not just be satisfied with our current Type 0.72. Striving for this staging post could be the death of us but so could trying to stand still.
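Where does a figure like 0.72 come from? The usual interpolation is Carl Sagan's formula K = (log10 P - 6) / 10, with P the civilisation's power use in watts. A quick sketch, taking humanity's consumption to be very roughly 1.7 x 10^13 W (an illustrative figure):

    import math

    # Sagan's interpolation of the Kardashev scale:
    # K = (log10(P) - 6) / 10, with P in watts.
    def kardashev(power_watts):
        return (math.log10(power_watts) - 6) / 10

    print(f"Humanity at ~1.7e13 W: Type {kardashev(1.7e13):.2f}")  # ~0.72
    print(f"Power needed for Type 1: {10 ** (1 * 10 + 6):.0e} W")  # 1e16 W

On that formula, Type 1 means commanding around 10^16 watts - roughly a thousand times our current output.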
It is currently fashionable to wear one's personal energy consumption level as a badge. But the idea of a "carbon footprint" is arbitrary; the result depends on which questions you ask. Inefficient energy consumption is wasteful and counter-productive, but so is inefficient energy generation. Wind turbines are strangely beautiful but there can never be enough of them to make a real difference. Such 'green' forms of energy production lull us into a false sense of security. They make us feel that we are doing something. But it will never be enough.
All our energy comes from the sun one way or another. Oil is encapsulated power from long-dead vegetation which once used photosynthesis to harness solar energy. And so the sun is always the way forward.
Fusion involves learning the lessons from the hearts of stars to make miniature suns here on Earth, using tokamaks or laser fusion. These techniques stand a good chance of success over the next 10-20 years but they are poorly funded, perhaps because of the current fixation on primitive wind, tidal, biofuel and nuclear fission methods.
Evolution has always required energy to make exponential leaps to new levels of complexity. We are part of that process and nuclear fusion is the paradigm shift in energy generation that we require.
Why do we require this? Why didn't we just stay in the primordial soup?
Labels:
kardashev,
nuclear fusion,
tokamak
04 February 2009
Toolbox
Changing my mind is more difficult than I thought. Or am I just telling myself that?
Brain plasticity continues well into adulthood and perhaps, to some extent, throughout our entire lives. This wonderful capacity of the cerebrum is not just for kids. So what is it that makes it feel so difficult to alter our own personality traits and establish new and clearer channels of thinking? Simply telling your brain to change doesn't seem to do the trick and we can find ourselves 'spinning' on our favourite hangups day after day.
Part of the problem may be that we don't know what the 'tools' or 'controls' are to effect the change. We learn throughout our lives that certain results require certain processes, but nobody ever explains to us how to self-program. And to those with a more critical mindset, new-age positive thinking and meditation-based 'toolboxes' can appear clumsy and ineffective.
My brother makes musical instruments. I once asked him how he undertook some of the intricate wood planing tasks required to make a guitar fretboard and he explained that he had to make the tiny planing tools before he could make the fretboard. I realise that this analogy is also clumsy but it gets the point across. The only convincing and effective tools for reshaping our own minds must be made by ourselves.
So how to do this? Realising that it can be done is a start. Accepting that it's all physical and that, therefore, the tools are real will also help. We all know the kind of words and situations that make us cringe, so don't use those. Our left hemispheres can be overly clamorous and dominating, so find some space and time to open up to the right. Stop your 'spinning' thoughts in their tracks as often as possible - you can because they are yours.
It's tempting to think that your brain is working at its best when neuronally 'lit up' like a Christmas tree but that's not the case. Focus is required, and that means a certain stillness - but it does not require psychobabble.
These are the things that I am attempting but all our brains are different. Finding focus and stillness is hard, because that is the story I have told myself throughout my life and therefore that story has become part of my cortical wiring.
22 January 2009
Projection
So all this may be a hologram. Should this worry you? I don't think so - at least no more than the idea that everything may be composed of tiny vibrating cosmic strings or that all possibilities are actually played out, as in the many worlds interpretation.
The holographic universe idea does not equate to acceptance, tacit or otherwise, of the simulation argument. But the notion of us playing out an ancestor simulation isn't as laughable as it may appear.
To me it's an issue of resolution. How grainy is reality? Of what type of "pixels" is our universe composed? It seems logical that there must be a smallest unit of reality and that that unit may be fiercely insubstantial. But at the same time it feels counter-intuitive because we think of ourselves and our world as solid and not "projected" in the way that pixels are. If you can accept the idea of our being "projected" at any kind of resolution, albeit a mind-bogglingly high one, then you can accept that something might be running the projector.
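For a sense of the numbers, here is a back-of-the-envelope sketch of the holographic bound, which caps the information in a region by its boundary area in Planck units: at most A / (4 l_p^2 ln 2) bits for boundary area A. The radii chosen below are purely illustrative.

    import math

    # Holographic bound: information in a region <= A / (4 * l_p^2) nats,
    # i.e. A / (4 * l_p^2 * ln 2) bits, where A is the boundary area.
    L_PLANCK = 1.616e-35  # metres

    def max_bits(radius_m):
        area = 4 * math.pi * radius_m ** 2  # surface area of a sphere
        return area / (4 * L_PLANCK ** 2 * math.log(2))

    print(f"0.1 m sphere (head-sized): {max_bits(0.1):.1e} bits")
    print(f"1 m sphere:                {max_bits(1.0):.1e} bits")

Even a head-sized region could, on this bound, hold around 10^68 bits. If reality has "pixels", they are mind-bogglingly fine.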
None of this implies God. Any intelligence capable of running ancestor simulations must have itself evolved from something less intelligent. The very idea of such an intelligence wishing to run the simulations indicates that they are doing so in order to see how they themselves evolved. God never gets to evolve. How dull.
Enough of the wild speculation and back to holograms. We accept that a moving image on a 2-dimensional surface can give us the illusion of 3-dimensional reality. We will soon have to accept holographic "televisions" that sit in the centre of our rooms, so that we can walk around them and do such banal things as looking at the backs of the actors' heads while they recite their lines. It is a big leap from that point to accepting our universe as a holographic projection, but would it make us any less "real" if it were true?
I am a thinking entity utilising synapses, neurons and glia. Beyond that functional level the units of the thinking substrate become markedly less tangible, but that doesn't affect my ability to think. I would like to know how small the units get, because that is the kind of thing that brains ponder. But if I eventually come to discover that I am actually living in a kind of Flatland, that won't invalidate me as a thinking entity.
Being part of an ancestor simulation wouldn't either.
16 January 2009
I am a Strange Loop
Some time ago I read 'I am a Strange Loop' by Douglas Hofstadter.
His style can appear pedantic but there is a core of succinct truth in his work. Hofstadter clearly accepts himself as a purely material being. He attempts to put his finger on what "I" means within this context.
The core of his explanation is that the Self must be generated within the atomic structure of the brain. "I" is a "loop" generated by the brain feeding its own output back into its own input. But it's no ordinary type of loop; not one that is constrained to repetitive, mundane processes. The complex and chaotic nature of the feeds creates something unique within the system: that which we call "me".
The unpredictable output of "loopy" systems is demonstrated in video feedback experiments, on which Hofstadter is keen. He gives other examples, including Gödelian mathematical anomalies and language experiments, which can be made to exhibit a similar type of capricious behaviour.
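The flavour of such systems can be had from even the simplest feedback loop. The sketch below iterates the logistic map, feeding each output back in as the next input; it is offered only as a loose analogy for the 'loopiness' in question, not as anything Hofstadter himself uses.

    # The logistic map: output fed straight back in as input. Nearly
    # identical starting points soon produce wildly different paths.
    def trajectory(x, r=3.9, steps=20):
        out = []
        for _ in range(steps):
            x = r * x * (1 - x)   # this output becomes the next input
            out.append(round(x, 4))
        return out

    print(trajectory(0.200000)[-5:])
    print(trajectory(0.200001)[-5:])  # tiny change, divergent behaviour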
I picked up an older book in a charity shop some time after I had read "I am a Strange Loop" - "The Creative Loop" by Erich Harth. His theme is similar but perhaps easier to understand and I think he may have been influenced by Hofstadter's earlier work. Harth uses the analogy of a hall of mirrors to describe the Self. We're not always directly responding to the input of our senses, we're responding to our inner reflections of those inputs chaotically mixed with all the other inputs we have ever had, and all of their component reflections.
I like this explanation. All Hail the careenium. I accept my loopiness and revel in it.
Labels:
careenium,
hofstadter,
loop,
self,
strange
09 January 2009
Cryonics
I have signed up with Alcor to be cryonically stored after my death.
Of course we don't have the technology to 'wake' a human from death. But everything is made of atoms, so one day the techniques may exist to intervene at the atomic level to fix the massive degradation that happens to a human brain after death and during subsequent storage.
This isn't something I would wish for. It's just practical. Once you reject the concept of an immortal soul you can simply accept your eventual non-existence, or you can take some faltering steps towards retaining something, anything of the unique construct that is you. I have chosen the latter option.
08 January 2009
Learning things
I have been learning things.
The things that I have learned have changed my mind - physically. That's what happens when you learn. The process alters neuronal connections: it creates new ones, strengthens some and weakens others, so that over time the physical structure of the neocortex changes.
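A toy Hebbian sketch of that 'strengthen some, weaken others' process, assuming one output neuron, five inputs and invented learning rates - it illustrates the general principle, not any specific experimental result:

    import random

    # Hebbian toy model: connections between co-active units strengthen,
    # all connections slowly decay. Every number here is invented.
    random.seed(0)
    w = [0.1] * 5                      # synaptic weights onto one neuron
    LEARN, DECAY = 0.05, 0.01

    for _ in range(200):
        x = [random.choice((0, 1)) for _ in range(5)]
        x[1] = x[0]                    # inputs 0 and 1 always fire together
        y = x[0]                       # output activity tracks input 0
        for i in range(5):             # fire together -> wire together
            w[i] += LEARN * x[i] * y - DECAY * w[i]

    print([round(wi, 2) for wi in w])  # correlated inputs end up strongest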
Those of us who don't hold any fuzzy Cartesian dualist notions of a mind/body split will not find this surprising. After all, everything is made of atoms. It's difficult to shake the notion of mind being separate, though. We have so much ingrained vocabulary that reinforces it. But it's a straitjacket - carrying around this ethereal element which we think of as 'me' but which we cannot explain.
Learning about neuroscience is important. Surely it's an essential grounding for any field of human intellectual endeavour. How can a philosopher, for example, opine about the nature of the mind and the human condition if she has no idea where or how her opinions are being generated, stored and reinforced?
Dendrites, the antennae of neurons, are a little like trees - hence the name, derived from the Greek. Some types of dendrite have 'spines'. The dendrite as a whole, and the quality and quantity of the spines, are affected by many environmental factors. The 'trees' can grow well or poorly. Their environment may be the cortex of a Down's syndrome child, in which case many will be stunted and withered, as we would perhaps expect. But a similar 'withering' effect can be observed in the neurons of children with a poor social environment, bereft of proper human interaction and nurturing.
Cartesian dualism is wrong. We are biological and our 'minds' are generated by biochemical processes within our brains. Isn't that liberating?