The Blue Brain project is exciting. The idea of building a cortical simulation up from the molecular level is a massive challenge but, I think, an essential one if we are ever to fully understand the complex thinking machine that is the human brain.
The initial goals of the project were, in comparison to the current aims, fairly modest. In 2005 the goal was to simulate a rat cortical column (or hypercolumn). With that now achieved, the scientists involved are moving towards simulating an entire human neocortex.
There are concerns that this project cannot tell us much new. It may come up with a working simulation that we can observe functioning and making neuronal connections, but whose actual workings we still don't understand. Sounds a lot like a biological neocortex to me.
The current state of the art at the neuronal level of neuroscience is that we can begin to see such hitherto intangible processes as memories being formed. This appears to me to be roughly equivalent to watching our simulated brain making its virtual connections and forming its virtual memories. Of course, the simulation allows us to gather comprehensive and accurate data about these events, something we can't do with a biological brain. When it comes to observing exactly what the world looks like to the entity (biological or virtual) as a result of those neuronal events, we are still, largely, in the dark.
What I hope the Blue Brain project will do, and what some of its detractors may fear, is to put an end to any notion of duality. Consciousness will be proved, once and for all, to be an emergent property of the structure, connectional diversity and sheer quantity of neurons (supported by glia) forming the human neocortex. But there is no guarantee that the Blue Brain will ever reach human-level neuronal complexity. A huge investment would be required in this research, currently admirably supported by the Swiss government, to bring that possibility to fruition.
This is pure science and the investment is worth it. Understanding the construction and plasticity of our own brains will give us the tools we need to better shape ourselves and to recognise the negative neurobiological effects of bad interactions with other people and environments. Perhaps our future depends on that understanding.
26 April 2009
01 April 2009
CADIE fool
I don't normally take much interest in April Fools but I did like Google's CADIE prank. I looked up the CADIE blog after seeing the sky-high stats for it on Alexa.
The high level of interest is possibly born of a desire among many to witness the genuine emergence of a human-like artificial intelligence. But who knows whether the intelligence of an AI or 'artilect' would be remotely human-like. We already have an abundance of 'weak AI' around us, but many don't even notice it. It's strong AI that holds the fascination.
I sometimes wonder if we would even notice if strong AI were present. If it were to emerge as a non-human-like intelligence, perhaps via a fertile medium like the Internet, it might choose not to communicate with us. Perhaps it wouldn't even think to do so. Maybe it would be 'thalient'.
There is a definition on Wikipedia of the concept of 'thalience'. That definition may or may not have been adopted by the AI community, but from my reading of Karl Schroeder's book it is incorrect. To me, thalience is to an artilect as intelligence is to a human. In other words, thalience would be a way of understanding the environment (medium) and communicating with peers independently of human modes and values.
'Artificial Intelligence' or at least 'strong AI' could be a complete misnomer. Can intelligence, no matter how generated, ever be artificial? I don't think so. When it emerges it will be as real as ours - but it won't be the same. It will be thalient. That is both fascinating and frightening.
Perhaps that's why we want to believe in CADIE. Because it seems like us and is therefore reassuring.