Originally Posted by Tassman
Thanks Tassman, will check this one out.
One of the things that made sense to me in Life 3.0 was his description of what's needed, at a physical level, for intelligence. I'll dig back into the book for the details when I have some time, but essentially we should be able to create it. It seems inevitable, provided we don't destroy our technological society first.
I also like that it's pragmatic; the work he is doing seems important - thinking seriously now about how we develop AI and its impact, so that humanity isn't consigned to life's rubbish heap (not his words). He recognises that it's difficult to predict when advanced AI will emerge, but based on surveys of the AI community the average estimate currently falls around the middle of this century.
The book also looks at the consciousness question and suggests it's a bit of a red herring - competence is the key. I agree here: if AI is able to do anything better than we can, what do we do?