Originally Posted by Ron_Tomkins
1. I think it's highly probable that AI and human augmentation are going to be parallel developments, if not functionally the same. Some level of brain/computer integration is going to happen way, way before we start developing AI of the kind being discussed here. This is another reason I lean far away from the "AI is going to be this singular lightning-strike moment that just happens" idea.
So I think the idea that we're at some meaningful level of risk (obvious caveat that we can't see the exact future, even when we're not talking about the Singularity) from creating machines that will rapidly outpace us is overblown. The same technology we use to make machines smarter than humans is going to make humans smarter at the exact same (or at least an equivalent) rate.
I think it is infinitely more likely that brain/computer integration is going to be an established thing that leads to AI: as we integrate computers with human brains more and more, AI becomes an almost inevitable side effect. In that scenario it's a lot harder to pinpoint the moment when AI suddenly becomes this separate thing we have to worry about singing Daisy Bell or sending Terminators after us.
TL;DR version: By the time AI could become a threat to us, the line between AI and "us" isn't going to be there. We will be the tech and the tech will be us. Worrying about AI in that world will be akin to worrying that your hearing aid or artificial leg is going to overthrow you when you take it off at night.
2. Again, this is all academic because we can't stop it. Preventing it would require a form of worldwide Luddite totalitarianism that's unimaginable.
I get that being one of the last techno-optimists in a world where pretty much everybody thinks we're going to be living in a Black Mirror episode any day now is a hard sell at this point, but "Oh, this new tech is going to be the end of us" has been the line ever since Og smashed two rocks together to make the first "rock with a sharp edge."