No.2534
Does anyone here actually believe improvements in artificial intelligence will lead to a runaway process ending in artificial super-intelligence (aka the singularity), and if so, can you explain why the feedback loop of an AI improving itself has to be runaway rather than simply running into diminishing returns? Is it about AI's speed advantage and innate breadth of knowledge?
You could, I suppose, argue that if smart humans can do something, then a smart AGI could do it too, and faster. So if smart humans could eventually create super-intelligence, and a smart AGI is successfully created, the rest of the staircase gets climbed much more quickly, in something resembling a runaway process. But I think people who argue this are assuming the very thing in question: that artificial super-intelligence is possible in the first place. What if it's simply not possible, or at least not possible with techniques derived from current advances? How can people be so sure there isn't a hard wall not so far away?
Despite being intelligent, and despite arguably harboring low-level super-intelligence among us in the form of the rare genius, we haven't been able to improve our own brains at all. At best we have some blunt hammers in the form of psychiatric drugs, and we don't even know how some of them work.