2023-04-19 AI as "Terminator"?
Is AI going to make humans extinct? What kind of question is that even?
What worries me is that we are worrying about the wrong things. Everywhere you see discussions of Terminator-like scenarios in which evil AIs dispose of mankind. But amazing as they already are, current AIs are structurally pretty far from even having their own goals, let alone wanting, and being able, to run the world. There are real dangers connected with AI, but they are not a rogue AI deciding to kill off mankind; they are our fellow humans abusing AIs, or even using them with seemingly the best intentions. Please don't throw the baby out with the bathwater.
That said, there are some pretty real threats related to AI that are, IMHO, discussed much less than they should be. What's worse about those "Terminator" discussions is that they divert attention from tackling these real problems.
First, of course, there is the massive job displacement AI will create. I think that has already started to sink into people's minds, though I haven't seen much suggested in terms of solutions or mitigations. And what we are seeing now is still just the beginning.
Second, and more important: while the AIs themselves don't really have a desire for power over people, those who command their makers certainly do, and in no small measure. In some ways, they even have an obligation to their shareholders to be evil. While OpenAI has managed to make a pretty good and helpful impression so far, we'll see how long that holds up once real money is being made creating AIs. And Trump devotee Elon Musk has already started to have ChatGPT's evil twin "TruthGPT" made. As if the world were in dire need of even more "alternative facts" than those language models already create anyway!
Perhaps the various open-source language models will come to the rescue, or the LAION initiative, though it's hard to imagine they can compete with the top dogs in the AI race.