by Gustavo Rehermann (wallabra) - contact: [email protected]
I'm glad we are starting to see AI safety initiatives, and I understand their problem statement. But I think they are myopic about the full consequences of AI becoming widespread in society: not just misaligned AI, but any AI.
How about we make AI do the parts of life we actually don't want to deal with, while keeping the enjoyable parts for ourselves: art, writing, and trust in a meaningful and real reality? Or rather, how about we use it only once we know it can do those things, and do them sustainably?
And, ultimately, what if the problems rightfully attributed to AI go much, much deeper than the AI itself? Let's dive in.