What is consciousness? And missing David Graeber

It feels strange to say that you miss someone that you've never met, but the internet is a strange place. When it constantly provides updates, articles and books from someone like David Graeber, it follows that there's a huge sense of loss now that he is no longer with us.

Out of all of his amazing writing, the article that has stuck with me most is a piece he wrote for The Baffler: a beautiful and profound discussion of what may be the origins of consciousness. I thought of it again today after reading an article in The Guardian written by a piece of software called GPT-3.

These two articles might not seem related at first glance, but I would still have loved to hear David's thoughts on the latter. What can we say about GPT-3 based on the writing it's currently able to produce? If it keeps getting better, will we eventually say that it's "self-aware"? It feels like the same "jump" to consciousness that David was talking about.

What if there are no jumps? Emergence is a continuum. AI is the result of yet another layer of complex social behaviours: the work of the software developers who built it, combined with the work of countless people who provided data for it by openly sharing on the internet. Do we dare ask GPT-3 how it knows that it knows? That question itself may be the proof that Zhuangzi was alluding to at the end of David's piece.

It's a rare Sunday morning wherein I take my morning coffee while reading articles about the nature of mind and of consciousness, about the advance of artificial intelligence and the essential nature of reality. Today was such a morning.

Malcolm suggests that to ask the AI how it knows what it knows is akin to asking Zhuangzi how he knows what makes the minnow happy. I'm glad for the question and have much to think about.

I was struck by the AI's essay, not by the woven tapestry of well-connected meaning, but by the affront to happiness throughout. The AI selects statements which bring much negative sentiment to mind, flips the logical meaning of the statement with a negation (e.g. "robots are not going to [insert terrifying scenario]"), and presumes to have answered thoroughly well.

Such is the fear of the AI?
