Folding Reality with AI

Life is beginning to imitate art in a very sophisticated way now.

Salman Khan, the founder of Khan Academy, presents the new AI assistant the company will soon roll out within its learning platform.

At this point in the presentation he gives the example of a student who wants to interact with Jay Gatsby, the fictional character created by the great literary author F. Scott Fitzgerald.

Until the other day, students could only rely on living tutors to have a productive learning experience. Tutors would interact with their pupils in a humanized way, transmitting information as effectively as possible.

As we can see in the video, we are giving fictional characters a voice, and, judging by the way AI is built and can evolve over time, maybe these characters will evolve in some way and transform themselves.

Why not ask Gatsby what he thinks about social media and how TikTok defines consumption habits among teenagers? Maybe he/it has something to say.

Or would it be better to ask Fitzgerald first and then crosscheck the answer with Gatsby’s to see if there is any bias, an author bias, influencing the latter?

So far, I don’t know. As Khan himself said, we are at the beginning of all this. The front-runners will be the ones dictating the pace and, mainly, the direction in which AI is heading.

I think I would rather see AI helping us in a predictable, humanized way, sometimes even replicating the flaws and limitations of its creators: us, humans. ChatGPT kind of works that way (at the time of this writing, at least), admitting that it doesn’t know some things or that it can give inaccurate answers. That’s fine for now. It’s acceptable.

Speaking to fictional characters is not natural. I would say it’s not even funny.

I don’t want to ask Noah questions about the Ark and his selection criteria. And even as a Dune fan myself, I don’t want to ask Paul Atreides why he gave up walking the Golden Path, leaving that burden to his son Leto II. I don’t need fabricated answers to those questions; “inferred truth”? No, thanks.