I saw this post from Santiago this morning, a topic that I have anticipated in my thinking and essays for at least two years now.
If digital intelligence can easily copy our looks and tone of voice, we have reached the final step in creating a realistic copy of ourselves, one that can be expressed in video.
This was the missing magic touch for creating clones.
Obviously, the final destination is an “intelligent” autonomous clone trained on all our phone calls, emails, social media posts and blogs like this one, to express opinions as we would.
One of the fun exercises people played with in the early days of chat LLMs was asking the model to respond as if it were Steve Jobs or F. Scott Fitzgerald. The latter was even more interesting, because the model would try to imitate the writer's unique style.
As of late 2025, every AI company has set its apparatus to ingest and process all human-generated data, which includes the content you and I have produced: content that represents our ways of thinking and our opinions about our zeitgeist.
Perhaps by the end of next year, 2026, we will be seeing agents that not only look and speak like us, but also reason like us on the topics we usually talk about.
On the bad side: the nightmarish idea of losing our sense of self when facing a perfect clone of ourselves, a topic that many modern philosophers and futurologists have explored over the years.
On the good side: our clone would tend to agree with us immediately on any topic, a socially rare reaction these days (if that serves as any consolation).