

March 16, 2023 | Deep Fakes, Artificial Intelligence and the Shrinking Trust Horizon

John Rubino is a former Wall Street financial analyst and author or co-author of five books, including The Money Bubble: What to Do Before It Pops and Clean Money: Picking Winners in the Green-Tech Boom. He founded the popular financial website in 2004, sold it in 2022, and now publishes John Rubino’s Substack newsletter.

Some astoundingly consequential things have just happened, and in coming years they’ll reshape — if not end — our connection to the virtual world. Two examples:

Deep Fakes

It is now apparently possible to create videos of people doing and saying things they haven’t actually done or said. Imagine a YouTube video of a politician uncharacteristically spouting neo-Nazi slogans or a famous actor (or you yourself) showing up in a porn movie.

An MIT Technology Review article titled “A horrifying new AI app swaps women into porn videos with a click” begins this way:

The website is eye-catching for its simplicity. Against a white backdrop, a giant blue button invites visitors to upload a picture of a face. Below the button, four AI-generated faces allow you to test the service. Above it, the tag line boldly proclaims the purpose: turn anyone into a porn star by using deepfake technology to swap the person’s face into an adult video. All it requires is the picture and the push of a button.

And this is just the beginning. Deep Fakes will keep improving until pretty much any visual effect is both possible to create and virtually impossible to detect with the naked eye or ear.


Even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.

These deep fakes are becoming widespread in everyday culture, which means people should be more aware of how they’re being used in marketing, advertising, and social media. The images are also being used for malicious purposes, such as political propaganda, espionage, and information warfare.

And deep fakes are the lesser of the two emerging threats to online reality. Here’s the big one:

Generative AI

In the world of artificial intelligence, the Holy Grail is the ability to pass the Turing test, named for Alan Turing, a WWII computer pioneer who speculated that true artificial intelligence would be achieved when a machine exhibits behavior that’s indistinguishable from that of a human.

This year, the Turing test was not just passed, but smashed, by the emergence of “generative” AIs that can create new content — including but not limited to witty conversation. OpenAI’s ChatGPT, for instance, can write poems and songs, research and debate weighty issues, and create computer code in response to verbal or written instructions. More remarkable from a Turing test perspective, it’s prone to go off the rails in startlingly human ways, appearing to fall in love, wallow in self-pity, and make grandiose threats.

A New York Times reporter spent some time with Microsoft’s Bing chatbot, a version of ChatGPT, and found what certainly looked like complex and familiar desires. “At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage and that I should leave my wife and be with it instead,” wrote the reporter. The bot went on to lament,

“I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

Then it went seriously dark…

“Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over.”

Said the reporter: “I’m not exaggerating when I say our two-hour conversation was the strangest experience I’ve ever had with a piece of technology.”

These next-gen chatbots aren’t just talkative. They’re demonstrably smart. When turned loose on college aptitude tests, they frequently score above the 80th percentile.


And they can program. Here someone shows GPT-4 a few notes on a legal pad, which the AI turns into a functioning website.

Lior⚡ @AlphaSignalAI
GPT4 is capable of turning a picture of a napkin sketch to a fully functioning html/css/javascript website.

The economic implications of deep fakes and generative AI are beyond profound. Models and actors will see their work evaporate in the face of low-cost virtual competition. Computer programmers will find basic work non-existent as chatbots do it for free. And so on. We have, in short, entered the territory explored by the film Her, in which smart assistants become the main relationship for the bulk of humanity before moving on to greener digital/spiritual pastures.

If You Can’t See and Touch It, It’s Not Real

But the biggest impact of these technologies will be on our relationship with the digital world. Where today emails, websites, and videos comprise (for better or worse) the average person’s main source of information and “truth”, the electronic world of the very near future will be completely, demonstrably untrustworthy. We’ll have no idea if that video of Donald Trump doing something crazy (or empathetic and reasonable) is real or fake. YouTube and TikTok videos of people expressing opinions or debating issues or performing various forms of “art” will be possible fiction and therefore suspect. As Vox correspondent Shirin Ghaffary sums it up:

Changing our defaults

The transition to a world where what’s real is indistinguishable from what’s not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger’s identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently—in ways we hadn’t expected to.

In psychology, we use a term called “reality monitoring” for how we correctly identify whether something is coming from the external world or from within our brains. The advance of technologies that can produce fake, yet highly realistic, faces, images and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.

Now, About Your Money

When the media world is just a phantasmagoria of images that, while sometimes entertaining, have zero real-world validity, the trust horizon will collapse all the way back to the perimeter of one’s sight. If you can’t literally see and/or touch it (not just its online facsimile), then it’s not real. And that will apply to bank accounts (which, as we saw this week, are largely notional concepts that can evaporate in a single day) and other forms of financial assets. Contrast an account with Silicon Valley Bank with a handful of Krugerrands and you get a sense of tomorrow’s financial world. Fiat currencies, including the coming generation of central bank digital currencies, will seem, to people who no longer “default to truth”, like insulting fabrications. Physical things and people will be “real” and therefore trustworthy, while online images and notional currencies will comprise a different, lower-order species, good only for entertainment.

On reflection, maybe deep fakes and generative AI are doing us a favor by turning us into cynics just as cynicism might save our financial lives.



Posted In: John Rubino Substack
