Issue #13 - If you create an AGI that destroys humanity, I'll be forced to revoke your license

May 21, 2023
Free Edition
In This Issue

  • US Senators and OpenAI agree that the US government should be able to issue a license to train an artificial general intelligence and revoke it if it destroys humanity. A flawless plan.
  • BT announced plans to cut between 40,000 and 55,000 jobs by 2030. They feel that bullish about generative AI.
  • SEC Chair warns that the next financial crisis might be caused by generative AI and not Robinhood disabling the buy button again.
  • CNET’s overworked journalists don’t seem relieved at all that AI has started writing articles on their behalf.
  • Quiz: let’s say that your dart is an AI. How many figurines can you hit on a dartboard depicting 100 professions?
  • A psychiatrist starts using ChatGPT to describe patients’ mental health. Now she can make $300 / hour without even saying a word.
  • Philosopher Daniel Dennett suggests that our society might be on the verge of collapse if generative AI is allowed to pose as real people. Everybody else thought that human trust is overrated.
  • You can now rent a synthetic clone of a real woman and pretend she’s your girlfriend. But we are well-mannered people and we don’t use forbidden words to describe the practice.

P.S.: the new Splendid Edition of Synthetic Work is out: Hey, great news, you potentially are a problem gambler.

Inside it, we discover what KPMG, The US Air Force, Flutter Entertainment, Entain, New Balance, and The Wildlife Conservation Society are doing with AI.

In the Prompting section, we also discover how AI can help solve the You don’t know what you don’t know problem.

Intro

For the first time since I launched Synthetic Work, you are receiving this newsletter on a Sunday, rather than on a Friday. That’s because, after being busy for almost 5 months building this project and missing every holiday on the UK calendar (or US calendar, or Italian calendar, etc.), I was offered the chance to see in person the world-famous footbridge captured in Claude Monet’s masterpiece The Japanese Footbridge.

As an art lover and collector, I couldn’t miss the opportunity to know how it feels to be on that bridge. And so I travelled from London to Paris to Giverny for part of this week.

And this is how it feels to be on that bridge:

I’m confident you care absolutely nothing about any of this, but I thought that, as a loyal reader, you deserved an explanation of why you didn’t get your newsletter on the day it was promised.

But perhaps you like a late Sunday delivery? If so, let me know by replying to this email.

And now, let’s talk about the things that matter. This week was intense.
Alessandro

What Caught My Attention This Week

Obviously, the first thing I’d like to focus your attention on this week is the fact that the US Senate Judiciary Subcommittee had a little chat with OpenAI’s co-founder and CEO, Sam Altman, the New York University Professor Emeritus Gary Marcus, and…and a third person. Nobody understood why the third person was there, but OK.

The focus of the hearing was the oversight of artificial intelligence, and during the happy gathering, the word “jobs” was mentioned just 26 times.

Synthetic Work exists to explore and document the impact of AI on jobs and our industries, and to help workers upskill and master AI if the fear of mass job displacement has merit.

So, what was said during the US Senate hearing is especially important because it comes from the people who are making AI and distributing it at a planetary scale.

Let’s start with the opening remarks of US Senator Richard Blumenthal:

And for me, perhaps the biggest nightmare is the looming new industrial revolution. The displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in skill training and relocation that may be required.

To which, Sam Altman replies in his opening remarks:

We’re very optimistic that there are going to be fantastic jobs in the future, and that current jobs can get much better.

However, as the conversation goes deeper, Sam gets increasingly clear about the fact that before things get better, they might get a lot worse:

Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we went back to the other side of a previous technological revolution, talking about the jobs that exist on the other side you know, you can go back and read books of this. It’s what people said at the time. It’s difficult. I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. I, I think it’s important. First of all, I think it’s important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused, and it’s a tool that people have a great deal of control over and how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs.

And so you see already people that are using GPT-4 to do their job much more efficiently by helping them with tasks. Now, GPT-4 will I think entirely automate away some jobs, and it will create new ones that we believe will be much better. This happens again, my understanding of the history of technology is one long technological revolution, not a bunch of different ones put together, but this has been continually happening. We, as our quality of life raises and as machines and tools that we create can help us live better lives the bar raises for what we do and, and our human ability and what we spend our time going after goes after more ambitious, more satisfying projects. So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be.

The thing that Sam is not saying, and that many people are not willing to discuss, is that AI is fundamentally different from other technologies we have invented in the past (something he actually says in his opening remarks) and, as such, its impact on jobs might be dramatically bigger.

If, to make a hypothetical exaggeration, AI causes the displacement of 50% of the white-collar jobs that exist today before it leads to the emergence of “better jobs”, is the world really going to be a better place?

Also, the notion that the jobs of the future will be “better” should not be taken for granted.

Thanks to (or because of) generative AI, the cost of creation will sink to near zero for much of human output. At the same time, all the automation of the last two decades has not increased people’s salaries or quality of living.

If the cost of human labour sinks below what it is today because of generative AI, does it mean that we’ll have to do 3-4 jobs in the future to maintain the same living standard we have today?

And if so, what’s the impact of that on procreation? Will we have time and energy for that?

As usual, the point of all these questions is not whether they have merit, but whether we have a plan in case they do.

When it’s time to comment on AI’s impact on jobs, Professor Marcus seems to agree with me (or I agree with him) on at least one aspect:

On jobs, past performance history is not a guarantee of the future. It has always been the case in the past that we have had more jobs, that new jobs, new professions come in as new technologies come in. I think this one’s gonna be different. And the real question is over what time scale? Is it gonna be 10 years? Is it gonna be a hundred years? And I don’t think anybody knows the answer to that question.

Ingeniously, he then pushes Senator Blumenthal to ask what Sam’s biggest nightmare about AI is, and this is the answer they get:

Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we’re all gonna do with our time really matters. I agree that when we get to very powerful systems, the landscape will change. I think I’m just more optimistic that we are incredibly creative and we find new things to do with better tools, and that will keep happening. My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company.

OpenAI has been very clear about the magnitude of the opportunity, but I really don’t think they have been clear about the magnitude of the risk at all. If it wasn’t for all the work that Gary Marcus has done in the last few years, and the Future of Life Institute (FLI) open letter that we both signed, perhaps, this US Senate hearing wouldn’t even have taken place.

More importantly, it’s worth reminding you, dear reader, that Sam and the rest of the OpenAI team operate in the Silicon Valley bubble, where things happen and capital flows in a very different way from the rest of the world. And that, as I’ve seen over and over during my 23-year career in the IT industry, can really distort the perception of how the entire world will react to a technology that potentially displaces jobs on a mass scale.

Outside Silicon Valley, people don’t get $1M in funding for saying “Hey, I have an idea about X” and for banding together with 3 friends to explore whether it can be done. And I live in London, where capital flows like a river. Elsewhere in Europe, South America, large parts of Asia, Africa, and also much of the United States, the situation is even worse.

Should people’s jobs get displaced by generative AI in the future, they won’t have the luxury to say “Uh! What am I going to do with my free time now? Maybe I could meditate!”

Equally, people won’t have the luxury to say “Well, I think I’m going to create a startup to defeat age and death. I know it’s hard, but I’ll give it a go.”

That thing doesn’t exist outside Silicon Valley, and people, as resourceful as they are, won’t turn into a bunch of startup founders/entrepreneurs overnight in reaction to the impact of AI.

The hearing covers many other points we often talk about on Synthetic Work, like the manipulation of emotions and opinions, and many others that are critically important and, yet, beyond the focus of this newsletter. So, if you have three hours to spare, this is a must-watch:

The second thing worth your attention this week: while the US Senate was busy mentioning the word “jobs” 26 times during the hearing we just talked about, one of the most critical companies in the United Kingdom, BT, was busy announcing plans for a major workforce reduction as it explores the use of generative AI for customer service and other roles.

Anna Gross, reporting for the Financial Times:

BT said on Thursday that it would cut between 40,000 and 55,000 jobs, including employees and third-party contractors, by 2030. The FTSE 100 group’s current workforce totals 130,000, including around 30,000 third-party contractors which are mostly full-time posts.

The cuts will include 15,000 fibre engineers and 10,000 maintenance workers, a person close to the company said, with another 10,000 eliminated by increasing digitisation and automation.

Chief executive Philip Jansen said the group would become “a leaner business with a brighter future,” adding that many of those reductions would come from the end of the full fibre rollout that the group was “spending a fortune on now”. Other roles will go due to increasing digitisation, Jansen said, as he claimed AI would bring about sweeping changes in the future. “For a company like BT there is a huge opportunity to use AI to be more efficient,” he said of the jobs to be lost through digitisation and automation, adding that generative AI would bring huge advances.

I don’t know how to say this in clearer terms: if you think that generative AI is not a threat to job security, and that what you are hearing from TV and newspapers is just the latest exaggeration, think again.

At this point, it’s irrelevant whether tomorrow’s AI is up to the task of replacing people at most of the tasks that make up today’s jobs. It might well be that we’ll never reach the level of maturity necessary to take humans out of the loop. But if many business leaders convince themselves (and each other) that they can try, the impact on the workforce will be significant.

Most employees don’t have a plan to deal with this scenario, and the best reaction so far has been going on strike. But that’s not a solution; it’s an accelerant to the experimentation and deployment of AI.

One thing you can do is to start sharing the Free Edition of Synthetic Work with your colleagues and friends and family to raise awareness.

Then, if you think it’s worth it, you can subscribe to the Splendid Edition of Synthetic Work to start upskilling yourself.

As we heard from Ford’s CEO in Issue #12 – ChatGPT Sucks at Making Signs, your company may not feel there’s time to upskill you. The competitive pressure exerted by companies already using generative AI is forcing everybody to accelerate workforce transformation plans like never before.

The third thing that is worth your attention this week: Gary Gensler, the Chair of the U.S. Securities and Exchange Commission (SEC), warns that the next financial crisis might be due to the proliferation of generative AI.

Richard Vanderford, reporting for The Wall Street Journal:

The next financial crisis could emerge from firms’ use of artificial intelligence, Securities and Exchange Commission Chair Gary Gensler said, warning of the potential “systemic risk” posed by the technology’s proliferation.

Data aggregators and AI platforms could be major components of future financial system “fragility,” Mr. Gensler said Tuesday at a conference in Washington hosted by the Financial Industry Regulatory Authority, Wall Street’s self-regulatory body.

Observers years from now might look back and say “the crisis in 2027 was because everything was relying on one base level, what’s called [the] generative AI level, and a bunch of fintech apps are built on top of it,” Mr. Gensler said.

Just like generative AI can be used to manipulate people’s opinions on the last day of an election, pushing undecided voters in one direction or another, the technology can be used to produce content at a scale that pushes investors (especially retail investors) to act impulsively about one publicly traded company or an entire basket of stocks.

As we have often said in this newsletter, humans are easy to manipulate. So easy, in fact, that all I would have to do to convince you that Synthetic Work is an amazing newsletter is generate convincing testimonials that come at you from five different directions.

It’s called the Truth-by-Repetition (TBR) effect. It doesn’t work on every person, but when it works, it can turn a statement known to be false into a truth.

The Truth-by-Repetition (TBR) effect has long been assumed to occur only with statements whose truth value is unknown to participants. Contrary to this hypothesis, recent research found a TBR effect with statements known to be false.

Of note, a recent model even posits that repetition could increase the perceived truth of highly implausible statements. As for now, however, no empirical evidence has reported a TBR effect for highly implausible statements.

Here, we reasoned that one may be found provided a sensitive truth measure is used and statements are repeated more than just once.

In a preregistered experiment, participants judged the truth of highly implausible statements on a 100-point scale, and these statements were either new to them or had been presented five times before the judgment task.

We observed a TBR effect: truth judgments were higher for repeated statements than for new ones, even if all statements were still judged as false.

Exploratory analyses additionally suggest that not all participants were equally prone to this TBR effect: about half the participants showed no effect or even a reverse effect.

Overall, the results provide direct empirical evidence to the claim that repetition can increase perceived truth even for highly implausible statements, although not equally so for all participants and not to the point of making the statements look true.

And so, generative AI doesn’t necessarily have to be good at the tasks that define a profession to impact jobs. It might indirectly impact jobs “simply” by destabilizing our political and economic systems.

The last thing worth your attention this week: US media billionaire Barry Diller warned that the use of artificial intelligence would prove “destructive” to journalism unless publishers were able to use copyright law to exert control.

Mr Diller is not just any random billionaire. He is Chairman and Senior Executive of IAC and Expedia Group, and he founded the Fox Broadcasting Company and USA Broadcasting.

In turn, IAC is an American holding company that owns brands across 100 countries, mostly in media and the Internet, including Investopedia, Martha Stewart Living, The Daily Beast, Travel + Leisure, and dozens of others that you must have read at least once in your life.

So, while there’s a growing number of online media publishers that have started using generative AI to “support their overworked journalists”, as we have read in Issue #5 – The Perpetual Garbage Generator and elsewhere, Mr Diller goes on the record with a much different position.

Daniel Thomas, reporting for the Financial Times:

Speaking at the Sir Harry Evans Global Summit in Investigative Journalism in London, Diller said that freely allowing AI access to media content would prove to be a mistake, and that the notion of “fair use” — which can be used to cover copyrighted material in data sets for machine learning — needed to be redefined.

“You can’t have fair use when there is an unfair machine that knows no bounds,” said Diller, who chairs media and internet group IAC.

In other words, for Mr Diller, generative AI is a threat to the business.

What Mr Diller doesn’t seem to realize is that a growing number of articles these days depend on what’s been said between people on social media, especially on Twitter, or at conferences, which are increasingly recorded and published on YouTube.

With the right access level to both Twitter and YouTube and the right amount of funding, any company with a powerful-enough generative AI model could automate the publishing of many magazines and newspapers we see today, completely skipping the need for journalists.

Meanwhile, the aforementioned journalists are starting to realise exactly that, and they are not happy about it.

Do you remember CNET? One of the first publishers to start producing articles with ChatGPT? The one that was forced to admit its silent adoption of AI after readers of CNET Money noticed an outrageous number of mistakes in the published articles?

Well…

Caitlin Harrington, reporting for Wired:

Today the human members of its editorial staff have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.

“In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decision-making process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter what they are about. It doesn’t even matter if they are accurate or completely made up.
You wouldn’t believe that people fall for this, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The fine folks at Visual Capitalist created a fun infographic showing the composition of the American workforce, taking a sample of 100 professionals, based on the data from the National Occupational Employment and Wage Estimates (May 2022) published by the U.S. Bureau of Labor Statistics (BLS):

The weekend is over, unfortunately. Otherwise, what I would suggest doing, for fun, is to go back and read the charts on the impact of AI on jobs that we published in the following issues of Synthetic Work:

and then cross out the little people in this infographic with a big red X.

Or, perhaps, turn the infographic into a dartboard for next weekend?

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, and the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

Last week, the well-known technologist Robert Scoble volunteered to have a 30-minute psychiatry session with psychiatrist Dr Joann Mundin.

The doctor used a new system from a company called Savantcare that transcribes the session with the patient, strips personally identifying information out of the transcription, and then sends the output to ChatGPT for further analysis.

At this point, ChatGPT spits out what a human psychiatrist would normally write about a patient whose case sounds similar to the notes it has received. Remember that generative AI is designed to predict the next most likely word, having learned from the equivalent of millions of hours of human output.
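For the curious, here is a minimal sketch of what such a transcribe-redact-analyze pipeline might look like, assuming the OpenAI Python library as it existed at the time of writing. To be clear: the redaction rules, prompt, and function names below are my own illustrative placeholders, not Savantcare’s actual implementation.

```python
# A minimal sketch of the pipeline described above, NOT Savantcare's actual
# system: redact identifying details from a session transcript, then ask
# ChatGPT to draft clinician-style notes. Redaction rules are placeholders;
# real clinical de-identification is far more involved than this.
import re

import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def redact_pii(transcript: str) -> str:
    """Naive, illustrative redaction of emails, phone numbers, and a name."""
    transcript = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", transcript)
    transcript = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", transcript)
    return transcript.replace("Robert Scoble", "[PATIENT]")  # stand-in name

def draft_clinical_notes(transcript: str) -> str:
    """Send the de-identified transcript to ChatGPT and return draft notes."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You assist a psychiatrist. Given a de-identified "
                        "session transcript, draft clinical notes covering "
                        "presenting complaints, observations, and follow-ups."},
            {"role": "user", "content": redact_pii(transcript)},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

The interesting design choice, if the video is accurate, is that the redaction happens before anything leaves the clinic’s systems, so the model only ever sees anonymized text.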

Whether you are a doctor or not, this is a 17-minute video that you don’t want to miss:

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

One of the greatest philosophers of our times, Daniel Dennett, doesn’t feel that great about modern AI. In fact, he feels pretty awful and he’s warning us all.

Dennett has spent 40 years studying consciousness, and I read many of his books while doing my research on the same subject, alongside neuroscience and artificial intelligence.

In his most recent work, From Bacteria to Bach and Back, he mentions AI more than once, as you’ll find out with a quick Twitter search.

Last year, he even allowed a group of researchers to train GPT-3 on all his books to see if the AI could convincingly impersonate him in a Q&A session.

But GPT-3 feels like a million years ago in terms of AI evolution. What we have today doesn’t seem particularly amusing anymore.

This week he wrote in a piece for The Atlantic titled The Problem With Counterfeit People:

Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people. The penalties for either offense should be extremely severe, given that civilization itself is at risk.

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect.

On the passive and ignorant pawns, by coincidence, at the beginning of May, I wrote on Twitter:

Let’s stretch our imagination, and play some devil’s advocate, shall we? One year from now, or five, most people will write things assisted by AI: emails to prospects, clients, and colleagues; business and marketing plans; slide decks and conference speeches, songs and poems.

Today, the most vocal AI evangelists will tell you that AI is not going to have any negative effect on our society. AI will only make things better. People will write better, will present better, will draw better, will code better.

Perhaps. But…

What if this is true ONLY for people that are already inclined to better themselves and/or are already gifted in terms of writing, presenting, drawing, coding?
AI as a skill amplifier.

What if all the other people will simply use AI to conserve energy? Less work, mental and physical. Less time dedicated to the job, partner, kids. More time for myself.
AI as a free time generator.

There’s an enormous difference between these two scenarios. AI can be used for both, but it’s not a given.

AI as a skill amplifier in the hands of an already talented writer/speaker/coder means that this person uses the AI as a capable assistant, or mentor, to write more eloquently, more frequently, more effectively.

AI as a free time generator in the hands of a less ambitious or less talented individual means having a skilled worker to offload tasks to, and blindly accepting what comes back.

In this hypothetical scenario, of course, the free time generator people would still become better writers/speakers/coders. But they would not pay the same attention to the AI output as the skill amplifier people.

Tesla autopilot (when it works) makes you a better driver, but the longer you use it (and the better it gets), the more you trust it, and the less you pay attention to the road.

What if the same happens with AI? What if, in one year or five, generative AI models become so good that free time generator people stop paying attention to what they write, say at a conference, draw, code?

Who controls whom in that scenario? Is it the people telling the AI what to do to improve their skills? Or is it the AI telling the people what to do to have more free time?

And if it’s the latter, who decides what must be said, and in what way, to the audience?
The people who trained and/or fine-tuned the AI?
The people that offer software on top of these AI models by honing the prompt (for as long as that is necessary) to get better output?
Or perhaps other categories of people, like advertisers and governments, maybe even giving away AI tools for free in exchange for the chance to manipulate opinions and emotions at scale?

Just wondering.

Back to Dennett:

Evolution is not restricted to living organisms, as Richard Dawkins demonstrated in 1976 in The Selfish Gene. Counterfeit people are already beginning to manipulate us into midwiving their progeny. They will learn from one another, and those that are the smartest, the fittest, will not just survive; they will multiply.

Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.

The moment has arrived to insist on making anybody who even thinks of counterfeiting people feel ashamed—and duly deterred from committing such an antisocial act of vandalism.

OK. Done. Now we can all go back to generating cute bears on a bicycle with Dall-E 2.

Putting Lipstick on a Pig

This section of the newsletter is dedicated to AI tools and services that help people and things pretend to be who/what they are not. Another title for this section could be Cutting Corners.

Here we are again. Whenever I think that people can’t do any worse with AI, and that I’ll have to close my favourite section of the newsletter for lack of material, something new comes up that is outrageous enough to give me new hope.

This week we talk about how you can fool yourself with a synthetic girlfriend.

No. Not one like the Replika chatbots we talked about in Issue #2 – 61% of the office workers admit to having an affair with the AI inside Excel.

Worse. Much worse.

Alexandra Sternlicht, reporting for Fortune:

Caryn Marjorie, a 23-year-old influencer, has 1.8 million followers on Snapchat. She also has more than 1,000 boyfriends, with whom she spends anywhere from 10 minutes to several hours every day in individual conversations, discussing plans for the future, sharing intimate feelings and even engaging in sexually charged chats.

These boyfriends are dating a virtual version of Marjorie, powered by the latest artificial intelligence technology and thousands of hours of recordings of the real Marjorie. The result, CarynAI, is a voice-based chatbot that bills itself as a virtual girlfriend, with a voice and personality close enough to that of human Marjorie that people are willing to pay $1 per minute for a relationship with the bot.

Though CarynAI has only been charging users for a week in beta testing, it’s already generated $71,610 in revenue from her 99% male partners, according to an income statement Marjorie’s business manager shared with Fortune. With this, Marjorie sees having an A.I. doppelgänger as a promising way to level up her career as an influencer.

“I’ve been very very very close with my audience, but when you have hundreds of millions of views every single month, it’s just not humanly possible to speak to every single viewer,” says Marjorie, who posts over 250 pieces of content to Snapchat every day. “And that’s where I was like, ‘You know what: CarynAI is gonna come and fill that gap.’” And she believes the company has the potential to “cure loneliness.”

CarynAI is the first romantic companion avatar from AI company Forever Voices, which has made chatbot versions of Steve Jobs, Taylor Swift and Donald Trump (among others) that are similarly available for pay-per-minute conversations on Telegram.

John Meyer, the CEO of Forever Voices says that “ethics is something [he] and the engineering team take very seriously,” and that they are looking to hire a chief ethics officer. With this in mind, he also believes the technology is “especially important” for young people, particularly kids like him who are “not typical” and “struggle with friends.”

Very seriously.

Of course.

Breaking AI News

Want More? Read the Splendid Edition

In the Professional Services industry, KPMG has started using a customized version of ChatGPT that finds the right experts to pitch for a proposal within a 10,000-person database.

Edmund Tadros, reporting for The Australian Financial Review:

The KymChat tool can securely access internal data to quickly find experts within a 10,000-strong consulting team for use in proposals.

“The first-use case is finding people in the business,” said John Munnelly, KPMG’s chief digital officer.

KPMG’s KymChat, which has been created by Microsoft, is a private version of the popular ChatGPT and uses the new and improved GPT-4 language model to generate responses to user prompts.

The tool can access KPMG’s internal database of partner and staff resumes and, depending on the rank of the person, certain internal financial data about the firm, Mr Munnelly said. More databases will be added over time.

The data entered into the tool are kept within KPMG’s servers locally, although KymChat does access an overseas-based supercomputer to process queries…
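To make the architecture a little more concrete, here is a minimal sketch of how an internal people-finder of this kind might be wired together: embed the staff resumes, retrieve the closest matches to a proposal brief, and let GPT-4 summarize whom to pitch. Everything below (function names, prompt, the in-memory resume store) is my own assumption for illustration, not KPMG’s or Microsoft’s actual design.

```python
# A hypothetical sketch of a KymChat-style expert finder, not the real thing:
# rank resumes by embedding similarity to a proposal brief, then ask GPT-4
# to recommend whom to pitch based on the top matches.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def embed(text: str) -> np.ndarray:
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

def find_experts(brief: str, resumes: dict, top_k: int = 5) -> str:
    # In production you would precompute and index the resume embeddings;
    # here we embed on the fly for brevity.
    query = embed(brief)

    def score(cv: str) -> float:
        vec = embed(cv)
        return float(np.dot(query, vec) /
                     (np.linalg.norm(query) * np.linalg.norm(vec)))

    ranked = sorted(resumes.items(), key=lambda kv: score(kv[1]), reverse=True)
    context = "\n\n".join(f"{name}: {cv}" for name, cv in ranked[:top_k])
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Given these staff profiles, recommend which experts "
                        "to pitch for the proposal and explain why."},
            {"role": "user",
             "content": f"Proposal brief: {brief}\n\nProfiles:\n{context}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

Note how, in a setup like this, the resumes stay in the company’s own store and only the handful of retrieved profiles ever reach the model, which is consistent with Mr Munnelly’s description of data being kept on KPMG’s servers while an overseas supercomputer processes the queries.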