- Intro
- What Caught My Attention This Week
  - Publicis announced an investment of €300mn as part of its AI strategy.
  - The New York Times started building a team to explore the use of generative AI in its newsroom.
  - For the first time, the Brisbane Portrait Prize will accept entries completed by generative AI.
- UBI & Welfare
  - Is it really time to talk about an AI tax?
In two weeks, Synthetic Work will be one year old.
In these 11 months and 2 weeks, we’ve seen mild but growing evidence of AI impacting occupations. More importantly, we have seen not-so-mild warnings from the people building AI that the impact will become more evident as more powerful models are released into the world.
So, I’ve been thinking for a while about adding a new section to this newsletter, focused on welfare policies like the Universal Basic Income (UBI) and adjacent ideas.
When I asked about it on social media, some of you expressed strong interest in the topic. If that’s not the case for you, please do reply to this email and let me know.
If, instead, you are interested, please understand that I know very little about welfare policies. Unlike artificial intelligence, to which I’ve dedicated the last eight years of my professional focus, this is not an area where I have any expertise.
Normally, I consider this an advantage, not a handicap.
Sometimes, the most valuable insights come from people who are not experts in a field, as they are unconstrained by the biases and assumptions that experts have. At least, this is a well-known position among the top venture capitalists in Silicon Valley.
Yet, my goal is not to provide insights on welfare policies but, as always, to learn about them, and to share what I learn with you. And, as always, the topic must be connected to the impact of AI on jobs, and the future of work.
The first story is below.
Alessandro
Publicis announced an investment of €300mn as part of its AI strategy.
Daniel Thomas, reporting for Financial Times:
Senior advertising executives have warned that generative artificial intelligence could challenge the position of the established advertising groups by making it easier for their clients to carry out their own marketing activities and allow tech companies to offer rival services.
Even without these external challenges, many expect the use of AI technology to lead to fewer jobs within the larger groups, as areas such as media planning and buying are automated and creative ideas can be carried out more quickly and cheaply.
In a strategy update on Thursday, Publicis said that the plans would put AI technology at the “core” of its business.
…
The Paris group, which owns advertising and marketing agencies around the globe, said that its AI strategy would allow all 100,000 of its staff to use consumer data for 2.3bn profiles of people across the world, with “trillions of data points about content, media, and business performance”.
Among its objectives, Publicis said that this would allow greater accuracy for media planning, buying and optimisation, as well as personalised advertising “at scale” for brands owned by its clients.
…
Sadoun said that its investment in AI would not lead to any job losses, although he predicted that people would have “different jobs” in the future. “We can create jobs through our growth. AI is going to radically change how we operate.”
We talked about Publicis in one of the first Splendid Editions of Synthetic Work, and the company is even tracked in the AI Adoption Tracker.
Based on the data we collected at that time, it’s not unreasonable to assume that, after using generative AI in its operations for almost a year, Publicis has concluded that the technology would allow its clients to operate too independently, at a fraction of the price they pay to the ad agency.
And based on that, the company concocted a strategy to provide self-service generative AI tools to its clients to remain competitive.
Every company in the world should try to be intellectually honest and ask the same question: Do our clients still need us now that they have access to generative AI?
The New York Times started building a team to explore the use of generative AI in its newsroom.
Emilia David, reporting for The Verge:
Zach Seward, who was recently hired by the publication to head AI initiatives, posted on Threads that the team will be “focused on prototyping uses of generative AI and other machine-learning techniques to help with reporting and how the Times is presented to readers.”
Seward’s post said the Times plans to hire a machine learning engineer, a software engineer, a designer, and a couple of editors to round out the AI newsroom initiative. So far, the Times has posted job listings for an associate editorial director for AI initiatives and a senior design editor.
“The team, led by the editorial director for A.I. initiatives, will also include colleagues with a mix of engineering, research, and design talent, acting as a kind of skunkworks team within the newsroom. Together, they will partner with other teams in the news, product, and technology groups to take the best ideas from prototype to production,” the listing for associate editorial director, AI initiatives, reads in part.
So, on one hand, The New York Times is happy to sue OpenAI, alleging that its LLMs were trained on copyrighted articles from the newspaper. On the other hand, it’s building a team to use those very LLMs to write articles for the newspaper.
On the latter point: like every other news organization we tracked in the Splendid Edition of Synthetic Work, The New York Times has no other choice.
On the former point, allow me a rare digression from the main focus of Synthetic Work.
If every organization in the world asked OpenAI for money on the basis that it contributed to the training dataset of GPT-4, ChatGPT would not exist anymore.
OpenAI is aware that if they give in to The New York Times, they will have to give in to an almost endless list of other claimants. Publishing houses, news organizations, and social media networks would be just the top of the list.
Then, they would see universities and research institutions coming forward. Then, Hollywood studios, TV networks, and music labels. And so on.
All of this while, ironically, the individuals who produced the actual content (the journalists, the writers, the actors, the musicians, you and me) would not see a dime from OpenAI.
Because such a situation is financially unsustainable, and because we have almost reached the limits of the data that can be gathered from the Internet, OpenAI must have been focused on synthetic data for a while.
In the short term, they will continue to appease as few claimants as possible, licensing their content for a fee. We’ve seen it in the December deal with Axel Springer.
But their long-term strategy must depend on producing datasets of high-quality synthetic data, used to train models as capable as GPT-4 without a single piece of copyrighted material.
A growing body of research suggests that this is possible, and just earlier this week somebody asked:
Someone clever asked me the other day: at this price, how much would it cost to recreate all the text on the internet? https://t.co/1yyHo0a38r
— Nat Friedman (@natfriedman) February 1, 2024
Now. Not only is the entire Internet finite. It’s also full of low-quality data that only negatively impacts the performance of AI models. So researchers have started wondering: what would happen if the training datasets for these models were much smaller, but composed only of, for example, the best textbooks we ever published?
To make the example more specific: rather than train an AI model with 500 biology books of variable quality, what if we trained it with only the best 5 biology books ever published?
Humans spend a lifetime reading mediocre books about the subjects they are passionate about. Only after reading a lot of them can they recognize the one or two books that are worth paying attention to. And those are the only ones they recommend to their friends.
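To make the idea a bit more concrete, here is a minimal sketch, in Python, of what quality-based data curation could look like. To be clear, nothing here reflects OpenAI’s actual pipeline: the quality_score() heuristic is a placeholder I invented, standing in for what would, in practice, be a learned quality classifier or human curation.

```python
# A minimal, hypothetical sketch of quality-based corpus filtering.
# quality_score() is a stand-in for a trained quality classifier.

def quality_score(document: str) -> float:
    """Placeholder heuristic: reward lexical diversity.
    A real pipeline would use a learned quality model."""
    words = document.split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def top_quality_subset(corpus: list[str], keep_fraction: float = 0.01) -> list[str]:
    """Keep only the best slice of the corpus: the '5 best biology books'
    out of 500, rather than training on all 500."""
    ranked = sorted(corpus, key=quality_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

# Toy example: keep the single best of two documents.
corpus = [
    "the cell the cell the cell",
    "mitochondria produce energy via respiration",
]
print(top_quality_subset(corpus, keep_fraction=0.5))
```

The interesting design question is not the filtering itself but the scoring: whoever builds the best judge of “textbook quality” gets to train on a small, clean, possibly copyright-free dataset.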
If this strategy works, not only might OpenAI steer clear of copyright claims in the future, but it might also be able to produce models that are significantly more capable than GPT-4.
Copyright lawsuits like the one from The New York Times are only accelerating that process.
For the first time, the Brisbane Portrait Prize will accept entries completed by generative AI.
Josh Taylor, reporting for The Guardian:
The Brisbane Portrait Prize – with a top prize worth $50,000 – has been described as Queensland’s answer to the Archibalds, with selected entries displayed at the Brisbane Powerhouse later in the year.
…
In the terms and conditions of entry, the Brisbane Portrait Prize notes this year that it will accept entries “completed in whole or in part by generative artificial intelligence” so long as the artwork is original and “entirely completed and owned outright” by the entrant.
A spokesperson for the prize told Guardian Australia that allowing AI entries acknowledged the definition of art was not stagnant and would always grow.
…
The spokesperson said in the past more traditional artists had objected to digital and photographic entries being allowed – which are now generally accepted in the art world.
…
A previous winner, the painter Stephen Tiernan, told the ABC there were still artistic processes involved in the creation of AI-generated work, and ultimately the rule change kept the prize contemporary.
The spokesperson said the competition would determine ownership of the work based on the processes used and the terms of the AI program behind it. When entering, artists must declare they have full copyright over the entry.
…
Dr Rita Matulionyte, a senior lecturer in law at Macquarie University, said AI itself could not be an author under Australian copyright law, but it remained an open question how much input a person must have in an AI-assisted artwork to claim ownership.
“The thing that is unclear [is] how much human contribution is enough for a human to become an author,” she said. “Is one prompt enough or is it 100 prompts that you have to make?”
…
The National Portrait Gallery’s National Photographic Portrait Prize for 2024 allows the use of generative AI tools in the development of photographic work entered – but will not allow wholly AI-generated images.
…
The World Press Photo competition in November announced it would exclude AI-generated entries from its open format following “honest and thoughtful feedback”, stating the ban was “in line with our long-standing values of accuracy and trustworthiness”.
As an art collector and an AI practitioner, I have a lot of thoughts about this. Three of them:
First: it’s inevitable that some organizations, in every industry, will welcome generative AI. Sometimes it’s for publicity. Other times it’s to remain competitive. Other times, as is suggested here, it’s to remain contemporary. But art institutions, given their mission and role in society, are more likely than most to welcome a new technology.
Emad Mostaque, the CEO of Stability AI, once said that LLMs are “cultural objects.” It’s a deep observation that has stayed with me ever since. Perhaps not just LLMs but all generative AI models are.
They have become part of our culture and they contain a snapshot of our culture.
Second: those of you who have followed my research on diffusion models here and on social media know that you can use Stable Diffusion to “mine” pretty pictures or to create something original from scratch. The former requires no human contribution other than clever engineering and the press of a button. The latter requires a huge amount of work, and the human contribution to the artistic process is unquestionable.
Even if I don’t have the time I wish I could dedicate to it, I maintain a page about my experiments to show that this is the case:
Third: this decision helps open the doors to a new generation of artists who will use generative AI as a tool, in the same way artists before them used easels and then cameras.
Today, AI art is mainly associated with the cheap mass production of digital assets for video games, advertising, movies, etc., but decisions like the Brisbane Portrait Prize’s will change that, and people will see the possibility of an artistic career in generative AI.
Like Claire:
After a day of pure adrenaline recovery, as promised, a thread on corpo | real ( @braindrops ) & a note on what profits are meant for. It's long, but rich in effort.
If tldr, the bulk of the thread is on the collection, & the last few tweets are my intentions for the profit.🌸 pic.twitter.com/iSwHZUYTls
— Claire Silver 🌸 (@ClaireSilver12) February 1, 2024
Is it really time to talk about an AI tax?
Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission, writes in the Financial Times:
Generative AI is already bringing a host of societal challenges. Global job losses are one key expected effect. While the political debate remains largely focused on safety and security harms, various studies foresee deep disruptions to labour because of the technology. It was Elon Musk who raised the future of work on the margins of last year’s AI safety summit. He casually mentioned, in a conversation with UK Prime Minister Rishi Sunak, that we must anticipate a society in which “no job is needed”. The reverberations of that are unimaginable.
…
While consultants are optimistic that AI will “enhance” jobs rather than replace them, research by ResumeBuilder found that more than a third of business leaders said AI had already replaced workers in 2023. There is no indication that the more sophisticated versions of generative AI would lead to a slowdown in impact on employment.
…
To rebalance the cost-benefit impacts of AI in favour of society — as well as to make sure the necessary response is affordable at all — taxing AI companies is the only logical step. I had not anticipated starting 2024 by agreeing with Bernie Sanders and Bill Gates, both of whom have proposed a tax on job-taking robots in the past, but here we are. An updated version of their plan, taking in generative AI’s progress, is needed.
…
A debate resulting in global political consensus may take years and should start now. Agreement must be reached around the percentage of revenue or profit to be taxable and the purpose of the tax — should it be focused on mitigating job losses specifically or on addressing the multiple societal impacts of AI more broadly? And given that China and the US are both leading AI developers and have not yet implemented the minimum corporate tax rate rules domestically, incentives and enforcements will have to be effective.
It took years to get a minimum global corporate tax base in place. Considering the impending costs to society, a conversation about a targeted tax for billion-dollar AI companies cannot wait.
There’s not much substance in this article. It raises awareness about the fact that Elon Musk, Bill Gates, and Bernie Sanders are in favor of an “AI tax,” in case you were not paying attention.
It could be summarized as “Let’s start talking about taxing AI companies now because it will take forever before it happens,” a message that the author is in a position to whisper in the ears of the European Commission.
Yet, I’ve decided to start this new section with this article because the time horizon is an element worth pondering.
In reality, there are three timelines to consider:
1. How long will it take for AI to become so powerful that it can impact global occupation in a meaningful way?
2. How long will it take, after the advent of such an AI, for it to have a meaningful impact on global occupation?
3. How long will it take for the governments of the world to react to either #1 or #2?
On the first question, the answer could range from “It will never happen” to “GPT-4 is already powerful enough to have a meaningful impact on global occupation. It’s not yet visible only because it takes time for large companies to adopt new technologies.”
People are fixated on the idea that job disruption depends on the discovery of Artificial General Intelligence (AGI), but we have no evidence that what we already have is not enough.
That evidence would have to appear in a very precise way: companies must fire their employees to replace them with generative AI models, those AI models must then fail on every productivity metric that matters, and finally the companies must revert to human employees.
Only then can we declare that today’s AI is not enough to impact jobs.
Focusing on this evidence becomes superfluous, of course, if AGI comes along. So, as a one-off exception, I’ll share the latest projections, kindly compiled by Brett Winton at ARK Invest.
But these are just projections. Even if the forecast error persists, there’s no guarantee that we’ll see an AGI in 2026 (at the end of which, OpenAI will likely launch GPT-6).
Moreover, these predictions do not take into account external factors that might altogether prevent the launch of an AGI:
- Who told you that the US government would not intervene to block the launch of the AGI as soon as it’s discovered?
- Who told you that the US military would not seize the AGI as soon as it’s discovered on the basis that it’s considered a weapon?
Back to the point where we started: how much time do governments really have to debate an AI tax, UBI, and other welfare policies?
If we assume the most challenging scenario, AGI discovered and released to the world by the end of 2026, then it all depends on how long it will take for companies around the world to adopt it.
What if, for example, the adoption of AGI is so slow that governments don’t register any meaningful impact on global occupation for 20 years?
This chart can be misleading, as it only reaches 2005, and new technologies have been adopted faster and faster in the last 20 years.
ChatGPT is the fastest-adopted technology in history: it took only 5 days to reach 1 million users, and only two months to reach 100 million monthly users.
(this chart and many others discussed in this week’s Splendid Edition of Synthetic Work are provided by ARK Invest)
Of course, that metric doesn’t automatically translate into business adoption and, in fact, enterprises in multiple industries have been characteristically slow in adopting generative AI.
That’s because generative AI is very hard to secure, hallucinates in unpredictable ways, and requires workforce training even to figure out how to interact with it.
But:
- What would be the business adoption curve of an AI that just works (like AGI is expected to)?
- What would be the business adoption of an AI that can match a human worker 1:1 in understanding a job and performing it?
- What would be the business adoption of an AI that requires no infrastructure preparation or training for its use, and is as simple to interact with as a person sitting next to you, listening to everything you say?
Would this accelerate the adoption of AI by 50%? 100%?
And even if we manage to answer all these questions, we still have two final questions to focus on:
What is a “meaningful impact on global occupation”?
For example, is a 10% net reduction of the global workforce in a year a reason to issue welfare policies? Or is it simply the expected behavior of a free market in 2026, when more powerful automation technologies become available?
Different political parties invariably have different answers to this question, dictated not just by ideology but also by the political agenda of the moment, and the economic situation of the country.
And all of this before considering two other key variables in this complex equation: the potential advent of open source AIs as powerful as GPT-4, and the uneven access to powerful AI across the world, leading to a potential rebalancing of the global economy.
This torrent of questions is meant to highlight the fact that any conversation about an AI tax today will be considered premature by some and pointless by others.
Whatever is discussed today, in the context of GPT-4, might have to be turned upside down in the context of GPT-5, or in the context of an open source AGI released into the world by a new Satoshi Nakamoto.
This week’s Splendid Edition is titled AI Expectations.
In it:
- A Chart to Look Smart
  - More than 50% of surveyed UK undergraduates are using generative AI to help with their essays. They will be your future new hires.
  - GitClear published a study on developers using Copilot and found a decline in code quality.
  - ARK Invest published its annual Big Ideas report, giving ample room to AI forecasts for this year and 2030.
- Prompting
  - New research confirms that GPT-4 is capable of generating creative business ideas if prompted in the right way.
- The Tools of the Trade
  - A little app to transcribe a 45-minute YouTube video in 4 minutes.