- Ken Griffin can’t wait to turn the Finance industry upside down with GPT-4
- Coca-Cola allegedly explores the new frontier of product placement
- Microsoft couldn’t wait a second longer to stuff ads into your chats with its AI
- There is a connection between the invention of the spreadsheet and the transformation of the financial workforce
- Once upon a time, photocopiers were scary
- Is trading the only human interaction that will be algorithmic in the future?
- Humans remain equally gullible after 60 years of exposure to AI
- How to start your marriage with a lie in 101 ways
P.S.: This week’s Splendid Edition of Synthetic Work is titled I Know That This Steak Doesn’t Exist and it’s all about the impact of AI on software development.
- how your peers are using AI in your industry (Education, Finance, Government, Health Care, Media & Entertainment, Tech, etc.)
- what life-changing AI tools can enhance your productivity at work (tested or used by me personally)
- why and when to use specific techniques (like prompting) to improve your interaction with the AI
- how you can use AI to perform tasks that matter in your profession
This week we introduce a new section of the newsletter: Putting Lipstick on a Pig. If you’ve read what I said on social media in the last few weeks, you know it was inevitable. I don’t want to spoil anything. Keep reading.
Putting Lipstick on a Pig will occasionally replace the AI Joke of the Week section to give us all a break from the lame jokes generated by GPT-4, LLaMA and the other AI models I test and use. We definitely need to find a better way to assess the risk of human extinction.
Speaking of which, I’d like to mention something serious for a moment. If you are a new reader, I promise you the newsletter is not as intense as the content that follows in this brief intro.
Here we go.
Some of you may have seen an unusual update from me on social media:
Synthetic Work is, in part, about understanding whether my concerns have merit. And I’m writing it in a humorous way to keep AI approachable for as many non-technical people as possible.
Well, earlier this week, I was invited to sign an open letter drafted by the Future of Life Institute, calling for a 6-month pause in developing large language models more powerful than GPT-4.
According to widely circulated estimates, GPT-4 has 1 trillion parameters (which would make it the biggest model ever created) and has been trained on 66 times more data than GPT-3 (the model at the basis of ChatGPT). However, the scientific community knows nothing about how this AI model really works or what’s contained in its training data set. So there’s no way to assess what it might or might not do from a safety point of view, or an ethical one, or a privacy one, or an intellectual property infringement one, and so on.
We also know that OpenAI is already training GPT-5, and what they are seeing is concerning enough for its CEO to say something like this:
there will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there.
— Sam Altman (@sama) December 26, 2022
Along with me, more than 1,000 other people have signed the letter, including actually important people: Elon Musk (who originally co-founded OpenAI with a $100M donation), Yuval Noah Harari (who wrote Sapiens and Homo Deus), Emad Mostaque (the CEO of Stability AI, which gave us Stable Diffusion), Yoshua Bengio (one of the founders of modern AI), Gary Marcus (the AI researcher and university professor who appears on every mainstream TV channel and newspaper calling out the endless imperfections of ChatGPT – a person I talk to very often), the founding researchers of DeepMind (the UK competitor of OpenAI, acquired by Google), and many others.
The letter is not perfect, and none of us expects that it will actually grant humanity a pause to understand what is happening and what might happen next. But it’s a good way to get the attention of regulators and build shareholder pressure on Microsoft to do things with more rigor.
One of the key things that the critics of this letter have not taken into account, possibly due to information asymmetry, is that, on top of the December tweet above, the CEO of OpenAI went on the record with ABC News in mid-March saying “we are a little bit scared of this”, referring to the power of the GPT models they are developing (and here he’s clearly referring to what comes after GPT-4).
Similarly, Ilya Sutskever, OpenAI’s chief scientist and co-founder, told The Verge:
These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models.
At the same time, in mid-March, Microsoft, which now controls the distribution of OpenAI models at a planetary scale, laid off its entire AI Ethics and Society team, the very people hired to guarantee that AI models would be released in a responsible manner.
So the question is: what would happen if the CEO of a big pharmaceutical company went on the record with a prime-time news channel saying something like the following?
We think we have discovered a drug that enhances all human capabilities, turning people into superhumans, but it’s very potent, and it will get more potent, and we are a little bit scared of what we have seen in terms of side effects. Oh, by the way, we fired the people we hired to be sure we take all due precautions before we start selling this drug.
This topic is so important that newspapers and TV networks around the world have paid attention to it. Among others, Al Jazeera English reached out to ask me for a live interview.
Alright. Now that you know, you can safely think “Who cares?” and we can go back to our normal programming. Oh, you already did? Perfect.
What a weird intro
This is the Free Edition of the newsletter and, well, it's free to receive in your inbox every week. But to access this online archive, you need a paid membership.