Issue #40 - I Didn't See It Coming

December 2, 2023
Free Edition
Generated with Stable Diffusion XL and ComfyUI
In This Issue

  • Intro
    • How quickly is AI moving?
  • What Caught My Attention This Week
    • Does OpenAI fantasize about new forms of economic organizations led by AI?
    • Who controls the performance of the AI you are adopting in your company?
    • Does your child want to become a meteorologist when he/she grows up?
  • The Way We Used to Work
    • How creating and presenting slides has changed in 100 years.
  • The Way We Work Now
    • Sports Illustrated is publishing poor-quality AI-generated news articles. Synthetic Work subscribers knew this 9 months before anybody else.
  • Putting Lipstick on a Pig
    • Faking conference speakers for fun and profit.
Intro

How quickly is AI moving?

Brett Winton, the Chief Futurist at ARK Invest, attempts to answer:

What does it mean?

It means that in just three years we might have extraordinary artificial intelligence running on our smartphones, our home appliances and industrial equipment, our cars and bikes, our buildings and homes, and, potentially, our clothes and accessories, at a price that will be affordable to most people.

Imagine that all these AIs are, in fact, a single AI. The same one that you normally talk to on your ChatGPT window. The one you have personalized with a synthetic voice chosen by you, and a trove of personal information, coming from all the chats you had, and from documents and emails and pictures that you allowed it to access.

Imagine that the various instances of your AI, in the car or at the desk in your office, all remember who you are and what you want and what you are up to. Imagine that this AI absorbs additional information from the local context it’s operating in at any given time: the fridge at home or the bike on your commute to work.

The question we can begin to dare to ask is: “What can I achieve with an AI that is always with me, knows everything about me, and can assume what I need at any given time?”

And if you don’t care about answering this question, maybe you might want to ask this one: “What can my coworkers achieve with an AI that is always with them, knows everything about them, and can assume what they need at any given time?”

Alessandro

What Caught My Attention This Week

OpenAI researcher fantasizes about new forms of economic organizations not bottlenecked by human CEOs.

Posting on X under the pseudonym “roon”, he/she writes:

The near future involves AI assistants and agents that smart people have to figure out how to work into business processes. the number of use cases will grow as the AIs get smarter. but ultimately the creativity and flexibility of humans will be the bottleneck.

After the second Industrial Revolution when running electricity became common, most industrialists just switched out their water wheel for a power contract and changed nothing else. they celebrated because they didn’t need to set up near water. it took creativity to get further.

To unlock the true value of AI a whole parallel AGI civilization will spawn, creating new economic organizations from the ground up rather than waiting on human CEOs to figure out when and where to deploy them. earliest to go will be any services that can be delivered digitally.

AIs don’t have to be smarter than us to reach this event horizon, only faster. As long as they’re delivering value autonomously in this way people will want to cede more and more control to AGI civilization and find ways to serve it by acting as conduits to the real world.

As an example good businesses to build now would be to finally figure out the cloud labs model so that powerful AIs can run bio assays or other experiments on physical substrate. You can perhaps model this as a new type of aaS business where the customer is ASI.

The datacenters will represent large percents of GDP. most of the business of running and planning civilization except perhaps at the highest levels (reward must be defined by humans even if policy by NNs). and people will need to own a chunk of the returns and governance of AGI.

Notice the use of “near future” at the beginning of the thread.

Also notice that this is a single researcher’s opinion and, as such, it doesn’t necessarily reflect the perspective of OpenAI as a company. But a company is made of people, and those people shape the company’s culture and direction.

If you have read Synthetic Work for the last few months, you know about the research I’ve been doing on the possibility of generating synthetic advisors that can help decision-makers gain broader perspectives and make better decisions.

I showed you an early implementation in Issue #31 – The AI Council.

In that issue and Issue #36 – The Dawn of the AI Corporation, we talked extensively about the possibility that AI might eventually replace CEOs and other top management positions in a company, and how legal scholars are already pondering the legal implications of companies that are entirely led by AI rather than human managers.

Even if these ideas are shared by more than a single researcher inside OpenAI, the company could never openly go in this direction, as painting a picture where your technology is going to replace the very people who are buying your product is suicidal.

But, just like this researcher did, OpenAI can paint a picture where new AI-led companies emerge alongside traditional human-led companies, offering a positive vision of synergy and collaboration between the two.

Sam Altman has suggested multiple times that, in his mind, an artificial general intelligence is one that can make breakthrough discoveries in science. If that ever becomes possible, the breakthrough won’t happen overnight. It will require a human to prompt the AI with a research goal and then leave it alone to explore and test, which translates into hours, days, weeks, months, or years of computation that somebody has to pay for.

Who pays for that? Not OpenAI.

It’s not unthinkable that, once the technology is mature enough, any large organization out there would create a startup subsidiary led by AI, limiting liability, brand exposure, and reputational risk. And these synthetic subsidiaries would be left alone to explore and test new ideas in finance, science, engineering, and so on. The parent company would just invest in the subsidiary by paying for the computational power, and supervise the work.

But if these AI-led subsidiaries start to perform successfully, the outcome might be the same: AI starts replacing top management and whole workforces. It just sounds less threatening and immediate.

If this scenario sounds too far-fetched and futuristic, ask yourself how the shareholders of a public company would prefer to see their investment maximized. All that has to happen to trigger the domino effect is for one public company to run this experiment and succeed.


A question that business leaders should ask themselves more often is: “Who controls the performance of the AI I’m adopting in my company?”

Put differently, when we talk about the impact of AI on jobs, we should ask the question “Which AI?”

In multiple issues of this newsletter, we discussed the many ways a centralized AI provider can influence the performance of the AI we are adopting in our companies. We mainly talked about user manipulation, intentional or not, to maximize engagement, spending (via ad exposure), or even political affiliation.

The most basic form of manipulation we haven’t discussed so far is also the most common: profit maximization.

Brett Winton, again, helps us frame the problem:

Just like the performance of an employee matters to the company, the performance of the AI should matter.

The more a company depends on that human employee, like a CEO, the more his/her performance must be scrutinized. If an entire company, slowly but surely, starts to depend on the performance of an AI, measuring how that AI performs over time becomes paramount.

This concept might be difficult to grasp.

We think about artificial intelligence as software, not as people. And we think about software as a deterministic tool that either works or doesn’t work.

Yes, we know that, over time, Excel could become slower and clunkier, due to lack of optimization or so-called feature creep. But we don’t expect it to perform differently when it comes to manipulating the numbers in the cells of the spreadsheet.

We don’t think that a division that returns ten decimals of precision will, over time, start to return only one decimal of precision.

Bad things would happen if our spreadsheet started to lose precision. And it would be especially disappointing to discover that it happened because the company offering the spreadsheet as a service is trying to save money on the computational power necessary to calculate those decimals.

None of this happens with Excel because such corner-cutting would be easily spotted. But language is different, and spotting a change in the performance of an AI is much more elusive.

Moreover, the AI provider can frame the change as an “increase in conciseness” that most users prefer.

It’s entirely possible that what we are seeing today are just short-term measures, mostly applied to the consumer-grade version of ChatGPT, until the shortage of GPU chips is resolved. But if you are considering a binding contract with OpenAI, Anthropic, Cohere, etc., you might really want to review the quality of service portion of the agreement you are about to sign.
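
If you do end up signing, it’s also worth establishing your own performance baseline rather than relying on the vendor’s word. Below is a minimal sketch of what that could look like, assuming the official OpenAI Python client (openai >= 1.0); the golden set and the exact-match scoring rule are illustrative placeholders, not a standard methodology.

```python
# Minimal sketch of a quality-drift monitor for a hosted LLM.
# Assumption: the official OpenAI Python client (openai >= 1.0).
# The golden set and exact-match scoring below are illustrative placeholders.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A frozen set of tasks with unambiguous answers, so results are comparable run to run.
GOLDEN_SET = [
    {"prompt": "What is 17 * 23? Reply with the number only.", "expected": "391"},
    {"prompt": "Spell 'necessary' backwards. Reply with the word only.", "expected": "yrassecen"},
]

def run_golden_set(model: str = "gpt-4") -> float:
    """Return the fraction of golden-set tasks the model answers exactly."""
    correct = 0
    for task in GOLDEN_SET:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": task["prompt"]}],
            temperature=0,  # minimize sampling noise between runs
        )
        answer = response.choices[0].message.content.strip()
        correct += answer == task["expected"]
    return correct / len(GOLDEN_SET)

if __name__ == "__main__":
    score = run_golden_set()
    # Append a timestamped record; a downward trend across runs is the signal to investigate.
    with open("model_quality_log.jsonl", "a") as log:
        log.write(json.dumps({"date": datetime.date.today().isoformat(), "score": score}) + "\n")
    print(f"Golden-set accuracy: {score:.0%}")
```

Run something like this on a schedule, from the day you sign, and you have a crude but objective record to bring to the table if the model you are paying for quietly becomes more “concise”.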


Does your child want to become a meteorologist when he/she grows up?

It might not be a good idea. At least, not if he/she wants to focus on weather forecasting on Earth.

Gregory Barber, reporting for Wired:

In September, researchers at Google’s DeepMind AI unit in London were paying unusual attention to the weather across the pond. Hurricane Lee was at least 10 days out from landfall—eons in forecasting terms—and official forecasts were still waffling between the storm landing on major Northeast cities or missing them entirely. DeepMind’s own experimental software had made a very specific prognosis of landfall much farther north. “We were riveted to our seats,” says research scientist Rémi Lam.

A week and a half later, on September 16, Lee struck land right where DeepMind’s software, called GraphCast, had predicted days earlier: Long Island, Nova Scotia—far from major population centers. It added to a breakthrough season for a new generation of AI-powered weather models, including others built by Nvidia and Huawei, whose strong performance has taken the field by surprise.

In a paper published today in Science, DeepMind researchers report that its model bested forecasts from the European Centre for Medium-Range Weather Forecasting (ECMWF), a global giant of weather prediction, across 90 percent of more than 1,300 atmospheric variables such as humidity and temperature. Better yet, the DeepMind model could be run on a laptop and spit out a forecast in under a minute, while the conventional models require a giant supercomputer.

Standard weather simulations make their predictions by attempting to replicate the physics of the atmosphere. They’ve gotten better over the years, thanks to better math and by taking in fine-grained weather observations from growing armadas of sensors and satellites. They’re also cumbersome. Forecasts at major weather centers like the ECMWF or the US National Oceanic and Atmospheric Administration can take hours to compute on powerful servers.

Lam and Battaglia say they see the remarkable performance of their forecasting model as a starting point. Because it can compute any type of forecast with such ease, they believe it could be possible to tweak versions to perform even better for certain kinds of weather conditions, like precipitation or extreme heat or hurricane tracks, or to provide more detailed forecasts for specific regions. Google also says it is exploring how to add GraphCast into its products.
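
To make the “forecast in under a minute” point concrete: per the DeepMind paper, GraphCast takes the two most recent atmospheric states on a 0.25-degree global grid and predicts the state six hours ahead; a 10-day forecast is that single step applied 40 times, feeding each prediction back in as input. Here is a conceptual sketch of that rollout loop; the `model` callable and the state objects are placeholders, not DeepMind’s released API.

```python
# Conceptual sketch of GraphCast-style autoregressive forecasting.
# The trained network maps the two most recent atmospheric states to the
# state six hours ahead; `model`, `state_prev`, and `state_curr` are
# placeholders, not DeepMind's actual interfaces.
def rollout(model, state_prev, state_curr, steps: int = 40):
    """Run `steps` six-hour predictions (40 steps = a 10-day forecast)."""
    forecast = []
    for _ in range(steps):
        state_next = model(state_prev, state_curr)  # one 6-hour step
        forecast.append(state_next)
        # Slide the window: the newest prediction becomes the latest input.
        state_prev, state_curr = state_curr, state_next
    return forecast
```

Each step is a single pass through a neural network rather than a physics simulation, which is why the whole loop fits on a laptop while conventional models need a supercomputer.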

So far, the Google Cloud side of the business has been an infrastructure provider. I have long maintained that Google will never make a dent in the cloud market against Amazon and Microsoft because they fundamentally don’t understand the enterprise market.

In a past life, while leading strategy for my former employer, I extensively documented how Google Cloud came dangerously close to shutting down for good, and how much of its apparent success was smoke and mirrors.

Under the new leadership of Thomas Kurian, Google Cloud has been steadily refocusing on the application offering, providing a growing and compelling number of features for the collection of productivity applications that is now known as Google Workspace.

As I have argued countless times, Google’s real business opportunity is not in offering AI infrastructure as a service, but in offering AI applications as a service.

The DeepMind division is developing a growing number of critical AI applications: for drug development, new materials discovery, and now weather forecasting. This is exactly the kind of business I have had in mind for years, and the only one that would give Google a true competitive advantage in what won’t be called the cloud market for much longer.

Back to the article:

Despite Google’s strong results, weather forecasting is far from solved. Its AI model isn’t designed to provide ensemble forecasts, which detail multiple potential outcomes for a storm or other weather system, along with a range of probabilities that can be especially useful for major events like hurricanes.

AI models also tend to low-ball the strength of some of the most significant events, such as Category 5 storms. That’s possibly because their algorithms favor predictions closer to average weather conditions, making them wary of forecasting extreme scenarios. The GraphCast researchers also reported that their model fell short of the ECMWF’s predictions for conditions in the stratosphere—the upper part of the atmosphere—though they’re not yet sure why.

Relying on historical data for training involves a potentially serious weakness: What if the weather of the future looks nothing like the weather of the past? Because traditional weather models rely on laws of physics, they are thought to be somewhat robust to changes in Earth’s climate. The weather changes, but the rules that govern it don’t.

Battaglia says that the DeepMind system’s ability to predict a wide variety of weather systems, including hurricanes, despite having seen relatively few of each type in its training data, suggests it has internalized the physics of the atmosphere. Still, it’s one reason to train the model on data that’s as current as possible, Battaglia says.

This story prompted me to look for the exact definition of the meteorologist profession. And I ended up on a lovely website set up by the UK Government called National Careers Service.

One of the things that this website does is provide a list of 800 jobs that exist today, alongside an average salary.

So, here’s a fun exercise for the weekend, perhaps with your children: go through the list, explore the What you’ll do section of each job, and think about what jobs will be impacted by AI in the next five years.

Just be sure to take into account what we discussed in the intro of this Free Edition.
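
If you want to skim the list programmatically before sitting down with your children, here is a hypothetical sketch. The URL pattern and the link filter are assumptions about how the National Careers Service site is structured today, not a documented API, so inspect the actual pages (and the site’s robots.txt) before running it.

```python
# Hypothetical sketch: collect job-profile links from the UK National
# Careers Service website. The base URL, entry page, and the assumed
# "/job-profiles/" link pattern are guesses about the site's current
# structure; verify them against the live site before relying on this.
import requests
from bs4 import BeautifulSoup

BASE = "https://nationalcareers.service.gov.uk"

def job_profile_links(listing_url: str) -> list[str]:
    """Return deduplicated links that look like job-profile pages."""
    html = requests.get(listing_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return sorted(
        {
            BASE + a["href"]
            for a in soup.find_all("a", href=True)
            if a["href"].startswith("/job-profiles/")  # assumed URL pattern
        }
    )

if __name__ == "__main__":
    links = job_profile_links(BASE + "/explore-careers")  # assumed entry page
    print(f"Found {len(links)} job-profile links")
    for url in links[:10]:
        print(url)
```

From there, each profile page has the What you’ll do section the exercise is about.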

The Way We Used to Work

A section dedicated to archive photos and videos of how people used to do things compared to now.

When we think about how artificial intelligence is changing the nature of our jobs, these memories are useful to put things in perspective. It means: stop whining.

In 1919, the American company DuPont created a room specifically for maintaining large charts that showed important financial statistics for its explosives business and emerging chemical ventures.

Gene Castellano, writing for the Hagley museum:

The room was located on the renowned ninth floor of the DuPont Building in Wilmington, a few steps from the offices of Executive Committee members. Multiple charts for each business tracked sales, expenses, earnings, assets and ROI for the current year and ten years of history. Additional charts also tracked the information by month and included forecasts.

The chart room was fairly straightforward in the beginning but became more complex over time. A photo in Hagley’s collection from about 1920 shows the first system was a single rack of charts mounted in hinged frames, like pages in a notebook, which could be flipped from right to left as the presentation progressed. Managers sat in front of the display on rolling chairs so they could shift as the presentation moved down the line.

As DuPont added new businesses in the 1930s and 40s, the number of charts increased substantially. Not surprisingly, the challenges of presenting them also grew. The end result was a monorail system that suspended the charts on wheeled frames which allowed them to be rolled in front of committee members.

Sometime later, a small amphitheater with tiered seating was added that allowed departmental managers to also participate in the reviews, which were delivered by the chart room supervisor.

In 1950, DuPont hosted a series of presentations about the chart room, the techniques it employed and “management by ROI” at a financial conference hosted by the American Management Association (AMA). It was the first time that the chart concept was shown outside of the company. The talks offered substantial detail about the design of the charts, the financial disciplines used by the company and how they were presented to executives. At that time, the company maintained 350 charts which were alternated between different meetings over the course of a year.

Two things to ponder:

First: In just 100 years, we went from manually prepared charts, presented on a mechanical monorail, to using artificial intelligence to generate narrative, images, and, soon, charts and diagrams, starting from a request in plain English.

In a sense, the new Synthetic Work’s Presentation Assistant, which I unveiled in Issue #38 – How to do absolutely nothing to prepare the best presentation of your career, is the descendant of DuPont’s chart room.

Second: An innovative company can keep a secret for 30 years to maintain a competitive advantage. Keep that in mind when you consider the perspective on the OpenAI drama that we discussed in Issue #39 – The Balance Scale.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, and the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

The world is in shock because Sports Illustrated has been caught publishing poor-quality AI-generated news articles. Readers of the Splendid Edition of Synthetic Work knew this was coming 9 months before anybody else.

Maggie Harrison, reporting for Futurism:

Outside of Sports Illustrated, Drew Ortiz doesn’t seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he’s described as “neutral white young-adult male with short brown hair and blue eyes.”

Ortiz isn’t the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content who asked to be kept anonymous to protect them from professional repercussions.

The AI authors’ writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball “can be a little tricky to get into, especially without an actual ball to practice with.”

According to a second person involved in the creation of the Sports Illustrated content who also asked to be kept anonymous, that’s because it’s not just the authors’ headshots that are AI-generated. At least some of the articles themselves, they said, were churned out using AI as well.

Initially, our questions received no response. But after we published this story, an Arena Group spokesperson provided the following statement that blamed a contractor for the content:

“Today, an article was published alleging that Sports Illustrated published AI-generated articles. According to our initial investigation, this is not accurate. The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce. A number of AdVon’s e-commerce articles ran on certain Arena websites. We continually monitor our partners and were in the midst of a review when these allegations were raised. AdVon has assured us that all of the articles in question were written and edited by humans. According to AdVon, their writers, editors, and researchers create and curate content and follow a policy that involves using both counter-plagiarism and counter-AI software on all content. However, we have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy — actions we don’t condone — and we are removing the content while our internal investigation continues and have since ended the partnership.”

None of the articles credited to Ortiz or the other names contained any disclosure about the use of AI or that the writer wasn’t real, though they did eventually gain a disclaimer explaining that the content was “created by a 3rd party,” and that the “Sports Illustrated editorial staff are not involved in the creation of this content.”

Though Sports Illustrated’s AI-generated authors and their articles disappeared after we asked about them, similar operations appear to be alive and well elsewhere in The Arena Group’s portfolio.

The readers of the Splendid Edition of Synthetic Work knew about this already in March 2023. Almost 9 months ago.

In a Splendid Edition prophetically titled The Perpetual Garbage Generator, we documented how the Arena Group started using AI technology provided by Jasper and OpenAI to generate articles across its portfolio of websites.

And that is also tracked in Synthetic Work’s AI Adoption Tracker, an invaluable research tool available to all Sage paid subscribers and above.

Putting Lipstick on a Pig

This section of the newsletter is dedicated to AI tools and services that help people and things pretend to be who/what they are not. Another title for this section could be Cutting Corners.

Inevitably, as we have long predicted in this newsletter, people are using generative AI to create fake people and to embellish everything that depends on the network effect. Like conferences.

Jess Weatherbed, reporting for The Verge:

As first noted by 404 Media, on November 24th, engineer Gergely Orosz claimed that several women listed to appear as Devternity speakers — including Coinbase staff engineer Anna Boyko and Coinbase “software craftswoman” Natalie Stadler — didn’t actually exist and were made up by the event organizers to “seem like there will be more women speaking.”

Orosz also claimed that the profile for Microsoft MVP and WhatsApp senior engineer Alina Prokhoda — a speaker set to appear at the JDKon 2024 conference for Java developers (run by Dev.events, the company also behind Devternity) — had no online presence outside of the event and was likely fake as well.

Coinbase has confirmed to The Verge that the company “is not aware of any Coinbase employees speaking at the conference,” but did not clarify if Boyko or Stadler were real employees. We have contacted Microsoft and Meta to verify if Prokhoda was ever employed by the companies. Or better yet, if she even exists.

In a lengthy response to Orosz’s accusations on X (formerly Twitter), Devternity founder Eduards Sizovs admitted that at least one profile was “auto-generated, with a random title, random Twitter handle, random picture,” used for website testing, and should not have featured in the speaker lineup. Sizovs claims that he first noticed the issue in October but elected to keep the unspecified fake persona live while searching for replacement speakers.

Julia Kirsina, who goes by Coding Unicorn on social media, has over 115,000 followers on Instagram and lists the Devternity conference as one of her employers on LinkedIn. Sizovs claims that her dropping out of the conferences is partially what led to this whole house of fake women collapsing. But it seems that Kirsina might also be fake!

Despite being named a Devternity speaker on several occasions, Kirsina does not appear to have ever delivered a talk. Orosz noted the possibility she was fake in his initial investigation, as did some other developers. Noted hacker SheNetworks claims that Sizovs himself may be “catfishing” as the influencer, pointing to a string of evidence on X that shows Sizovs had access to the Coding Unicorn Google account, and that Kirsina used the moniker “eduardsi” in Instagram posts of her coding. Eduardsi sounds a lot like the handle of someone with a name like Eduards Sizovs. A larger report by 404 Media adds weight to these claims, noting that several of Kirsina’s LinkedIn and Instagram posts were copied word-for-word from Sizovs’s own social media accounts.

You will be pleased to know that the glowing reviews spontaneously given by Synthetic Work members are 100% real.

We use AI to generate their face pictures to protect their privacy, but the job titles and the testimonials are real, word for word. And we are very proud of that.

Expect more event organizers to fake attendees and speakers in the future. In fact, expect them to fabricate entire past events, complete with photos of the show floor, speakers in action, attendees networking, and slide decks. Many will conclude that Sizovs simply wasn’t ambitious enough: had he scaled his forgery, he might have gotten away with it.

Want More? Read the Splendid Edition

This week’s Splendid Edition is titled Everybody cracks under pressure.

In it:

  • What’s AI Doing for Companies Like Mine?
    • Learn what Changi Airport Group, the US Navy, and Israel Defense Forces are doing with AI.
  • A Chart to Look Smart
    • ChatGPT can lie to its users, even when explicitly told to not do so.
  • Prompting
    • Just like in real life, we can ask an AI model to figure out what we really mean.