Issue #25 - The Clean-Up Commando

August 19, 2023
Free Edition
Generated with Stable Diffusion XL and ComfyUI
In This Issue

  • The role of the digital artist in the video game industry is changing. Some are not proud of how they see themselves now.
  • The UK National Institute for Health and Care Excellence (Nice) issued the recommendation to start using AI for radiotherapy treatment performed by the National Health Service (NHS).
  • Consulting companies are competing to announce enormous investments in generative AI.
  • The Drucker Institute suggests a correlation between the companies that invest in AI and their business performance.
  • AI experts are now offered salaries between half a million and a million US dollars but, in Europe, there are very few to hire.
  • Advertising agency Ogilvy calls for transparency in the use of generative AI in commercials and ads. Why?

P.s.: This week’s Splendid Edition is out and it’s titled Hypnosis for Business People.

In it, you’ll find what Maersk, Wesco, Unilever, Siemens, Travelers Cos., and Ubisoft are doing with AI.

In the What Can AI Do for Me? section, you’ll also learn a technique to improve the quality of your corporate presentations with AI-generated images.

Intro

In last week’s issue, we celebrated the 6-month milestone of Synthetic Work.

One thing, easily the most important one, I didn’t mention is how happy Synthetic Work members are. Your satisfaction is my number one priority. Bragging about how satisfied you are, not so much. Perhaps, I should do it more often.

Here are a couple of recent testimonials.

This was written by the CEO of a tech company:

Absolutely one of the best sources of info and ideas available out there.
Deep, interesting, not blinded by techno-optimism, entertaining. Can’t ask for more.

This, instead, was sent by a VP, R&D in the Health Care industry:

Congratulations on the 6-month milestone.

I am not the least bit surprised by your success or the growing popularity of Synthetic Work.
It is expertly produced, uniquely positioned, and perfectly timed for its intended purpose.

I thoroughly enjoy (and eagerly anticipate) each and every weekly edition.

You can find all the others on the Customers page or the Subscribe page.

Do these testimonials inspire you to write one, too? Yes?

Please do. I’d love to hear from you.
Alessandro

What Caught My Attention This Week

The first story that caught my attention this week is about the changing role of the digital artist in the video game industry.

Fernanda Seavon, reporting for Wired:

In March 2023, a Reddit user shared a story of how AI was being used where she worked. “I lost everything that made me love my job through Midjourney overnight,” the author wrote. The post got a lot of attention, and its author agreed to talk to WIRED on condition of anonymity, out of fear of being identified by her employer.

“I was able to get a huge dopamine rush from nailing a pose or getting a shape right. From having this ‘light bulb moment’ when I suddenly understood a form, even though I had drawn it hundreds of times before,” says Sarah (not her real name), a 3D artist who works in a small video game company.

Sarah’s routine changed drastically with version 5 of Midjourney, an AI tool that creates images from text prompts.

When Sarah started working in the gaming industry, she says, there was high demand for 3D environmental and character assets, all of which designers built by hand. She says she spent 70 percent of her time in a 3D motion capture suit and 20 percent in conceptual work; the remaining time went into postprocessing. Now the workflow involves no 3D capture work at all.

Her company, she explains, found a way to get good and controllable results using Midjourney with images taken from the internet fed to it, blending existing images, or simply typing a video game name for a style reference into the prompt. “Afterwards, most outputs only need some Photoshopping, fixing errors, and voilà: The character that took us several weeks before now takes hours—with the downside of only having a 2D image of it,” says Sarah. “It’s efficiency in its final form. The artist is left as a clean-up commando, picking up the trash after a vernissage they once designed the art for,” she adds.

It’s the last sentence that caught my attention.

In this newsletter, on more than one occasion, we discussed a scenario where generative AI might simplify the nature of our jobs to the point that we get paid significantly less (rather than becoming ten times more productive, as Marc Andreessen predicts). And this novel, less-than-flattering characterization of the role of the artist seems to fit that scenario quite well.

And given that we are at this, let’s capture some additional data points and perspectives from the article:

“Not only in video games, but in the entire entertainment industry, there is extensive research on how to cut development costs with AI,” says Diogo Cortiz, a cognitive scientist and professor at the Pontifícia Universidade Católica de São Paulo. Cortiz worries about employment opportunities and fair compensation, and he says that labor rights and regulation in the tech industry may not match the gold rush that’s been indicative of AI adoption. “We cannot outsource everything to machines. If we let them take over creative tasks, not only are jobs less fulfilling, but our cultural output is weakened. It can’t be all about automation and downsizing,” he says, adding that video games reflect and shape society’s values.


The second story worth your attention comes from the UK, where the National Institute for Health and Care Excellence (Nice) issued a surprising recommendation to start using AI for radiotherapy treatment performed by the National Health Service (NHS).

Anna Bawden, reporting for The Guardian:

Draft guidance from the National Institute for Health and Care Excellence (Nice) has given approval to nine AI technologies for performing external beam radiotherapy in lung, prostate and colorectal cancers, in a move it believes could save radiographers hundreds of thousands of hours and help relieve the “severe pressure” on radiotherapy departments.

NHS England data shows there were 134,419 radiotherapy episodes in England in April 2021 to March 2022 of which a significant proportion required complex planning.

At the moment, therapeutic radiographers outline healthy organs on digital images of a CT or MRI scan by hand so that the radiotherapy does not damage healthy cells by minimising the dose to normal tissue. Evidence given to Nice found that using AI to create the contours could free up between three and 80 minutes of radiographers’ time for each treatment plan, and that AI-generated contours were of a similar quality as those drawn manually.

While it recommended using AI to mark the contours, Nice said that the contours would still be reviewed by a trained healthcare professional.

The health secretary, Steve Barclay, welcomed the announcement. He said: “It’s hugely encouraging to see the first positive recommendation for AI technologies from a Nice committee, as I’ve been clear the NHS must embrace innovation to keep fit for the future.

“These tools have the potential to improve efficiency and save clinicians thousands of hours of time that can be spent on patient care. Smart use of tech is a key part of our NHS long-term workforce plan, and we’re establishing an expert group to work through what skills and training NHS staff may need to make best use of AI.”

Nice said it was also examining the evidence for using AI in stroke and chest scans. It follows a study that found AI was safe to use in breast cancer screening and could almost halve the workload of radiologists, according to the world’s most comprehensive trial of its kind.

The nine platforms included are AI-Rad Companion Organs RT, ART-Plan, DLCExpert, INTContour, Limbus Contour, MIM Contour ProtegeAI, MRCAT Prostate plus Auto-contouring, MVision Segmentation Service and RayStation.

Separately, the government announced it was investing £13m in AI healthcare research before the first big international AI safety summit in autumn. The technology secretary, Michelle Donelan, said 22 university and NHS trust projects would receive funding for projects including developing a semi-autonomous surgical robotics platform for the removal of tumours and using AI to predict the likelihood of a person’s future health problems based on their existing conditions.

Geoffrey Hinton, often called the godfather of AI, famously predicted in 2016 that AI would replace radiologists within a decade at most.

Thankfully, he was wrong. Right?


The last story that caught my attention this week is about the enormous investments in generative AI that consulting companies are announcing.

Mark Maurer, reporting for The Wall Street Journal:

KPMG plans to invest $2 billion in artificial intelligence and cloud services across its business lines globally over the next five years through an expanded partnership with Microsoft.

The professional-services company on Tuesday said it expects the partnership to bring in more than $12 billion in revenue over five years. Annually, that would represent about 7% of KPMG’s global revenue, which totaled $34.64 billion in the year ended Sept. 30, 2022. The company, the smallest of the Big Four by revenue, declined to provide a projected revenue figure for the year ending this September.

Through the new investment, the roughly 265,000-person company will further automate aspects of its tax, audit and consulting services, aimed at enabling employees to provide faster analysis, spending more time on doling out strategic advice and helping more companies integrate AI into their operations.

KPMG’s global chair and chief executive, Bill Thomas, said in an interview that the company isn’t looking to use technology to eliminate jobs, but rather to enhance its workforce with AI skills—for example, by moving people to new roles or offering them training.

“I certainly don’t expect that we’ll lay off a lot of people because we’ve invested in this partnership,” Thomas said. “I would expect that our organization will continue to grow and we will reskill people to the extent possible and, frankly, create all sorts of opportunities in ways that we can’t even imagine yet.”

As part of the expanded partnership, KPMG will have early access to an AI assistant called Microsoft 365 Copilot, before its launch to the general public. KPMG’s deal with Microsoft also includes the Azure cloud platform, through which the professional-services company already uses OpenAI to build and run apps.

A significant portion of KPMG’s investment will go toward generative AI, which many businesses are eager to apply to their finances as a way to cut costs and yield new efficiencies.

The move comes as KPMG and other companies are navigating slowing growth in their consulting businesses as corporate clients spend less on certain services amid recession concerns. Its U.S. unit in June laid off almost 2,000 employees, four months after cutting nearly 700 in its consulting division.

In the sentence “I certainly don’t expect that we’ll lay off a lot of people because we’ve invested in this partnership,” the key words are “a lot.”

Just one day after this, Alex Gabriel Simon reported for Bloomberg:

Wipro Ltd., the Indian outsourcing provider, plans to spend $1 billion to train its 250,000 employees in artificial intelligence and integrate the technology into its product offerings.

The spending, over the next three years, also involves bringing 30,000 employees from cloud, data analytics, consulting and engineering teams together to embed the technology into all internal operations and solutions offered to clients.

Wipro said it will also accelerate investments in cutting-edge startups, including setting up an accelerator program for young firms specializing in generative AI.

In multiple Splendid Editions of Synthetic Work we discussed what the competitors of KPMG and Wipro are already doing with generative AI.

You should expect that every single consulting company on the planet will follow suit. It’s just too big of an opportunity to miss. And if you want to understand why, just read the section below titled “The Way We Work Now”.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter what they are about. It doesn’t even matter if they are accurate or completely made up.
You wouldn’t believe that people fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The Wall Street Journal recently gave space to Rick Wartzman, the head of the KH Moon Center for a Functioning Society at the Drucker Institute, a part of Claremont Graduate University

(take a deep breath)

and Kelly Tang, a Senior Director of Research at the aforementioned Drucker Institute.

The two propose an interesting correlation:

The institute’s measure serves as the foundation of the Management Top 250, an annual ranking produced in partnership with The Wall Street Journal. The 2022 list was published in December.

In all, 34 separate metrics were used last year to evaluate 902 large, publicly traded U.S. corporations across five categories: customer satisfaction, employee engagement and development, innovation, social responsibility and financial strength.

Companies are compared in each of these five areas, in addition to their overall effectiveness, through standardized scores with a typical range of 0 to 100 and a mean of 50.

Among the indicators we collect to determine a company’s level of innovation is its number of job postings in an assortment of cutting-edge fields, including AI.

All sorts of jobs were captured in these counts—everything from full-stack software engineers to grocery drivers who may use an AI platform to give priority to where to drop off their next delivery.

The results were eye-catching. A straight-line relationship emerged between how aggressively companies have been building up their talent around AI and their average overall-effectiveness scores, with those marks descending quartile by quartile, from 60.2 to 53.8 to 48.0 to 46.0. The same pattern held true in every individual category we cover.

What our inquiry couldn’t answer, however, is the big chicken-or-egg question: Do more-effectively managed companies tend to be ahead of the game and, therefore, they have been leading the way in AI over the past three years? Or is their heavy deployment of AI helping them to become more effective in the first place?

Many things don’t pass the sniff test in this correlation.

The first is that there’s a conflation between the companies that apply AI in novel ways to their business, something that I’ve done in my last job, and companies that simply adopt AI tools created by others. Which one makes the difference, if any?

The second issue is that, just like for web3/crypto/blockchain before, companies are improperly using the term AI to describe things that have nothing to do with AI. We have already seen this in the previous AI cycle, before generative AI arrived.

The third issue is that companies that implemented AI in 2020 were certainly not implementing the generative AI models that exist today. Most of them have to throw everything away and start from scratch with modern models and fine-tuning techniques.
So, are companies that implemented legacy AI tech as effective as companies that implemented generative AI?

The fourth issue: these companies are enormous and, oftentimes, their press releases claiming the use of AI refer to one circumstantial application of one AI technology for one product feature or one business process related to one team of one business unit.
Does that count to justify the correlation? Or is a company-wide adoption necessary to assign a merit to AI for the company’s effectiveness?

We could go on.

The bottom line is: be extremely skeptical of any consulting company, or affiliated research institute, publishing research that guarantees AI has a straightforward impact on a company’s overall business. Unless they are talking about Nvidia, of course.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

By now, you might have heard about the Netflix job posting for a machine learning product manager, which offers compensation between $300,000 and $900,000 a year.

Everybody and their dog talked about it, including Adrian Horton, at the Guardian, who wrote about how unfair it is considering that the average actor who is part of the Screen Actors Guild (SAG-AFTRA), currently on strike, makes less than $26,000 a year.

Well, it’s just the beginning.

Chip Cutter, reporting for The Wall Street Journal:

The online-dating platform Hinge, a part of Match Group, is advertising a vice president of artificial intelligence role that comes with a base salary of $332,000 to $398,000 a year. A vice president of AI and machine-learning position at Upwork, which operates a marketplace for freelance workers and other professionals, comes with an advertised salary of $260,000 to $437,000 a year. A senior manager of applied science and generative AI at Amazon, meanwhile, lists a top salary of $340,300.

A challenge for many employers is that so many different types of companies want AI talent now. Walmart is hiring for a position on its conversational AI team that includes a base salary of $168,000 to $252,000 annually. Procter & Gamble in Cincinnati is recruiting for an AI engineer with a listed base salary of $110,000 to $132,000 a year. Goldman Sachs is seeking an AI engineer with a base salary of $150,000 to $250,000, plus a bonus, to work on a new generative AI effort at the company, according to a listing.

The market is not just short of GPUs (Graphics Processing Units, the graphics chips in our computers best suited to processing AI workloads). It’s also short of AI experts, while demand grows exponentially.

Here’s an example of how dramatic the situation is.

The famed venture capital firm Sequoia recently launched Atlas, an attempt to understand how AI expertise is distributed across the European continent.

Sequoia reports:

Europe is an attractive environment for AI firms looking to scale and for companies just starting to explore the technology. It offers a breadth of talent, with nearly 200,000 engineers having some experience with AI. However, it’s a core of around 43,000 dedicated practitioners who are really driving the region’s AI revolution.

Just 43,000 core AI experts across all of Europe. Most of them are concentrated in the UK, France, and Switzerland. And a great many of them have been hired by a few big tech companies and, as we saw in the last few Splendid Editions, by the biggest consulting firms.

The previously-quoted article from The Wall Street Journal confirms:

Postings for jobs related to generative AI on Indeed have risen sharply in recent months, but are still low when compared with engineering and data-oriented tech roles.

Some companies, including Accenture, are building their AI expertise through individual hires and internal training programs. Others, including the technology company ServiceNow, say they are open to acquiring smaller AI startups as a way to scoop up talent.

So, if your company is based in a country with a sparse AI job market, your best choice is to hire internationally, compete on salary, and build a remote team. If you dislike the work-from-home model, your alternative is to slowly develop in-house talent, or rely on outsourcers.

We discussed the perils of relying on outsourcers in multiple past issues of this newsletter.

You could attempt an acquihire, but the startup landscape in your country might be as desolate as your job market. Also, you are not the only one thinking about an acquihire, so it won’t be cheap either.

But we are digressing.

The point is that generative AI is creating, in a sense, more job opportunities. But what if the only job of the future enabled by generative AI is the machine learning engineer?

So far, AI optimists have had no issue admitting that generative AI will displace a sizable portion of today’s jobs, but they have struggled to describe the jobs of the future, enabled by generative AI, that will replace them. (By the way, this is normal: humans are not very good at imagining the future, so you shouldn’t read this as an indication of something suspicious.)

Then, why can’t we contemplate a scenario where the unemployed have only one viable option: becoming software engineers specialized in AI (which, eventually, will become a common skill rather than a specialized one)?

And if this is a plausible scenario, is it a viable one?

Can we expect that all people will want to dedicate their life to that career?

And if so, what happens once GPT-5 and 6, Claude 3 and 4, StableLM and StableCode 2 come out?
These future AI models might be so powerful that they render the need for many software developers superfluous.

Wait a second, you might object.

With Synthetic Work, we are tracking a number of emerging applications of generative AI. The virtual influencer, or VTuber, which we talked about in the last two issues, is one example.

I ask back: are these really new jobs? Or are they the same jobs as today, just with a different toolset?
Does generative AI really enable more job opportunities in those scenarios?

Putting Lipstick on a Pig

This section of the newsletter is dedicated to AI tools and services that help people and things pretend to be who/what they are not. Another title for this section could be Cutting Corners.

A beloved section of the newsletter returns, but this time for a serious reason: one of the biggest ad agencies in the world, and one of the most advanced in terms of adoption of generative AI for its customers’ campaigns, is calling for a more transparent use of AI in advertising.

Daniel Thomas and Hannah Murphy, reporting for Financial Times:

WPP-backed advertising agency Ogilvy — one of the largest agencies for social media influencers — has set out plans for an AI accountability code for advertisers and social media platforms to clearly disclose and publicly declare AI-generated influencer campaigns. The agency has also committed to using a new AI “watermark” on its advertising.

The campaign has the backing of leading industry bodies and follows efforts to encourage influencers to disclose when they are using technology to alter their appearance.

Rob Newman, director of public affairs at the Incorporated Society of British Advertisers, said: “The public deserves transparency — from it being clear when you’re being advertised to, to being sure that the voice doing the advertising is that of a real person.”

That’s a shocking statement considering that Ogilvy or its competitors have never cared about transparency in their extreme photo retouching of models and celebrities, to the point of creating unhealthy and unattainable role models in young people.

If you are interested in this topic, I can’t recommend the documentary The Illusionists enough:

Let’s continue with the article:

Rahul Titus, global head of influence at Ogilvy, said three-quarters of social media content are made by individual “creators”, but a rising proportion of these are AI-generated characters that can be presented as real.

Titus said the AI watermark would also benefit real-life social media influencers who he said rely on authenticity. Increasingly, “people buy people, not brands”, he said.

Ogilvy said it did not work with influencers who changed their images using body-distorting filters.

Titus said: “The AI market is projected to grow by 26 per cent by 2025, in large part because of the increase in using AI in influence.”

Last year, the Advertising Standards Council of India became the first national watchdog to set out clear disclosure rules for AI-generated influencer content.

Scott Guthrie, director-general of the Influencer Marketing Trade Body, said: “Creators are already beginning to reproduce themselves online as AI clones. These self-animating GPT-enabled synthetic creators can communicate in real time and at scale. This is tremendously exciting with near-limitless positive applications. It does, however, open the door to bad actors.”

In Issue #23 – One day, studying history will be as gripping as watching a horror movie, we saw what these synthetic influencers look like for now.

But AI is improving at breakneck speed and we are getting closer and closer to photorealistic synthetic clones that can move in real-time. Once they are here, the temptation to use them for advertising will be irresistible.

Imagine the opportunities to virtually dress up or act in a myriad of scenarios. All you need is the right equipment:

The next level will be reached when these synthetic clones will be programmed to automatically behave in certain ways in reaction to events or messages, freeing the human behind them from having to be present at all times.

Yes, it can be done.

Breaking AI News

Want More? Read the Splendid Edition

When I wrote Issue #14 – How to prepare what could be the best presentation of your life with GPT-4, one of the most popular Splendid Editions ever published, I omitted one part: how to generate the images that accompany the text in each slide.

Arguably, this is one of the most time-consuming and difficult parts of preparing a presentation. Most people, especially in the tech industry where I spent most of my career, don’t think images are that important. Those people don’t realize that, according to some estimates, 30% to more than 50% of the human cortex is dedicated to visual processing.

We are visual creatures (which explains why most of us would prefer to watch a YouTube video than read an impossibly long newsletter like this one).

But even when we don’t ignore this fact, the effort required to find the right image for each slide is so enormous that most of us just give up and settle for a white slide with bullet points.

At which point, we need to be honest with ourselves.

If our goal is to check the box in our project management app and say that we have delivered the presentation on time, then we are good to go, and the aforementioned Splendid Edition will be more than enough to help.

If, instead, our goal is to be sure that our idea is understood and remembered by the audience, and it spreads far and wide, then we’ll need to make an effort to find the right images, too.

If you are a senior executive, preparing a big conference keynote, and you work for a public company, there’s a chance that your overreaching marketing department will insist on using poorly chosen stock images to illustrate your points.

I always pushed back against that practice and I don’t recommend it to anybody.

The designers in the marketing department can’t understand what was in your mind when you prepared the slides, and can’t possibly find the best images to convey the message you want to convey.

You might argue that, as a senior executive, you are always pressed for time and you can’t possibly dedicate time to find images from a stock catalog.
The counter-argument is that there’s nothing more important you could do than spread the ideas in your presentation to advance the cause of your company, and if you don’t have the time to do a really good job, then maybe you shouldn’t be the presenter in the first place.

There’s a reason why Steve Jobs dedicated three weeks to a month to the rehearsal of the WWDC conference keynote.

Something tells me that he was pressed for time, too.

So let’s assume that we are in charge of our images and we want to do a good job.

In reality, this task involves two challenges:

  1. You have to figure out what you want to say and translate that into an image that fits the narrative
  2. You have to find that image

The first challenge is the truly important one, and we’ll get to that in a moment.

The second challenge is just a search problem. Or it used to be.

Until six months ago, your only option would be to spend the time you don’t have on stock image websites, trying to find the right picture or diagram.

This problem is now almost completely solved thanks to a particular type of generative AI model called a diffusion model, which has reached a surreal level of maturity.

This, for example, is a picture generated by an AI artist with Midjourney 5.2 just this week:

Diffusion models are the ones that power so-called text-to-image (txt2img or t2i) AI systems like Midjourney or DreamStudio by Stability AI.

Just like with large language models, your prompt is a text description of the image you want to generate, and it takes some practice to produce images at a quality acceptable for a conference presentation. But at least you can have exactly the image you want, and not a poor substitute found after hours of searching a stock image catalog.
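To make the prompting practice a little more concrete, here is a minimal, illustrative sketch of the structure most t2i prompts converge on: a subject, a style, and a handful of quality modifiers. The helper name and the modifier lists below are my own inventions for illustration, not the official syntax of Midjourney, DreamStudio, or any other tool.

```python
def build_prompt(subject, style="flat vector illustration", modifiers=("high detail", "clean composition")):
    """Compose a text-to-image prompt from a subject, a style, and quality modifiers.

    Most t2i systems accept a single comma-separated string, so the helper
    simply joins the parts in a consistent order.
    """
    return ", ".join([subject, style, *modifiers])


# Example: an image for a slide about cross-team collaboration
prompt = build_prompt("a team roped together climbing a mountain at dawn")
print(prompt)
# a team roped together climbing a mountain at dawn, flat vector illustration, high detail, clean composition
```

In practice, you would paste the resulting string into Midjourney’s prompt box or pass it to your t2i system of choice; the value of keeping the parts separate is that you can iterate on the style and modifiers across a whole slide deck while only changing the subject per slide.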

The t2i systems that exist today have different strengths and weaknesses, so let’s review them briefly: