Issue #14 - My job-hunting AI will get in touch to talk with your recruiting AI tomorrow

May 26, 2023
Free Edition
In This Issue

  • New York City just passed a law to regulate how AI is used to hire candidates. Let’s just hope it doesn’t end up like the EU cookie consent nightmare.
  • It turns out that generative AI can create content that reaches the top of Google Search in a matter of two days. One in every three people around you will turn into an SEO expert by next week.
  • A startup promises to use generative AI to place virtual products in videos. YouTube will copy this in three…two… Influencers will get rich in three…two…
  • OpenAI GPT-4 sort of passed an official Radiology exam, performing way better than ChatGPT. Radiologists can still count on their human creativity for…they’ll find something.
  • DJs and nerd musicians are using AI to create completely new types of music. Meanwhile, other people are busy using AI to copy existing types of music to get rich.
  • UK headteachers are quite concerned about AI and don’t want to wait for the government to issue guidance. Wait until they see how much McKinsey charges for a speedy answer.
  • Smart candidates now use AI to generate resumes and cover letters, fooling HR professionals and hiring managers. Who thought that lying would be so much fun?

P.S.: This week’s Splendid Edition of Synthetic Work is titled How to prepare what could be the best presentation of your life with GPT-4.

It talks about one thing only: how to write a presentation, start to finish, with GPT-4. An epic journey to turn you into a phenomenal speaker* thanks to AI.

*I make no promises.

Intro

I realise that, for a lot of people, Synthetic Work is a lot of content to digest every week. The world’s economy is transforming in front of our eyes because of AI, impacting a portion of the population infinitely larger than when we started adopting electricity or the Internet.

CEOs, EVPs/SVPs/VPs, Senior Directors, and board members read Synthetic Work. Not all of them have the time to read 20 pages of content every week, even if I religiously stay away from technical jargon.

If you don’t have enough time, but you need to know what your industry peers are doing with AI, how to educate your workforce on using AI and improve their productivity, or how to develop a program to implement AI in your organization, consider a phone call or an in-person meeting.

As a former Gartner research director, I had more 1:1s in a day than any human being should be allowed to have, so I’ve been hesitant to offer this. But extraordinary times call for extraordinary measures.

As expected, I’ve started to receive a number of questions on how to start an AI program inside a large company: what AI model to adopt, how to recruit the right talent, what approach to pursue, how to stay on top of academic research and not bet on the wrong horse, how to choose between open source and decentralized AI vs. proprietary and centralized, how to deal with regulations as well as copyright and patent infringements, etc.

I spearheaded the first and biggest AI project to date at Red Hat, my former employer, working side by side with the former CTO of IBM Watson (the AI that won against humanity at Jeopardy) and his team.

That project became a top priority for both Red Hat’s and IBM’s CEOs almost overnight. Something not easy to achieve in two companies with 20,000 and 340,000 employees respectively.

That experience and my years at Gartner will help me help you with this.

The time is not quite right yet. But in a few weeks, I’ll start publishing guidance on how to adopt AI in enterprise organizations in the Splendid Edition of Synthetic Work. If you haven’t done so already, you might want to upgrade your subscription.

 

And now that this excruciatingly boring and shameless self-promotion is over, we can finally read this week’s content.

Alessandro

What Caught My Attention This Week

The first thing worth your attention this week is that New York City just passed a law to regulate how AI is used to hire candidates.

Steve Lohr, reporting for The New York Times:

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

The law applies to companies with workers in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate A.I. in hiring. And Illinois and Maryland have enacted laws limiting the use of specific A.I. technologies, often for workplace surveillance and the screening of job candidates.

The law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” Ms. Givens said. The rules adopted by the city appear to interpret that phrasing narrowly, so that A.I. software will require an audit only if it is the lone or primary factor in a hiring decision or is used to overrule a human.

That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful or in targeted online recruiting to generate a pool of candidates.

Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

Businesses have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent audits of A.I. was “not feasible” because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

You know perfectly well that, for years, companies all around the world have used AI to screen candidates’ resumes. In fact, your company probably does that, too.

Thinking they were being clever, they have in reality trusted poorly made algorithms that are even more poorly updated, rejecting talent that would have made a big difference.

So regulation is good. Especially if it enforces the auditing of these algorithms.

People assume, quite reasonably, that AI is like other software technologies. You deploy it and forget it. Sure, you might need to fix the bugs along the way, but that’s it.

But AI doesn’t work like other software technologies. Even if it sounds unbelievable to a non-technical person, we humans don’t exactly know how modern AI works and comes to its conclusions.

We have a rough idea because we put together the software in the first place, but then, the software starts to do things that sometimes we struggle to understand.

You know how a baby is made, right?

So you go and make a baby. You have seen other babies, so you have a pretty clear idea of what the baby will do.

But you don’t quite know the inner details of the baby’s mind.

And then, one day, your two-year-old starts playing the piano like Mozart. And you totally didn’t expect that.

This is the situation we have with modern AI today. Except that, sometimes, this baby starts killing people like Jack the Ripper instead of playing the piano like Mozart.

So we can’t just deploy AI and forget about it. We need to keep an eye on it and audit what it does for a veeeeeeeryyyyyyyyy long time because there are many edge cases where we don’t know how it will behave.

The technology providers that complain about it, like the ones mentioned in The New York Times article, do so because they know all of this perfectly well. They also know that if they had to correct the mistakes of an AI model every time a new one arises, it would cost millions of dollars, making the business financially unsustainable.

Do you remember the Google Photos scandal? The AI inside that product labelled Black people as gorillas. Well, eight years later, that AI has not been fixed: Google and Apple simply blocked their AI’s capability to identify gorillas. Just in case.

So, a regulation that enforces auditing and allows customers to fight back is good.

On the other side, you have the European cookie consent disaster. If you don’t live in Europe and you don’t know what I’m talking about, lucky you.

How humans have managed to ruin the Internet with a single piece of regulation is marvellous in its perversion.

Try to sue the companies that have exploited the grey areas of that regulation, ruining your life, and let me know how it goes.

Thankfully, there’s an alternative way to fight poor AI used for recruitment. Scroll all the way down to the Putting Lipstick on a Pig section of the newsletter, and you’ll find out.

The second thing worth your attention this week: smart developers are exploiting Google’s slow reaction to generative AI by creating content farms that reach the top of Google Search in a matter of two days.

The following Twitter thread is eye-opening and you should really read it:

The results these people are obtaining are incredible:

And if you don’t know what any of this means, show these numbers to somebody in your Marketing department.

Regardless, why does it matter?

Because, for now, these experts in AI and search engine optimization (SEO) are exploiting Google’s reaction time just for profit. They slap an ad at the top of the page and they get revenue.

But going forward, if Google doesn’t react quickly, these automated content farms, now superpowered by generative AI, will start building websites dedicated to influencing people’s opinions on things that matter: politics, policies, and, eventually, stocks.

As of today, we cannot foresee the impact of a coordinated activity by multiple content farms using generative AI to influence the market. And because this is a non-zero possibility, as we read in Issue #13 – If you create an AGI that destroys humanity, I’ll be forced to revoke your license, the SEC Chair warned last week that the next financial crisis might be caused by generative AI.

And if there’s a financial crisis triggered by generative AI, a lot of jobs will be impacted.

The last thing that you should pay attention to this week is a startup called Ryff, which is using generative AI to do virtual product placements in YouTube videos.

Two key rules govern Synthetic Work:

Rule One is that technical jargon is strictly forbidden. If I really need to say something technical, like “context window”, I’ll be sure to explain what it means in a language that even a kid can understand. Or so I hope.

Rule Two is that I don’t talk about technology providers unless they are mentioned in a story about an AI adopter that is using their technology in the real world to solve an actual business problem.

That is for your safety, dear reader.

After 23 years in the tech industry, I have seen too many startups and established players peddling “enterprise solutions” that were barely more than a piece of code written in an afternoon by a person in a basement.

As a veteran Gartner analyst I enjoyed working with used to say: if you don’t have a customer, you have nothing.

Of course, every rule has its exceptions. And today I’m willing to make an exception.

Not because Ryff deserves an exception, but because the idea they came up with is phenomenal and might dramatically impact the job of the video influencer (or YouTuber, or whatever you call it these days).

Dean Takahashi, reporting for VentureBeat:

Ryff has ingested and integrated content into platforms and channels such as Sky TV, ITV, A&E, Hallmark, Viacom, Channel 4, CBS, Viu TV, ESPN, Hulu, Apple TV, Paramount+, NBC, and others. Brands such as Coca-Cola, Intel, General Mills (Cinnamon Toast Crunch), Diageo (Baileys), Mars (Orbit), and over 50 others already successfully use Ryff technology.

Using proprietary AI, and machine learning (ML) with visual computing, Ryff can discover and make available ‘shoppable moments’ to brands to a potential audience in North America and Europe for 250,000 hours of the currently available streaming catalog shows; not to mention live sports and new TV and film.

Brands can search a growing library of over 2000 TV shows, films, and sports in Spheera to find those VPOs (Virtual Placement Opportunities) placements suited to their products and target market, filtering by their market category, content genres, formats, and scenes.

In a statement, Taylor said, “Spheera will breathe new financial life into the balance sheets of creators who can retain complete creative control of their content. Our artificial intelligence is rewriting product placement rules for sport, TV, Film, social, and influencers.”

OK. Now, ignore everything you just read.

Some VentureBeat articles are sponsored content submitted by the vendors and edited to look more realistic. So most of this probably comes from Ryff (it surely sounds like it) and it should not be trusted until the technology is deployed in production.

What really matters here is that YouTube will certainly do this in-house and offer the technology to every creator on their platform.

Google is already experimenting with an AI technology that could allow YouTube to lip-sync dub every English video into dozens of other languages. It’s called Universal Translator, and it was recently shown at the Google I/O conference.

So imagine that, in the future, a YouTuber will create one video in English and, without doing anything, that video will be duplicated, with the same tone of voice, in Italian, Spanish, French, German, etc.

Now, imagine YouTube releasing a virtual product placement technology like the one that Ryff is about to launch, and combining that with the Universal Translator.

Every video, in every language, will be plastered with placed products that are relevant to the audience of a certain nation.

Going forward, nothing of what we’ll see will be 100% real. Not just Hollywood movies with special effects, but every single digital output humans produce will be embellished, filtered, augmented, injected with ads, translated, and customized to be sure that we always pay attention and we always buy.

That’s why, on social media, I always say that we have entered the Post-Real era.

A Chart to Look Smart

The easiest way to look smart on social media and gain a ton of followers? Post a lot of charts. And I mean, a lot. It doesn’t matter about what. It doesn’t even matter if they are accurate or completely made up.
You won’t believe that people would fall for it, but they do. Boy, they do.

So this is a section dedicated to making me popular.

The big news earlier this week is that GPT-4 passed a 150-question exam matching the style, content, and difficulty of the Canadian Royal College and American Board of Radiology examinations.

And you should care about this even if you don’t care at all about Radiology or Health Care.

You have probably read everywhere, for weeks, about all the law and medical exams that ChatGPT passed. This new one is notable for three reasons:

  1. This is one of the first academic studies focused on the new GPT-4 model. What we colloquially call “ChatGPT” is a system on top of a model called GPT-3.5-Turbo.
    As more academic research comes in, we’ll see that GPT-4 is extraordinarily more capable than GPT-3.5-Turbo, which means that the AI can do more of the tasks that are part of a human job.
  2. In this particular example, GPT-4 passed the exam by performing significantly better than GPT-3.5-Turbo, answering 81% of questions correctly (the passing threshold is 70%) against the 69% score of its predecessor.
  3. GPT-4 performed better than GPT-3.5-Turbo especially on the so-called higher-order questions, which shows how quickly OpenAI developed advanced reasoning capabilities in its AI: just a few months.

OK. But, again, why do we care about all of this?

Because of the following chart, which comes from a completely different, unrelated piece of research:

In this chart, somebody compared the performance of an AI model that has been fine-tuned (meaning “educated to be a specialist in discipline X”), Med-PaLM 2, just released by Google, with generalist models like GPT-4 (that is, models that have not been fine-tuned to be specifically good at anything).

It’s like asking: “Who’s a better radiologist? A polymath that is amazing at everything but has never seen radiology before or a really bright university student that has trained for 10 years to become a great radiologist?”

As you can see, GPT-4 performs as well as Med-PaLM 2. So the counterintuitive answer seems to be that both the polymath (GPT-4) and the university student (Med-PaLM 2) can be equally good radiologists.

And all of this casts doubt on the current assumptions about what enterprise companies should do when they adopt AI internally.

Things move so fast within the AI community, and we know so little about what GPT models are capable of, that you should not trust any guidance that wasn’t written in the last 24 hours. Even if it comes from well-known analyst firms.

As I said in the intro, more to come on this subject in the Splendid Edition, later this year.

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, or the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

Perhaps unexpectedly, Synthetic Work talks a lot about generative AI and yet, in these first three months of publication, I have said very little about the impact of AI on the industry most obviously affected by it: Media & Entertainment.

It was a deliberate choice. Even if the Media & Entertainment industry matters enormously to me as an art lover and collector, I didn’t want the readers of this newsletter to believe that generative AI is just about image and video generation.

In fact, in three months, we’ve seen an astonishing number of other industries being impacted by AI, starting with the Education industry in Issue #1 – Burn the books, ban AI. Screwed teachers: Middle Ages are so sexy.

From there, we talked about how AI is transforming jobs in the Legal industry, the Health Care industry, the Financial Services industry, and so on. If you want to see how many industries we talked about so far, you just have to check the AI Adoption Tracker.

After all of this, the time has come to talk about Media & Entertainment in a more serious way.

To help us, we have Brandon Stewart, reporting for Freethink:

Nao Tokui’s goals with AI are nothing if not ambitious. “My ultimate goal is to create a new genre of music,” he told me earlier this spring over a video call.

In addition to being a DJ, he’s an associate professor at Keio University and founder of Qosmo Labs, a sort of R&D laboratory for experiments in creative AI. I was eager to discuss the development of his AI DJ project, which he has been working on for the better part of a decade.

“I started this project, the AI DJ Project, back in 2015. When I started, my DJ friends got angry,” he recalls. “They thought I was trying to automate everything, but that’s not [my] intention, of course. My intention was to use AI to extend my capability as a DJ. Through interaction with the AI, I get new ideas, what I can play, and how to be human in a creative process. In the future, with AI I think there will be a new form of music, or new form of art expression, which has never been possible without new technology.”

The most recent iteration, released last year, is a performance called “Emergent Rhythm” where the DJ system leverages a variety of different models to create a live output of raw rhythms and melodies that is then spontaneously mixed in real-time by the human DJ. As Nao describes it on his website, “The human DJ is expected to become an AJ, or ‘AI Jockey,’ rather than a ‘Disk Jockey’, taming and riding the AI-generated audio stream in real-time.”

And now, the very interesting part:

In order for other artists to follow in his footsteps and use these tools to push the boundaries of music, it’s important that they are able to understand and manipulate the underlying AI tools. “I still see a big gap between AI engineers, AI practitioners, and actual artists,” Nao says. That worries him because as he sees it, great art often comes from misusing technology. “The history of music is full of examples of the misuse of technology.”

As these tools become more corporatized, Nao worries that could put them even farther beyond the reach of artists. “It’s easy to misuse a paintbrush or a piano, but it’s getting more and more difficult to misuse new tools. AI models are getting bigger and bigger, and more and more complex. Nowadays, I cannot train ChatGPT or Stable Diffusion model by myself.” As a result, “many artists are getting more and more dependent on these tools. Your creative process can be defined by what Adobe provides as an AI tool.

“It’s super important that this process is not governed only by big companies like Adobe or OpenAI.”

This concern is growing as we speak, as digital artists and AI enthusiasts all around the world discover that they cannot use Midjourney or Adobe Firefly to generate certain images because specific keywords or entire concepts are forbidden.

In an attempt to win the huge Chinese market, these companies wrap their generative AI models with other AI models designed to censor the output. But this is a slippery slope that very few artists in countries that support free speech are used to.

And so, here we have the paradox: the Media & Entertainment industry is embracing generative AI at full speed while generative AI platforms are getting constrained at full speed.

It’s like telling a kid that he can’t draw certain things with the pencil and paper that you just gave him. Many people are unsettled by that idea.

Open source AI will be the only thing that will preserve full creative freedom, as hard as it is to use.

More interesting parts from the article:

Dadabots was founded by musical technologists CJ Carr and Zack Zukowski. They have a reputation as sort of the charming scofflaws of the AI music world. The pair met at Berklee College of Music and started collaborating after attending a hackathon at MIT in 2012.

They describe themselves as “a cross between a band, a hackathon team, and an ephemeral research lab.” Put another way, they write, “We’re musicians seduced by math.”

Their technique is focused on using neural networks trained on large volumes of music. “We focus on this type of audio synthesis called neural synthesis,” explains Carr. “Audio is just a sequence of amplitudes. Basically you’re just giving it, okay, this happened, and then predict what happens next. And this very, very simple concept of predict what happens next is the thing that’s powering ChatGPT right now, like these massive language models that are blowing everyone’s minds with their ability to write code, to write sonnets, to give you recipes. It’s almost the same principle, just applying it to audio here, and it works.”

They are perhaps best known for a neural network trained on the work of the death metal band Archspire, which has been livestreaming AI-generated metal music non-stop, 24 hours a day, for four years.

This urge to experiment doesn’t mean they disregard artists. In fact, more often than not, they are working directly with artists. “I think one of the best parts of what we do is collaborating with artists,” says Carr. “As these tools are being developed and as this new art form is being developed, it’s important to have musicians in on the conversations around the development around it.”

One of these unique collaborations was with UK beatbox champion Reeps One. There is a whole six-part documentary series on YouTube that followed the project and explored the whole range of themes that technologists and artists have been discussing these last several years with regards to AI. It was a true collaboration between Dadabots and the beatbox artist to see what they could create together, and, as these things often do, got into some equally uncomfortable and enlightening territory. “For Reeps, hearing his essence distilled and replicated by a machine was met initially with fear,” according to a recap of the project on the Dadabots website. “It’s creepy hearing your own voice like this. [But] that fear turned into excitement as he saw his bot doppelganger more as a collaborator. It produces strange beatbox patterns he’s never made before, inspiring him to push his craft further.”

Ultimately, Dadabots is focused on pushing boundaries and discovering fusions between genres. “Beyond just recreating styles that already exist, we’re really, really interested in making fusions. And this is actually a really, really hard problem, but some genres just seem to fuse really nicely,” explains Carr. “So like punk, deathcore and djent, because they’re all at the same tempo, they have similar backbeat grooves, they’re just different color palettes. They actually fuse together pretty well.”

All of this is wonderful. Not just from an artistic standpoint. There’s a huge amount of money to be made in the Media & Entertainment industry thanks to the collaboration between AI and human artists.

There’s only one little problem.

For every DJ that wants to expand his capabilities with AI, and every duo of nerd musicians that want to use AI to work with artists, there’s an ocean of people that just want to automate the process of creating music (or images or videos) in the hope of hitting the jackpot and becoming rich.

Saturating all the publishing platforms with garbage in the process is not a concern. But the thing is: AI is not producing garbage at all.

Please visit this website and let me know if you can tell the difference between these songs and human-made ones. Ask the people around you in the office. At home.

And if you can’t, what does it mean for human singers that are not already famous?

Would you start a career as a singer, an already precarious endeavour, when your country’s Top 100 is 90% filled with AI hits?

If yes, because you can use AI too, then how can you compete with music labels that can train their AI on the top music hits of the last 100 years to produce a new song that encapsulates all the compelling elements of every song ever produced?

And if you are still confident you can challenge them, at that point, are you still a singer or are you something else?

How Do You Feel?

This section is dedicated to the psychological impact of artificial intelligence on people. You might think that this has no relevance to the changing nature of human labour, but it does. Oh, if it does!

For any new technology to be successfully adopted in a work environment or by society, people must feel good about it (before, during, and after its use). No business rollout plan will ever be successful before taking this into account.

Understandably, the headteachers of UK schools don’t feel too well about this whole AI thing. A few of them sent an open letter to The Times:

Sir, As leaders in state and independent schools we regard AI as the greatest threat but also potentially the greatest benefit to our students, staff and schools. Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust? We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools and in the past the government has not shown itself capable or willing to do so. We are pleased, however, that it is now grasping the nettle (“Sunak: Rules to curb AI threats will keep pace with technology”, May 19) and we are eager to work with it.

AI is moving far too quickly for the government or parliament alone to provide the real-time advice schools need. We are thus announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts, to advise schools on which AI developments are likely to be beneficial and which damaging. We believe this initiative will ensure that we can maximise the vast benefits of AI across education, while minimising the very real and present hazards and dangers.

Sir Anthony Seldon, head, Epsom College; Helen Pike, master, Magdalen College School; James Dahl, master, Wellington College; Lucy Elphinstone, headmistress, Francis Holland School; Geoff Barton, general secretary, Association of School and College Leaders; Chris Goodall, deputy head, Epsom & Ewell High School; Tom Rogerson, headmaster, Cottesmore School; Rebecca Brown, director of studies, Emanuel School

The problem with this approach is that these schools don’t have a better chance of obtaining sound advice about the risks and rewards of generative AI in education than the UK government does.

As with every other emerging technology that preceded AI, the conversation is clouded by an infinite number of pretend experts hoping to profit from the confusion and fear of the early adopters.

In this case, unlike with almost every other emerging technology that preceded AI, every organization is an early adopter, because adoption is forced on them by the enthusiasm of the end users. Accordingly, the profit opportunities are significantly bigger, and so is the number of pretend experts.

These schools don’t have the skills to distinguish between real and pretend experts and will likely end up following bad advice.

In the meantime, in other UK schools, outrageous things still happen:

I dedicated the entire first Splendid Edition of Synthetic Work to the impact that AI is having on the Education industry: Issue #1 – Burn the books, ban AI. Screwed teachers: Middle Ages are so sexy.

Some teachers are genuinely concerned that AI will take the jobs of the students they are educating.

Ben Cohen, reporting for the Wall Street Journal:

It had been a long day for Po-Shen Loh, a professor at Carnegie Mellon University and Team USA’s coach for the International Mathematical Olympiad, who is traveling to 65 cities and giving 124 lectures before the next school year like he’s on a personal mission to meet every single American math geek.

The scholar had the energy of a fourth-grader on Skittles as he delivered a talk called “How to Survive the ChatGPT Invasion.” And his simple, practical advice applied to everyone in the auditorium.

“Think about what makes humans human,” Loh said, “and lean into that as hard as possible.”

He says the key to survival is knowing how to solve problems—and knowing which problems to solve. He urges math nerds to focus on creativity, emotion and the stuff that distinguishes man from machine and won’t go obsolete. As artificial intelligence gets smarter, the premium on ingenuity will become greater. This is what he wants to drill into their impressionable young minds: Being human will only be more important as AI becomes more powerful.

After his talk, I asked how his message to a room full of fifth-graders applies to someone in an office, and he replied faster than ChatGPT. “The future of jobs is figuring out how to find pain points,” he said. “And a pain point is a human pain.” Loh would tell anyone what he told the students and what he tells his own three children. It’s his theorem of success. “You need to be able to create value,” he said. “People who make value will always have opportunities.”

But many other teachers are concerned about their own jobs.

We saw in Issue #11 – Personalized Ads and Personalized Tutors that Khan Academy is showing incredible progress in using GPT-4 to create personalized tutors that are infinitely more patient and engaging than human teachers.

Even university professors with side consulting jobs know that, if AI continues to progress the way it has in the last few months, their jobs are not secure:

Putting Lipstick on a Pig

This section of the newsletter is dedicated to AI tools and services that help people and things pretend to be who/what they are not. Another title for this section could be Cutting Corners.

Given that we have started this week’s Free Edition talking about AI and recruiting practices, let’s finish with the same topic.

Now the roles are reversed. Candidates are using AI to write their resumes and, unlike the crappy algorithms that companies use to recruit, GPT-4 works incredibly well.

One month ago, Reddit user u/Neither_Tomorrow_238 posted the following article on the very popular r/ChatGPT forum:

So, now, 1.8 million people around the world have seen how easy it is to fool your recruiters and hiring managers with AI.

You might think that these questionable candidates will never pass the oral interview. If so, think again:

  • Occasionally, companies need to hire in a rush. They don’t have time for a thorough interview panel.
  • Don’t underestimate the bias you can develop after reading a stellar resume with an “outstanding” cover letter.
  • If GPT-4 allows the candidate to apply for 1,000 jobs instead of 10, the chances that one hiring manager falls for it grow significantly.

From there, it will snowball in a way that nobody can control anymore:

Where does it end?

It ends here: “Hi. My job-hunting AI will get in touch to talk with your recruiting AI and discuss my fit for this position and negotiate the salary. Have a nice day.”

Just like algorithmic trading, we’ll have algorithmic recruitment.

The candidate that can afford the smartest AI wins the salary.

Happy hiring!

Breaking AI News

Want More? Read the Splendid Edition

In the last few Splendid Editions, we learned about a lot of different techniques for interacting with GPT-4. Those techniques are listed in the How to Prompt section of Synthetic Work for your reference.

It’s now time to use some of those techniques on some real-world use cases, discovering how AI can help you get the job done, start to finish.

Let’s start with something that I enjoy doing, and did for 20 years, but that many people all around the world detest profoundly: writing a presentation.

I have one main reason to start with this use case: on social media and elsewhere, we are inundated by enthusiastic calls to action from hundreds of “The AI Guy” types, enticing us to try a wave of startups that promise to revolutionize our presentations with generative AI.

Sure…