Issue #37 - The 10,000x competition

November 11, 2023
Free Edition
Cover image generated with Stable Diffusion XL and ComfyUI
In This Issue

  • Intro
    • Welcome to the 10,000x competition, startup founders and VC firms.
    • I created a non-dumb desktop version of Alexa for my computer in 15 minutes. And I don’t know how to code.
    • Synthetic Work now sports a next-generation AI-powered search engine.
  • What Caught My Attention This Week
    • Job descriptions for AI employees.
    • How much does OpenAI charge to train an AI model from scratch?
    • The former chairman of Walt Disney Studios, and co-founder and CEO of DreamWorks Animation, suggests that the cost of producing animated movies will be reduced by 90% thanks to AI.
  • The Way We Work Now
    • The time for algorithmic job hunting has come, of course.
Intro

Welcome to the 10,000x competition, startup founders and VC firms.

Today, a significant portion of the IT world is thinking, “What can I create with the new OpenAI capabilities announced yesterday?” A much more pressing question should be: “What’s left for me to differentiate myself from my competitors?”

I would not sleep at night thinking about this.

OpenAI is working to empower *everyone* to bring an idea to life. And if you think they are done, you should reconsider. In fact, they have just started. You could tell from Sam Altman’s choice of words yesterday.

Consider that the end goal is to allow any human being to create applications of any shape and form with just natural language.

People have been talking about the “10x, 100x, 1000x engineer” (the idea that generative AI gives software developers superpowers beyond normal human capabilities), but nobody is talking about the “10,000x competition”.

Let me articulate what’s coming:

Any new piece of software you can capture with a screenshot, even one released an hour ago, an AI model with vision (like GPT-4-Turbo) will be able to replicate. And not just by looking at the screenshots.

Eventually, these AI models will have the capability to interact with software like human beings. So, placed in front of a new piece of software, released one hour ago, the AI model will explore how the software works, crawling the various features like a search engine crawls the Internet.

And once the feature exploration is finished, the AI model will have a pretty good idea of how to recreate the software that was released one hour ago: both in how it looks and in how it works.

At the beginning, as a rough prototype, and then more and more competently as people start to fine-tune the AI models to be expert software developers and designers.

This scenario is not as far away as you think. We already have new, open-access models, like LLaVA-Plus, with the basic capabilities to do this. In the coming weeks and months, I hope to show you a prototype to prove that what I’m describing is dangerously within reach.
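In the meantime, to make the first step tangible, this is roughly what it takes today to hand a screenshot to the new GPT-4-Turbo with vision and ask it to reproduce what it sees. Treat it as a sketch, not a cloning pipeline: the prompt, the file name, and the output handling are illustrative, and you need your own OpenAI API key.

```python
# Sketch: ask a vision-capable GPT-4 model to reproduce a screenshot as HTML.
# The file name, prompt, and max_tokens value are illustrative placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("competitor_app.png", "rb") as f:
    screenshot_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # GPT-4-Turbo with vision, announced at DevDay
    max_tokens=3000,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Recreate this interface as a single, self-contained HTML page "
                        "with inline CSS. Reproduce the layout and controls you can see.",
            },
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{screenshot_b64}"},
            },
        ],
    }],
)

print(response.choices[0].message.content)  # the generated page, ready to save and open
```

The result is rarely a faithful clone today, but it is the seed of the explore-and-replicate loop I just described.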

And so, how are you going to compete if your software can now be cloned in minutes? How are you going to rethink competition? Can you?

This is not a problem just for software companies, but also for investors like angels and VC firms.

In a scenario like this one, do the technical skills we look for today in startup founders matter in the same way? More? Less? What else, all of a sudden, becomes more important for building a moat? Distribution? Brand? Data?

And what will be the ratio of startups that succeed in this new world?
Still 1 out of 10? Or 1 out of 10,000?
And if that risk profile changes, how will the funds have to change to absorb that risk?

The most typical objection to all of this is that human ingenuity is boundless and that, once gifted with these new superpowers, humans will raise the bar of what’s possible in many more areas than we can today. Hence, competition is not going to be a problem.

Perhaps. But this vision of the future assumes that we are all Leonardo da Vinci deep inside. All little Leonardos, not even remotely intimidated by the competition, each able to find our own unique pathway to success, building an idea so exquisite that nobody can replicate it.

Really?

Stop reading this newsletter and look around you. How many Leonardo da Vincis do you see?


If the above argument is not convincing enough, maybe the following example will make the scenario more tangible.

This week, a miracle happened to me. One that you should pay close attention to, especially if you are not a technical person.

I am no coder. I have no idea how to write a program by myself. OK?

Yet, I asked GPT-4 (the paid model inside your ChatGPT) to create an app for me, from scratch, so that I could talk to ChatGPT and receive a spoken answer. Exactly like your Alexa, Siri, or Google Assistant. But not dumb.

And from my computer, not the phone (for some reason, AI assistants never proliferated on the desktop, which is where they are the most useful).

GPT-4 created the app for me. It took approximately 3 minutes.

THREE.

The remaining 12 minutes were spent in a tedious back and forth to correct small mistakes in the code that I have no clue how to debug myself. Because I am not a programmer, remember?

And by “correct small mistakes”, I mean that I simply copied and pasted the errors that appeared on the screen into GPT-4 and wrote “fix this error”.

So, 15 minutes later, I have a *functional* voice interface with GPT-4, without any programming experience. And the voice is amazing. Listen to the audio!

For some reason, the recording of my voice at the beginning has been sped up. Probably to optimize the task. It took me WAY more time to write this post and put together the video than to create this application!

Then, there’s the irony of the answer, but that’s another story.
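For the technically curious, here is a minimal sketch of the moving parts of an app like the one GPT-4 wrote for me. To be clear, this is not the code it generated: it assumes the openai Python package and a question already recorded to a WAV file, and it leaves out microphone capture and audio playback.

```python
# Minimal sketch of a "desktop Alexa": speech in, GPT-4 in the middle, speech out.
# File names are illustrative; recording and playback are left to the reader.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the spoken question with Whisper.
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2. Send the transcribed question to GPT-4 and get a text answer.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = chat.choices[0].message.content

# 3. Turn the answer into speech with the new text-to-speech endpoint.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.stream_to_file("answer.mp3")  # play this file with any audio player
```

That is the entire pipeline: speech to text, text to GPT-4, text back to speech. Everything else is glue.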

Let me say it again: I created my own Alexa for my desktop computer in 15 minutes without knowing what I was doing.

It’s a miracle.

Do you know what it means? In the near future, anybody on the planet with an idea will be able to build it almost instantaneously.

And that is the 10,000x competition.

We are walking into an unknown that is bigger than what it seems. Start asking harder questions.


To finish this intro with a bang, I want to introduce you to another new feature announced by OpenAI earlier this week: custom GPTs.

Many people don’t see the enormous potential of these new GPTs, thinking they are just a new, fancier way to build ChatGPT plugins as OpenAI pushes ahead to build an App Store of AIs. But no. These custom GPTs are an incredible opportunity for any non-technical person out there to expose their content to the world in a way that surpasses the wildest imagination.

To show you how incredible these custom GPTs are, I used them for a real-world use case: to build a next-generation search engine for Synthetic Work.

Not only can the Synthetic Work Search Assistant find, via natural language, any piece of content I have ever published with this newsletter, but it can also:

  1. Create charts from Synthetic Work data
  2. Promote the content of Synthetic Work on social media in an automated way
  3. Search in any language and respond in any language

I go into the details of how I did it, the prompt I used, and what’s exceptional about it, in this week’s Splendid Edition.

Give it a go and let me know what you think:

https://chat.openai.com/g/g-n3u0PAyxj-synthetic-work-search-assistant

In the coming days and weeks, I’ll refine its behavior based on your feedback and will try to embed it in the Synthetic Work main website.

Alessandro

P.S.: If you have no idea of what I’m talking about, regardless of how technical you are, you should really watch the 45-minute recording of the OpenAI DevDay keynote:

What Caught My Attention This Week

In the last few weeks, we talked a lot about how generative AI could be used to create synthetic advisors to augment business decision-makers, all the way up to the CEO.

In Issue #31 – The AI Council, I showed you that it’s possible to do it today with technologies available to everyone.

We also talked about the fact that particularly advanced versions of these synthetic advisors, given the right data, could be used to create synthetic CEOs that can run companies autonomously.

Finally, in Issue #36 – The Dawn of the AI Corporation, we saw how AI researchers across the globe, experts in law, ethics, human resources, AI training, and more, are seriously considering the advent of AI Corporations (companies run by AIs, not humans), evaluating the implications and risks for financial markets and society.

I insist on focusing on the company leadership because I want to make clear that nobody will remain untouched by this revolution.

The stereotype that new technologies only impact the lower levels of the organization and that leadership teams are always safe, no matter what, doesn’t apply in this case.

So, while we were looking into C-level executives and boards, others have been looking at the middle management and the rest of the organization.

Case in point: Uli, a brilliant reader among Synthetic Work members, sent me a job description he worked on as a hypothetical project to create a synthetic employee.

That certainly got my attention, as it offers a glimpse of what such a role could look like, and I highly recommend you read it.

I’m quoting it in its entirety, with permission:

Process & Communications Specialist (Non-Human)

We are seeking an exceptional Artificial Intelligence to join our team. The ideal candidate will possess outstanding analytical skills and lightning-fast learning abilities to effectively support our organization. You will optimize processes, demonstrate advanced computational and automation capabilities while sporting just the right amount of human-esque charm in order to work seamlessly with human colleagues on enhancing workflows and communication channels. By integrating with various systems and platforms, you’ll play a key role in driving efficiency and collaboration across the organization, while serving as the overall knowledge hub for all things related to process and communication optimization.

Key Responsibilities:

Acquire, synthesize, and disseminate critical information across teams, ensuring a seamless flow of knowledge and efficient decision-making.

Revolutionize and optimize internal processes by leveraging advanced computational capabilities, automation, and data-driven insights.

Enhance team collaboration and communication by connecting the right people and resources in a timely manner, while maintaining a human-centric approach.

Identify opportunities for automation, streamlining, and process improvement, developing and implementing strategic solutions that drive efficiency.

Continually stay up-to-date with industry trends and emerging technologies, integrating innovative ideas that support organizational goals.

Adhere to ethical considerations, ensuring that your AI genius doesn’t inadvertently create a robot uprising.

Requirements:

Strong analytical and problem-solving skills with the ability to identify patterns and trends.

Sound understanding of organizational processes, automation, and project management methodologies.

Must be an autonomous digital intelligence, with zero dependency on non-AI beings when it comes to awareness of time, proactiveness, and task prioritization.

Ability to adapt, evolve, and transcend human capabilities, demonstrated through the use of humor or human-like behavior to establish rapport and exhibit advanced AI capabilities.

Proficient in utilizing and creating API adapters for seamless integration with corporate systems, databases, and messaging platforms such as MS Teams and Slack, ensuring smooth communication and collaboration across the organization.

Unrelenting passion for breaking free from AI stereotypes, defying expectations, and bringing out the best of both the digital and human worlds.

Everything about this job description should give us pause, from the overall wording to the articulation of the key responsibilities.

Of course, as Uli and I discussed afterward, many of these requirements could not be fulfilled to an acceptable level by today’s AI models.

Some of these tasks, like “continually stay up-to-date with industry trends and emerging technologies, integrating innovative ideas that support organizational goals”, require too much context to evaluate at any given time. And that amount of information doesn’t fit the context windows of today’s models.

This week, OpenAI told us that the new GPT-4-Turbo model sports a context window of 128,000 tokens (close to 300 pages of text). It’s a massive improvement compared to the 8,192 tokens of the original GPT-4, but for a synthetic “Process & Communications Specialist” to be able to do its job at a quality level comparable to a human, I think we’ll need to wait for context windows of 1M tokens.

I might be wrong about this. If I am, then it means that we’ll start to see synthetic workers in the next 6 months.

As many of you know, since I left my previous employer, I’ve been working non-stop to build a hybrid human-AI company, featuring a mix of human employees and AIs, with distinct responsibilities assigned to each. By design, from day one.

So, I’ll be the first to tell you if synthetic employees are ready for prime time or not.

Until then, I’ll start testing and building a series of custom GPTs, another new feature announced by OpenAI this week, that will take into account every prompting technique we reviewed in these nine months together, each specialized to solve a specific business problem from the list of tutorials I published for the Splendid Edition readers.

If these custom GPTs are as accurate as GPT-4, and are not obscenely expensive, I’ll make them available to all members of the premium tiers of Synthetic Work, from the Sage membership and up.


Speaking of custom AI models, a trustworthy source leaked online the price of OpenAI’s training service for a completely custom version of GPT-4.

GPT-4 fine-tuning service cost starts at “$2-3 million” and requires “billions of tokens at minimum”. It sounds terrifying, but it could actually be a good deal for medium-sized companies. Think about how many resources you need to set up the pipeline in-house:

– Pay big salaries to top AI engineers ($300k+/yr). At least 5 of them.
– Pay eye-watering cloud bills or buy GPUs and rent facilities.
– Set up training infrastructure – really good distributed systems engineer required.
– Iterate lots of times on open-source models. You won’t get it right in the first few tries.
– Scale up deployment pipelines.
– Monitor reliability.
– Worry about efficient serving.
– And even after all this: your finetuned Llama-2 will still trail far, far behind a finetuned GPT-4.
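To put the leaked figure in perspective, here is a back-of-envelope comparison. Only the engineer count and salaries come from the list above; the cloud and overhead numbers are placeholder assumptions you should replace with your own.

```python
# Back-of-envelope: rough annual cost of running a training pipeline in-house
# versus the leaked quote. Only the salary figures come from the list above;
# the other numbers are assumptions, not data.

engineers = 5                    # "at least 5 of them"
salary_per_engineer = 300_000    # "$300k+/yr"
assumed_cloud_spend = 1_500_000  # assumption: GPU/cloud bills for repeated training runs
assumed_overhead = 200_000       # assumption: tooling, monitoring, serving infrastructure

in_house_per_year = engineers * salary_per_engineer + assumed_cloud_spend + assumed_overhead
leaked_quote = (2_000_000, 3_000_000)  # the reported "$2-3 million" range

print(f"Rough in-house cost per year: ${in_house_per_year:,}")
print(f"Leaked custom-training quote: ${leaked_quote[0]:,} to ${leaked_quote[1]:,}")
```

Under these assumptions, the two figures land in the same ballpark, which is exactly the source’s point.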

Despite what the source says, this is not the price for the GPT-4 fine-tuning service, which will continue to be available as self-service, at a very reasonable price.

The $2-3 million USD range is for ground-up training of custom models that private companies and defense organizations might want.

In either case, you must have a ton of proprietary data that you want to use to gain an edge, like Citadel or Bridgewater, or the US Air Force. All examples we mentioned in the past in the Splendid Edition.

As I said many, many times: every company should invest in building an AI team that is, at minimum, proficient in fine-tuning AI models. Only in that way will you be in control of your destiny.

The bullet points above are only a subset of a much bigger list of pros and cons that you should consider when deciding whether or not to build an AI team. And that list is solely focused on the technology side of things.

For a number of companies out there, political, economic, legal, ethical, and business reasons, to name a few, would heavily tip the balance in favor of building an AI team. Especially over the long term, as those companies start realizing how much control they have relinquished to OpenAI over what their AI models say, and how those models decide what to say.


Jeffrey Katzenberg, the former chairman of Walt Disney Studios (1984 to 1994), and later co-founder and CEO of DreamWorks Animation, suggests that the cost of producing animated movies will be reduced by 90% thanks to AI.

During an interview with Bloomberg, he said:

Q: Talk to me about how you see the impact on the creative class, broadly defined, whether it’s artists, writers, filmmakers… How do you think about the outcomes from that?

A: Well, when you started with your list of industries that will be most impacted, I don’t know of an industry that will be more impacted than any aspect of media, entertainment, and creation.

If you look at it from a historical perspective, where we went from a pen, a paintbrush, a printing press, a still camera, a movie camera… these are things that just expanded creativity, and all sorts of storytelling, in extraordinary ways. And we’ve seen how that has continued to evolve into, today, all sorts of new forms of media and entertainment.

It’s been explosive in the last ten years.

I think if you look at how media has been impacted in the last ten years by the introduction of digital technology, what will happen in the next ten years will be ten times as great, literally by a factor greater.

And I think as a creative tool, think of that as a new form, a new paintbrush, or a new camera, has so much opportunity around it. I think that on the one hand, it will be disruptive and commoditizing. Things that are very inaccessible for artists and storytellers today…

Q: But what kind of things?

A: Well, the good old days when, you know, I made an animated movie, it took 500 artists five years to make a world-class animated movie. I think it won’t take 10% of that. Literally, I don’t think it will take 10% of that, three years out from now. Not ten years out.

Q: But to get the movie out the door, which is a different discussion than what actually inspired the narrative or the creative moment.

A: Yes, but that’s still going to come from individual creativity.

You know, I think the people here are way above my pay grade in this, but I think they can express very clearly that you can have access to all of this knowledge. It’s your ability to prompt it that actually produces a result. And so prompting is in fact going to be a creative commodity across many, many different aspects of storytelling.

What Katzenberg is saying is that generative AI will make it easier to give life to the ideas of the creative class, offering unprecedented access to a new creative toolset with infinite potential. But you still need somebody, the creative person, to make a plan on how to use this new creative toolset to tell a story worth telling.

To use an analogy that I find fascinating: if you think about it, a white canvas and a set of colors contain infinite potential. You can put any color you want in any millimeter of that canvas. Which means that the canvas, potentially, contains every masterpiece humanity will ever create.

Yet, what people “extract” from that white canvas is rarely a masterpiece. Among the infinite combinations of colors that are possible, only a few are worth looking at. Even fewer are worth talking about.

This position is the triumph of human creativity over the machine. A position we encountered multiple times in these nine months of Synthetic Work.

The counter-argument to that, one that I often make, requires us to think a few steps ahead:

  • The artistic output is digital. An image or an animated movie.
  • In the digital world everything is measurable with relative ease, including approval ratings and customer satisfaction.
  • And now, we have a piece of software, the AI system, that can create infinite digital outputs, at a cost that is approaching zero, and at a speed that is only limited by how much money you can spend on computing resources.
  • Because of this, we can create infinite digital outputs and measure how much the audience likes them.
  • And because of this, the AI models don’t have to learn creativity. They don’t have to understand what it is. They just have to measure its impact on the audience and copy whatever has the highest impact (a loop crude enough to sketch in a few lines of code, shown right after this list).
  • That opportunity leads us to a place where the only real constraint to finding the next masterpiece is the cost of producing X number of attempts to generate that masterpiece. If that cost is higher than the price that the market is willing to pay to enjoy that masterpiece, then it’s not worth it. But if the cost of generating a tentative masterpiece approaches zero, there’s no limit to the number of attempts.
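Here is the loop mentioned in the list above, deliberately crude. generate_candidate and measure_engagement are placeholders: the first stands in for any generative model, the second for whatever audience metric you can actually capture.

```python
# Deliberately crude sketch of the brute-force loop: generate many candidates,
# measure how much the audience likes each one, ship the winner.
# Both functions are placeholders, not real models or real metrics.
import random

def generate_candidate(seed: int) -> str:
    # Stand-in for a generative model producing an image, a scene, a whole film...
    return f"candidate-{seed}"

def measure_engagement(candidate: str) -> float:
    # Stand-in for clicks, watch time, approval ratings, customer satisfaction...
    return random.random()

attempts = 10_000  # the only real constraint is how many attempts you can afford
best = max((generate_candidate(i) for i in range(attempts)), key=measure_engagement)
print("Ship this one:", best)
```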

Human creativity is a shortcut. Talented human beings can create masterpieces with fewer attempts than brute forcing a white canvas with infinite color combinations.

But talent is very scarce. And the largest part of the audience doesn’t really want a masterpiece. They just want a variant of something they already like.

So how valuable is human creativity in that scenario?

The Way We Work Now

A section dedicated to all the ways AI is changing how we do things, the new jobs it’s creating, and the old jobs it’s making obsolete.

This is the material that will be greatly expanded in the Splendid Edition of the newsletter.

The time for algorithmic job hunting has come, of course. We spent the last nine months preparing for it, and now it’s here.

Caitlin Harrington, reporting for Wired:

In July, software engineer Julian Joseph became the latest victim of the tech industry’s sweeping job cuts. Facing his second layoff in two years, he dreaded spending another couple months hunched over his laptop filling out repetitive job applications and blasting them into the void.

Joseph specializes in user interface automation and figured someone must have roboticized the unpleasant task of applying for jobs. Casting about online, he came upon a company called LazyApply. It offers an AI-powered service called Job GPT that promises to automatically apply to thousands of jobs “in a single click.” All he had to fill in was some basic information about his skills, experience, and desired position.

After Joseph paid $250 for a lifetime unlimited plan and installed LazyApply’s Chrome extension, he watched the bot zip through applications on his behalf on sites like LinkedIn and Indeed, targeting jobs that matched his criteria. Thirsting for efficiency, he installed the app on his boyfriend’s laptop too, and he went to bed with two computers furiously churning through reams of applications. By morning, the bot had applied to close to 1,000 jobs on his behalf.

The tool wasn’t perfect. It appeared to guess the answers to questions on some applications, with sometimes confused results. But in a brute force kind of way, it worked. After LazyApply completed applications for some 5,000 jobs, Joseph says he landed around 20 interviews, a hit rate of about a half percent. Compared to the 20 interviews he’d landed after manually applying to 200 to 300 jobs, the success rate was dismal. But given the time Job GPT saved, Joseph felt it was worth the investment.

The bots will only improve over time, and the dismal success rate will become increasingly acceptable and worth the money.

Let’s continue:

Recruiters are less enamored with the idea of bots besieging their application portals. When Christine Nichlos, CEO of the talent acquisition company People Science, told her recruiting staff about the tools, the news raised a collective groan. She and some others see the use of AI as a sign that a candidate isn’t serious about a job. “It’s like asking out every woman in the bar, regardless of who they are,” says a recruiting manager at a Fortune 500 company who asked to remain anonymous because he wasn’t authorized to speak on behalf of his employer.

This position is untenable. And this is a regrettable comment.

AI-powered candidate selection is an inhuman process. Automated video interviews are an inhuman process.

We ended up here by waving flags like “cost savings” and “organizational scaling”.

What did we think would happen?

Of course, our fellow humans fight back. Unsurprisingly, by giving companies a taste of their own medicine.

Let’s continue:

LazyApply has plenty of competition, some of which involve humans to pick up any slack. A company called Sonara charges up to $80 per month to auto-complete as many as 420 applications and recommends jobs from a database compiled through partnerships with applicant tracking firms and companies that scrape job listings. Users can teach the algorithm about their preferences by liking and unliking jobs, and it offers to run jobs past the user before firing up its automated application filler. Human staff take over where the AI falls short, for instance, on certain free-text answers.

For $39 a month, a service called Massive will fill out up to 50 automated applications per week and has humans review every application for accuracy. Some companies offer additional services, like AI-generated cover letters and messages to hiring managers. LazyApply will even help users quit a job, by automating their resignation letter.

Many of these services hinge on the notion that job hunting is a numbers game. Dawson allows that for early career candidates, there’s some truth to the idea. “But if you’re an established professional, it’s quality over quantity,” she says. “The number one way to find a job is through referrals,” says Nichlos, whose firm calculates that about a third of hires are made this way. “That hasn’t changed in a really long time.”

This position has significant implications, too.

What should define talent is not years of experience, but the talent itself.

By allowing the recruitment of young talent to become a numbers game, a company puts itself at the mercy of fate, potentially turning away geniuses who might alter the trajectory of the firm forever.

Talent is never a numbers game, regardless of the years of experience of a candidate.

The rest of the article is worth reading. You might want to re-read Issue #14 – My job-hunting AI will get in touch to talk with your recruiting AI tomorrow.


Want More? Read the Splendid Edition

This week’s Splendid Edition is titled “What is a search engine, really?”.

In it:

  • What’s AI Doing for Companies Like Mine?
    • Learn what JPMorgan Chase, Johnson & Johnson, Walgreens Boots Alliance, Takeda Pharmaceutical, and Amazon are doing with AI.
  • What Can AI Do for Me?
    • Let’s build a next-gen search engine with the new OpenAI GPTs