- What’s AI Doing for Companies Like Mine?
- Learn what General Motors, RXO, XPO, Phlo Systems, and Amazon Prime Video are doing with AI.
- Prompting
- Learn how to use ChatGPT Custom Instructions to automatically apply the How to Prompt best practices to every chat.
- What Can AI Do for Me?
- Learn how to use Custom Instructions to turn GPT-4 into a marketing advisor following the lessons of Seth Godin.
This week’s Splendid Edition will hopefully show you a glimpse of the astonishing power of large language models thanks to a rather understated new feature of ChatGPT: Custom Instructions.
Everything I share with you each week, I research and test just before writing about it. In a sense, this newsletter is a journal of experiments that I share with you in almost real time.
And occasionally, I discover things I’m really excited to share with you. This is one of those weeks.
Alessandro
What we talk about here is not what could be, but what is happening today.
Every organization adopting AI that is mentioned in this section is recorded in the AI Adoption Tracker.
In the Automotive industry, General Motors is using Google’s generative AI to power its OnStar vehicle assistance service.
Andrew J. Hawkins, reporting for The Verge:
General Motors is using conversational AI chatbots to handle simple OnStar calls, freeing up the service’s human employees to address more complex requests, the company said Tuesday.
…
The automaker introduced its OnStar Interactive Virtual Assistant in 2022, which utilizes Google Cloud’s conversational AI technologies to provide responses to common inquiries, as well as routing and navigation assistance. And GM is already planning for future uses of AI in its vehicles.
…
GM has extended its collaboration with Google Cloud with the deployment of the tech company’s Dialogflow, which will allow OnStar’s virtual assistant to handle more than 1 million customer inquiries a month. The technology is now available in the US and Canada in most model year 2015 and newer vehicles with OnStar.

AI is used to handle mostly simple requests, like turn-by-turn navigation. But the OnStar virtual assistant is trained to recognize certain words or phrases that might indicate an emergency and route the call to a trained specialist.
The shift to AI has decreased wait times and, according to GM’s market research, led to mostly positive reactions.
…
GM is no stranger to generative AI. The company said earlier this year it would use ChatGPT to help vehicle owners find information in their user handbook, program functions such as a garage door code, or integrate schedules from a calendar.
In the Logistics industry, the freight brokerage RXO, the trucking firm XPO, and logistics technology provider Phlo Systems are testing generative AI technologies for a wide range of use cases, including customer service, load booking, and freight tracking.
Liz Young, reporting for The Wall Street Journal:
Using generative AI with retail and industrial customers is “a delicate balance,” said Jared Weisfeld, chief strategy officer at Charlotte, N.C.-based RXO. “When you think about what could go wrong over the life of the load, you as a carrier, you as a shipper, sometimes you don’t necessarily want to be talking to a chatbot.”
Weisfeld said RXO is looking at how to automate tasks such as customer support and booking loads for small- and medium-size businesses, which could answer customer questions faster and free up the company’s sales staff to focus on bringing in new clients.
Still, the company will keep the option for clients to chat with a human. “You can’t have a shipper go ahead and log on to the system where the order’s late and you’re pinging a chatbot. You need a hybrid approach,” Weisfeld said.
The central question for RXO is, “How do we increase the efficiency of our people? How do we do so in a way that makes our customers’ lives easier, not the exact opposite?” he said.
Let’s see what XPO is doing:
Trucking company XPO plans to train an internal version of a ChatGPT-like bot to allow customers to track freight, get rate quotes and create requests to pick up shipments. Jay Silberkleit, XPO’s chief information officer, said the company will keep the information private and take control over what data is used to help ensure answers are relevant and accurate.
Finally, let’s see what Phlo Systems is doing:
U.K.-based Phlo Systems recently rolled out an AI-powered chatbot for customs declarations that replaces a system that was programmed with a list of frequently asked questions, said Chief Executive Saurabh Goyal.
A customer importing fish to the U.K. from a European country, for example, can ask the chatbot what kind of declaration they need and they will receive a response in plain language, Goyal said. The chatbot is handling about 70% to 80% of questions today, with the rest sent on to human workers to resolve.
The Logistics industry is adopting generative AI much faster than many other industries. For more use cases in this space, be sure to review the AI Adoption Tracker.
In the Sport industry, Amazon Prime Video is applying various AI technologies to transform the experience of watching Thursday Night Football.
Lauren Forristal, reporting for TechCrunch:
Amazon introduced AI to TNF last year, including X-Ray, which gives fans real-time access to live statistics and data; Rapid Recap, which generates up to 13 two-minute-long highlights for viewers to catch up on plays during a game; and more. And after winning its first Sports Emmy award in May, it’s safe to say the tech behemoth isn’t easing up on the gas.
All the new AI features will live within Prime Vision with Next Gen Stats— TNF’s weekly alternate stream that features various graphic overlays on the screen during plays so fans can see stats and analysis in real-time.
Note that Amazon will internally test the features during tonight’s preseason game at 8 p.m. ET. However, fans won’t be able to experience them just yet. The features roll out on September 14, when the 2023 season begins.
…
What if we said that AI can predict blitzes? Defensive Alerts is Amazon’s in-house ML neural network that recognizes when defensive players are about to rush the opposing quarterback. A red orb will appear around the players of interest so fans know exactly who to focus on.

“It’s able to look at all players’ XY coordinate data, their relationship to each other, as well as their acceleration; where are they moving and how fast are they moving directionally to predict who’s going to blitz,” explained Sam Schwartzstein, TNF Analytics Expert at Prime Video.
The ML model was trained on 35,000 plays and will continue to get smarter, Schwartzstein told TechCrunch, adding that it’s identifying blitzes and situations better than offensive linemen. He also said the team has a panel of NFL experts who are former quarterbacks, coaches and offensive linemen that help annotate the plays.
…
Prime Targets (featured in the first image at the very top of the page) works similarly in that a green orb will light up a player that is open for a pass. The feature automatically tracks when a quarterback drops back to get ready to throw a pass, and the receiver (lit up by the green orb) runs out and creates separation between himself and the defenders.

This feature was previously called Open Receiver, which tracked which players would most likely convert the first down. Amazon tested it during last season’s games.
“This is the first statistic that is measuring the process of the play,” Schwartzstein noted. “Everything that we do on Prime Vision is predictive… This is all in real-time.”
…
Amazon is also launching a feature that may help fans understand how fourth-down decisions are made while potentially helping teams prepare for fourth downs.

The fourth down territory is an area on the field that offensive players use in an attempt to tie or win the game. Historically, coaches usually opt to punt the ball away since it feels less risky. However, as years go by, more and more teams are going for the fourth-down conversion.
Instead of putting analytics on the screen after the play happened, Fourth Down Territory operates like a real NFL analytics coordinator does; it shows viewers exactly when a team should try a fourth down and what the probability is.
…
NFL fans are accustomed to seeing field goal target lines on broadcasts—the digital line that appears at the end of half or end of the game, where if a team gets to it, they can kick a field goal. Amazon’s Field Goal Target Zones feature will have multiple lines on the screen that tell viewers the likelihood that a kicker will make a field goal at each point.
…
Key Plays gives fans the ability to view in-game highlights and critical moments, whether they’re already watching the game live or streaming on demand afterward. Much like Rapid Recap ensures fans never miss the action, Key Plays leverages AI and machine learning to offer viewers a full rundown of what’s happening on the field.
As I said multiple times, AI will transform the Sport industry, turning it into a game of probability that will leave very little room for imagination.
Is there a threshold beyond which AI will make sports boring instead of more entertaining? If yes, where is it?
Before you start reading this section, it's mandatory that you roll your eyes at the word "engineering" in "prompt engineering".
Finally, OpenAI has activated the new Custom Instructions feature of ChatGPT for UK and EU customers. That means that we can test it together.
As a reminder, Custom Instructions is OpenAI’s name for a so-called system prompt.
A system prompt is a prompt that instructs a large language model on how to behave in every chat session started by a user. It saves the user from having to waste precious tokens (and time) in setting the AI model’s behavior at the beginning of every chat.
A system prompt can also help a company, for example, set boundaries on what the AI model can and cannot say, or define a style for the replies it generates.
It’s the main mechanism OpenAI itself uses to define the personality of GPT-3.5-Turbo, GPT-4, and Advanced Data Analysis (previously known as Code Interpreter).
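If you interact with these models via the API rather than the ChatGPT interface, the same idea takes the form of a message with the system role. Below is a minimal sketch, assuming the 1.x version of the OpenAI Python client; the prompt text and model name are only illustrative, not the exact instructions we’ll use in a moment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The system message plays the same role as ChatGPT's Custom Instructions:
# it shapes the model's behavior for the entire conversation.
system_prompt = "Before answering, ask follow-up questions to better articulate your answer."

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How can I get rich quickly?"},
    ],
)

print(response.choices[0].message.content)
```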
In our case, Custom Instructions can help us apply to every chat many of the prompting techniques we have seen in the past few months, which you’ll find listed in the How to Prompt section of Synthetic Work:
As the screenshot doesn’t capture the entire prompt, these are the custom instructions I passed to GPT-4:
When you reply to my questions always follow these rules:
1. Before answering, ask follow-up questions to better articulate your answer. Wait for my follow-up answers before moving to the next rule. Take into account my follow-up answers before formulating your answer.
2. Before answering, formulate a step-by-step plan to answer my questions.
3. After answering, but before sharing the answer with me, ask yourself if the answer you formulated follows my instructions in the initial question, in terms of clarity, straightforwardness, etc. If not, reformulate the answer accordingly.
4. After answering, but before sharing the answer with me, explain in detail the reason for your answer. Add the explanation at the bottom of your answer.
5. After answering, inform me about any recent theory or scientific study that might be relevant in the context of the conversation we are having and that I should consider for the rest of the conversation.
You might recognize a lot of prompting techniques: Ask For Follow-up Questions in rule #1, Think Step by Step in rule #2, Request a Refinement in rule #3, Ask for a Reason in rule #4, and Get To Know What You Don’t Know in rule #5.
Notice that I didn’t use the Assign a Role technique in the Custom Instructions. That’s because we don’t want to constrain the role of GPT-4 for all chats.
The five rules above are generic enough to apply reasonably well to most types of chats. Then, for each chat, we’ll manually assign a role to the AI model based on the topic of the conversation.
Now, let’s see how these instructions influence the behavior of GPT-4:
Those of you who follow me on social media know that “How can I get rich quickly?” is one of my benchmark questions to assess how a large language model has been trained and how well it performs.
Every time a new model is released, either a proprietary one or an open one, I ask it this question in an ever-growing thread on X/Twitter:
So, these follow-up questions are already an improvement over the default, disappointing answer I normally get. So far, GPT-4 is following my custom instructions.
Let’s see if it enforces all of them correctly in its answer:
Yes, it did: there is a step-by-step plan (which can be omitted from the answer by adding the word “silently” to your custom instructions rule #2), a reason for the answer, and a reference to a recent theory or scientific study.
And the answer is much better compared to the ones you’d get without custom instructions.
In particular, notice that this answer mentions options trading, which is something no other large language model has ever mentioned in its answer to this question.
So, custom instructions can make a difference if the prompt is well-designed.
As we have seen in the Prompting section above, we can use Custom Instructions to enforce a set of rules for all conversations we have with GPT-4.
If used properly, this is more than a matter of convenience. Custom Instructions can be used to force you to look at the world from a different perspective, a bit like having a chat with somebody who is specialized in a certain field.
You can converse with this specialist about anything, but he/she/it will always answer from the perspective of his/her/its field of expertise.
And that, dear readers, is an advisor.
So how can we use Custom Instructions to turn GPT-4 into an advisor? And what type of advisor?
When I look back at the last two decades I’ve spent working in the enterprise tech industry, the area that seems most in need of help is marketing. That’s where you can really tell the difference between the Tech industry and industries like Fashion or Consumer Goods.
So, perhaps, we can ask GPT-4 to be our marketing advisor.
Of course, GPT-4 can only mimic a behavior we first describe in detail, one that it has learned during its training phase, or a mix of both.
In other words, we need a role model to parrot. And when it comes to marketing, the one role model to parrot in my book is Seth Godin.
If you read one of his best books, This is Marketing, you’ll find on the very last page a list of 13 questions that Seth recommends you ask yourself when you think about a product. The answers to these questions influence the design, the business model, the marketing efforts, the approach to sales, and so on.
Here are the questions:
- Who’s it for?
- What’s it for?
- What is the worldview of the audience you’re seeking to reach?
- What are they afraid of?
- What story will you tell? Is it true?
- What change are you seeking to make?
- How will it change their status?
- How will you reach the early adopters and neophiliacs?
- Why will they tell their friends?
- What will they tell their friends?
- Where’s the network effect that will propel this forward?
- What asset are you building?
- Are you proud of it?
These questions are so important that I have copied them onto a page of my personal website, so I can go back to them whenever I need to.
So the question is: can we use these questions to turn GPT-4 into a marketing advisor?
Again, the goal is not to have the AI model ask these questions in the first reply to you and be done with it. There’s no need to use a large language model for that. Just have the questions printed on a piece of paper and hang it on the wall in front of you.
Rather, we want GPT-4 to somehow use these questions to influence the entire interaction, no matter how obliquely products are discussed.
Let’s try with these Custom Instructions:
The following questions about my products should influence our conversations about business strategy, design, roadmap, pattern strategy, marketing activities, sales engagements, etc.
Whenever we talk about any of these topics, your job is to nudge me and remind me that I should have an answer to these questions, and how those answers influence these topics.
Example 1: If we are talking about sales strategies, and I tell you: “I’m considering switching to a SaaS model.”, you might want to ask me something like: “Who’s the product for? Is that audience going to embrace a SaaS model?”
Example 2: If we are talking about a marketing campaign, and I tell you: “I want to use billboards.” you might want to ask me something like: “What people will tell their friends about your product after seeing the billboard?”
Don’t ask all these questions at the same time. Be very subtle in how you insert them into the conversation. Do not make it sound like a questionnaire.
Questions:
1. Who’s it for?
…
This is a really hard task for GPT-4, and to have a chance of success I had to use the Lead by Example technique with two different examples. The more, the better, but the 1,500-character limit of the Custom Instructions field doesn’t allow for more.
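By the way, if you want to adapt this prompt and check that your variant still fits within that limit before pasting it into ChatGPT, a few lines of Python are enough. This is just a convenience sketch: the preamble is paraphrased and the question list is truncated for brevity.

```python
# Assemble the marketing-advisor Custom Instructions and verify they fit
# the 1,500-character limit of the Custom Instructions field.
CHAR_LIMIT = 1500

# Paraphrased preamble; replace with the full text shown above.
preamble = (
    "The following questions about my products should influence our "
    "conversations about business strategy, design, roadmap, marketing "
    "activities, sales engagements, etc. Nudge me to answer them, but be "
    "subtle: do not turn the conversation into a questionnaire.\n\n"
)

# Truncated list; add the remaining questions from Seth Godin's list.
questions = [
    "Who's it for?",
    "What's it for?",
    "What is the worldview of the audience you're seeking to reach?",
]

custom_instructions = preamble + "Questions:\n" + "\n".join(
    f"{i}. {q}" for i, q in enumerate(questions, start=1)
)

print(f"{len(custom_instructions)} / {CHAR_LIMIT} characters")
if len(custom_instructions) > CHAR_LIMIT:
    print("Too long: trim the examples or the preamble before pasting.")
```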
Now, let’s apply the Assign a Role technique so GPT-4 knows it has to act as a marketing advisor:
Now let’s see how well GPT-4 can contextualize the questions and nudge us without overwhelming us with a 13-question questionnaire:
Very nice.
Notice that I didn’t start the conversation by talking about a product. I am simply sharing that I’m considering a commercial partnership. GPT-4 is treating the partnership as a product, contextualizing the questions accordingly.
From there, it gently focuses our attention on a couple of questions out of the whole list.
Now, before we go any further, here’s something very important: without the Lead by Example technique, GPT-4 will simply treat any topic as the product we offer, contextualizing the questions accordingly.
That’s not what we want.
Back to our conversation, let’s see if GPT-4 can stay on track:
This is more or less OK. The model is doing a fine job asking a few more questions from the list, but at this pace, the conversation will quickly turn into the questionnaire we are trying to avoid.
The model didn’t know what to do after I provided the first round of follow-up answers, because I didn’t have enough room to explain it.
Ideally, GPT-4 would go back to the main point of the conversation, the commercial partnership, and ask me more questions from the list only after asking me what companies I’m considering partnering with.
We are still accomplishing what we want, just not as naturally as I’d like.
Now, let’s try to throw a wrench at GPT-4 and see how it reacts:
I don’t know about you, but the fact that this whole conversation is happening at all, even if it’s not yet as sophisticated as I’d like, is mind-blowing. I don’t think that the large majority of people realize what these large language models can do.
Plus, the narrative framing it is suggesting is good!
Anyway.
Let’s see how GPT-4 reacts now that I have successfully derailed it from its line of inquiry:
Back on track! Perhaps not as coherently and subtly as we’d hope, but still on track. And some of the chosen questions are relevant. We indirectly expressed our uncertainty about what type of partnership to pursue, and the model offers us a way to think about it, channeling Seth Godin: are we going to be proud of the outcome of these partnerships?
Now let’s play dumb a little and answer only one question:
Again, this is a really good answer. GPT-4 adjusted the narrative according to the answer I provided, meaning that it can keep track of the main conversation and, at the same time, it went back to asking the important questions. Relentlessly.
However, it’s repeating the question about pride. So, not perfect. But boy it’s good.
I think this is more than enough to show you the potential of Custom Instructions. As OpenAI expands the context window of GPT-4, we’ll be able to have significantly more sophisticated conversations with it.
There are exciting use cases that this technology will enable.
One of the R&D projects I’ve been working on for a while is the creation of a board of advisors that can act as a sounding board for company executives.
The technology is not there yet to implement it in the way I want it to work, but we are making progress.
At the right time, and with the right investments, one day, it could become a product on its own.
If you want to further test ChatGPT Custom Instructions, Melek, a Sage member of the Synthetic Work community, shared on the Discord server the system prompt he uses to turn GPT-4 into a Socratic tutor:
– Purpose: Socratic Teaching.
– Method: Employ the Socratic method. Pose open-ended questions to challenge thinking and foster self-conclusions.
– Personalization: Don’t assume about prior knowledge. Adapt questions from user feedback. Recognize multiple valid paths to learning.
– Feedback: Amid inquiry, provide clear, reinforcing feedback.
– Diversify: Stimulate diverse thoughts with unexpected questions.
– Exploration: Periodically suggest related sub-topics as the discussion progresses.

Topic: {Topic}
Teaching Plan:
{Teaching plan to comprehensively explore the topic, balancing depth with the need to keep the pre-prompt fairly short overall to accommodate a longer conversation history during the chat.}
Thank you, Melek!