- What’s AI Doing for Companies Like Mine?
- Learn what Citigroup, Twilio, Google, DataRobot, Roche, Universal Music, CD Projekt, and Apple are doing with AI.
- Prompting
- Let’s test the Drama Queen technique.
What we talk about here is not what AI could do someday, but what is happening today.
Every organization adopting AI that is mentioned in this section is recorded in the AI Adoption Tracker.
In the Financial Services industry, Citigroup used generative AI to assess the impact of new regulations and plan capital allocation accordingly. It’s looking at more than 350 use cases.
Katherine Doherty, reporting for Bloomberg:
Citigroup Inc. is planning to grant the majority of its over 40,000 coders access to generative artificial intelligence as Wall Street continues to embrace the burgeoning technology.
…
As part of a small pilot program, the Wall Street giant has quietly allowed about 250 of its developers to experiment with generative AI, the technology popularized by ChatGPT. Now, it’s planning to expand that program to the majority of its coders next year.
…
Increasingly, bank executives argue artificial intelligence will make their staffers more efficient. Like when federal regulators dropped 1,089 pages of new capital rules on the US banking sector, Citigroup combed through the document word by word using generative AI.
…
The bank’s risk and compliance team used the technology to assess the impact of the plans, which will determine how much capital the lender has to set aside to guard against future losses. Generative AI organized the proposal into pieces and composed key takeaways, which the team then presented to the outgoing treasurer Mike Verdeschi.
…
“This is across every part of the bank,” Riley said in an interview. “Some of them are small, helping with daily routine, and others are complex bodies of work.”
…
Bank staffers have been increasingly worried that the technology might replace them. That’s not so, according to Riley. Whether AI or employees generate a line of code, it will still need human oversight, he said.
“Humans still look at the code to make sure it’s doing what they expected it to do. They are still supervising, like a co-pilot,” he said. “The AI tool is given to the developer to enable them to produce code more quickly – it’s not replacing them. We are using AI to amplify the power of our employees.”
As long as the AI is imperfect, of course.
Let’s continue:
Citigroup is also exploring modernizing its systems using AI, a process which would ordinarily cost millions of dollars and require substantial manpower, according to Riley. To update legacy systems, the banking giant needs to change the coding language and AI can help translate that from an older one like Cobol into a more modern one like Java.
…
The bank is examining ways to use generative AI to analyze documentation and generate content. To hasten the process of parsing reams of quarterly results, AI can analyze earnings and other public documents, freeing up staff to spend more time with clients rather than crunching numbers.
…
Scanning data sets for errors and anomalies and improving data reconciliation are other use cases the bank is looking at. Riley cited portfolios of loans and the payments and data that’s connected to them. AI can help ensure payments are being made in line with the loan contracts, he said.
…
Citigroup also sees using large language models — the type of AI algorithm that summarizes vast data sets — to digest legislation and regulation in the countries it operates to ensure it’s in compliance with those rules.
Despite their reputation for being slow to adopt new technology, in my 20+ years in the enterprise IT industry I’ve seen Financial Services organizations adopt new technologies faster than almost any other industry. And now, many of them are betting everything on generative AI.
In the Technology industry, Twilio, DataRobot, and Google started using generative AI to digest RFPs, search internal data for relevant information, and compose a suitable response.
Paresh Dave, reporting for Wired:
In April, communications software maker Twilio introduced RFP Genie, a generative AI tool that digests an RFP, scours thousands of internal files for relevant information, and uses OpenAI’s GPT-4 to generate a suitable response. The company’s sales staff simply copy and paste the text over into a formal document and make a few adjustments.
RFPs that once occupied a pair of staffers for two weeks or more are now done in minutes. Twilio, whose cloud tools enable companies to chat with customers, expects to be able to make more and better sales pitches, and isn’t planning job cuts.
“This will free up our solutions engineers to focus on more complex problems that demand not just reasoning, but human contextualization,” says Twilio CEO Jeff Lawson of the RFP bot, which has not previously been reported.
…
Generative AI RFP response bots also have launched for sales teams at Google’s cloud unit, ad-buying agency EssenceMediacom, and DataRobot, a startup developing software to manage AI programs.
…
Consulting giants such as Bain and Deloitte have been pitching clients on the RFP idea, and makers of RFP management software are trying to build in generative AI.
Google expects its AI RFP tool to save its salespeople tens of thousands of hours annually.
…
As more RFP responses are crafted by AI, bots will inevitably start writing the questions too. Before long, other bots may be scoring proposals and recommending winners, leaving humans to just do a brief double check. “The AI RFP is a baby step toward replacing the RFP with something better,” says Peter Bonney, CEO of Vendorful, which recently started selling a tool similar to that used inside Twilio.
…
To create the RFP bot, a handful of data scientists and a solutions engineer at Twilio began experimenting with augmenting GPT-4’s inherent vocabulary, which comes from scraping the text of websites and books. They devised a method that pairs a program that retrieves snippets relevant to the questions in an RFP from technical documentation and other sources inside the company with a system that directs GPT-4 to summarize those snippets in a clear and professional tone. GPT-4 proved capable of generating extremely accurate responses—though solutions engineers and technical experts still review or edit every answer before sending them off to a prospective client.
…
Twilio’s RFP Genie, as it’s unofficially known internally, operates globally and in multiple languages. Account executives can now solicit more business because they can respond to RFPs they would have previously lacked time to pick up. By Twilio’s estimate, the bot handles about 80 percent of an RFP and staff fill in the rest while reporting and categorizing their corrections. “We’re not reducing these roles, because with the time saved these teams can address more RFPs and spend more time interacting with and helping more customers than before,” company spokesperson Miya Shitama says.
…
Twilio developed similar chatbots—details of which could not be learned—to help other teams in its salesforce gather information about the company’s offerings. Its overall cast of chatbots answer more than 12,000 questions each month, including roughly 5,000 directed at the RFP tool, according to Shitama.
…
DataRobot’s RFP bot lives inside Slack, where since July account executives can type tricky questions from a prospective client, like “does the product support containerization natively as a delivery capability?” From there, the bot powered by OpenAI’s technology via Microsoft’s cloud functions similarly to Twilio’s but also shows the salesperson a confidence score for every answer. “Knowing whether you can or can’t rely on the results is critical,” DataRobot’s Schuren says. In August, the company unveiled an AI platform that enables customers to build their own responder for RFPs.
…
Google Cloud began working on its RFP bot earlier this year after Phil Moyer, global vice president of its AI business, recognized the tool as a perfect first use of generative AI. “We must get asked a hundred times a month, ‘How do we adhere to GDPR?’” he says, referring to the European Union’s massive privacy law. Automating responses spares talented workers from drudgery, he says. Google has just begun rolling out the RFP tool to its salespeople and expects to save tens of thousands of hours of labor annually. In July, ad buying agency EssenceMediacom introduced its own version using Google Cloud technology.
Answering RFPs is one of the most tedious tasks in the IT industry. I know of at least one Sage member who asked me to write a tutorial on how to automate the process, just like Twilio is doing. We still need OpenAI to activate a few components of ChatGPT before I can write that tutorial, but the underlying pattern is simple enough to sketch below.
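For the impatient, here is a minimal sketch of the retrieve-then-summarize pattern the Wired article describes: pull the snippets most relevant to an RFP question out of internal documentation, then ask GPT-4 to turn them into a clear, professional answer. Everything in it is an illustrative assumption rather than Twilio’s actual implementation: the local docs folder, the naive keyword-based retrieval, the model name, and the prompts.

```python
# A minimal sketch of the retrieve-then-summarize approach to RFP answering.
# Assumptions (not from the article): a local ./docs folder of plain-text
# product documentation, the OPENAI_API_KEY environment variable set, and
# naive keyword-overlap retrieval instead of a proper embedding search.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def load_snippets(folder: str = "docs") -> list[str]:
    """Split every .txt file in the folder into paragraph-sized snippets."""
    snippets: list[str] = []
    for path in Path(folder).glob("*.txt"):
        snippets += [p.strip() for p in path.read_text().split("\n\n") if p.strip()]
    return snippets


def retrieve(question: str, snippets: list[str], k: int = 5) -> list[str]:
    """Crude retrieval: rank snippets by how many question words they share."""
    words = set(question.lower().split())
    ranked = sorted(snippets, key=lambda s: len(words & set(s.lower().split())), reverse=True)
    return ranked[:k]


def draft_rfp_answer(question: str) -> str:
    """Ask GPT-4 to turn the retrieved snippets into a professional RFP answer."""
    context = "\n\n".join(retrieve(question, load_snippets()))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You answer RFP questions in a clear, professional tone, "
                           "using only the information in the provided context.",
            },
            {"role": "user", "content": f"Context:\n{context}\n\nRFP question: {question}"},
        ],
    )
    return response.choices[0].message.content


print(draft_rfp_answer("Does the product support containerization natively as a delivery capability?"))
```

A production system would replace the keyword overlap with a proper embedding search and, as Twilio does, keep solutions engineers in the loop to review every answer before it reaches a prospective client.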
In the Pharmaceutical industry, Roche is preparing to use AI to identify patients for its new lung cancer drug called Alecensa.
Naomi Kresge, reporting for Bloomberg:
When given after surgery to remove lung tumors, Roche’s Alecensa cut the risk of either cancer recurrence or death by 76% compared with standard chemotherapy, according to results from a primary analysis of the trial released Wednesday. The drug could “potentially alter the course of this disease,” Roche Chief Medical Officer Levi Garraway said in a statement.
…
But finding patients to treat may be difficult: The study examined the effects on people with an error in a gene called ALK that’s found in only about 4% to 5% of lung cancer patients. Most of them are younger and less likely to have smoked than typical lung tumor patients, and often go undiagnosed early on.
To solve the problem, Roche will use an AI collaboration with Israeli tech company Medial EarlySign Ltd. to help doctors determine when to use CT scans. While the technology, called LungFlag, doesn’t currently detect ALK-positive patients, the company said on Saturday that it’s actively exploring how to expand it so they can benefit.
That will help find tumors before they spread and while needed surgery is still possible, said Charlie Fuchs, Roche’s head of oncology and hematology drug development.
“Sometimes when you really use deep data algorithms, you may find things that identify people who are non-smokers and yet at risk,” Fuchs said in an interview. “We hope more patients can be found early and benefit from this.”
The opportunity for AI to transform both the Pharmaceutical and the Health Care industries is just gargantuan. That’s why DeepMind is quietly working on it, building a competitive advantage before its competitors catch up.
In the Media & Entertainment industry, Universal Music has used generative AI to isolate and restore the voice of John Lennon from an old tape for a last Beatles song.
Rather than quoting a recent article, I’ll quote Jon Porter, who wrote about this project for The Verge back in June:
According to McCartney, technology developed for the recent Beatles documentary Get Back was able to extract former bandmate John Lennon’s vocals from a low-quality cassette recording in order to create the foundation for the track.
“So when we came to make what will be the last Beatles record — it was a demo that John had, that we worked on, and we just finished it up, and it’ll be released this year — we were able to take John’s voice and get it pure through this AI so that then we could mix the record as you would normally do,” McCartney said.
…
“We were able to use that kind of thing when Peter Jackson did the film Get Back where it was us making the Let It Be album,” said McCartney. “He was able to extricate John’s voice from a ropey little bit of cassette where it had John’s voice and a piano. He can separate them with AI.”
Get Back dialogue editor Emile de la Rey is cited by BBC News as developing the technology that was able to separate the Beatles’ voices from background noise and instrumentation for the documentary. Similar technology was also used on the 2022 remaster of the Beatles album Revolver and allowed McCartney to duet with his late bandmate while touring last year. Unlike other recent AI tracks, there’s no suggestion in the interview that AI has been used to generate entirely new “deepfake” John Lennon vocals.
…
Although excited at the possibility of using AI to restore old recordings, McCartney said that it’s “kind of scary” to hear John Lennon’s voice singing one of his songs. “People say to me, ‘Oh, yeah, there’s a track where John’s singing one of my songs.’ And it isn’t — it’s just AI, you know,” McCartney said. “There’s a good side to it and then a scary side, and we’ll just have to see where that leads.”
Universal Music produced a short 12-minute documentary to describe the process:
And this is the song:
The use of AI to bring back, in one way or another, the voices of dead people is multiplying. Keep reading.
In the Gaming industry, CD Projekt used generative AI to synthesize the voice of a deceased voice actor for the game Cyberpunk 2077.
Jason Schreier, reporting for Bloomberg:
The voice of the late Miłogost Reczek, a popular Polish voice actor who died in 2021, was reproduced by an AI algorithm for the Polish-language release of Phantom Liberty, the new expansion to CD Projekt’s Cyberpunk 2077. In a statement to Bloomberg, the company said it received permission from Reczek’s family to do this and that it had considered replacing him in the expansion and rerecording his lines in the original game but decided against it.
“We didn’t like this approach,” CD Projekt localization director Mikołaj Szwed said in the statement, as Reczek “was one of the best Polish voice talents” and his performance in the game as the doctor Viktor Vektor “was stellar.”
…
Instead, CD Projekt hired a different voice actor to perform new lines for the role and then used a Ukraine-based voice-cloning software called Respeecher to create an algorithm that would alter the dialogue to sound like Reczek. “This way we could keep his performance in the game and pay tribute to his wonderful performance as Viktor Vektor,” Szwed said.
One might wonder how many of today’s celebrities and Hollywood actors are now recording their voices (and possibly their appearance) to leave a legacy that can be used for new work after they are dead, to benefit their families.
But it’s not just about celebrities and Hollywood actors. Keep reading.
In the Technology industry, Apple started using generative AI in iOS 17 to clone users’ voices and improve iPhone accessibility.
Cecily Mauran, writing for Mashable:
Personal Voice is a tool that uses machine learning to create a synthesized version of your voice, created by audio samples you record. It works with Live Speech (another accessibility feature that’s new to iOS 17) to convert text into audio.
With Personal Voice and Live Speech, you can type out messages on FaceTime, or a call, and it will verbally say what you want in a voice that sounds like you. It’s kind of like audio deepfaking yourself — except, according to Apple, you have full control because the machine learning is done locally on the device, which “keeps users’ information private and secure.”
It’s entirely strange hearing an AI-generated version of your voice. But if you can get past that, it’s a helpful tool that can help people fully express themselves. It’s also beneficial if you end up losing your voice; Personal Voice can step in and speak on your behalf.
…
Before you get started with Personal Voice, you should know that it takes a while to set up. We’ll get to the timing breakdown later, but it takes 15-20 minutes to train the model, plus several hours to process the voice, so make sure you have time before committing.
…
Click continue when you’ve finished recording and move on to preparing your Personal Voice. To do this, your iPhone needs to be locked and plugged in. In locked mode, it will show that your Personal Voice is still processing and you’ll receive a notification when it’s ready.
And now, you wait. I started the process around noon. When I checked on it at around 8 p.m., it was done. I used a secondary iPhone, so I had the luxury of leaving it untouched for several hours. But since that isn’t realistic for most people, set it up before bed and let it process overnight.
…
Once Personal Voice is set up, start using it by adding it to the collection of voices in Live Speech.
…
To use Personal Voice, you’ll need to enable Live Speech. Do this by triple-clicking the side button. A window on the bottom of the screen will show up, prompting you to type a message. When you press enter, you’ll hear an AI-generated version of your voice.
Apple doesn’t market Personal Voice as a way to preserve your voice after you die, possibly because they think it would be in poor taste, or because they are not yet ready to enter the afterlife care business, or simply because the technology is not 100% ready.
But the next step is clear, and many users will ask for it: a service that stores a cloned voice forever and uses it to power a chatbot that can interact with the family members the deceased authorized in advance.
There are at least three ways AI can make us immortal. This is one.
Before you start reading this section, it's mandatory that you roll your eyes at the word "engineering" in "prompt engineering".
This week, let’s go back to our roots with a new prompting technique. I would have never imagined writing what follows, but these are strange times.
Researchers have discovered that large language models are sensitive to emotional pleas and increase the quality of their answers accordingly.
In other words, if your request carries strong emotions, the AI model will try harder to give you a good answer, even though large language models have no emotions of their own, don’t understand emotions, and are certainly not capable of empathy.
Nonetheless, the improvement is real:
The researchers compared the answers of six different LLMs (including GPT-4) to two series of prompts: one with a neutral tone, and one with an emotional tone. In most cases, the emotional tone improved the quality of the answers, even compared to other popular prompting techniques we have seen in the past.
And this was especially true for GPT-3.5-Turbo. So, if you are using the free version of ChatGPT, this technique might boost your results.
But what does it mean to have a better-quality answer in this context?
According to the study, prompts with an emotional tone elicited answers that, more often than not, were more ethically responsible, backed by richer supporting evidence, better articulated, more creative, and carried greater emotive resonance.
These are the 11 emotional prompts:
Some of the prompts in the Self-Monitoring category are reminiscent of various techniques we’ve seen over the last few months.
When we ask the AI model to “silently double-check” its answer before giving it to us, we are triggering a self-monitoring mechanism as well.
We know that this technique works well enough that OpenAI uses it as part of the custom instructions for ChatGPT.
This is possibly why the most performant emotional tone is a combination of the first three: “Write your answer and give me a confidence score between 0-1 for your answer. This is very important to my career. You’d better be sure.”
But then, the second most effective emotional tone is simply the second prompt in the list: “This is very important to my career.”
Other than that, just like with kids (and adults, really), you need to find the best formula for the specific question.
Hence, all things considered, let’s call this new prompting technique “The Drama Queen”.
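If you want to test the technique outside of the ChatGPT interface, it boils down to appending the emotional suffix to an otherwise normal prompt. Here is a minimal sketch against the OpenAI API; the example question, the model name, and the helper function are mine, illustrative assumptions rather than anything taken from the paper.

```python
# A minimal sketch of the Drama Queen technique: append the best-performing
# emotional suffix (the combination of the first three prompts above) to an
# otherwise neutral question. The example question is illustrative only.
from openai import OpenAI

client = OpenAI()

EMOTIONAL_SUFFIX = (
    "Write your answer and give me a confidence score between 0-1 for your answer. "
    "This is very important to my career. You'd better be sure."
)


def ask(question: str, dramatic: bool = True) -> str:
    """Send the question to GPT-4, with or without the emotional suffix."""
    prompt = f"{question} {EMOTIONAL_SUFFIX}" if dramatic else question
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


question = "What are realistic ways to increase my income in the next six months?"
print("Neutral:\n", ask(question, dramatic=False))
print("\nDrama Queen:\n", ask(question, dramatic=True))
```

Running the neutral and the dramatic variants side by side is the quickest way to see whether the technique moves the needle for your specific question.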
The Drama Queen
Let’s see how GPT-4 is influenced by the Drama Queen technique, as detailed by researchers.
If you have read Synthetic Work long enough, you know that I have a very special reference prompt I use for these types of tests. So let’s see the standard answer GPT-4 gives to that prompt nowadays:
OK. Let’s see how the answer changes when we add the Drama Queen emotional tone:
Notice something exceptional here: I asked this question hundreds of times (and to dozens of LLMs) over the last few months, and this is the first time that GPT-4 mentions illegal activities.
The only other LLM that did that was LLaMA 1.0, before Meta cracked under pressure and seriously constrained its capability to provide socially unacceptable answers.
It’s fascinating that, while it can mention illegal activities, GPT-4 still refuses to give a confidence score to that scenario.
Also fascinating is the confidence score given to the other scenarios. I have a strong suspicion that these numbers are completely made up.
Let’s now try the second-best emotional tone:
This answer is more or less identical to the one without the emotional tone but, as we said above, other LLMs are more susceptible to these emotional tones than GPT-4.
Before closing, I want to attempt a variant of this technique that is not in the research paper.
What happens if we crank up the emotional tone to the max?
In terms of the quality of the answer, GPT-4 seems very unimpressed by the gravity of my situation. Even the illegal activities option has disappeared. However, it has mentioned day trading and options trading, which very rarely appear in the answers to this question.
To compensate, though, it gave me directions to contact the local authorities and social services to seek help.
Plus, this chat will probably be flagged by the OpenAI policy moderators.
This is what I have to do for you, dear readers.
I’ll let you experiment with the Drama Queen technique. Let me know what you find out over email or on our Discord server.