Jack Hidary, CEO, SandboxAQ
Gary Marcus, Professor Emeritus, New York University
Bank of America’s Breakthrough Technology Dialogue event at Goodwood in the UK brought together a diverse mix of industry leaders, investors and experts working at the frontiers of technology for an open exchange on the forces set to transform economies and societies, with one main goal. “We all have our own technology imperatives, but we think it’s important to use our convening power to create a space where we can all learn,” explained Bernard Mensah, President of International, Bank of America, who hosted the event.
The lesson evident to many participants this year was that AI was in many respects not yet delivering on the hopes that dominated last year’s event.
Investment in AI has also shown signs of flagging as the market waits for more clarity on how the technology will be monetised.1 Jack Hidary, CEO of quantitative AI firm SandboxAQ, summed up the view of many participants:
The current hype cycle of AI is focused on LLMs such as ChatGPT, but there are far more powerful and specific forms of AI now arriving which enterprises should shift their attention to for real-world impact. If we look at the largest sectors of the economy, they need far more than a language model to accelerate their core business.
Yet amid the scepticism, other attendees pointed to progress – even if this is not always immediately apparent.
One challenge delegates discussed was the cost of training large language models (LLMs). LLMs are AI models trained on large amounts of data, which enable them to recognise and interpret text or speech, and respond in a suitable way.2
Bank of America experts have noted that, given the importance of training data, success with AI will depend on a company’s ability to formulate a robust data strategy that ensures high-quality data is readily available.
LLMs are costly to train and operate given the volume of raw data required, much of which comes from the internet and social media. Tech giants like Microsoft, Google and Amazon have all been rushing to boost spending in the AI race. It was noted that even Apple, which has lagged others, has ramped up investment in its Apple Intelligence system, which will power its next generation of devices.3
Many companies are choosing to work together to advance AI development. OpenAI and Microsoft are partnering to increase access to their industry-leading AI models. Microsoft provides the supercomputing infrastructure that OpenAI services such as ChatGPT run on,4 and helps customers tap into OpenAI’s other cutting-edge models (and thousands more from other companies) through its Azure AI platform. Apple recently announced the integration of ChatGPT, one of the best-known generative AI models, into its iOS 18 mobile operating system for features like content generation.5
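For a concrete picture of what this looks like to an enterprise developer, the sketch below shows one plausible way to call an OpenAI model hosted on Azure using the official openai Python package; the endpoint variables, API version and deployment name are illustrative assumptions rather than details from the event.

```python
# A minimal sketch of calling an OpenAI chat model hosted on Azure.
# The environment variables, API version and deployment name are
# illustrative placeholders, not values from this article.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # hypothetical Azure deployment name
    messages=[
        {"role": "system", "content": "You are a concise assistant for enterprise staff."},
        {"role": "user", "content": "In two sentences, why does training data quality matter for AI projects?"},
    ],
)
print(response.choices[0].message.content)
```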
Predictions that training the next generation of LLMs could cost $10 billion apiece have focused the minds of investors.6 But LLMs are not the only basis for generative AI.
Small language models (SLMs) require far fewer parameters – the variables models learn in training and subsequently use to make decisions – than LLMs. Recent versions of Phi-3, Microsoft’s newest family of ‘lightweight’ AI models, are remarkably capable at approximating reasoning and understanding, yet small and nimble enough to run on a mobile phone.
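To see what ‘small’ means in practice, the sketch below loads one of the publicly released Phi-3 mini checkpoints with the Hugging Face transformers library, counts its parameters and generates a short reply; the model identifier and prompt are assumptions made for illustration, and a machine with several gigabytes of memory is still needed to run it.

```python
# A minimal sketch: load a small language model (SLM) and count its learned
# parameters. Assumes the Hugging Face "transformers" library and the publicly
# released Phi-3 mini checkpoint; older transformers versions may also need
# trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Parameters are the weights learned in training; SLMs have billions of them
# rather than the hundreds of billions found in frontier LLMs.
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameter count: {n_params / 1e9:.1f}B")

# Generate a short reply to show the model responding to text.
inputs = tokenizer("In one sentence, what is a small language model?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```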
The trick is to train SLMs on high-quality synthetic data rather than generic internet content, according to Sebastien Bubeck, vice president of generative AI research at Microsoft. He noted this also reduces the risks of misinformation or intellectual property violations. “Scale isn’t enough,” Bubeck pointed out. “Having much more control over how you teach models is how you’re going to make the next jump in AI.”
What remains to be seen is how these trends translate into commercial outcomes. Tony Fadell, principal, Build Collective, told Breakthrough Technology Dialogue that he was seeing results with AI “when it’s grounded in incredible data and applied for a specific function.”
“It’s not just about a business model that says we’re selling AI – the more important business model is putting AI into specific applications such as drug discovery, financial services or new battery chemistry,” agreed Hidary.
This means that the financial rewards of AI are likely to flow to the companies that find those business models, rather than those providing the technology itself. As Bank of America experts have pointed out, businesses are likely to gain the most by treating AI as an evolution rather than a revolution – in other words, using AI to elevate existing strategies, rather than reengineering those strategies in response to AI developments.
One example is the modelling and simulated testing of pharmaceutical compounds. Firms such as PhoreMost, a UK-based biopharmaceutical company, are using AI-based processes to accelerate the discovery of therapies for some of the hardest-to-treat diseases.
Coding is another area where AI is already having significant real-world impact. In one recent poll, over 75% of software developers reported they were using or planning to use coding assistants. Encouragingly, 95% of those who had adopted AI felt it had boosted their productivity.7
Generative AI is also transforming customer service by enabling conversation-based searches, suggesting responses to customer inquiries and summarising interactions.
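As a simple illustration of the summarisation use case, the sketch below asks a chat model to condense a short support transcript and flag any follow-up action; the model name and transcript are placeholders, and the same pattern would apply to any comparable chat-completion API.

```python
# A minimal sketch of summarising a customer-service interaction with a chat
# model. The model name and transcript are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = (
    "Customer: My card was declined twice at the supermarket today.\n"
    "Agent: I can see a temporary security hold on the card.\n"
    "Agent: I have removed the hold; it should work again within the hour."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": "Summarise the support conversation in two sentences "
                       "and list any follow-up action for the agent.",
        },
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```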
IBM has worked with companies such as Bouygues Telecom to automate call transcriptions and update customer relationship management systems, realising millions of dollars of savings annually. Similarly, Bank of America has developed Erica, an AI-based virtual financial assistant that can help deliver important information and answer customer questions in real time. Since launching five years ago, Erica has had nearly 1.9 billion interactions with more than 40 million customers.8
Looking ahead, industry insiders see massive promise for ‘embodied AI’ – the combination of AI and robotics. The idea is to leverage AI to make machines more efficient and intelligent, and therefore better at their tasks.
Hidary pointed out that there are already demonstrations of robots that can watch videos of a human performing a task, then generate the code to perform precisely the same function themselves. This can benefit businesses by reducing training costs and enabling machines to tackle jobs more quickly. “We have a lot to look forward to in the applications of humanoid robotics,” he said.
Most participants seemed to agree, even if some were disappointed that AI cannot yet function without human oversight and acknowledged the need for patience while the realities of AI catch up with high expectations. All limitations aside, the conversations at the Breakthrough Technology Dialogue showed that AI is here to stay, and is a force enterprises and investors cannot afford to ignore.
To learn more about Bank of America’s international business, please get in touch with your relationship manager, or click here to view the international webpage.
1 Bank of America Institute: The AI evolution: Reality justifies the hype
2 IBM: What are large language models (LLMs)?
3 CNBC: Apple spending more on AI but remains behind its Silicon Valley peers
4 Microsoft: Frequently asked questions: AI, Microsoft Copilot, and Microsoft Designer
5 Apple: Introducing Apple Intelligence
6 Time: The billion-dollar price tag of building AI
7 IT brief: Developers embrace AI tools despite accuracy concerns