Dean DeBiase is a best-selling author and Forbes Contributor reporting on how global leaders and CEOs are rebooting everything from growth, innovation, and technology to talent, culture, competitiveness, and governance across industries and societies.
Navigating The AI Revolution: How It Will Redefine Your Workplace
By Dean DeBiase
November 25th, 2024
With Elon Musk’s xAI raising an additional $5 billion in funding from Andreessen Horowitz, Qatar Investment Authority, Valor Equity Partners, and Sequoia — and Amazon investing an additional $4 billion in OpenAI rival Anthropic — artificial intelligence enters the holiday season on fire.
But while Microsoft, Google, Meta, Amazon and others invest billions in developing general-purpose large language models to handle a variety of tasks, one size does not fit all when it comes to AI. What’s good for these big dogs may not be what your company needs. And even if there is an impending bubble, now more than ever, the C-suite needs to better understand the impact of these technologies.
With (too many) LLM startups enabling computers to synthesize vast amounts of data and respond to natural-language queries, LLM-powered AI is becoming critical for businesses across the globe. “The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable,” AWS CEO Matt Garman stated in a release on the expanding partnership and investments. “By continuing to deploy Anthropic models in Amazon Bedrock and collaborating with Anthropic on the development of our custom Trainium chips, we’ll keep pushing the boundaries of what customers can achieve with generative AI technologies.”
For many companies, LLMs are still the best choice for specific projects. For others, though, they can be expensive for businesses to run, as measured in dollars, energy, and computing resources. According to IDC calculations, worldwide AI spending will double over the next four years to $632 billion (seems low), with generative AI growing rapidly to represent 32% of all spending.
I suspect there are emerging alternatives that will work better in certain instances—and my discussions with dozens of CEOs support that. I spoke with Steve McMillan, president and CEO of Teradata, one of the largest cloud analytics platforms focused on harmonizing data with trusted AI, who offered an alternative path for some businesses: “As we look to the future, we think that small and medium language models, and controlled environments such as domain-specific LLMs, will provide much better solutions.”
Your Company Needs Small Language Models (SLMs)
So, what exactly are small language models? They are simply language models trained only on specific types of data to produce customized outputs. A critical advantage is that the data stays within the company’s firewall, so external models are never trained on potentially sensitive information. The beauty of SLMs is that they scale both computing and energy use to the project’s actual needs, which can help lower ongoing expenses and reduce environmental impacts.
Another important alternative, the domain-specific LLM, specializes in one type of knowledge rather than offering broader coverage. Domain-specific LLMs are heavily trained to deeply understand a given category and to respond more accurately to queries in that domain, whether they come from, say, a CMO or a CFO.
AI’s Hallucination, Power And Training Challenges
Since LLMs require thousands of AI processing chips (GPUs) to process hundreds of billions of parameters, they can cost millions of dollars to build, especially during training but also afterward, when they’re handling user inquiries.
The Association of Data Scientists notes that simply training GPT-3 with 175 billion parameters “consumed an estimated 1,287 MWh (megawatt-hours) of electricity… roughly equivalent to the energy consumption of an average American household over 120 years.” That doesn’t include the power consumed after it becomes publicly available.
By comparison, ADaSci says fully deploying a smaller model with 7 billion parameters for a million users would consume only 55.1 MWh, less than 5% of the energy that training GPT-3 required. In other words, significant savings are possible by following McMillan’s advice when building an AI solution.
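Taking ADaSci’s figures at face value, the “less than 5%” claim is easy to verify with a couple of lines of arithmetic:

```python
# Energy figures as cited by ADaSci; this just checks the article's math.
gpt3_training_mwh = 1287.0  # estimated training energy for GPT-3 (175B parameters)
small_model_mwh = 55.1      # estimated energy to serve a 7B-parameter model to a million users

ratio = small_model_mwh / gpt3_training_mwh
print(f"Smaller model: {ratio:.1%} of GPT-3's training energy")  # about 4.3%
```

The two numbers measure different phases (training versus deployment), so the comparison is directional rather than apples-to-apples, but the scale gap is clear.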
LLMs typically demand far more computing power than is available on individual devices, so they’re generally run on cloud servers. For companies, this has several consequences, starting with losing physical control over their data as it moves to the cloud, and slowing down responses as they travel across the internet. Because their knowledge is so broad, LLMs are also subject to hallucinations. These are responses that may sound correct at first, but turn out to be wrong (like your crazy uncle’s Thanksgiving table advice), often due to inapplicable or inaccurate information being used to train the models.
Upsides Of SLMs
SLMs can help businesses deliver better results. Although they share the same technical foundation as the well-known LLMs broadly in use today, they’re trained on fewer parameters, with weights and biases tailored to individual use cases. Focusing on fewer variables lets them reach good answers more decisively; they hallucinate less and are also more efficient. Compared with LLMs, SLMs can be faster, cheaper, and lower in ecological impact.
Since they don’t require the same gigantic clusters of AI-processing chips as LLMs, SLMs can run on-premises, in some cases even on a single device. Removing the need for cloud processing also gives businesses greater control over their data and compliance.
As McMillan explains, his company’s goal isn’t to lock customers into a single solution or LLM model that might not be best for their needs. “Our ethos is to embrace all of those technologies, allowing our customers to use the language models of their choice inside the Teradata ecosystem so they can trust the data that’s feeding into those models, and the analytics and insights coming from that data, in the most effective and efficient way.”
What About Domain-Specific LLMs?
Domain-specific LLMs also have an important role to play. Stay with me here. Picture them as an American history textbook compared with a collection of encyclopedias—far more heavily focused on addressing a given need well than many needs superficially. Because they’re trained on specialized knowledge, domain-specific LLMs can provide answers that are more relevant, contextually appropriate and accurate. Their focused parameters are also easier to customize or fine-tune for specific tasks than the larger set of more generalized parameters used in general-purpose LLMs.
These benefits are offset by a couple of downsides. Domain-specific LLMs need to be specially trained from the start, and require ongoing reinforcement, particularly as information within the domain evolves and expands—both potentially expensive.
SLM Use Cases: What They Can Do for Businesses Now
When unpacking SLM deployments, there are game-changing impacts across sectors like:
Customer Service
Small Language Models can be used for rapid customer sentiment and complaint analysis, using data that’s highly valuable to keep inside the corporate firewall. They can generate valuable summaries that can integrate into customer relationship management (CRM) products to improve resolution actions.
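That triage loop can be sketched in a few lines. This is a hypothetical illustration: in production a small language model would score each message, while the invented word lists below stand in so the example runs on its own.

```python
# Hypothetical sketch of in-firewall sentiment triage. In production a
# small language model would score each message; the word lists below are
# invented stand-ins so the example is self-contained.

NEGATIVE = {"broken", "refund", "late", "angry", "cancel"}
POSITIVE = {"great", "thanks", "love", "fast"}

def sentiment(message: str) -> str:
    """Crude polarity label from word-list overlap."""
    words = {w.strip(".,!?") for w in message.lower().split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def crm_summary(tickets: list[str]) -> dict[str, int]:
    """Per-sentiment counts, ready to attach to a CRM record."""
    summary = {"positive": 0, "neutral": 0, "negative": 0}
    for ticket in tickets:
        summary[sentiment(ticket)] += 1
    return summary

print(crm_summary([
    "My order arrived broken, I want a refund",
    "Fast shipping, thanks!",
]))  # {'positive': 1, 'neutral': 0, 'negative': 1}
```

The point is the shape of the workflow: raw messages go in, a compact summary comes out, and nothing ever leaves the corporate firewall.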
Healthcare
Small Language Models are beginning to prove their value in analyzing physicians’ notes, another data processing area with significant reasons to avoid moving sensitive data. When AI extracts and interprets information, healthcare providers can focus more on patient care—maybe looking at and engaging with people instead of their computer screens.
Finance
Businesses that need to find emails or documents with potential regulatory-compliance or governance impacts can use SLMs to flag them. As tasks go, this one is straightforward: it doesn’t require much more than a small model, potentially running on the same servers that hold the data, avoiding additional storage, expensive AI-processor time, or network-transport costs.
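The flagging workflow itself is simple. In the sketch below, a keyword heuristic stands in for the small fine-tuned model that would do the scoring in production; the term list, document IDs, and threshold are all invented for illustration.

```python
# Hypothetical sketch: flag documents with potential compliance impact.
# A small fine-tuned model would do the scoring in production; a keyword
# heuristic stands in here so the example is self-contained.

COMPLIANCE_TERMS = {"insider", "guarantee", "regulator", "audit", "waiver"}

def score(text: str) -> float:
    """Fraction of compliance-sensitive terms that appear in the text."""
    lowered = text.lower()
    return sum(term in lowered for term in COMPLIANCE_TERMS) / len(COMPLIANCE_TERMS)

def flag_documents(docs: dict[str, str], threshold: float = 0.2) -> list[str]:
    """Return IDs of documents whose score meets the review threshold."""
    return [doc_id for doc_id, text in docs.items() if score(text) >= threshold]

emails = {
    "msg-001": "Lunch on Friday?",
    "msg-002": "The regulator asked for our audit trail on this guarantee.",
}
print(flag_documents(emails))  # ['msg-002']
```

Because the scoring function runs next to the data, flagged items can be routed to reviewers without any document leaving the servers that hold it.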
Retail
From Walmart, Kroger, and Costco to Target, CVS, and Walgreens, providing AI-based product recommendations is a strategic business function in retail. It’s also a process that relies heavily—sometimes exclusively—on business-owned data, such as customer information, buying and browsing history, and the company’s product catalog. This use case can leverage an open-source LLM’s analytic functions, such as clustering or vector similarity. LLM-created product recommendations can run alongside typical search results, meeting customers’ exact requests while intelligently guiding them to items that are more personalized.
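The vector-similarity idea reduces to ranking catalog items by how close their embeddings sit to a customer’s query embedding. The sketch below uses invented three-dimensional “embeddings” and a toy catalog; in practice the vectors would come from a small or open-source model run in-house against the retailer’s own data.

```python
import math

# Toy sketch of vector-similarity recommendations on business-owned data.
# The 3-dimensional "embeddings" and catalog below are invented; in
# practice they would come from an in-house model over real product data.

catalog = {
    "running shoes": [0.9, 0.1, 0.0],
    "trail shoes":   [0.8, 0.2, 0.1],
    "coffee maker":  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recommend(query_vec, k=2):
    """Catalog items ranked by similarity to the customer's query embedding."""
    ranked = sorted(catalog, key=lambda item: cosine(query_vec, catalog[item]), reverse=True)
    return ranked[:k]

# A customer browsing athletic footwear lands near the shoe embeddings:
print(recommend([0.85, 0.15, 0.05]))  # ['running shoes', 'trail shoes']
```

The same ranking can be merged with ordinary keyword search results, which is how personalized suggestions end up sitting alongside exact matches.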
While well-known LLMs such as OpenAI’s GPT-4, Anthropic’s Claude, and Meta’s Llama 2 can process large amounts of data to generate seemingly perceptive outputs, they aren’t as well-suited to understanding the specific issues faced by a given business or the meaning of medical terms.
Smaller language models, including many available through Hugging Face, offer the ability to narrow the types of data they ingest, their output, and the power they use, creating solutions that can scale to search a million documents or assist a million customers. They can also be incorporated into AI suites that provide a collection of tailored, efficient solutions rather than one big, unwieldy LLM.
What C-Suite Leaders Should Do (Trust) Next
Moving forward, business adoption of AI won’t be one-size-fits-all: Every business will focus on efficiency, selecting the best and least expensive tool to get the job done properly. That means picking the right-sized model for each project, whether that’s a general-purpose LLM or a smaller, domain-specific model, as businesses determine which will deliver better results, require fewer resources, and reduce the need for data to migrate to the cloud.
Given the current state of public confidence in AI-generated answers, it’s clear trusted AI and data will be mandatory for the next wave of business solutions. “When you think about training AI models,” said McMillan, “they must be built on the foundation of great data. That’s what we’re all about, providing that trusted data set and then providing the analytics capabilities so clients, and their customers, can trust the outputs.”
In a world that needs higher accuracy and efficiency more than ever before, smaller and domain-specific LLMs offer another option for delivering results companies and the broader public can rely upon. Leaders who continue to invest in their learning journeys will be able to accelerate their company’s AI optimization and become more competitive in their specific market sectors. Enjoy the journey.