
Teradata simplifies real-world genAI and accelerates business value.

Teradata has recently announced significant enhancements to its VantageCloud Lake and ClearScape Analytics offerings. These upgrades enable enterprises to seamlessly implement generative AI (genAI) use cases, allowing them to realize immediate returns on investment (ROI).

Teradata Unveils New Features for VantageCloud Lake and ClearScape Analytics


As organizations move from theoretical to practical use of generative AI, demand is growing for a more holistic AI strategy built around use cases known to deliver tangible business benefits. Notably, 84% of executives anticipate ROI from AI initiatives within a year. With rapid advances in large language models (LLMs), alongside the rise of small and mid-sized models, AI providers can now offer tailored open-source models that remain versatile across applications while avoiding the costs and complexity of larger frameworks.

Introducing Bring-Your-Own LLM (BYO-LLM)

The introduction of the bring-your-own LLM (BYO-LLM) feature allows Teradata’s clients to leverage smaller or mid-sized open LLMs, including models specific to particular domains. These models not only facilitate easier deployment but also represent a more cost-effective solution overall. Teradata’s new offerings ensure that LLMs are brought directly to the data, minimizing the need for substantial data movement and enhancing security, privacy, and trust.

Furthermore, Teradata now gives clients the flexibility to use either GPUs or CPUs depending on the size and complexity of the LLM. For tasks that demand speed and performance at scale, such as inferencing and model fine-tuning, GPUs can significantly improve efficiency. Both capabilities will be available through VantageCloud Lake. Additionally, Teradata’s concurrently announced partnership with Nvidia incorporates the Nvidia full-stack accelerated computing platform, including Nvidia NIM microservices, part of Nvidia AI Enterprise. This integration is designed to speed trusted AI workloads at any scale.
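The GPU-or-CPU decision described above can be sketched as a simple routing rule. The thresholds below are hypothetical illustrations, not Teradata's actual placement logic: small models on CPUs, larger models or latency-sensitive inference on GPUs.

```python
def choose_compute(param_count_b: float, latency_sensitive: bool = False) -> str:
    """Pick a compute target for an LLM workload.

    Hypothetical rule of thumb: models under ~3B parameters run acceptably
    on CPUs for batch work; larger models, or any latency-sensitive
    inference, are routed to GPU clusters.
    """
    if param_count_b >= 3 or latency_sensitive:
        return "gpu"
    return "cpu"

print(choose_compute(1.1))                           # cpu: small model, batch workload
print(choose_compute(7))                             # gpu: 7B model
print(choose_compute(0.5, latency_sensitive=True))   # gpu: speed matters
```

In practice the cutoff would depend on quantization, batch size, and latency targets rather than a single parameter count.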

Advancements in Generative AI with Open Source LLMs

Organizations are beginning to recognize that the largest LLMs are not the best fit for every application and can carry prohibitive costs. The BYO-LLM feature lets users select the model best suited to their business requirements. According to Forrester, 46% of AI leaders plan to incorporate existing open-source LLMs into their generative AI strategies. With BYO-LLM, customers using VantageCloud Lake and ClearScape Analytics can easily access small and mid-sized LLMs from open-source hubs such as Hugging Face, which hosts a library of over 350,000 models.

The flexibility of smaller LLMs, often crafted for domain-specific applications, addresses valuable real-world challenges. For example:

  • Regulatory compliance: Financial institutions can utilize specialized open LLMs to pinpoint emails with potential regulatory consequences, significantly reducing their reliance on expensive GPU resources.
  • Healthcare note analysis: Open LLMs provide capabilities to analyze doctor’s notes for streamlined information extraction, improving patient care while safeguarding sensitive data.
  • Product recommendations: Implementing LLM embeddings along with in-database analytics from Teradata ClearScape Analytics enhances recommendation systems for businesses.
  • Customer complaint analysis: Open LLMs facilitate the examination of complaint topics, sentiments, and summaries, paving the way for enriched customer insights that improve resolution strategies.
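The product-recommendation pattern above can be sketched generically: given item embeddings (here hard-coded toy 3-d vectors standing in for vectors an open LLM would generate and that would be stored in-database), recommend the nearest catalog items to a query by cosine similarity. All names and numbers are illustrative, not a Teradata API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(query_vec, catalog, k=2):
    """Return the k catalog items most similar to the query embedding."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy embeddings standing in for LLM-generated vectors.
catalog = {
    "running shoes": [0.9, 0.1, 0.0],
    "trail shoes":   [0.8, 0.2, 0.1],
    "blender":       [0.0, 0.1, 0.9],
}
print(recommend([1.0, 0.0, 0.0], catalog))  # → ['running shoes', 'trail shoes']
```

In a production setup the similarity search would run in-database over real embedding columns rather than in application code.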

GPU Analytic Clusters for Enhanced Performance

By integrating Nvidia’s full-stack accelerated computing capabilities into VantageCloud Lake, Teradata is set to boost the performance of LLM inferencing, offering clients greater value and cost efficiency, particularly for complex or larger models. Nvidia’s accelerated computing is adept at processing vast volumes of data and executing calculations rapidly, which is essential for inference workloads. A practical healthcare application is the automated summarization of doctor’s notes, freeing medical professionals to spend more time with patients.

Moreover, VantageCloud Lake supports model fine-tuning utilizing GPUs, thus granting clients the ability to refine pre-trained language models with their organization’s specific datasets. This customization not only elevates model accuracy but also increases efficiency, since organizations won’t need to start the training process from the ground up. For instance, a mortgage advisory chatbot could be fine-tuned with financial terminology to improve its responses, demonstrating how organizations can enhance model adaptability and reusability through accelerated computing.
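One reason fine-tuning a pre-trained model is so much cheaper than training from scratch is that only a small fraction of weights needs updating. The back-of-the-envelope arithmetic below uses LoRA-style low-rank adapters as one common technique (the 7B model and its dimensions are toy assumptions, not details from the announcement): each adapted square weight matrix gains two rank-r factors of d_model × r parameters each.

```python
def lora_params(d_model: int, n_layers: int, rank: int,
                matrices_per_layer: int = 4) -> int:
    """Trainable parameters for low-rank adapters:
    two rank-r factors (each d_model x r) per adapted matrix."""
    return n_layers * matrices_per_layer * (2 * d_model * rank)

full = 7_000_000_000  # assumed 7B-parameter base model
adapters = lora_params(d_model=4096, n_layers=32, rank=8)
print(adapters)                                    # 8,388,608 trainable parameters
print(f"{adapters / full:.2%} of the base model")  # ~0.12%
```

Updating roughly a tenth of a percent of the weights is why a mortgage-advisory chatbot can be adapted to financial terminology without repeating the original training run.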

Service Availability

The ClearScape Analytics BYO-LLM feature for Teradata VantageCloud Lake is set to become generally available on AWS this October, with plans to expand to Azure and Google Cloud in the first half of 2025. VantageCloud Lake will first launch with Nvidia AI accelerated compute on AWS in November. The integration of inference features is expected in Q4, with fine-tuning capabilities to follow in the first half of 2025.
