VAST Data has unveiled the full vision for the company, introducing a transformative data computing platform designed to be the foundation of AI-assisted discovery. The VAST Data Platform is VAST’s global data infrastructure offering, unifying storage, database and virtualized compute engine services in a scalable system built from the ground up for the future of AI.
While generative AI and Large Language Models (LLMs) have introduced the world to the early capabilities of artificial intelligence, today’s LLMs are largely limited to routine tasks such as business reporting or reciting information that is already known. The true promise of AI will be realized when machines can recreate the process of discovery by capturing, synthesizing and learning from data, achieving in days a level of specialization that once took decades.
The era of AI-driven discovery will accelerate humanity’s quest to solve its biggest challenges. AI can help industries find treatments for diseases and cancers, forge new paths to tackle climate change, pioneer revolutionary approaches to agriculture, and uncover new fields of science and mathematics that the world has not yet even considered.
As such, enterprises are increasingly turning their focus to AI applications. While organizations can stitch together technologies from disparate public or private cloud offerings, customers require a data platform that simplifies data management and processing into one unified stack.
Today’s data platforms have become popular with global enterprises by dramatically reducing infrastructure deployment complexity for business intelligence and reporting applications, but they were not built to meet the needs of new deep learning applications. The next generation of AI infrastructure must deliver parallel file access, GPU-optimized performance for neural network training and inference on unstructured data, and a global namespace spanning hybrid multi-cloud and edge environments, all unified within one easy-to-manage offering that enables federated deep learning.
The foundation of this next era of AI computing can only be built by resolving fundamental infrastructure tradeoffs that have previously limited applications from computing on and understanding datasets from global infrastructure in real time. To bring deep learning to data, VAST Data is today introducing the VAST Data Platform.
The VAST Data Platform was built with the entire spectrum of natural data in mind – unstructured and structured data types in the form of video, imagery, free text, data streams and instrument data – generated from all over the world and processed against an entire global data corpus in real time. This approach aims to close the gap between event-driven and data-driven architectures by providing the ability to:
- Access and process data in any private or major public cloud data centre
- Understand natural data by embedding a queryable semantic layer into the data itself
- Continuously and recursively compute on data in real time, evolving with each interaction
For more than seven years, VAST has been building toward a vision that puts data – natural data, rich metadata, functions and triggers – at the centre of the VAST Disaggregated Shared-Everything (DASE) distributed systems architecture. DASE lays the data foundation for deep learning by eliminating tradeoffs between performance, capacity, scale, simplicity and resilience, making it possible to train models on all of an enterprise’s data. By allowing customers to add logic to the system, the platform enables machines to continuously and recursively enrich and understand data from the natural world.
To capture and serve data from the natural world, VAST first engineered the foundation of its platform, the VAST DataStore, a scalable storage architecture for unstructured data that eliminates storage tiering. Exposing enterprise file storage and object storage interfaces, the VAST DataStore is an enterprise network attached storage platform built to meet the needs of today’s powerful AI computing architectures, such as NVIDIA DGX SuperPOD AI supercomputers, as well as big-data and HPC platforms.
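The DataStore's key property above is that one dataset is reachable through both file and object interfaces (in practice via standard protocols such as NFS/SMB and S3, not the toy API below). The following stdlib-only sketch is purely illustrative and every name in it is hypothetical; it only shows the idea of a single backing store served through a hierarchical file-style view and a flat object-style view:

```python
import tempfile
from pathlib import Path


class UnifiedStore:
    """Toy stand-in for a store exposing one dataset through two access
    styles. Illustrative only; this is not VAST's actual API."""

    def __init__(self, root: str):
        self.root = Path(root)

    # File-style access: hierarchical paths.
    def write_file(self, relpath: str, data: bytes) -> None:
        path = self.root / relpath
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def read_file(self, relpath: str) -> bytes:
        return (self.root / relpath).read_bytes()

    # Object-style access: flat keys over the same backing data.
    def put_object(self, key: str, data: bytes) -> None:
        self.write_file(key, data)

    def get_object(self, key: str) -> bytes:
        return self.read_file(key)


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        store = UnifiedStore(root)
        store.write_file("training/images/cat.bin", b"\x00\x01")
        # The same bytes are visible through the object interface.
        assert store.get_object("training/images/cat.bin") == b"\x00\x01"
```

Eliminating tiering means both views sit on the same flash-backed data, so a training job reading via the file interface and an ingest pipeline writing via the object interface see one consistent dataset.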
The exabyte-scale DataStore is built with best-in-class system efficiency to bring archive economics to flash infrastructure – making it also suitable for archive applications. Resolving the cost of flash storage has been critical to laying the foundation for deep learning for enterprise customers as they look to train models on their proprietary data assets. To date, VAST has managed more than ten exabytes of data globally with leading customers including Booking.com, NASA, Pixar Animation Studios, Zoom Video Communications, Inc., and many others.
To apply structure to unstructured natural data, VAST has added a semantic database layer natively into the system with the introduction of the VAST DataBase. It applies a first-principles simplification of structured data management, combining the characteristics of a database, a data warehouse and a data lake in one simple, distributed and unified database management system. In doing so, VAST has resolved the tradeoff between transactions (capturing and cataloguing natural data in real time) and analytics (analyzing and correlating data in real time). Designed for rapid data capture and fast queries at any scale, the VAST DataBase is the first system to break the barriers of real-time analytics from the event stream all the way to the archive.
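The transactions-plus-analytics pattern described above can be illustrated with Python's built-in `sqlite3` as a stand-in (the schema and data here are hypothetical, and the VAST DataBase itself is a distributed system with its own interfaces): the same table receives row-at-a-time transactional inserts as events arrive, and immediately serves analytical aggregate queries over the fully current data.

```python
import sqlite3

# Stand-in for a system that handles both capture and analytics.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sensor_events (sensor_id TEXT, ts REAL, reading REAL)"
)

# Transactional capture: each event is committed as it arrives.
for event in [("s1", 1.0, 20.5), ("s1", 2.0, 21.0), ("s2", 1.5, 19.0)]:
    with conn:  # each insert runs in its own transaction
        conn.execute("INSERT INTO sensor_events VALUES (?, ?, ?)", event)

# Analytical query over the same, up-to-the-moment data.
rows = conn.execute(
    "SELECT sensor_id, AVG(reading) FROM sensor_events "
    "GROUP BY sensor_id ORDER BY sensor_id"
).fetchall()
print(rows)  # → [('s1', 20.75), ('s2', 19.0)]
```

In conventional stacks these two workloads live in separate systems (an OLTP store feeding a warehouse via ETL), which is the latency gap the DataBase claims to close.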
With a foundation for synthesized structured and unstructured data, the VAST Data Platform then makes it possible to refine and enrich raw unstructured data into structured, queryable information with the addition of support for functions and triggers. The VAST DataEngine is a global function execution engine that consolidates data centres and cloud regions into one global computational framework. The engine supports popular programming languages, such as SQL and Python, and introduces an event notification system, as well as materialized and reproducible model training, to make AI pipelines easier to manage.
The final element of the VAST Data Platform strategy is the VAST DataSpace, a global namespace that allows data to be stored, retrieved and processed from any location with high performance, while enforcing strict consistency across every access point. With the DataSpace, the VAST Data Platform is deployable in on-premises data centres and edge environments, and now extends into leading public cloud platforms including AWS, Microsoft Azure and Google Cloud.
This global, data-defined computing platform takes a new approach to marrying unstructured data with structured data by storing, processing and distributing that data from a single, unified system.
For enterprise AI and LLM systems to drive new discoveries and understandings, they require:
- Direct access to the natural world through the VAST DataSpace, eliminating reliance on slow and inaccurate translations
- The ability to store immense amounts of natural unstructured data in an accessible manner, through the VAST DataStore
- The intelligence to transform unstructured raw data into an understanding of its underlying characteristics, through the VAST DataEngine
- And finally, a way to build on all of an organization’s global knowledge, query it, and generate a better understanding of it, through the VAST DataBase
“We’ve been working toward this moment since our first days, and we’re incredibly excited to unveil the world’s first data platform built from the ground up for the next generation of AI-driven discovery,” said Renen Hallak, CEO and Co-Founder at VAST Data. “Encapsulating the ability to create and catalogue understanding from natural data on a global scale, we’re consolidating entire IT infrastructure categories to enable the next era of large-scale data computation. With the VAST Data Platform, we are democratizing AI abilities and enabling organizations to unlock the true value of their data.”
The VAST DataStore, DataBase and DataSpace are generally available within the VAST Data Platform today, and the VAST DataEngine will be made available in 2024.