After many decades of research, Artificial Intelligence (AI) and Machine Learning (ML) academic work has, in the last couple of years, finally led to technology feasible for enterprise and even industrial production deployments, capable of addressing many problems that previously seemed impossible to solve. The availability of vast quantities of data and the exponential growth of computing resources enabled this fundamental transition.
We don’t program computers anymore… we train them!
The development of ML models, which predict future parameters or recognize objects, relies on the availability and accessibility of well-prepared structured and unstructured data. The size of the training and verification datasets directly influences the quality of the resulting ML model. In practice, the availability of large quantities of high-quality data is the key factor in making most ML technology work. The data must be collected, prepared, and made accessible to the compute infrastructure, and it has to be archived for future use. Each of these stages places significant but different demands on the storage infrastructure. In many cases, each data processing stage is performed at multiple locations: typically, the data is first generated and collected at the edge of the network and subsequently pushed to distributed datacenters or to the cloud for processing and long-term storage. The exact infrastructure architecture and workflow are defined by the specific use case, which makes flexibility and modularity vital for any solution.
One of the most extensive and highly visible AI/ML use cases is Autonomous Vehicle (AV) development. In this case, the initial data is collected by multiple cameras, LiDARs, and radars installed on test vehicles deployed in regions where the AVs will operate in the future. Each vehicle collects terabytes of data per hour, which needs to be stored reliably and offloaded to a datacenter at a later time. The data collected at the datacenter quickly grows to tens and even hundreds of petabytes. Data is collected at multiple locations and processed locally, or aggregated either in a primary datacenter or in the public cloud. This process is extremely time-consuming and expensive, making efficient storage and data management solutions critical for the business.
A new reference architecture to cover all phases of the complex AV development process
To address these challenges for AV developers, Quantum drew on decades of unstructured data management experience and industry-leading technology to develop a new ADAS/AV data management reference architecture that delivers unmatched flexibility and performance. It describes the only single-vendor solution on the market that covers all phases of the complex AV development process, including in-vehicle data collection, data preparation, ML model training, system simulation, HiL testing, long-term storage, and archiving. The solution includes the new Quantum R6000 in-vehicle, high-performance, high-capacity, ruggedized storage appliance, which is deployed in the trunk of the test car and collects multistream data at over 10 GBytes/sec. Its compact size and removable storage canister with up to 120 TBytes of reliable storage make it ideal for any data load in any environment.
The R6000 enables fast offload of the collected data to the datacenter infrastructure, where it is placed under the management of the Quantum StorNext File System, the world’s fastest file system for video workloads. StorNext manages the data stored on industry-leading NVMe, HDD, object storage, and tape appliances. It controls multiple storage tiers and automatically places data at the tier and the location that provide the necessary performance and capacity at the best price. The capacity and performance of each storage tier scale virtually without limit.
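To illustrate the idea of automated tier placement, the sketch below shows a simplified policy that routes files between NVMe, HDD, object storage, and tape based on how recently they were accessed. The tier names, thresholds, and `FileInfo` fields are hypothetical assumptions chosen for the example; they do not represent StorNext's actual policy engine or API.

```python
# Illustrative sketch of a storage-tiering policy: hot data on NVMe,
# warm data on HDD, cold data on object storage, archives on tape.
# All names and thresholds here are assumptions, not product behavior.
from dataclasses import dataclass


@dataclass
class FileInfo:
    path: str
    size_gb: float
    days_since_access: int


def select_tier(f: FileInfo) -> str:
    """Pick a storage tier for a file based on access recency."""
    if f.days_since_access <= 7:
        return "nvme"    # active training/simulation data
    if f.days_since_access <= 30:
        return "hdd"     # recently used, still warm
    if f.days_since_access <= 365:
        return "object"  # cold, occasionally re-read
    return "tape"        # long-term archive


# Example: a drive log collected three days ago stays on the fast tier.
drive_log = FileInfo("drive_log_001.bag", 512.0, 3)
print(select_tier(drive_log))  # -> nvme
```

In a real system, a policy like this would also weigh file size, project priority, and per-tier cost, and the data mover would run continuously in the background rather than on demand.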
Among the core principles followed during the design of the Quantum reference architecture are modularity and open interfaces. Each of the components supporting the storage tiers is optional and interchangeable, and all major industry interoperability standards are supported. This allows easy integration with existing datacenter storage infrastructure or with cloud instances from AWS, GCP, or Azure.
A blueprint for AV and AI/ML development organizations
The new Quantum reference architecture provides a blueprint for AV and other industrial AI/ML development organizations of any size and at any stage of development to start and grow on a solid technology base, with the support of one of the leading data management companies. Tapping into Quantum's decades of knowledge, together with the highest-performing, most reliable technology on the market, provides a massive advantage in the race to develop the next generation of autonomous vehicles and industrial robots.
To learn more about Quantum’s ADAS and mobility solutions, visit our autonomous vehicles page.
To learn more about the Quantum R-Series Edge Storage range, visit the R-Series product page.