The use of artificial intelligence (AI) in IT infrastructures has gained prominence over the past few years. Organizations across practically all industry verticals are adopting AI to meet business challenges and exceed customer expectations. But what qualities exactly does a data center colocation partner need when accommodating your AI-enabled IT infrastructure?
The use of artificial intelligence in IT infrastructures is growing at a fast pace. In its latest Worldwide Artificial Intelligence Systems Spending Guide, IDC forecast that spending on AI systems will reach $97.9 billion (about €85.8 billion) in 2023. According to IDC, this figure is more than two and a half times the $37.5 billion spent in 2019.
Organizations are increasingly outsourcing their IT infrastructures to professional data center services operators, but are these colocation providers adequately equipped to house your AI-defined IT infrastructures? Some of them are; some may not be.
Processing Power, NVIDIA
An AI-ready colocation provider must, first of all, be capable of facilitating massive amounts of processing power. This means the data center colocation environment on offer must be able to support high-density workloads while meeting demanding power requirements.
The precise impact of AI on data center infrastructures can best be illustrated by taking a closer look at the latest AI-enabling technologies, such as those from NVIDIA. In May this year, NVIDIA launched its A100 graphics processor. Based on the company’s new Ampere architecture, this GPU will power servers used by practically all leading server vendors and cloud services providers (CSPs), including Dell Technologies, HPE, Lenovo, Asus, Fujitsu, Inspur, Cisco, Atos, Supermicro, Quanta/QCT, AWS, Microsoft Azure, Google, and others. The fact that all these well-known server vendors are embracing NVIDIA’s AI-enabling technology means that the use of artificial intelligence in the data center environment is hardly possible to ignore.
NVIDIA’s DGX A100 system can even combine eight of these A100 GPUs into a super-GPU that works as one giant processor. A single DGX A100 is capable of delivering 5 petaflops of artificial intelligence (AI) performance, packing the power and capabilities of an entire data center into one system. Colocating AI-enabling technologies like these requires high-density options as well as flexibility and scalability in the power densities supported.
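To make the power-density point concrete, here is a back-of-the-envelope sketch of how many DGX-class systems a single cabinet’s power budget can host. The ~6.5 kW figure is NVIDIA’s published maximum system power for the DGX A100; the rack power budgets are illustrative assumptions, not figures from any specific facility.

```python
# Back-of-the-envelope rack power estimate for DGX-class AI systems.
# DGX_A100_MAX_KW is NVIDIA's published maximum system power draw;
# the rack budgets in the loop below are illustrative assumptions.

DGX_A100_MAX_KW = 6.5  # maximum system power draw, in kW


def systems_per_rack(rack_budget_kw: float,
                     system_kw: float = DGX_A100_MAX_KW) -> int:
    """How many whole systems fit within a rack's power budget."""
    return int(rack_budget_kw // system_kw)


for budget_kw in (5, 10, 20, 40):  # legacy vs. high-density rack budgets
    n = systems_per_rack(budget_kw)
    print(f"{budget_kw:>2} kW rack: {n} DGX A100 system(s)")
```

Note that a traditional 5 kW cabinet cannot host even one such system, which is exactly why high-density colocation options matter for AI workloads.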
Automotive, Telecom, AdTech/MarTech
If you still think that artificial intelligence and the accompanying equipment are primarily the domain of hyperscalers, without much impact on regular colocation demand, you might want to reconsider. Yes, it WAS the domain of the hyperscalers. Not anymore, at least not exclusively. Hyperscalers have been pursuing artificial intelligence and machine learning at scale for quite a while already. Now, as confirmed by market research figures from IDC, the use of artificial intelligence in the data center environment is taking off in a broader sense.
The rise of AI and the use of GPU computing hardware, with super-GPUs now emerging, will definitely set certain expectations for the power density offered by colocation data centers. As mentioned, newly launched AI-enabling hardware such as NVIDIA’s is packing more compute power into data center equipment. And while NVIDIA currently dominates the market for artificial intelligence chips, several other chip vendors are also focused on delivering power-hungry AI chips, including Intel, AMD, and a variety of startups such as Mythic, Graphcore, SambaNova Systems and Wave Computing. Driven by artificial intelligence use cases, servers and storage hardware will thus further boost power density requirements inside colocation cabinets in the coming years.
The automotive industry is a good example of a vertical now adopting AI at a rapid pace. Last year, Volvo Group and NVIDIA signed a contract to develop an advanced AI platform for autonomous trucks. And in June this year, Mercedes-Benz and NVIDIA announced a partnership to launch software-defined, intelligent vehicles using “end-to-end” NVIDIA technology.
Most analysts expect the data requirements of self-driving vehicles will be split. Some duties will be managed by powerful on-board computers. Other duties will be offloaded to external colocation data centers for additional data-crunching and storage.
Telecommunications is another vertical adopting artificial intelligence in its operations at a rapid pace. Telecom operators already use AI in many aspects of their businesses, from enhancing their customers’ experiences to improving network reliability and establishing predictive maintenance.
maincubes’ colocation data centers in Frankfurt (FRA01) and Amsterdam (AMS01) are home to a variety of companies with AI-enabling IT infrastructures. These clients include well-known DAX-listed companies in automotive and telecom, deployed in maincubes FRA01, as well as customers in the AdTech/MarTech segment. Global AdTech/MarTech company RTB House is a good example, with its IT infrastructure for the European market deployed in the maincubes AMS01 facility in Amsterdam. Its unique, proprietary ad-buying engine is powered entirely by deep learning algorithms. To accommodate its AI technology efficiently, effectively and flexibly, RTB House has embraced OCP (Open Compute Project) technology for its data center setup. It can also utilize the European OCP Experience Center available in maincubes AMS01.
Energy Efficiency, Redundancy, Continuity
AI requirements certainly place different demands on colocation data centers than traditional workloads do. As AI takes off in enterprise settings, so will data center power usage. And with increased power usage comes the need to control it. Colocation data centers able to facilitate these power-hungry AI applications must be equipped to deal with demanding power requirements. Think of measures aimed at energy efficiency, cooling efficiency, power redundancy, and flexibility/scalability of the power infrastructure.
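A common way to quantify the energy efficiency mentioned above is Power Usage Effectiveness (PUE): total facility energy divided by the energy consumed by the IT equipment itself. The sketch below shows the standard formula; the sample figures are illustrative assumptions, not measurements from any specific facility.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt goes to IT load; real facilities are
# higher because of cooling, power distribution losses, lighting, etc.


def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh


# Illustrative example: a facility drawing 1,500 kWh overall
# for every 1,000 kWh consumed by the IT equipment.
print(round(pue(1500, 1000), 2))  # -> 1.5
```

The lower the PUE, the less overhead energy (mostly cooling) is spent per unit of useful IT work, which is why efficient cooling matters so much for dense AI racks.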
While maincubes offers all of these features for AI use cases in its colocation data centers in Frankfurt and Amsterdam, including maximum redundancy with 100% uptime guarantees, another interesting example of cooling technology for artificial intelligence applications is liquid cooling. In close cooperation with immersion cooling scale-up Asperitas, maincubes offers liquid cooling solutions in specially equipped colocation suites of AMS01, the maincubes data center in Amsterdam. Asperitas’ immersion cooling technology makes high-density configurations much more energy efficient. As a result, energy-intensive hardware implementations can continue to operate at the highest possible load factor. At the same time, this innovative cooling technology reduces the amount of data center space required for AI as well as HPC and machine learning workloads.
For maincubes, it is important to diversify its data center colocation services portfolio on an ongoing basis, to meet the evolving needs of all clients, including those with AI-enabled operations.