Nvidia Expands AI Capabilities with SchedMD Acquisition


Nvidia has acquired SchedMD and said it will continue to distribute that company's open-source Slurm software.

Slurm, a workload management system for high-performance computing (HPC) and artificial intelligence (AI), is used in more than half of the top 10 and top 100 systems on the TOP500 list of supercomputers, Nvidia said in a Monday (Dec. 15) blog post.

The two companies have been collaborating for over a decade, according to the post.

With the acquisition, Nvidia will continue to invest in Slurm's development "to ensure it remains the leading open-source scheduler for HPC and AI"; offer open-source software support, training and development for Slurm to SchedMD's customers; and develop and distribute Slurm as open-source, vendor-neutral software, while making it available to the broader HPC and AI community, according to the post.

"Nvidia will accelerate SchedMD's access to new systems, allowing users of Nvidia's accelerated computing platform to optimize workloads across their entire compute infrastructure, while also supporting a diverse hardware and software ecosystem so customers can run heterogeneous clusters with the latest Slurm innovations," Nvidia said in the post.

SchedMD CEO Danny Auble said in the release that the acquisition demonstrates the importance of Slurm's role in demanding HPC and AI environments.

"Nvidia's deep expertise and investment in accelerated computing will enhance the development of Slurm, which will continue to be open source, to meet the demands of the next generation of AI and supercomputing," Auble said.

In an earlier transaction, Nvidia said in April 2024 that it planned to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider.

That acquisition was finalized in January after being cleared by regulators.

When announcing that it had entered into a definitive agreement to acquire Run:ai, Nvidia said the deal would help customers make more efficient use of their AI computing resources.

"Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on premises, in the cloud or in hybrid environments," Nvidia said at the time in a blog post.

Nvidia CEO Jensen Huang said in November that Nvidia is working through "three huge platform shifts at once" as companies move from traditional computing to accelerated computing, from classical machine learning to generative AI, and now toward agentic systems that perform multistep tasks.


