Intel and Microsoft today announced the contribution of the Scalable I/O Virtualization (SIOV) specification to the Open Compute Project (OCP). Through this contribution, any organization can adopt SIOV and incorporate it into its products under an open, zero-cost license.
SIOV is a scalable, flexible approach to hardware-assisted I/O virtualization. Because it builds on existing PCI Express capabilities, it can be readily supported by compliant PCI Express (PCIe) and Compute Express Link (CXL) endpoint device designs and by the software ecosystem. The new specification enables efficient, mass-scale virtualization of I/O devices.
When adopted, the SIOV architecture will enable data center operators to deliver more cost-effective access to high-performance accelerators and other key I/O devices for their customers, while relieving I/O device manufacturers of the cost and programming burdens imposed by previous standards.
The new SIOV specification will be supported in the upcoming Intel Xeon Scalable processor (code-named Sapphire Rapids), Intel Ethernet 800 Series network controllers, and future PCIe and CXL devices and accelerators.
“Microsoft has long collaborated with silicon partners on standards as system architecture and ecosystems evolve. The Scalable I/O Virtualization specification represents the latest of our hardware open standards contributions together with Intel, such as PCI Express, Compute Express Link and UEFI. Through this collaboration with Intel and OCP, we hope to promote wide adoption of SIOV among silicon vendors, device vendors, and IP providers, and we welcome the opportunity to collaborate more broadly across the ecosystem to evolve this standard as cloud infrastructure requirements grow and change,” said Zaid Kahn, GM for Cloud and AI Advanced Architectures at Microsoft.
You can learn more about this spec here.