All IDCA News

19 Dec 2022

The New C600 from Graphcore Supports Both Floating-Point and Mixed-Precision Machine Intelligence

Graphcore, a British semiconductor firm, has launched the C600 PCIe card, which adds support for 8-bit floating-point (FP8) AI computation. The lower-precision format streamlines processing in AI training and inference applications, helping to accelerate AI development.

Beyond Graphcore, the FP8 format is backed by major players in the tech industry, including Arm, Intel, NVIDIA, and Qualcomm. The C600 delivers up to 560 teraflops of FP8 compute and 280 teraflops of FP16. It is a dual-slot card with a 185-watt thermal design power.
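The article does not specify which FP8 encoding the C600 uses; a commonly discussed variant is E4M3 (1 sign bit, 4 exponent bits, 3 mantissa bits). As a rough illustration of what 8-bit floating point implies for precision, the sketch below (an assumption, not Graphcore's implementation) rounds a Python float to the nearest value on a simplified E4M3-like grid, ignoring NaN handling and special encodings:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value representable in a simplified FP8
    E4M3-like format (1 sign, 4 exponent, 3 mantissa bits, bias 7).
    Illustrative sketch only: saturates at the max normal value (448)
    and skips NaN/special-value handling."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    max_normal = 448.0        # largest E4M3 normal: 1.75 * 2**8
    min_normal = 2.0 ** -6    # smallest normal with bias 7
    if mag >= max_normal:
        return sign * max_normal
    if mag < min_normal:
        # Subnormal range: fixed spacing of 2**-9.
        step = 2.0 ** -9
        return sign * round(mag / step) * step
    e = math.floor(math.log2(mag))
    step = 2.0 ** (e - 3)     # 3 mantissa bits -> 8 steps per binade
    return sign * round(mag / step) * step

# With only 3 mantissa bits, nearby values collapse onto a coarse grid:
print(quantize_e4m3(0.3))     # rounds to 0.3125
print(quantize_e4m3(1000.0))  # saturates at 448.0
```

The coarse grid is why FP8 is typically paired with scaling and mixed-precision accumulation in training and inference pipelines rather than used alone.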

“We are making the IPU available on a PCIe card in response to customer demand in markets where datacentre configurations, including rack size and power delivery, vary widely,” said Chen Jin, Graphcore China's VP and Head of Engineering.

“This highly versatile form factor enables Graphcore customers to tailor their system setup, including host server/chassis, to their exact requirements.”

The C600's IPU packs 1,472 processing cores and can run 8,832 independent program threads in parallel. Each IPU carries 900MB of on-chip SRAM, and up to eight of these cards can be interconnected in a single enclosure, with IPU-Links serving as the communication medium.

The United States recently broadened its restrictions on exporting advanced chips to China, which means Graphcore may have to modify its processors to comply with the new rules.
