FPGA designers turn to machine learning in the cloud

June 09, 2017

Even with all the attention focused on mobile-device apps for everything from ride sharing and Instagram to music streaming, the semiconductor industry still holds sway for passionate engineers who want to make a difference – even if the application is far removed from the consumer experience. One such example involves a couple of savvy engineers who recognized that data analytics could solve some of the more vexing FPGA design challenges.

A few trends around the growing use of FPGAs caught their attention. For one, engineering groups have renewed interest in FPGAs, highlighted most notably by Intel’s acquisition of Altera. A growing number of software developers recognize that FPGAs are ideal for implementing their algorithms, yet quickly discover that the devices are not easy to design for. The ongoing tussle among FPGAs, GPUs, and custom accelerators in data centers further illustrates the importance of FPGAs in high-performance heterogeneous computing platforms.

The challenges they targeted center on the timing and performance of complex IC designs. As FPGA devices and design flows grow more complex, the number and difficulty of critical timing and performance issues rise sharply.

Failure to achieve timing closure can delay a product’s time to market as engineering groups struggle to meet their design’s timing goals. Late-arriving requirements often demand more logic, and the resulting increase in logic density can further degrade timing. In the worst case, a larger device or a faster speed grade must be used to close timing, hurting a product’s profitability, especially at higher volumes.

Perhaps even more interesting is how the increasing use of FPGAs is driving hardware design to the cloud, because semiconductor companies often lack the compute resources in-house to handle new FPGA designs. The memory requirements of tools such as Intel’s Quartus Prime and Xilinx’s Vivado, for example, can make compiling large designs impractical on a desktop computer or workstation. Instead, engineering managers are turning to cloud computing services for faster results and reduced capital expenditures.

The final trend is the growing range of artificial intelligence and machine learning applications that support FPGA and ASIC design. An obvious target is timing closure. The resulting performance gains could benefit a variety of markets beyond data centers, such as advanced driver assistance systems (ADAS) and high-frequency trading (HFT).

Plunify engineers devised a solution that applies machine learning to close timing and optimize FPGA designs. It analyzes past compilation results to predict the best synthesis and place-and-route parameters and placement locations out of quadrillions of possible combinations. And, of course, the tool runs in the cloud or on a user’s own servers.
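
To make the idea concrete, here is a minimal sketch in Python of how learning from past compilations might work in principle. This is not Plunify’s actual implementation; the record schema, parameter names, and the choice of a random-forest model are all assumptions made for illustration.

# Hypothetical sketch: learn from past compile results to rank new
# synthesis/place-and-route parameter sets. Column names and model
# choice are illustrative assumptions, not a vendor's real schema.
import itertools
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Assumed training data: one row per past compilation, recording the
# tool parameters used and the worst negative slack (WNS) achieved.
history = pd.DataFrame([
    {"synth_effort": 1, "retiming": 0, "placer_effort": 1, "wns_ns": -0.42},
    {"synth_effort": 2, "retiming": 1, "placer_effort": 2, "wns_ns": -0.11},
    {"synth_effort": 3, "retiming": 1, "placer_effort": 3, "wns_ns": 0.05},
    {"synth_effort": 2, "retiming": 0, "placer_effort": 3, "wns_ns": -0.20},
])

features = ["synth_effort", "retiming", "placer_effort"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(history[features], history["wns_ns"])

# Enumerate candidate parameter sets and predict which are most likely
# to close timing (WNS >= 0), so only the most promising are compiled.
candidates = pd.DataFrame(
    [dict(zip(features, combo))
     for combo in itertools.product([1, 2, 3], [0, 1], [1, 2, 3])]
)
candidates["predicted_wns"] = model.predict(candidates[features])
print(candidates.sort_values("predicted_wns", ascending=False).head())

In practice a real flow would work with far more parameters and far more historical runs, but the principle is the same: use what previous compilations revealed to decide which parameter sets are worth the hours a full place-and-route run costs.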

Tools like this learn and infer which tool parameters, such as synthesis options, place-and-route options, and placement locations, are best for a given design. They use statistical modeling and machine learning to draw insights from the accumulated data and improve quality of results. Because the tools keep learning, the more an engineering group uses them, the smarter the learning database becomes, accelerating the path to design closure.
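
Continuing the hypothetical sketch above, the “learning database” aspect amounts to a feedback loop: every real compilation run is appended to the history and the model is refit, so predictions improve with use. The function below is an assumed, simplified version of that loop, reusing the pandas objects defined earlier.

# Hypothetical continuation of the sketch above: each finished compile
# run feeds the database, and the model is refit on the growing history.
def record_and_retrain(history, model, features, params, measured_wns_ns):
    """Append one finished compile run to the history and refit the model.

    params is a dict of the tool parameters actually used, and
    measured_wns_ns is the worst negative slack the tools reported.
    """
    row = dict(params, wns_ns=measured_wns_ns)
    history = pd.concat([history, pd.DataFrame([row])], ignore_index=True)
    model.fit(history[features], history["wns_ns"])
    return history, model

# Example: a new run with a promising parameter set just finished.
history, model = record_and_retrain(
    history, model, features,
    {"synth_effort": 3, "retiming": 1, "placer_effort": 2},
    measured_wns_ns=0.12,
)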

Kirvy Teo is the co-founder and vice president at Plunify. He graduated from the National University of Singapore with a BS degree in Computer Science and has more than 10 years of experience in web and application development. Teo speaks only English and Mandarin, but writes code in 10 languages.