Flex Logix Inc.
Mountain View, CA 94040 https://flex-logix.com/
Have you ever spent months evaluating a new technology, only to find that it really wasn't suited for the task you had in mind, and wished someone had warned you up front?
The AI inference market has changed dramatically in the last three or four years. Previously, edge AI didn't even exist; most inferencing took place in data centers, on supercomputers, or in government applications that were also generally large-scale computing projects.
InferX X1P1 PCIe board at $399-$499 brings datacenter-class throughput to lower-price-point edge servers; InferX X1P4 in 2021 will increase throughput 4x
Inferencing is a hot topic. Let's start with "what is inferencing?" Then, how do you benchmark inferencing?
This blog discusses how to benchmark inference accelerators so you can find the one best suited to your neural network.
FLEX LOGIX CO-FOUNDERS AWARDED INTERCONNECT PATENT FOR CONNECTING ANY KIND OF RAM BETWEEN eFPGA CORES TO CREATE APPLICATION-OPTIMIZED eFPGA - News, May 15, 2018
New RAMLinx Solution Enables the Integration of Any Size, Amount or Type of RAM in eFPGA Arrays