Flex Logix: Machine Learning and Embedded FPGA IP

Jul. 18, 2018 – Machine learning-based applications have become prevalent across consumer, medical, and automotive markets. Still, the underlying architectures and implementations are evolving rapidly to best fit the throughput, latency, and power-efficiency requirements of an ever-increasing application space. Although ML is often associated with the massively parallel compute engines of GPU hardware, the opportunities for ML designs extend to cost-sensitive, low-power markets. The implementation of an ML inference engine on an SoC is a great fit for these applications – this article (very briefly) reviews ML basics, and then highlights what the embedded FPGA team at Flex Logix is pursuing in this area.

Machine learning refers to the capability of an electronic system to:

  1. receive an existing dataset of input values ("features") and corresponding output responses;
  2. develop an algorithm that computes the output responses with low error ("training"); and
  3. deploy that algorithm to accept new inputs and compute new outputs, with accuracy comparable to that achieved on the training dataset ("inference").
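The train-then-infer flow above can be sketched with a toy example. The following is an illustrative sketch only (the dataset, function names, and the choice of ordinary least squares on a 1-D linear model are assumptions for demonstration, not anything specific to Flex Logix hardware):

```python
def train(xs, ys):
    """Step 1-2: fit y = w*x + b to the dataset by closed-form
    least squares ("training")."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - w * mean_x
    return w, b

def infer(model, x):
    """Step 3: apply the trained model to a new input ("inference")."""
    w, b = model
    return w * x + b

# Training dataset: features and corresponding output responses (y = 2x + 1).
model = train([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(infer(model, 4.0))  # new input -> 9.0
```

In deployment, the (relatively expensive) `train` step runs offline, while only the lightweight `infer` step needs to run on the target device, which is why inference engines map well onto cost- and power-constrained SoCs.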




© 2018 Design And Reuse

All Rights Reserved.
