
Neural Network Inference Engine IP Core Delivers >10 TeraOPS per Watt

VeriSilicon Expands Leadership in Deep Neural Network Processing with Breakthrough NN Compression Technology; VIP8000 NN Processor Scales from 0.5 to 72 TeraOPS

Nuremberg, Germany, Feb. 27, 2018 – VeriSilicon Holdings Co., Ltd. (VeriSilicon) today announced that it has achieved significant milestones for its versatile and highly scalable neural network inference engine family, the VIP8000.

"The biggest thing to happen in the computer industry since the PC is AI and machine learning; it will truly revolutionize, empower, and improve our lives. It can be done in giant machines from IBM and Google, and in tiny chips made with VeriSilicon's neural network processors," said Dr. Jon Peddie, president of Jon Peddie Research. "By 2020 we will wonder how we ever lived without our AI assistants," he added.

Machine learning and neural network processing represent the next major market opportunity for embedded processors. The International Data Corporation (IDC) forecasts that spending on AI and machine learning will grow from $8B in 2016 to $47B by 2020. With the release of the latest generation of its NN inference IP, VeriSilicon establishes itself as a significant driver of growth in this category. The industry-leading top-end performance of the Vivante VIP8000 processor continues to expand the application space from always-on battery-powered IoT clients to AI server farm applications.
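The IDC figures cited above imply a steep compound annual growth rate. As a quick check (a back-of-envelope calculation from the release's own numbers, not an IDC-published rate):

```python
# CAGR implied by the IDC forecast quoted in the release:
# $8B (2016) growing to $47B (2020), i.e. over a 4-year span.
start_usd_b, end_usd_b, years = 8.0, 47.0, 4

cagr = (end_usd_b / start_usd_b) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 56% per year
```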

VeriSilicon's latest updates to the VIP8000 are specifically designed to accelerate neural network model inference with greater efficiency and speed while slashing memory bandwidth requirements compared to alternative DSP, GPU, and CPU hybrid processor approaches. The fully programmable VIP8000 processors reach the performance and memory efficiency of dedicated fixed-function logic while retaining the customizability and future-proofing of full programmability in OpenCL, OpenVX, and a wide range of NN frameworks (TensorFlow, Caffe, AndroidNN, ONNX, NNEF, etc.). The VIP8000 NN architecture can handle a wide range of AI workloads while optimizing memory management of the data that flows through the processor.
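To see why on-chip data management matters so much for NN inference, consider a toy traffic model for a single convolution layer. This is an illustrative sketch with made-up layer dimensions, not VeriSilicon's actual tiling scheme or numbers; it only shows the general effect of keeping weights resident on-chip instead of refetching them per tile:

```python
# Back-of-envelope DRAM traffic estimate for one 3x3 convolution layer.
# All dimensions and the tile count are illustrative assumptions.

def conv_dram_traffic_bytes(h, w, c_in, c_out, k=3, bytes_per_elem=1,
                            weights_cached_on_chip=False, n_tiles=1):
    """Estimate external-memory traffic for a single conv layer."""
    ifm = h * w * c_in * bytes_per_elem        # input feature map, read once
    ofm = h * w * c_out * bytes_per_elem       # output feature map, written once
    weights = k * k * c_in * c_out * bytes_per_elem
    # Without on-chip caching, weights are refetched for every spatial tile.
    weight_traffic = weights if weights_cached_on_chip else weights * n_tiles
    return ifm + ofm + weight_traffic

naive = conv_dram_traffic_bytes(56, 56, 64, 64, n_tiles=16)
cached = conv_dram_traffic_bytes(56, 56, 64, 64, weights_cached_on_chip=True)
print(f"naive: {naive / 1e6:.2f} MB, cached: {cached / 1e6:.2f} MB")
```

Even this crude model shows external traffic dropping by more than half once weight refetches are eliminated, which is the kind of saving that tiling/caching techniques target.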

Not only does VeriSilicon's NN engine outperform traditional DSP, GPU, and CPU hybrid systems, it is also industry-proven and has been shipping to licensees as a ready IP core for more than 18 months. In 2017 alone, 10 major ASIC developers selected VIP after rigorous benchmarking of both competing IP solutions and SoCs. VeriSilicon has been successful in licensing to a wide range of end customers, with applications ranging from ADAS and autonomous vehicles, security surveillance, home entertainment, and imaging to dedicated ASICs for servers.

The VIP8000 NN processor achieves the industry's highest performance and energy efficiency levels and is the most scalable platform on the market. The NN engine can range from 0.5 to 72 TeraOPS, with power efficiency of more than 10 TeraOPS per Watt, based on a recent 14nm implementation of the IP. The introduction of new Hierarchical Compression, Software Tiling/Caching, Pruning, Fetch Skipping, and patent-pending Layer Merging technologies further reduces memory bandwidth requirements for the VIP8000 relative to other processor architectures.
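The quoted figures imply a very wide power envelope. Assuming the 10 TeraOPS/W efficiency held across the whole scaling range (an assumption for illustration; the release only ties the figure to a 14nm implementation), the implied peak power at each end of the range would be:

```python
# Figures from the release: 0.5-72 TeraOPS scaling range, >10 TeraOPS/W (14nm).
# Assumption: the same efficiency applies at both ends of the range.

def implied_power_watts(tera_ops, tera_ops_per_watt=10.0):
    """Peak power implied by a throughput figure at a given efficiency."""
    return tera_ops / tera_ops_per_watt

low = implied_power_watts(0.5)    # ~0.05 W: always-on IoT territory
high = implied_power_watts(72.0)  # ~7.2 W: server-accelerator territory
print(f"Implied envelope: {low * 1000:.0f} mW to {high:.1f} W")
```

That span, from tens of milliwatts to several watts, is consistent with the release's claim of covering both battery-powered IoT clients and server farm applications with one architecture.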

"AI is everywhere. With patent-pending neural network compression technology, the VIP8000 family efficiently delivers the performance that accelerates the adoption of AI in embedded products. We are deeply engaged with leading customers ranging from deeply embedded to edge server products," said Weijin Dai, Chief Strategy Officer, Executive Vice President, and GM of VeriSilicon's Intellectual Property Division. "Applications and algorithms to address these challenges are rapidly advancing, and we are combining AI technology with VeriSilicon's extensive IP portfolio to deliver breakthrough solutions to our customers. AI needs to deliver value efficiently."

VeriSilicon supports a wide range of NN frameworks and networks (TensorFlow, Caffe, AndroidNN, Amazon Machine Learning, ONNX, NNEF, AlexNet, VGG16, GoogLeNet, Yolo, Faster R-CNN, MobileNet, SqueezeNet, ResNet, RNN, LSTM, etc.) and also provides numerous software and hardware solutions to enable developers to create high-performance Neural Network models and machine-learning-based applications.

VeriSilicon at Embedded World 2018

Learn more about the VIP8000 NN and related VeriSilicon IP, NN ecosystem solution development partners, custom silicon and advanced packaging (SiP) turnkey services at Embedded World 2018 in Nuremberg, Germany, February 27 – March 1, Hall 4A / 4A-360.

About VeriSilicon

VeriSilicon is a Silicon Platform as a Service (SiPaaS®) company that provides industry-leading, comprehensive System-on-a-Chip (SoC) and System-in-a-Package (SiP) solutions for a wide range of end markets including mobile internet devices, datacenters, the Internet of Things (IoT), automotive, industrial, and medical electronics. Our machine learning and artificial intelligence technologies are well positioned to address the movement to "intelligent" devices. SiPaaS provides our customers a substantial head start in the semiconductor design and development process and allows the customers to focus their efforts on core competency with differentiating features. Our end-to-end semiconductor turnkey services can take a design from concept to a completed, tested, and packaged semiconductor chip in record time. The breadth and flexibility of our SiPaaS solutions make them a performance-effective and cost-efficient choice for a variety of customers.
