Sponsored Content

ASX-listed artificial intelligence (AI) company BrainChip (ASX:BRN) has introduced a new form of AI that will accelerate edge computing.

Edge computing is computation that is largely or completely performed on the device as opposed to taking place in a centralised cloud environment or at a central processing hub.

BrainChip is developing a chip that will be able to process large amounts of data very quickly on the device itself, without the need to send data back and forth to a centralised processing hub.

Its Akida™ Neuromorphic System-on-Chip (NSoC) will mark the first time that a spiking neural network (SNN) model is available on something as small as a chip. To date, this has only been possible using large computing hardware systems.

Neuromorphic computing works more like the human brain than a conventional computer: it emulates the biological function of neurons that communicate using spikes, hence the term spiking neural network.

This is unlike other neural networks, which use complex mathematics as their basic operation. Processing with spikes can drastically reduce power consumption and also improve training time.
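
To make the spiking idea concrete, the sketch below simulates a single leaky integrate-and-fire neuron in Python. It is purely illustrative and is not BrainChip's neuron model or the Akida architecture; the weight, threshold and leak values are arbitrary assumptions chosen for the example.

```python
import numpy as np

def lif_neuron(input_spikes, weight=0.5, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_spikes: binary array (1 = an incoming spike at that time step).
    Returns a binary array of output spikes. Illustrative only; not the
    Akida neuron model, and all parameter values are made up.
    """
    potential = 0.0
    output_spikes = np.zeros_like(input_spikes, dtype=float)
    for t, spike in enumerate(input_spikes):
        # The membrane potential leaks over time and integrates weighted input spikes.
        potential = potential * leak + weight * spike
        if potential >= threshold:
            output_spikes[t] = 1.0   # fire a spike
            potential = 0.0          # reset after firing
    return output_spikes

# Example: a burst of input spikes eventually drives the neuron past threshold.
inputs = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1])
print(lif_neuron(inputs))
```

The neuron only communicates when its accumulated potential crosses a threshold, which is one reason spike-based processing can stay largely idle, and therefore frugal with power, between events.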

The Akida NSoC takes its inspiration from the biology of the human brain and engineers that behaviour into silicon.

“Spiking neural networks are considered the third generation of neural networks,” said Peter van der Made, Founder and CTO of BrainChip.

“The Akida NSoC is the culmination of decades of research to determine the optimum neuron model and innovative training methodologies.”

The Akida NSoC is small, low cost and low power, and significantly advances artificial intelligence and its applications, especially in power-constrained settings such as advanced driver assistance systems (ADAS), autonomous vehicles, drones, vision-guided robotics, surveillance and machine vision systems.

Each Akida NSoC has effectively 1.2 million neurons and 10 billion synapses, representing 100 times better efficiency than neuromorphic test chips from Intel and IBM.

A chip that can process and analyse data, and become smarter as it does so, will enable AI to be deployed closer to the source of information.

The market for artificial intelligence acceleration chip architecture is expected to surpass US$60 billion by 2025, according to Tractica, a market intelligence firm specialising in AI. Bringing this type of technology onto a chip puts AI on a fast track towards deployment in edge computing applications.

The Akida NSoC is designed for use as a stand-alone embedded accelerator or as a co-processor, and it supports innovative methodologies for both supervised and unsupervised training.

In supervised mode, the initial layers of the network train themselves autonomously, while labels can be applied manually to the final, fully connected layers, enabling these networks to function as classification networks.
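
As a rough illustration of that two-stage idea, the sketch below, in plain Python, is not the Akida development environment or BrainChip's training method: a small set of "feature neurons" first organises itself on unlabelled data, and class labels are then attached to those neurons using only a couple of manually labelled examples. All data, parameters and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 2-D data: two clusters standing in for two classes of sensor readings.
class_a = rng.normal([0.0, 0.0], 0.2, size=(20, 2))
class_b = rng.normal([3.0, 3.0], 0.2, size=(20, 2))
data = np.vstack([class_a, class_b])          # unlabelled training set
rng.shuffle(data)

# Unsupervised stage: early "layers" organise themselves without labels.
# Farthest-point initialisation plus winner-take-all updates is a crude
# stand-in for feature neurons tuning themselves to recurring patterns.
p0 = data[0]
p1 = data[np.argmax(np.linalg.norm(data - p0, axis=1))]
prototypes = np.stack([p0, p1])

for _ in range(5):                             # a few passes over the data
    for x in data:
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        prototypes[winner] += 0.1 * (x - prototypes[winner])

# Supervised stage: attach class labels to the final layer using only a
# handful of manually labelled examples.
labelled_x = np.array([[0.0, 0.0], [3.0, 3.0]])
labelled_y = ["class_A", "class_B"]
neuron_labels = [labelled_y[np.argmin(np.linalg.norm(labelled_x - p, axis=1))]
                 for p in prototypes]

def classify(x):
    """Return the label of the prototype neuron that responds most strongly."""
    return neuron_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

print(classify(np.array([0.1, -0.1])))   # near the first cluster
print(classify(np.array([2.9, 3.2])))    # near the second cluster
```

The point of the sketch is the division of labour: most of the learning happens without labels, and only the final classification step needs human-supplied annotations.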

The Akida development environment is now available for early-access customers to begin creating, training and testing spiking neural networks to be deployed onto the chip. The first chips are expected to be available in the second half of 2019.

 

This content is produced by Star Investing in commercial partnership with BrainChip. This article does not constitute financial product advice. You should consider obtaining independent advice before making any financial decisions.