AI Optimized Hardware

Artificial Intelligence is seeing high demand across many applications and is steadily expanding its footprint in industrial automation. As AI takes on more complex tasks, the computational power demanded of the hardware keeps growing. The traditional response has been to pack more transistors, and thus more logic gates, onto AI chips, but this approach is reaching its limits: as logic gates shrink toward the 5 nm scale, devices become increasingly prone to malfunction.

To work around these limits, machine learning algorithms are being deployed locally on edge devices, curbing the latency that would otherwise affect drones and other automated systems. Local deployment also reduces the amount of data transferred to the cloud, which lowers networking costs, particularly for IoT devices. However, current AI hardware is large and power-hungry, which rules out local algorithms on some classes of devices. Experiments with alternative chip architectures aim to optimize machine learning workloads around three key goals: more energy-efficient, more powerful, and smaller (Kelley, 2019).

Why Is AI So Demanding?
Machine learning involves simple math, mostly additions, multiplications, and some derivatives, but an enormous number of them. A neural network can have hundreds of layers, each with thousands of nodes, trained on millions of data points. The challenge is not the complexity of any single calculation but the sheer scale at which calculations must be performed.
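The point above can be made concrete with a quick back-of-the-envelope count. This sketch uses made-up layer sizes (they are not from the article) to show how multiply-accumulate operations pile up even for a modest fully connected network:

```python
# Illustrative only: layer sizes are assumptions, not real model dimensions.
def dense_layer_macs(n_inputs: int, n_outputs: int) -> int:
    """Multiply-accumulate operations for one fully connected layer."""
    return n_inputs * n_outputs

# A toy network: 10,000 input features, three hidden layers of 1,000 nodes,
# and a 10-way output. Each connection is one multiply and one add.
layer_sizes = [10_000, 1_000, 1_000, 1_000, 10]
total_macs = sum(dense_layer_macs(a, b)
                 for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"MACs per forward pass: {total_macs:,}")  # over 12 million, per input
```

Every one of those operations is trivial on its own; the hardware challenge is that a real model runs this loop millions of times during training.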

Comparison with GPUs
The Graphics Processing Unit dominates the machine learning hardware market thanks to its parallel architecture, with thousands of cores operating simultaneously. Even so, it is worth designing dedicated AI chips, because they can be faster and use less energy: GPUs carry graphics-rendering machinery that consumes power without contributing to AI workloads (Paruthi, 2018).

Techniques for Building Better AI Chips
• Reduced Precision
• Analog Computation
• In-Memory Computation Using Phase-Change Memory

Reducing Precision:
Computer programs operate on floating-point arithmetic, which is essentially scientific notation written in binary. It is reasonably efficient, but slight errors creep in because decimals are rounded to however many bits the format provides. To improve accuracy, computers have moved from 16-bit to 32-bit to 64-bit arithmetic; neural networks, however, do not need that level of precision.

Because neural networks learn general trends, pushing for extreme numerical precision adds cost without improving the model. Researchers suggest that 8-bit or 16-bit arithmetic is sufficient for AI hardware. Lower-precision processors are cheaper, consume less energy, and are smaller, which makes it practical to run AI algorithms locally on edge devices.
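A small experiment shows why 8 bits can be enough. The sketch below simulates a common symmetric int8 quantization scheme (my illustration, not any specific chip's design) and compares an int8 dot product against the full-precision result:

```python
import numpy as np

# Assumed scheme for illustration: symmetric per-tensor int8 quantization.
def quantize_int8(x: np.ndarray):
    """Map floats into [-127, 127] integers plus a scale factor."""
    scale = np.max(np.abs(x)) / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # toy weights
x = rng.standard_normal(1000).astype(np.float32)  # toy activations

q_w, s_w = quantize_int8(w)
q_x, s_x = quantize_int8(x)

# The cheap int8 multiplies are accumulated in a wider integer, then
# rescaled back to a float -- this is the part a low-precision chip does.
approx = int(np.dot(q_w.astype(np.int32), q_x.astype(np.int32))) * s_w * s_x
exact = float(np.dot(w, x))
print(f"exact={exact:.3f}  int8 approx={approx:.3f}")
```

The two results differ only slightly, a gap that a trend-learning model tolerates, while the integer hardware is far cheaper and smaller.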

In-Memory Computation Using Phase-Change Memory:
One bottleneck in computing AI algorithms is data movement. The processor has to fetch data from memory, compute, and send the result back, and given the enormous number of calculations these algorithms require, that constant data transfer becomes a limiting factor.

Recently, researchers have experimented with AI hardware based on phase-change memory, a memory device built from a material that can sit in one of two states: amorphous or crystalline. In the amorphous state the material has high electrical resistivity; in the crystalline state its resistivity is low. The highlight of this technology is that it can perform arithmetic operations in place, without shuttling data to the processor. The memory is also non-volatile, holding its data without power, which works well for low-power IoT devices.
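The arithmetic-in-memory idea can be sketched numerically. This toy model (my illustration, with made-up conductance values, not real device physics) treats each memory cell's state as a conductance: applying input voltages to the rows makes the column current equal a dot product, by Ohm's and Kirchhoff's laws, with no processor involved:

```python
import numpy as np

# Illustrative conductances for the two material states (assumed values).
G_CRYSTALLINE = 1.0e-3  # siemens: low resistivity -> high conductance
G_AMORPHOUS = 1.0e-6    # siemens: high resistivity -> low conductance

# One column of cells programmed to store the binary weights 1, 0, 1, 1.
weights = np.array([1, 0, 1, 1])
conductances = np.where(weights == 1, G_CRYSTALLINE, G_AMORPHOUS)

# Input activations applied as voltages on the rows (made-up values).
voltages = np.array([0.2, 0.5, 0.1, 0.3])  # volts

# Each cell contributes current I_i = G_i * V_i; the column wire sums them.
# The multiply-accumulate thus happens inside the memory array itself.
column_current = float(np.dot(conductances, voltages))
print(f"column current: {column_current:.6e} A")
```

Reading one current off the column replaces a fetch-compute-writeback round trip per weight, which is exactly the data movement the paragraph above identifies as the bottleneck.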

Analog Computation:
Analog computation is making a comeback because it offers lower energy consumption and performs arithmetic naturally, which makes it a good fit for AI. It exploits basic electrical properties, so it is more energy-efficient than the arrays of logic gates needed to add binary numbers. Analog circuits are difficult to program, so the companies adopting the technology typically pair analog blocks with digital chips. Following this approach, AI chips built around analog computation can deliver energy-efficient devices with rapid calculations (Gutierrez, 2019).
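The trade-off behind analog computation is that the physics does the arithmetic cheaply but imprecisely. This sketch (an illustration under an assumed 1% noise level, not a real circuit model) shows that a weighted sum survives analog-style noise largely intact, which is why trend-learning networks can tolerate it:

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.standard_normal(256)  # toy weights
x = rng.standard_normal(256)  # toy inputs

# Exact digital result for comparison.
exact = float(w @ x)

# Model each analog multiply as carrying ~1% multiplicative noise
# (an assumed figure for illustration), then sum "on the wire".
noisy_products = w * x * (1 + rng.normal(0.0, 0.01, size=256))
analog = float(noisy_products.sum())

print(f"digital={exact:.4f}  analog={analog:.4f}")
```

The independent per-operation errors largely cancel in the sum, so the analog answer stays close to the digital one while skipping the logic-gate arithmetic entirely.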

Companies Working on AI Hardware
• Intel has acquired AI hardware companies such as Nervana, targets workloads like CNNs, and offers a decent software suite for developers.
• Nvidia has strived to provide various AI hardware products and keeps making them better for AI applications, including its Tesla V100 GPUs.
• IBM is researching AI hardware using technologies such as analog computation and phase-change memory.
• Start-ups such as Mythic, Graphcore, and Wave Computing are also striving to deliver better AI hardware.

In conclusion, AI hardware must improve substantially to keep pace with the growing demands of software in the digital world.
