Neuromorphic Computing

CHAPTER ONE

PREAMBLE TO THE STUDY

There are ten main motivations for adopting neuromorphic architectures: real-time performance, parallelism, overcoming the von Neumann bottleneck, scalability, low power, small footprint, fault tolerance, speed, online learning, and neuroscience research [1]. Among them, real-time performance is the main driving force behind neuromorphic systems. Through parallelism and hardware-accelerated computation, these devices can often execute neural network workloads faster than von Neumann architectures. In recent years, low power consumption has become the most prominent focus of neuromorphic system development. Biological neural networks are fundamentally asynchronous [8], and the brain’s efficiency can be captured in event-driven computational models [9]. However, managing asynchronous, event-based communication in large systems is a challenge for the von Neumann architecture [10]. Hardware implementations of neuromorphic computing favour massively parallel architectures because they co-locate memory and computation in the neuron nodes and achieve ultra-low power consumption during data processing. Moreover, their scalability makes it straightforward to build large-scale neural networks. Because of all these advantages, the neuromorphic architecture is a better candidate than the von Neumann architecture for hardware implementation.

CHAPTER TWO

LITERATURE REVIEW

ARTIFICIAL NEURAL NETWORKS

An Artificial Neural Network (ANN) is a collection of interconnected nodes inspired by the biological human brain. The objective of an ANN is to perform cognitive functions such as problem-solving and machine learning. Mathematical models of ANNs date back to the 1940s; however, the field then remained quiet for a long time (Maass, 1997). ANNs became very popular following the success of ImageNet in 2009 (Hongming, et al., 2018), driven by advances both in ANN models and in the hardware systems able to implement them (Sugiarto & Pasila, 2018). ANNs can be separated into three generations based on their computational units and performance (Figure 1).

The first generation of ANNs started in 1943 with the work of McCulloch and Pitts (Sugiarto & Pasila, 2018). Their work was a computational model for neural networks in which each neuron is called a “perceptron”. The model was later extended with extra hidden layers (the Multi-Layer Perceptron), known as MADALINE, by Widrow and his students in the 1960s (Widrow & Lehr, 1990). However, first-generation ANNs were far from biological models and produced only digital outputs; essentially, they behaved as threshold-based decision rules built from if-else conditions. The second generation of ANNs extended the first by applying continuous activation functions to these units. The functions operate between the visible and hidden layers of perceptrons, creating the structures known as “deep neural networks” (Patterson, 2012; Camuñas-Mesa, et al., 2019). Second-generation models are therefore closer to biological neural networks. Their activation functions remain an active area of research, and the existing models are in great demand in both industry and science. Most current developments in artificial intelligence (AI) are based on these second-generation models, which have proven their accuracy in cognitive tasks (Zheng & Mazumder, 2020).
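To make the generational difference concrete, the short Python sketch below is a minimal, illustrative example (not taken from any of the works cited here): it contrasts a first-generation perceptron, whose hard threshold yields a purely binary output, with a second-generation unit that applies a continuous activation function, and then stacks such units into a small multi-layer forward pass. The weights, layer sizes and function names are arbitrary placeholders.

    import numpy as np

    def perceptron(x, w, b):
        """First generation: a hard-threshold unit with a binary (0/1) output."""
        return 1 if np.dot(w, x) + b > 0 else 0

    def sigmoid(z):
        """Second generation: a continuous, differentiable activation function."""
        return 1.0 / (1.0 + np.exp(-z))

    def layer(x, W, b):
        """One layer of a multi-layer network: a weighted sum followed by the
        continuous activation, applied to every unit in the layer."""
        return sigmoid(W @ x + b)

    # Toy forward pass through a 3-4-1 network with arbitrary random weights.
    rng = np.random.default_rng(0)
    x = np.array([0.5, -1.0, 0.25])
    hidden = layer(x, rng.standard_normal((4, 3)), rng.standard_normal(4))
    output = sigmoid(rng.standard_normal(4) @ hidden + rng.standard_normal())
    print(perceptron(x, np.array([1.0, 0.5, -0.25]), 0.1), float(output))

Because the second-generation activation is differentiable, networks built from such layers can be trained with gradient-based methods, which is what distinguishes them from the purely binary first-generation units.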

The third generation of ANNs is termed Spiking Neural Networks (SNNs). These are biologically inspired structures in which information is represented as binary events (spikes). Their learning mechanism differs from that of previous generations and is inspired by principles of the brain (Kasabov, 2019). SNNs do not rely on a clock-cycle-based firing mechanism: a neuron emits an output (a spike) only when the input it has accumulated surpasses its internal threshold. Moreover, the neurons can operate in parallel (Sugiarto & Pasila, 2018). In theory, thanks to these two features, SNNs consume less energy and run faster than second-generation ANNs (Maass, 1997).
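As an illustration of this event-driven behaviour, the following Python sketch simulates a single leaky integrate-and-fire neuron, a common simplified SNN neuron model; it is an assumed, minimal example rather than a model taken from the cited works, and the threshold and leak values are arbitrary. The neuron integrates its input, decays a little each step, and emits a spike only when the accumulated potential crosses the threshold, staying silent otherwise.

    import numpy as np

    def simulate_lif(inputs, threshold=1.0, leak=0.9):
        """Leaky integrate-and-fire neuron: accumulate input current, let the
        membrane potential decay each step, and emit a spike (1) only when the
        potential crosses the threshold, resetting afterwards."""
        potential = 0.0
        spikes = []
        for current in inputs:
            potential = leak * potential + current   # integrate with leak
            if potential >= threshold:               # event: enough input collected
                spikes.append(1)
                potential = 0.0                      # reset after the spike
            else:
                spikes.append(0)                     # no event, no output
        return spikes

    # Irregular input stream: most time steps produce no spike, so an
    # event-driven implementation would do no work on those steps.
    rng = np.random.default_rng(1)
    print(simulate_lif(rng.uniform(0.0, 0.4, size=20).tolist()))

Since output is produced only at spike times, hardware implementing this kind of model can stay idle between events, which is the intuition behind the energy savings mentioned above.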

During the next decade, we will see how neuromorphic computing gradually transforms the nature and functionality of a wide range of scientific and non-scientific applications. In this report, we briefly describe three specific but very large areas on which this emerging field of computing science is likely to have the most rapid and intense impact: 1) mobile applications, which are dramatically affecting our daily lives and increasingly demand more powerful processing capabilities; 2) adaptive robotics, whose technological advance runs in parallel with, and is intimately linked to, the progress of AI, and which needs to draw on the ‘human thinking’ mechanisms provided by neuromorphic chips to offer solutions more closely and effectively matched to the needs of domestic and industrial users; and 3) event-based vision sensors, which, although they may look in principle like a less impactful area of application than the previous two, allow adaptive robotics to be fed with reliable visual signals and to react with precise, humanlike responses.

Mobile Applications

Intelligent software is essential to current mobile applications, covering tasks from image processing to text and audio processing (Table 5).

These applications mostly require more processing power than mobile phones can currently provide. They therefore rely on supercomputers, accessed via service calls, to deliver AI services. Although this approach works, it has some critical issues (Pathak, 2017):

  • They are limited by the speed of the internet connection.
  • Their responsiveness depends on the speed of the remote service.
  • They raise privacy concerns (Howard, 2019).

On-device AI, therefore, is essential to solve these problems and enable leading-edge technologies.

Since the Snapdragon 820/835 (2016), mobile phones have included processors with AI accelerators (Ignatov, et al., 2019). The most advanced chips at the time of writing (January 2020) are the Apple A13, Qualcomm Snapdragon 865 and Huawei Kirin 990. These chips can handle some edge AI applications such as face recognition, real-time translation, photo segmentation and voice recognition. However, their processing capacity is still limited for large-scale, complex or highly parallel workloads, and embedding extra processing power is not sustainable given the limitations of battery technology. Neuromorphic chips can therefore have a big impact in this sector through their efficiency in real-time AI services.

 

CHAPTER THREE

CONCLUSION AND RECOMMENDATIONS

In our emerging and dynamic AI-based society, research and development on AI is to a large extent focused on the improvement and use of deep neural networks and AI accelerators. However, the architecture of traditional von Neumann systems has its limits, and the exponential increase in data size and processing demands requires more innovative and powerful solutions. Spiking Neural Networks and neuromorphic computing, which are well-developed and well-known areas among neuroscientists and neuro-computing researchers, are part of a trend of very recent and novel technologies that already help enable the exploration and simulation of the learning structures of the human brain.

This report has explained the evolution of artificial neural networks, the emergence of SNNs and their influence on the development of neuromorphic chips. It has discussed the limitations of traditional chips and the eventual influence of neuromorphic chips on demanding AI applications. The main players in the area have been identified and related to current and future applications. The study has also described the market advantages of neuromorphic chips compared with other AI semiconductors. Neuromorphic chips are compatible with event-based sensor applications and with emerging technologies such as photonics, graphene and non-volatile memories. They have huge potential in the development of AI and could well become a dominant technology in the next decade.

Hopefully, the report has served to shed some light on the complexity of this challenging computing area. While staying loyal to our objective of offering a practical description of the most recent advances, we have also tried to be instructive enough to increase the interest in, and visibility of, the topic for a non-specialised audience. For other readers, the study may represent a promising and challenging step towards a deeper understanding of the area, one that could eventually support the creation of roadmaps, the exploration of new industrial applications, or the analysis of synergies between these novel chips and other related emerging trends.

REFERENCES

  [1] C. D. Schuman, T. E. Potok, R. M. Patton, J. D. Birdwell, M. E. Dean, G. S. Rose, and J. S. Plank, “A survey of neuromorphic computing and neural networks in hardware,” arXiv preprint arXiv:1705.06963, 2017.
  [2] I. K. Schuller, R. Stevens, R. Pino, and M. Pechan, Neuromorphic Computing: From Materials Research to Systems Architecture Roundtable, 2015.
  [3] D. Monroe, “Neuromorphic computing gets ready for the (really) big time,” 2014.
  [4] B. Rajendran, A. Sebastian, M. Schmuker, N. Srinivasa, and E. Eleftheriou, “Low-power neuromorphic hardware for signal processing applications: A review of architectural and system-level design approaches,” IEEE Signal Processing Magazine, vol. 36, no. 6, pp. 97–110, 2019.
  [5] P. Hasler and L. Akers, “VLSI neural systems and circuits,” in Ninth Annual International Phoenix Conference on Computers and Communications: 1990 Conference Proceedings. IEEE, 1990, pp. 31–37.
  [6] J.-C. Lee and B. J. Sheu, “Parallel digital image restoration using adaptive VLSI neural chips,” in Proceedings, 1990 IEEE International Conference on Computer Design: VLSI in Computers and Processors. IEEE, 1990, pp. 126–129.
  [7] L. Tarassenko, M. Brownlow, G. Marshall, J. Tombs, and A. Murray, “Real-time autonomous robot navigation using VLSI neural networks,” in Advances in Neural Information Processing Systems, 1991, pp. 422–428.
  [8] M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain et al., “Loihi: A neuromorphic manycore processor with on-chip learning,” IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.
  [9] S.-C. Liu, T. Delbruck, G. Indiveri, A. Whatley, and R. Douglas, Event-Based Neuromorphic Systems. John Wiley & Sons, 2014.
  [10] S. Moradi, N. Qiao, F. Stefanini, and G. Indiveri, “A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs),” IEEE Transactions on Biomedical Circuits and Systems, vol. 12, no. 1, pp. 106–122, 2017.
  [11] G. Indiveri and S.-C. Liu, “Memory and information processing in neuromorphic systems,” Proceedings of the IEEE, vol. 103, no. 8, pp. 1379–1397, 2015.
  [12] A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952.