Implementation of Massive Artificial Neural Networks with CUDA
Abstract
Parallel computational architectures, such as GPUs, lead to a great improvement in speed. Until recently, programmers of ANNs could only harness this processing power through specially prepared graphical applications. What is new is that the newest GPU architectures allow for a more general approach to ANN programming, without taking the graphical aspects of GPUs into consideration. One general-purpose parallel computing architecture is CUDA (Compute Unified Device Architecture), developed by the GPU manufacturer Nvidia. Different aspects of ANN implementation using CUDA are discussed below; much greater ANN performance can be achieved by better understanding the particularities and limitations of CUDA. The next section presents some biological background on neurons and neural networks. After that, different implementation techniques for artificial neural networks are identified. The main section explains the implementation of ANNs with the CUDA development tool. Finally, several experiments are presented and several implementation techniques for large ANNs are compared.
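As a rough illustration of the kind of data parallelism CUDA exposes for ANNs (a sketch under assumed names, not the paper's actual code), the forward pass of one fully connected layer can map one GPU thread to each output neuron, each thread computing a weighted sum over the layer's inputs:

```cuda
// Hypothetical sketch: forward pass of one fully connected layer.
// weights is row-major, nOutputs x nInputs; one thread per output neuron.
__global__ void forwardLayer(const float *weights, const float *inputs,
                             float *outputs, int nInputs, int nOutputs)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;  // output neuron index
    if (j >= nOutputs) return;

    float sum = 0.0f;
    for (int i = 0; i < nInputs; ++i)
        sum += weights[j * nInputs + i] * inputs[i];

    outputs[j] = 1.0f / (1.0f + expf(-sum));  // sigmoid activation
}

// Host-side launch: round the grid size up so every output neuron is covered.
// forwardLayer<<<(nOutputs + 255) / 256, 256>>>(dW, dIn, dOut, nIn, nOut);
```

For large networks, the per-thread loop over `nInputs` and the layout of `weights` in device memory dominate performance, which is exactly the kind of CUDA particularity the abstract alludes to.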