Artificial Neural Networks: Man vs Machine?
Photo Credits: Äkta människor, SVT1
Deep neural networks provide data scientists with a means to deal with ambiguity and complexity 
This article is part of a BAI series exploring 10 basic machine learning algorithms*
Are these hubots something human or some kind of machine? While human intelligence can quickly tell the difference between the two, machine learning must rely on algorithms like artificial neural networks (ANNs) to make a prediction. Patterned after the structure of the human mind, do ANNs allow machines to think like humans? What exactly are ANNs, how do they work, how do they differ from other machine learning algorithms, and what are their use scenarios in data science?
Computers were originally designed around algorithms composed of predetermined steps to calculate the right answer for a given case. This belief in “absolute certainties” has allowed organizations to optimize business processes, accelerate accounting procedures, and improve their supply chains. Nonetheless, the challenges of image classification, object detection, and speech-to-text were beyond the reach of traditional algorithms, for they required algorithms capable of forecasting probabilities based upon imperfect information.
McCulloch and Pitts created the first artificial neural network in 1943 using a computational model based upon probability theory integrating threshold logic. ANNs are organized as three layers of connected neurons: an input layer that funnels data into the system for processing, a hidden layer that relies on a set of weighted inputs to produce an outcome through an activation function, and an output layer that produces the result of the program. ANNs are thus algorithms based on probabilities; they do not “think” like the human brain but employ mathematical functions to address the challenges of stochastic learning environments.
A neuron, or node, is a mathematical function that receives inputs and returns a single value representing the result of the computation on those inputs. ANNs learn progressively through examples and experience rather than through a set series of commands. Deep neural networks are made up of layers of thousands of such neurons. Deep neural networks provide data scientists with a means to deal with ambiguity and complexity because they decompose problems into minute subproblems, allowing the construction of increasingly accurate representations of the input.
Each layer of an ANN multiplies its inputs by specified weights to produce a weighted sum. A threshold (bias) value is then added, and the result is sent to an activation function to calculate the output. Given a set of inputs, binary, linear, or nonlinear activation functions are used to define the output of each neuron. The output of that function is then sent as the input to another layer, or returned as the final response of the network if the layer is the last.[1]
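To make this concrete, here is a minimal sketch (in Python with NumPy) of a single neuron: a weighted sum of the inputs, shifted by a bias, passed through a sigmoid activation. The input and weight values are made up for illustration, not learned from data.

```python
import numpy as np

def sigmoid(z):
    # Nonlinear activation: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus the bias, then the activation
    z = np.dot(inputs, weights) + bias
    return sigmoid(z)

x = np.array([0.5, -1.2, 3.0])   # sample inputs (illustrative)
w = np.array([0.4, 0.6, -0.1])   # weights (illustrative)
output = neuron(x, w, bias=0.2)  # a value between 0 and 1
```

In a real network this value would itself become one of the inputs to the neurons of the next layer.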
An ANN is trained by adjusting its weights using techniques known as forward propagation and back propagation. A key characteristic of ANNs is this iterative learning process, in which data is presented to the network one example at a time and the weights associated with the input values are adjusted in kind. Training ANNs is thus computationally expensive and can often lead to overfitting.
In forward propagation, sample inputs are fed into the ANN and the respective outputs are recorded. More precisely, forward propagation is the process of feeding the ANN a set of inputs, taking the sum of their component-wise products with the weights (i.e. their dot product), feeding the result to an activation function, and comparing its numerical value to the actual output (the ground truth).
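The pass described above might be sketched as follows, assuming a small fully connected network with sigmoid activations; the layer sizes, random initialization, and target value are illustrative choices, not a prescription:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # Each layer is a (weights, bias) pair; the output of one layer
    # becomes the input of the next.
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 3)), np.zeros(4)),  # input (3) -> hidden (4)
    (rng.normal(size=(1, 4)), np.zeros(1)),  # hidden (4) -> output (1)
]
x = np.array([0.2, 0.7, -0.5])
prediction = forward(x, layers)           # network's numerical output
error = prediction - np.array([1.0])      # margin vs. the ground truth
```

It is this margin of error that back propagation then pushes backwards through the layers.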
Back propagation is a method used to calculate the gradient needed to calibrate the weights used in the network. In back propagation, data scientists work from the output units through the hidden units to the input units, considering the margin of error of each layer’s outputs. The weights are then adjusted to minimize that margin of error.
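As an illustration, here is a toy sketch of back propagation for a single sigmoid neuron trained by gradient descent on the logical AND function; the dataset, learning rate, and number of epochs are arbitrary choices for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: logical AND, which a single neuron can learn
# because it is linearly separable.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

w, b, lr = np.zeros(2), 0.0, 1.0

for _ in range(5000):
    for x_i, y_i in zip(X, y):
        out = sigmoid(np.dot(w, x_i) + b)
        # Chain rule for the squared error E = (out - y)^2 / 2:
        # dE/dw = (out - y) * out * (1 - out) * x
        delta = (out - y_i) * out * (1.0 - out)
        w -= lr * delta * x_i    # adjust weights against the gradient
        b -= lr * delta          # adjust the bias the same way
```

In a multilayer network the same chain rule is applied layer by layer, from the output back to the inputs.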
Finally, there are several forms of artificial neural networks. Originally, the Perceptron was designed as a linear classifier for use in binary predictions. The major limitation of this model, namely that the data must be linearly separable, has since led to multilayered ANNs capable of solving complex classification and regression tasks. There are six basic forms of neural networks in use today that integrate differing mathematical operations and diverse sets of parameters. Feedforward networks are used mainly for speech and vocal recognition, radial basis function networks have been used to predict power grid shortages, Kohonen self-organizing maps are a logical pick for pattern recognition, recurrent ANNs have been deployed in speech-to-text, convolutional networks in signal processing, and modular neural networks as part of multi-module decision-making strategies.[2]
Kohonen Neural Networks
Returning to our photo of hubots taken from the Swedish television series Real Humans, let’s suppose we have three neurons (A, B, C) in our neural network.[3] After training on the data and assigning neurons A, B and C to important features that distinguish men from machines, the algorithm can learn that when neurons A & C are activated, the image belongs to a human, but if A & B are activated, the image belongs to a machine. The neural network could be developed to decipher a list of features associated with being human: empathy, natural intelligence, imagination, faith, emotion, etc. The definitions of these features are in themselves human constructs, and beyond the scope of current applications of artificial intelligence. Nonetheless, deep neural networks composed of thousands, and perhaps millions, of neurons can help us represent these features more precisely and therefore build better predictive models to explore such complexity and ambiguity.
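Purely as a thought experiment, the decision rule described above can be caricatured in a few lines; the three feature neurons and the mapping from activation patterns to labels are entirely hypothetical:

```python
# Hypothetical sketch: neurons A, B, C fire on features learned during
# training, and the pattern of activations decides the class.
def classify(a_active, b_active, c_active):
    if a_active and c_active:
        return "human"
    if a_active and b_active:
        return "machine"
    return "uncertain"

classify(True, False, True)   # -> "human"
classify(True, True, False)   # -> "machine"
```

A real deep network encodes such rules implicitly, spread across the weights of many layers, rather than as explicit conditions.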
Dr. Lee SCHLENKER, The Business Analytics Institute
*Previously published contributions in the BAI series on basic machine learning algorithms include kNN — Getting to know your nearest neighbors, Bayes’ theorem — practice makes perfect, and Shark Attack — explaining the use of Poisson regression
Lee Schlenker is a Professor of Business Analytics and Community Management, and a Principal in the Business Analytics Institute http://baieurope.com. His LinkedIn profile can be viewed at www.linkedin.com/in/leeschlenker. You can follow the BAI on Twitter at https://twitter.com/DSign4Analytics
In this issue :
How can human and machine intelligence together be leveraged to bring tangible benefits to the business?

The Business Analytics Institute: In brief, the recent projects, conferences, bootcamps and publications of the Institute

The BAI 2019 Summer School on the Practice of Data Science: The Business Analytics Institute will once again be offering a 10-day Summer School this July 1-10 in Bayonne, France for senior undergraduate and graduate students on the Practice of Data Science



Managing Director of Maiton Consulting, Davy Cielen took time out during the IQPC conference on Airport Excellence to share his thoughts on his experience, his current work, and opportunities in Data Science.
Artificial Neural Networks: Man vs Machine What exactly are ANNs, how do they work, how do they differ from other machine learning algorithms, and what are their use scenarios in data science? 
This Newsletter has been created specifically for the BAI community to foster conversation around the use of analytics in improving business decision-making.