Artificial neural networks are subject to a fundamental constraint that prevents them from
demonstrating the most significant behavior of biological neurons: state abstraction, or,
put simply, biological memory.
This constraint is a consequence of the basic neural network architecture, in which every
input to a neural network unit contributes to the output state of that unit.
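A minimal sketch of such a unit (the weights, bias, and activation below are illustrative assumptions, not a specific published architecture):

```python
import numpy as np

def ann_unit(inputs, weights, bias):
    # Standard artificial unit: every input enters the weighted sum,
    # so every input contributes to the unit's output state.
    return np.tanh(np.dot(weights, inputs) + bias)

x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
print(ann_unit(x, w, bias=0.1))  # perturbing ANY input changes the output
```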
This operation is in stark contrast to the behavior of biological neurons, whose inputs
behave in a thresholding manner, and whose outputs are triggered by variable subsets
of those inputs.
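By way of contrast, a deliberately crude toy model of the thresholding, subset-triggered behavior just described (the thresholds and firing subsets are invented for illustration; this is not a formal model of a biological neuron):

```python
def toy_biological_unit(inputs, thresholds, firing_subsets):
    # Each input is thresholded to an all-or-nothing signal; the unit
    # fires when any one subset of active inputs is complete, so the
    # output is triggered by a variable subset rather than by all inputs.
    active = {i for i, (x, t) in enumerate(zip(inputs, thresholds)) if x >= t}
    return any(subset <= active for subset in firing_subsets)

# Fires if inputs {0, 2} are active together, or if input 1 fires alone.
print(toy_biological_unit([0.9, 0.1, 0.7], [0.5, 0.5, 0.5],
                          [{0, 2}, {1}]))  # True: subset {0, 2} is active
```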
Because of this distinction, artificial neural network units operate as state devices,
whereas biological neurons operate as signaling devices.
Ever since the invention of the perceptron, researchers have sought to develop artificial
neural network architectures that could automatically discover internal representations
within a given data set.
But much as analysts must adhere to strict rules when “framing” the data sets scrutinized
in Fourier analysis, the assumptions made in framing a network’s data set, together with
the delta rules that govern weight updates in back-propagation, place constraints on the
internal representations such networks can form and on the non-trivial feature extraction
they can demonstrate.
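For concreteness, the delta rule referred to above, in its classic single-linear-unit (Widrow–Hoff) form; the learning rate and data here are illustrative:

```python
import numpy as np

def delta_rule_update(w, x, target, lr=0.1):
    # Classic delta rule for one linear unit: w <- w + lr * (target - output) * x.
    output = np.dot(w, x)
    error = target - output
    return w + lr * error * x

w = np.zeros(3)
x = np.array([1.0, 0.5, -0.5])
for _ in range(5):
    w = delta_rule_update(w, x, target=1.0)
print(w)  # weights move toward reproducing the target for this framing of the data
```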
Because of this constraint, artificial neural networks become specialized for particular
representations, whereas biological neurons can generalize across abstract
representations.
This constraint is a product of the basic units that make up artificial neural networks. Because those units are state devices, they cannot demonstrate the astronomical power of geometric learning that biological neurons display.
The learning behavior of artificial neural networks is characterized as a process of “gradient descent”, conducted through a back-propagation cycle. Through the iterations of that cycle, the network drives its error toward an asymptotic value, with ever-decreasing increments of learning in each subsequent cycle.
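A minimal numerical sketch of this asymptotic behavior, using an arbitrary one-parameter quadratic loss; note how the per-step improvement shrinks with every cycle:

```python
# Gradient descent on a simple quadratic loss, printing the size of
# the improvement at each step to show the ever-decreasing increments.
loss = lambda w: (w - 3.0) ** 2
grad = lambda w: 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
prev = loss(w)
for step in range(1, 9):
    w -= lr * grad(w)
    cur = loss(w)
    print(f"step {step}: loss {cur:.4f}, improvement {prev - cur:.4f}")
    prev = cur
```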
Contrast this with the geometrically increasing increments of learning as biological neurons conduct their reciprocal cycles of abstraction: the process starts with a small number of neurons participating in the abstraction, and each cycle adds a geometrically growing population of neurons to the learning endeavor.
Because every computing element in an artificial neural network participates in the gradient descent calculation, the back-propagation cycle amounts to a race toward ever-diminishing returns, in contrast to the variable participation of elements in biological abstraction, which demonstrates geometrically increasing learning across its reciprocal cycles.
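The two dynamics can be caricatured side by side; the halving and doubling factors below are arbitrary assumptions chosen only to make the contrast visible, not measurements of either system:

```python
# Toy contrast (illustrative only): back-propagation's per-cycle gains
# shrink toward an asymptote, while the reciprocal-abstraction picture
# adds a geometrically growing population of participating neurons.
gain = 1.0
participants = 2
for cycle in range(1, 7):
    gain *= 0.5          # diminishing returns per back-propagation cycle
    participants *= 2    # geometric growth in participating neurons
    print(f"cycle {cycle}: backprop gain {gain:.3f}, "
          f"participating neurons {participants}")
```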