Seems like my training data for the car – perhaps a hint of #bias. 😂
#GeekyJokes #ML #AIJokes
Neural networks have a very interesting aspect – they can be viewed as a simple mathematical model that defines a function. For a given function that takes an input value x, there will be some neural network that can approximate that function. This was proven back in 1989 (“Approximation by Superpositions of a Sigmoidal Function” and “Multilayer feedforward networks are universal approximators”) and forms the basis of much of what is possible with #AI and #ML today.
It is this aspect of neural networks that allows us to map any process and generate a corresponding function. Unlike a function in computer science, this function isn’t deterministic; instead, its output is a confidence score for an approximation (i.e. a probability). The more layers in a neural network, the better this approximation can be.
In a neural network, there is typically one input layer, one output layer, and one or more layers in the middle. To the external system, only the input layer (the values of x) and the final output (the output of the function f(x)) are visible; the layers in the middle are not, and are essentially hidden.
Each layer contains nodes, which are modeled after how neurons in the brain work. The output of each node gets propagated along to the next layer. This output is the defining characteristic of the node, and activates the node to pass on its value to the next node; this is very similar to how a neuron in the brain fires and passes the signal on to the next neuron.
For the generalization outlined above to hold, the function needs to be a continuous function. A continuous function is one where small changes to the input value x create small changes to the output f(x). If these changes in output are not small and the value jumps around a lot, the function is not continuous, and it is difficult for a neural network to achieve the approximation required.
For a neural network to ‘learn’, the network essentially has to try different weights and biases, each producing a corresponding change in the output – and ideally one closer to the result we desire. Ideally, small changes to these weights and biases correspond to small changes in the output of the function. But one isn’t sure, until we train and test the result, that small changes don’t cause bigger shifts that drastically move away from the desired result. It isn’t uncommon to see one aspect of the result improve while others do not, skewing the overall results.
In simple terms, an activation function is attached to the output of a node in a neural network, and maps the resulting value into a range such as 0 to 1. It is also used to connect two neural networks together.
An activation function can be linear or non-linear. A linear activation isn’t terribly effective, as its range is infinite. A non-linear activation with a finite range is more useful, as it can be mapped as a curve; changes on this curve can then be used to calculate the difference between two points on it.
There are many types of activation function, each with their own strengths. In this post, we discuss the following six:
1. Sigmoid function
A sigmoid function can map any input value into a probability – i.e., a value between 0 and 1. A sigmoid function is typically denoted with a sigma (σ); some also call σ a logistic function. For any given input value z, the definition of the sigmoid function is as follows:
σ(z) = 1 / (1 + e^(−z))
If our inputs are x1, x2, …, their corresponding weights are w1, w2, …, and the bias is b, then the previous sigmoid definition is updated as follows:
σ(z) = 1 / (1 + exp(−(∑j wj xj + b)))
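As a minimal sketch of that definition in NumPy (the input values, weights, and bias below are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued input into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs x_j, weights w_j and bias b for a single node
x = np.array([0.25, -0.50, 1.00])
w = np.array([0.40, 0.80, -0.20])
b = 0.10

z = np.dot(w, x) + b       # sum_j (w_j * x_j) + b
print(sigmoid(z))          # a value between 0 and 1
```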
When plotted, the sigmoid function looks like the curve below. When we use this in a neural network, we essentially end up with a smoothed-out function, unlike a binary function (also called a step function) that is either 0 or 1.
For the function σ(z): as z → ∞, σ(z) tends towards 1; and as z → −∞, σ(z) tends towards 0.
And this smoothness of σ is what creates the small changes in the output that we desire – where small changes to the weights (w) and small changes to the bias (b) produce small changes to the output.
Fundamentally, changing these weights and biases is what can give us either a step function or small, smooth changes.
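We can show this with a rough sketch (made-up numbers, plain NumPy) that compares a hard step function with the sigmoid as a single weight is nudged:

```python
import numpy as np

def step(z):
    """Binary / step function: either 0 or 1, nothing in between."""
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, b = 1.0, 0.0
for w in (-0.02, -0.01, 0.0, 0.01, 0.02):    # small nudges to the weight
    z = w * x + b
    print(f"w={w:+.2f}  step={step(z):.0f}  sigmoid={float(sigmoid(z)):.4f}")
# The step output jumps straight from 0 to 1, while the sigmoid drifts smoothly around 0.5.
```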
One thing to be aware of is that the sigmoid function suffers from the vanishing gradient problem – convergence across the various layers becomes very slow after a certain point, and the neurons in earlier layers learn much more slowly than the neurons in later layers. Because of this, a sigmoid is generally avoided.
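A rough way to see why (an illustrative sketch, not a proof): the derivative of the sigmoid never exceeds 0.25, so the chain of derivatives that backpropagation multiplies together shrinks geometrically as the signal travels back through the layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)    # peaks at 0.25 when z = 0

# Even in the best case (z = 0 at every layer), the gradient reaching
# the earliest layers shrinks quickly with depth.
for depth in (1, 5, 10, 20):
    print(depth, sigmoid_prime(0.0) ** depth)
```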
2. Tanh (hyperbolic tangent function)
Tanh is a variant of the sigmoid function and still quite similar – it is a rescaled version, ranging from −1 to 1 instead of 0 to 1. As a result, its optimization is easier and it is preferred over the sigmoid function. The formula for tanh is:
tanh(z) = (e^z − e^(−z)) / (e^z + e^(−z))
Using this, we can show that:
tanh(z) = 2σ(2z) − 1
i.e., tanh is just a rescaled and shifted sigmoid.
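A quick numerical check of that relationship (a small sketch using NumPy's built-in tanh):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3.0, 3.0, 7)
lhs = np.tanh(z)
rhs = 2.0 * sigmoid(2.0 * z) - 1.0   # tanh as a rescaled, shifted sigmoid

print(np.allclose(lhs, rhs))   # True
print(lhs.min(), lhs.max())    # stays within (-1, 1)
```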
Tanh also suffers from the vanishing gradient problem. Both tanh and sigmoid are used in FNNs (feedforward neural networks) – i.e., networks where information always moves forward and there aren’t any feedback connections between layers.
3. Rectified Linear Unit (ReLU)
The rectified linear unit (ReLU) is the most popular activation function in use these days.
ReLUs are quite popular for a couple of reasons: one, from a computational perspective, they are more efficient and simpler to execute – there aren’t any exponential operations to perform; and two, they don’t suffer from the vanishing gradient problem.
The one limitation ReLUs have is that their output isn’t in the probability space (i.e., it can be greater than 1), so they can’t be used in the output layer.
As a result, when we use ReLUs, we have to use a softmax function in the output layer. The outputs of a softmax function sum to 1, so we can treat the output as a probability distribution.
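A minimal sketch of both pieces in NumPy (the input values are hypothetical): a ReLU for a hidden layer, and a softmax turning the final layer's raw scores into a probability distribution:

```python
import numpy as np

def relu(z):
    """max(0, z): cheap to compute, no exponentials involved."""
    return np.maximum(0.0, z)

def softmax(z):
    """Exponentiate and normalize so the outputs sum to 1."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

hidden = relu(np.array([-2.0, 0.5, 3.0]))    # negatives become 0
scores = np.array([1.2, 0.3, -0.8])          # hypothetical output-layer scores
probs = softmax(scores)

print(hidden)               # [0.  0.5 3. ]
print(probs, probs.sum())   # a probability distribution that sums to 1
```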
Another issue that can affect ReLUs is something called the dead neuron problem (also called a dying ReLU). This can happen when, across the training dataset, the input to a node is negative (for example, because some features have negative values). When the ReLU is applied, those negative values become zero (by definition). If this happens at a large enough scale, the gradient will always be zero – and that node is never adjusted again (its bias and weights never get changed) – essentially making it dead! The solution? Use a variation of the ReLU called a Leaky ReLU.
4. Leaky ReLU
A Leaky ReLU allows a small slope on the negative side; i.e., negative values aren’t changed to zero, but are instead multiplied by a small factor such as 0.01. You can probably see the ‘leak’ in the image below. This ‘leak’ helps increase the range, and we never run into the dying ReLU issue.
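As a sketch (using 0.01 as the negative-side slope mentioned above; the exact value is a tunable choice):

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    """Like ReLU, but negative inputs keep a small slope instead of becoming 0."""
    return np.where(z > 0, z, alpha * z)

z = np.array([-3.0, -0.5, 0.0, 2.0])
print(leaky_relu(z))   # [-0.03  -0.005  0.     2.   ]
```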
5. Exponential Linear Unit (ELU)
Sometimes a ReLU isn’t fast enough: over time, a ReLU’s mean output isn’t zero, and this positive mean can add a bias for the next layer in the neural network; all this bias adds up and can slow the learning.
An Exponential Linear Unit (ELU) can address this by using an exponential function for negative inputs, which ensures that the mean activation is closer to zero. What this means is that for a positive value an ELU acts like a ReLU, and for negative values it is bounded below by −1 – which brings the mean activation closer to zero.
When learning, it is the derivative of this slope that is fed back (backprop) – so for this to be efficient, both the function and its derivative need to have a low computation cost.
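A sketch of the ELU and its derivative (α is set to 1 here, which matches the −1 lower bound described above):

```python
import numpy as np

def elu(z, alpha=1.0):
    """Identity for positive z; for negative z, decays smoothly towards -alpha."""
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def elu_prime(z, alpha=1.0):
    """Derivative fed back during backprop: 1 for positive z, alpha * exp(z) otherwise."""
    return np.where(z > 0, 1.0, alpha * np.exp(z))

z = np.array([-3.0, -1.0, 0.0, 2.0])
print(elu(z))         # negative inputs are squashed towards -1, positives pass through
print(elu_prime(z))   # gradients stay non-zero for negative inputs
```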
And finally, there is another variant that combines the ReLU and the Leaky ReLU, called a Maxout function.
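For completeness, here is a rough sketch of a single maxout unit (a simplified two-piece version with made-up weights; with one piece fixed at zero it reduces to a plain ReLU over the other piece):

```python
import numpy as np

def maxout(x, W, b):
    """Take the max over k linear pieces: max_j (W[j] @ x + b[j])."""
    return np.max(W @ x + b)

# Two hypothetical linear pieces over a two-dimensional input.
W = np.array([[1.0, -0.5],
              [0.0,  0.0]])   # second piece is the zero function
b = np.array([0.2, 0.0])
x = np.array([-1.0, 2.0])

print(maxout(x, W, b))   # max of (w_j . x + b_j) over the two pieces
```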
So, how do I pick one?
Choosing the ‘right’ activation function of course depends on the data and the problem at hand. My suggestion is to default to a ReLU as a starting point, remembering that ReLUs are applied to hidden layers only. Use a simple dataset and see how that performs. If you see dead neurons, then use a Leaky ReLU or Maxout instead. It generally doesn’t make sense to use sigmoid or tanh in the hidden layers of deep learning models these days, but they are still useful in output layers for classifiers.
In summary, activation functions are a key aspect that fundamentally influences a neural network’s behavior and output. Having an appreciation and understanding of some of these functions is key to any successful ML implementation.
I was looking at something else and happened to stumble across something called Netron, which is a model visualizer for #ML and #DeepLearning models. It is certainly much nicer than anything else I have seen. The main thing that stood out for me was that it supports ONNX and a whole bunch of other formats: Keras, CoreML, TensorFlow (including Lite and JS), Caffe, Caffe2, and MXNet. How awesome is that?
This is essentially a cross-platform PWA (progressive web app) built using Electron (JavaScript, HTML5, CSS) – which means it can run on most platforms and runtimes, from just a browser to Linux, Windows, etc. To debug it, it is best to use Visual Studio Code along with the Chrome debugger extension.
Below are a couple of examples of visualizing a ResNet-50 model – you can see both the start and the end of the visualization in the two images below to get a feel for things.
And some of the more complex models are very interesting. Here is an example of a TensorFlow Inception (v3) model.
And of course, this can get very complex (below is the same model, just zoomed out more).
I do think it is a brilliant tool to help understand the flow of things and what one can do to optimize or fix a model. It is also very helpful for folks who are just starting to learn and appreciate the nuances.
The code is released under an MIT license, and you can download it here.
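If you would rather script it than use the desktop app, there is also a pip-installable Python package (my usage below assumes that package and a local model file named model.onnx, which is a placeholder path):

```python
# pip install netron
import netron

# Serve the model locally and open the visualizer in the browser.
# 'model.onnx' is a placeholder – point it at your own ONNX / Keras / CoreML / TF model.
netron.start('model.onnx')
```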
Someone recently asked me what some of the use cases / examples of machine learning are. Whilst this might seem obvious to some of us, it isn’t the case for many businesses and enterprises – despite the fact that they use elements of #ML (and #AI) in their daily lives as consumers.
The discussion gets more interesting based on the specific domain and the possible use cases (understanding, of course, that some might not be sure of the use case – hence the question in the first place). But this did get me thinking, and I wanted to share one of the images we use internally as part of our training that outlines some of the use cases.
These are not 1:1, and many of them can be combined to address various use cases – for example, an #IoT device sending in sensor data that triggers a boundary condition (via a #RulesEngine), which, in addition to executing one or more business rules, can trigger an alert to a human-in-the-loop (#AugmentingWorkforce) via a #DigitalAssistant (say #Cortana) to make her/him aware, or to confirm some corrective action, and the like. The possibilities are endless – but each of these elements triggered by AI/ML is still a narrow case and needs to be thought of as part of the holistic picture.
I trained a model to create a synthetic voice that sounds like me. This was after training it with about 30 sentences – which isn’t a lot.
To create a synthetic voice, you enter some text, which is then “transcribed” using #AI, and your synthetic voice is generated. In my case, at first I had written AI, which was generated as “aeey” (you can have a listen here). So for the next one, I changed AI to Artificial Intelligence.
One does need to be mindful of #DigitalEthics as this technology improves further. This is with only a very small sample of data – imagine what could happen with public figures, whose recordings are easily available in the public domain. I am thinking the ‘digital twang’ is one of the signatures and a way to stamp this as a generated sound.