In our journey into the world of machine learning and deep learning, it soon becomes necessary to customise models, optimisers, loss functions, layers and other fundamental components of the algorithm as a whole. TensorFlow and Keras ship a large number of pre-implemented, optimised loss functions that are easy to call up in the working environment. Nevertheless, it may be necessary to develop personalised and original loss functions to fully satisfy our need to characterise the model.
Why read this article?
In this article and the YouTube video above we will recall the basic concepts of the loss function and the cost function, then see how to create a custom loss function in TensorFlow with the Keras API by subclassing the Keras base class “Loss”. We will then build an example loss: in this case, a customised Accuracy for regression problems. I remind you to follow my Medium profile to support this work. You can find all the other articles in this series on my profile, all the other videos in the series on my YouTube channel, and all the scripts in the git repository.
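As a preview of the subclassing approach, here is a minimal sketch of a custom loss built on `tf.keras.losses.Loss`. The class name `CustomMSE` and the loss body (a plain mean squared error) are placeholders chosen for illustration, not the regression-accuracy loss developed later in the article; only the `call(y_true, y_pred)` override is the part Keras actually requires.

```python
import tensorflow as tf


class CustomMSE(tf.keras.losses.Loss):
    """Illustrative custom loss: a hand-rolled mean squared error."""

    def __init__(self, name="custom_mse"):
        super().__init__(name=name)

    def call(self, y_true, y_pred):
        # Return the per-sample loss (averaged over the last axis);
        # Keras handles the reduction over the batch.
        return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)


loss_fn = CustomMSE()
value = float(loss_fn(tf.constant([[1.0, 2.0]]), tf.constant([[1.5, 2.5]])))
# squared errors are 0.25 and 0.25, so the loss is 0.25
```

An instance like `loss_fn` can be passed directly to `model.compile(loss=loss_fn, ...)`, exactly like a built-in Keras loss.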
In mathematical optimization, statistics, machine learning and deep learning, the Loss Function (also known as the Cost Function or Error Function) is a function that maps a set of values onto a real number. Conceptually, that number represents the cost associated with an event or a set of values. In general, the goal of an optimization procedure is to minimize the loss function.
As stated on the Wikipedia page on loss functions, “good statistical practice requires the selection of an estimation function consistent with the actual variation experienced in the context of a particular application. Therefore, in practice, the selection of the statistical method to be used to model an applied problem depends on knowledge of the costs that will occur due to the problem-specific circumstances.”
For most optimisation algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used objective functions are the mean squared error and the deviance.
“However, deviance (which makes use of an absolute value) has the disadvantage of not being differentiable at a=0. A quadratic function has the disadvantage of being dominated by outliers when summing over a set of…
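The outlier-domination point can be made concrete with a small numeric check. The residuals below are hypothetical values chosen for illustration; note how a single large residual swamps the squared-error average far more than the absolute-error average.

```python
# Hypothetical residuals (prediction errors); 5.0 plays the outlier.
residuals = [0.1, -0.2, 0.1, 5.0]

# Mean squared error: the outlier contributes 25.0 of the total 25.06.
mse = sum(r ** 2 for r in residuals) / len(residuals)  # 6.265

# Mean absolute error: the outlier contributes 5.0 of the total 5.4.
mae = sum(abs(r) for r in residuals) / len(residuals)  # 1.35
```

Here the outlier accounts for roughly 99.8% of the squared-error sum but only about 93% of the absolute-error sum, which is why the quadratic loss is said to be dominated by outliers.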
Continue reading: https://towardsdatascience.com/custom-loss-function-in-tensorflow-eebcd7fed17a?source=rss—-7f60cf5620c9—4