# Tag: LossFunction

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

## Practical definition in ML

Given a dataset, we want to predict numeric (continuous) values. One or several variables of the dataset predict (are correlated with) a numerical outcome, which is usually another column in the data.
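As a minimal sketch of this setup (the toy data and coefficients below are made up for illustration), ordinary least squares recovers the linear relationship between one explanatory variable and the numerical outcome:

```python
import numpy as np

# Toy data: the outcome y depends linearly on x, plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, size=50)

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]

# w and b should be close to the true slope 3.0 and intercept 2.0.
```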

## Training an RNN with a Combined Loss Function
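The article's exact recipe isn't reproduced here; as a generic, framework-free sketch, a combined loss is commonly a weighted sum of individual objectives (the two components and their weights below are illustrative assumptions, not taken from the article):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error component.
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean absolute error component.
    return np.mean(np.abs(y_true - y_pred))

def combined_loss(y_true, y_pred, w1=0.7, w2=0.3):
    # Weighted sum of the two objectives; w1 and w2 are illustrative
    # and would normally be tuned for the task at hand.
    return w1 * mse(y_true, y_pred) + w2 * mae(y_true, y_pred)
```

The same pattern scales to any pair of differentiable losses, since the gradient of the sum is the weighted sum of the gradients.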

## Customise your algorithm by creating the function to be optimised

In this article and the YouTube video above, we will recall the basic concepts of the loss function and the cost function; we will then see how to create a custom loss function in TensorFlow with the Keras API by subclassing the Keras base class `Loss`.
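A minimal sketch of the subclassing pattern described above, using a Huber-style loss as the custom objective (the choice of loss and the threshold value are illustrative, not taken from the article):

```python
import tensorflow as tf

class HuberLoss(tf.keras.losses.Loss):
    """Custom loss created by subclassing the Keras base class Loss."""

    def __init__(self, threshold=1.0, name="huber_loss"):
        super().__init__(name=name)
        self.threshold = threshold

    def call(self, y_true, y_pred):
        # Quadratic for small errors, linear for large ones.
        error = y_true - y_pred
        is_small = tf.abs(error) <= self.threshold
        squared = 0.5 * tf.square(error)
        linear = self.threshold * (tf.abs(error) - 0.5 * self.threshold)
        return tf.where(is_small, squared, linear)
```

An instance can then be passed directly to `model.compile(loss=HuberLoss())`; Keras applies its default reduction (mean over the batch) to the per-element values returned by `call`.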

The triplet loss has a couple of disadvantages that should be considered.

First, it requires careful selection of the anchor, positive, and negative images. The negative can't be too different from the anchor; if it is, the network will satisfy the loss function easily without learning anything. The anchor and the negative images should be similar but must not belong to the same class.

Second, it's computationally expensive. Lastly, the triplet loss requires a hyperparameter alpha, the margin, which can lead to worse results when not chosen carefully.
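Putting the pieces above together, here is a minimal NumPy sketch of the triplet loss with its margin alpha (the inputs are assumed to be embedding vectors; names and the default margin are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # Squared Euclidean distances anchor-positive and anchor-negative.
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    # Hinge on the margin alpha: the loss is zero once the negative is
    # at least alpha farther from the anchor than the positive is.
    return max(pos_dist - neg_dist + alpha, 0.0)
```

The hinge makes the failure mode above visible: with an easy negative, `neg_dist` dwarfs `pos_dist + alpha` and the loss is zero, so no gradient flows and the network learns nothing from that triplet.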

There are many alternatives to the triplet loss; one of them is the ArcFace loss. It is based on the cross-entropy loss and aims to maximize the decision boundary between classes, thus grouping similar data points closer together.
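As a rough sketch of the idea (a simplified ArcFace-style margin, not the full published recipe): embeddings and class weights are L2-normalised so the logits become cosines, an additive angular margin is applied to the true-class angle, and the result is rescaled before a standard cross-entropy. The margin and scale values below are illustrative:

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, margin=0.5, scale=64.0):
    # L2-normalise embeddings (rows) and class weights (columns)
    # so each logit is a cosine similarity to a class centre.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = emb @ w
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # Add the angular margin only to each sample's true-class angle,
    # which shrinks the true-class logit and widens the decision boundary.
    theta[np.arange(len(labels)), labels] += margin
    return scale * np.cos(theta)
```

These logits are then fed to an ordinary softmax cross-entropy, which is what ties ArcFace back to the cross-entropy loss mentioned above.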