Tag: LossFunction

Linear Regression

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

Practical definition in ML

Given a dataset, we want to predict a range of numeric (continuous) values. One or several variables of the dataset predict (are correlated with) a numerical outcome, which is usually another column in the data.
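The idea above can be made concrete with a minimal sketch: ordinary least squares on one explanatory variable, fit via NumPy. The data here is made up for illustration (a line with slope 2 and intercept 1 plus a little noise); the fitted coefficients should recover those values approximately.

```python
import numpy as np

# Hypothetical toy data: one explanatory variable x, one numeric outcome y.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.shape)

# Ordinary least squares: solve min ||X beta - y||^2.
# Prepend a column of ones so the model has an intercept term.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta
```

With more than one explanatory variable, the only change is that `X` gains extra columns; the same `lstsq` call then performs multiple linear regression.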

Custom Loss Function in TensorFlow

Customise your algorithm by creating the function to be optimised

Why read this article?

In this article (and the YouTube video above) we will recall the basic concepts of the loss function and the cost function, and then see how to create a custom loss function in TensorFlow with the Keras API, by subclassing the Keras base class “Loss”.
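To fix ideas before the Keras version, here is a framework-free sketch of what a custom loss is: a function of the true and predicted values that you choose to optimise. The example implements the Huber loss (a standard custom choice) in plain NumPy; in Keras the same logic would live in the `call(self, y_true, y_pred)` method of a `tf.keras.losses.Loss` subclass.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones.

    Less sensitive to outliers than mean squared error, while staying
    smooth near zero, which keeps gradients well-behaved.
    """
    err = y_true - y_pred
    small = np.abs(err) <= delta
    squared = 0.5 * err ** 2                       # MSE-like region
    linear = delta * (np.abs(err) - 0.5 * delta)   # MAE-like region
    return float(np.mean(np.where(small, squared, linear)))
```

For example, a residual of 0.5 falls in the quadratic region (loss 0.125), while a residual of 3.0 falls in the linear region (loss 2.5 with `delta=1.0`).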


Novel Approaches to Similarity Learning

The triplet loss has a couple of disadvantages that should be considered.

First, it requires a careful selection of the anchor, positive, and negative images. The difference between the negative and the anchor images can’t be too large; if it is, the network satisfies the loss function easily without learning anything. The anchor and the negative images must be similar but shouldn’t belong to the same class.

Second, it’s computationally expensive. Lastly, the triplet loss requires a hyperparameter, alpha (the margin), which can lead to worse results when not chosen carefully.
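The role of the margin alpha is easiest to see in code. Below is a minimal NumPy sketch of the triplet loss on precomputed embeddings: the hinge only reaches zero once the negative is at least `alpha` farther from the anchor than the positive is.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss over batches of embedding vectors (shape [batch, dim])."""
    # Squared Euclidean distances anchor-positive and anchor-negative.
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    # Hinge: penalise unless the negative is at least `alpha` farther away.
    return float(np.mean(np.maximum(d_ap - d_an + alpha, 0.0)))
```

With an easy triplet (negative already far away) the loss is zero and nothing is learned, which is why triplet selection matters so much in practice.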

There are many alternatives to the triplet loss; one of them is the ArcFace loss. This loss builds on cross-entropy and aims to maximize the decision boundary between classes, thus grouping similar data points closer together.


The Loss Function of Intelligence

Simulating artificial general intelligence has turned out to be a harder problem than previously thought []: progress in the field of machine learning has so far proven insufficient to meet this challenge. This article suggests a way in which ‘intelligence’ can be simulated, arguing that an evolutionary approach is at least one option among possibly several others.

What we as humans define as intelligence is hard to put into words. If one asked around to see how people define the term, one would end up with varying answers, as is the case for probably all concepts. Still, ‘intelligence’ is a relatively broad concept compared to most others.

Without agreeing on one definition, one cannot easily simulate intelligence artificially in a way that all spectators would accept as such: we each attribute a different set of facets of human behaviour to ‘intelligence’, although those sets might be similar [].