
Tag: gradientdescent

Why Gradient Descent Works?

Everybody knows what gradient descent is and how it works. But have you ever wondered why it works? Here’s a mathematical explanation.

What is gradient descent? Gradient descent is an iterative optimization algorithm used to optimize the weights of a machine learning model (linear regression, neural networks, etc.) by minimizing the cost function of that model. The intuition behind gradient descent is this: picture the cost function (denoted by f(Θ̅)) where…
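As a concrete illustration of the idea (a minimal sketch, not the article’s own code; the quadratic cost, learning rate, and step count are arbitrary choices for the example):

    # Minimal gradient descent on f(theta) = theta**2, whose gradient is 2*theta.
    def gradient_descent(grad, theta0, learning_rate=0.1, n_steps=50):
        theta = theta0
        for _ in range(n_steps):
            theta = theta - learning_rate * grad(theta)  # step against the gradient
        return theta

    # The minimum of f(theta) = theta**2 is at theta = 0.
    theta_min = gradient_descent(grad=lambda t: 2 * t, theta0=5.0)
    print(theta_min)  # prints a value close to 0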

Why does gradient descent work?

You might understand it, but have you seen it?

I) Why bother

First, let’s get what might be the elephant in the room for some out of the way: why should you read this blog? First, because it has awesome animations like figure 1 below. Don’t you want to know what’s going on in this picture?

Figure 1: A plane with its gradient. Created by the author using https://github.com/ryu577/pyray

And second, because optimization is really, really important. I don’t care who you are, that should be…
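To make figure 1 concrete, a small hypothetical example (the plane’s coefficients are invented): for a plane f(x, y) = a·x + b·y + c, the gradient is the constant vector (a, b), which is why every arrow in such a picture points the same way.

    # Gradient of the plane f(x, y) = 2x + 3y + 1 via central differences.
    def f(x, y):
        return 2.0 * x + 3.0 * y + 1.0

    def numerical_grad(f, x, y, h=1e-5):
        dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
        dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
        return dfdx, dfdy

    for point in [(0.0, 0.0), (1.0, -2.0), (5.0, 5.0)]:
        print(numerical_grad(f, *point))  # approximately (2.0, 3.0) everywhere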

Mathematics Hidden Behind Linear Regression

Exploring statistics using Calculus

Read more...

Complete Step-by-Step Gradient Descent Algorithm from Scratch

If you’ve been studying machine learning long enough, you’ve probably heard terms such as SGD or Adam. They are two of many optimization algorithms. Optimization algorithms are the heart of machine learning: they are responsible for the intricate work by which machine learning models learn from data. It turns out that optimization has been around for a long time, even outside of the machine learning realm.

People optimize. Investors seek to create portfolios that avoid excessive risk while achieving a high rate of return. Manufacturers aim for maximum efficiency in the design and operation of their production processes. Engineers adjust parameters to optimize the performance of their designs.

This opening paragraph of Numerical Optimization by Jorge Nocedal and Stephen Wright already explains a lot.

Read more...
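To give a flavor of what an optimizer like the SGD named above does, here is a hedged sketch (the synthetic data, single-weight model, and hyperparameters are all invented for illustration): one example is sampled at a time and the weight is nudged against that example’s gradient.

    import random

    # Synthetic data from y = 3x + noise (hypothetical example).
    random.seed(0)
    data = [(x / 10, 3 * x / 10 + random.gauss(0, 0.1)) for x in range(100)]

    w = 0.0  # a single weight, no bias, to keep the sketch minimal
    learning_rate = 0.05
    for epoch in range(20):
        random.shuffle(data)
        for x, y in data:
            error = w * x - y          # prediction error on one sample
            grad = 2 * error * x       # gradient of the squared error w.r.t. w
            w -= learning_rate * grad  # stochastic gradient descent update
    print(w)  # converges to a value close to 3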

Response Optimization with Design of Experiments and python

Find the experimental optimum with response surface methods (RSM) and python.

In the previous article, a method for analyzing a simple 2-level DOE was presented, and the corresponding analysis of main effects and interactions was discussed. An important point when running a DOE, however, is the ability to look for the maximum response of a system.

In this article we will employ some very basic tools available with python to address this point: given the results of a full factorial DOE with 2 levels, how to plan and execute the next runs in order to reach a maximum. If you are familiar with machine learning techniques, you will find that we are implementing a very naive gradient descent approach, sketched below. The difference is that we don’t want to minimize a cost function but rather maximize the outcome.

Read more...
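A hedged sketch of that idea (the response function, step size, and factor ranges below are invented stand-ins, not the article’s example): fit a first-order model to the 2-level factorial runs around the current operating point, then move along the estimated gradient, ascending rather than descending because we want a maximum.

    import itertools, math

    # Stand-in response surface; in a real DOE each call is a physical experiment.
    def response(x1, x2):
        return 10 - (x1 - 3) ** 2 - (x2 - 2) ** 2  # true maximum at (3, 2)

    center = [0.0, 0.0]
    step = 0.5  # distance moved along the path of steepest ascent each round
    for _ in range(10):
        # 2-level full factorial around the current center, coded levels +/- 1.
        design = list(itertools.product((-1.0, 1.0), repeat=2))
        ys = [response(center[0] + d1, center[1] + d2) for d1, d2 in design]
        # First-order coefficients (half the main effects) estimate the gradient.
        b1 = sum(d1 * y for (d1, _), y in zip(design, ys)) / len(design)
        b2 = sum(d2 * y for (_, d2), y in zip(design, ys)) / len(design)
        norm = math.hypot(b1, b2)
        if norm < 1e-6:
            break  # flat gradient: we are at (or very near) the optimum
        center[0] += step * b1 / norm  # ascend rather than descend
        center[1] += step * b2 / norm
    print(center)  # approaches (3, 2)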

Dealing with Large Datasets: the Present Conundrum

An introduction to Stochastic Gradient Descent and Data Parallelism.

Read more...
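Since the teaser above only names the topic, here is a toy illustration of data parallelism combined with SGD (the model, data, and worker count are invented; real systems rely on frameworks such as PyTorch’s DistributedDataParallel): each simulated worker computes a gradient on its shard of the minibatch, and the shard gradients are averaged before a single shared update.

    import random

    random.seed(1)
    # Synthetic data from y = 2x (hypothetical example).
    data = [(x, 2 * x) for x in [random.uniform(-1, 1) for _ in range(512)]]

    def shard_gradient(w, shard):
        # Mean gradient of the squared error over one worker's shard.
        return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

    w, lr, n_workers = 0.0, 0.1, 4
    for _ in range(100):
        batch = random.sample(data, 64)
        shards = [batch[i::n_workers] for i in range(n_workers)]  # split the batch
        grads = [shard_gradient(w, s) for s in shards]            # one gradient per worker
        w -= lr * sum(grads) / n_workers                          # average, then update
    print(w)  # converges to a value close to 2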