Isomap is an **Unsupervised Machine Learning** technique aimed at **Dimensionality Reduction**.

It differs from a few other techniques in the same category by using a **non-linear** approach to dimensionality reduction instead of the linear mappings used by algorithms such as PCA. We will see how linear and non-linear approaches differ in the next section.

## How does Isometric Mapping (Isomap) work?

Isomap is a technique that combines several different algorithms, enabling it to use a non-linear way to reduce dimensions while preserving local structures.

Before we look at the example of Isomap and compare it to the linear method of Principal Component Analysis (PCA), let’s list the high-level steps that Isomap performs; a short code sketch of the full pipeline follows the list:

- Use a KNN approach to **find the k nearest neighbors** of every data point. Here, “k” is an arbitrary number of neighbors that you can specify within the model hyperparameters.
- Once the neighbors are found, **construct the neighborhood graph**, where points are connected to each other if they are each other’s neighbors. Data points that are not neighbors remain unconnected.
- **Compute the shortest path** between each pair of data points (nodes). Typically, either the Floyd-Warshall or Dijkstra’s algorithm is used for this task. Note that this step is also commonly described as finding the **geodesic distance** between points.
- **Use multidimensional scaling (MDS) to compute the lower-dimensional embedding.** Given that the distances between each pair of points are known, MDS places each object into an N-dimensional space (N is specified as a hyperparameter) such that the between-point distances are preserved as well as possible.
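To make these steps concrete, here is a minimal sketch that chains existing scikit-learn and SciPy building blocks. It is an illustration of the pipeline rather than the article’s code: the dataset, `n_neighbors=10`, and the use of scikit-learn’s SMACOF-based `MDS` (instead of the classical MDS inside scikit-learn’s own `Isomap`) are my assumptions.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

# Toy data for illustration; any (n_samples, n_features) array works here.
X, _ = make_swiss_roll(n_samples=500, random_state=42)

# Steps 1-2: k nearest neighbors and the neighborhood graph ("k" is a hyperparameter).
graph = kneighbors_graph(X, n_neighbors=10, mode="distance")

# Step 3: geodesic distances = shortest paths through the graph (Dijkstra here).
# Assumes the graph is connected; increase n_neighbors if you get infinities.
geodesic = shortest_path(graph, method="D", directed=False)

# Step 4: metric MDS embeds the points in 2D while preserving the geodesic distances.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=42)
embedding = mds.fit_transform(geodesic)
```

In practice you would call `sklearn.manifold.Isomap` directly, as shown further below; the point of spelling the pipeline out is to show that each bullet above maps to one standard building block.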

For our example, let’s create a 3D object known as a Swiss roll. The object is made up of 2,000 individual data points. The chart is **interactive**, so make sure to rotate it to familiarize yourself with its exact shape.
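A Swiss roll like the one described can be generated in a few lines. The random seed and the Plotly-based rendering are assumptions on my part; any 3D plotting library would do.

```python
from sklearn.datasets import make_swiss_roll
import plotly.express as px

# 2,000 points sampled from the Swiss roll; `t` is the position along the roll,
# used here purely for coloring.
X, t = make_swiss_roll(n_samples=2000, random_state=42)

fig = px.scatter_3d(x=X[:, 0], y=X[:, 1], z=X[:, 2], color=t, opacity=0.7)
fig.update_traces(marker_size=3)
fig.show()
```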

Next, we want to map this 3-dimensional Swiss roll to 2 dimensions using Isomap. To track what happens during this transformation, let’s select two points: A and B.
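The transformation itself is one call to `sklearn.manifold.Isomap`. A minimal sketch, assuming the same generated Swiss roll as above and two arbitrary indices standing in for the article’s points A and B:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=2000, random_state=42)

# Reduce the 3D Swiss roll to 2 dimensions; n_neighbors is the "k" from the steps above.
isomap = Isomap(n_neighbors=10, n_components=2)
X_2d = isomap.fit_transform(X)

# Hypothetical indices standing in for points A and B.
idx_a, idx_b = 0, 100
print("A in 2D:", X_2d[idx_a])
print("B in 2D:", X_2d[idx_b])
```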

We can see that these two points are relatively close to each other within the 3D space. If we used a linear dimensionality reduction approach such as PCA, the Euclidean distance between these two points would remain roughly the same in the lower-dimensional space. See the PCA transformation chart below:
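The same contrast can be checked numerically. A minimal sketch, again assuming the generated Swiss roll and the hypothetical indices for A and B, comparing how PCA and Isomap treat the straight-line distance between them:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=2000, random_state=42)
idx_a, idx_b = 0, 100  # hypothetical stand-ins for points A and B

# Straight-line (Euclidean) distance in the original 3D space.
dist_3d = np.linalg.norm(X[idx_a] - X[idx_b])

# PCA: a linear projection onto the first two principal components.
X_pca = PCA(n_components=2).fit_transform(X)
dist_pca = np.linalg.norm(X_pca[idx_a] - X_pca[idx_b])

# Isomap: a non-linear embedding that preserves geodesic, not Euclidean, distances.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
dist_iso = np.linalg.norm(X_iso[idx_a] - X_iso[idx_b])

print(f"3D:     {dist_3d:.2f}")
print(f"PCA 2D: {dist_pca:.2f}")
print(f"Isomap: {dist_iso:.2f}")
```

For points that sit on opposite layers of the roll, the PCA distance stays close to the 3D value, while the Isomap distance can grow substantially, reflecting the true path along the manifold.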

Continue reading: https://towardsdatascience.com/isomap-embedding-an-awesome-approach-to-non-linear-dimensionality-reduction-fc7efbca47a0?source=rss—-7f60cf5620c9—4

Source: towardsdatascience.com
