In that exercise, only C computed the Fibonacci benchmark faster than Julia. Such raw speed should, in principle, give Julia an edge in machine learning, making iterations quicker and saving engineers time.

But is it really true? This article digs deeper into Julia and Python’s performance and compares the training time of an MNIST hand-written digit model in both languages.

To compare Julia and Python’s performance, I run an experiment with the following set-up:

  • classification of hand-written digits (0–9) using MNIST data
  • a standard, well-established programming solution in each language
  • a ceteris paribus approach, with maximum effort to make both solutions comparable
  • comparing model training time only.

I run both scripts in JupyterLab on a notebook with an Intel(R) Core(TM) i7-5500U CPU (2.40 GHz) and 16 GB RAM.

I use the classic MNIST dataset of hand-written digits, which appears in many ML tutorials. It contains a pre-processed training set of 60,000 examples and a test set of 10,000 examples; the digits have been size-normalized and centered in a fixed-size image. TensorFlow ships MNIST as part of TensorFlow Datasets, and Julia provides the same pre-processed data in MLDatasets.

Flux is one of the most popular machine-learning libraries in Julia. I start by importing the necessary Flux modules, the Statistics standard library, and MLDatasets:
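A minimal sketch of those imports; the exact module paths vary across Flux versions (for example, DataLoader moved from Flux.Data into Flux itself), so treat this as one plausible layout rather than the article’s exact code:

```julia
using Flux                                    # the ML library itself
using Flux: onehotbatch, onecold, crossentropy
using Flux: DataLoader                        # mini-batch iterator
using Statistics                              # mean, etc., for evaluation
using MLDatasets                              # provides the MNIST data
```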

The data is reshaped, encoded, flattened, and, finally, loaded into the training set:
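The pipeline above can be sketched as follows. The MLDatasets accessor shown here is the newer `MNIST(split=:train)` form (older releases use `MNIST.traindata()`), and the batch size is my assumption:

```julia
# Load the pre-processed training images and labels
train_x, train_y = MLDatasets.MNIST(split=:train)[:]

# Flatten each 28×28 image into a 784-element column vector
x_train = Flux.flatten(train_x)               # size 784 × 60000

# One-hot encode the labels 0–9
y_train = Flux.onehotbatch(train_y, 0:9)      # size 10 × 60000

# Bundle features and labels into shuffled mini-batches
train_loader = DataLoader((x_train, y_train); batchsize=128, shuffle=true)
```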

Next, I create a model-building function with two dense layers and standard activation functions, and instantiate it:
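A sketch of such a model; the hidden-layer width (32) and the specific activations (relu, softmax) are my assumptions about what “standard” means here:

```julia
# Two dense layers: 784 inputs → 32 hidden units → 10 digit classes
build_model() = Chain(
    Dense(784, 32, relu),   # hidden layer with ReLU activation
    Dense(32, 10),          # output layer, one unit per digit
    softmax,                # convert scores to class probabilities
)

model = build_model()
```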

Next, I define a loss function, a learning rate, the ADAM optimizer, and a ps object containing the parameters obtained from the model:
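One way this step could look; the learning-rate value is an assumed default, and this uses Flux’s implicit-parameters API (`Flux.params`), which matches Flux versions contemporary with this comparison:

```julia
# Cross-entropy between model predictions and one-hot labels
loss(x, y) = crossentropy(model(x), y)

η = 0.001                 # learning rate (assumed value)
opt = ADAM(η)             # ADAM optimizer with that learning rate

ps = Flux.params(model)   # parameters tracked for gradient updates
```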

The model is then trained for 10 epochs. @time is a useful macro for measuring training performance:
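A sketch of the timed training loop, assuming the loss, parameters, data loader, and optimizer defined above; `Flux.train!` with this four-argument signature is the classic implicit-parameters entry point:

```julia
epochs = 10

# @time reports elapsed time and allocations for the whole loop
@time for epoch in 1:epochs
    Flux.train!(loss, ps, train_loader, opt)
end
```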
