
Tag: jupyter

Getting Started with Jupyter+IntelligentGraph

Since IntelligentGraph combines Knowledge Graphs with embedded data analytics, Jupyter is an obvious choice as a data analyst's IntelligentGraph workbench.

The following are screen-captures of a Jupyter-Notebook session showing how Jupyter can be used as an IDE for IntelligentGraph to perform all of the following:

Create a new IntelligentGraph repository
Add nodes to that repository
Add calculation nodes to the same repository
Navigate through the calculated results
Query the results using SPARQL

GettingStarted is available as a JupyterNotebook here: GettingStarted JupyterNotebook

Images of the GettingStarted JupyterNotebook follow:


Using Jupyter ISparql, we can easily perform SPARQL queries over the same IntelligentGraph created above…

Spotify API and Audio Features

One gal’s journey to make a playlist her mom can dance to

Dashboard filtered by “Danceability” — just out of view in the bottom right chart is Fergalicious taking the 9th slot. An absolute crime it’s not #1.

Complete Guide to Spark and PySpark Setup for Data Science

A complete A-Z guide on how to set up Spark for Data Science, including using Spark with Scala and with Python via PySpark, as well as integration with Jupyter notebooks

Photo by Rakicevic Nenad from Pexels


Apache Spark is a unified analytics engine for big data processing, with built-in modules for streaming, SQL, machine learning, and graph processing. It is fast becoming the de facto tool for data scientists investigating big data.

Like most data scientists, I have always used Python as my go-to programming language for everything from data collection via web-scraping tools such as Scrapy and Selenium, to data wrangling with pandas, to machine learning/deep learning with all the fantastic libraries available in Python, such as PyTorch and TensorFlow.


PathQL: Intelligently finding knowledge as a path through a maze of facts

PathQL simplifies finding paths through the maze of facts within a KnowledgeGraph. Used within IntelligentGraph scripts, it allows data analysis to be embedded within the graph, rather than requiring graph data to be exported to an analysis engine. Used with IntelligentGraph Jupyter Notebooks, it provides powerful data analytics.

I would suggest that Google does not have its own intelligence. If I search for, say, ‘Arnold Schwarzenegger and Harvard’, Google will only suggest documents that contain BOTH Arnold Schwarzenegger and Harvard. I might be lucky that someone has digested these facts and produced a single web page with the knowledge I want.


5 Development Rules to Improve Your Data Science Projects

1. Abstract scripts into functions and classes

Say you are working on a Jupyter notebook figuring out how to best visualize some data. As soon as that code works and you don't think it will need much more debugging, it's time to abstract it! Let's look at an example:

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd

synthetic_data = np.random.normal(0,1,1000)

plt.plot(synthetic_data, color="green")
plt.title("Plotting Synthetic Data")
plt.xlabel("x axis")
plt.ylabel("y axis")

Here, we plotted some synthetic data. Assuming we are happy with our plot, what we want to do now is abstract this into a function and add it to the code base of our project:

def plotSyntheticDataTimeSeries(data):
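The excerpt cuts off at the function signature. One plausible completion, wrapping the plotting calls above (only the name plotSyntheticDataTimeSeries comes from the excerpt; the body and the color parameter are a sketch):

```python
import matplotlib.pyplot as plt
import numpy as np


def plotSyntheticDataTimeSeries(data, color="green"):
    """Plot a 1-D array as a simple time series with labeled axes."""
    fig, ax = plt.subplots()
    ax.plot(data, color=color)
    ax.set_title("Plotting Synthetic Data")
    ax.set_xlabel("x axis")
    ax.set_ylabel("y axis")
    return ax  # returning the Axes lets callers tweak the plot further


plotSyntheticDataTimeSeries(np.random.normal(0, 1, 1000))
```

Returning the Axes object, rather than drawing on the implicit global figure, keeps the function reusable across notebooks and scripts.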

The Only Auto-Completion Extension You’ll Ever Need For Your Jupyter Notebooks

This is the one I recommend using. Explore just a few simple steps to set it up!

One of the most loved programming interfaces in Python is the Jupyter Notebook environment, and wanting code auto-completion enabled in it feels quite natural.

I like to work in VSCode very often, and the one thing I regularly missed in my notebooks was auto-completion: for my long import statements containing the names of libraries I quite often tend to forget (:P), and for my significantly drawn-out variable names, which are quite essential in my projects!


How to Create a Docker Image with Jupyter Notebook and Kotlin

This article is written based on the following platform:

This article aims to illustrate in detail the steps to follow in order to create a custom Docker image with the following components: Jupyter Notebook and the Kotlin kernel. Once the environment is set up, we will show how to access it and how to work with it. Finally, after confirming that everything works, we will upload our image to a container image repository such as Docker Hub, so it can be easily accessed by the community.

Let’s briefly discuss the technologies and products we are going to use:

Docker is a software platform designed to make it easier to create, deploy, and run applications by using containers.


Introducing the Synthetic Data Community

pip install ydata-synthetic

How to convert a Python Jupyter notebook into an RMarkdown file

Pythonist, give RMarkdown a try and prepare to be amazed!

1. Introduction

NOTE: If you want to jump right into my code, go straight to section 3 (“From Jupyter to the RMarkdown world”).

Python was my first love when I started my journey in the programming world a couple of years ago, and it is still my favorite language. However, for the last few months, I’ve been more and more into R, due to work and academic reasons. And I must admit it: R is super fun too! The more I study both languages, the more certainty I have that the polyglot path in Data Analytics and Data Science is what I want for me.


Build a machine learning web app in Python

We are going to build a simple sentiment analysis application. This app will take a sentence as user input and return a prediction of whether the sentence is positive, negative, or neutral.

Here’s how the end product will look:

Image by author

You can use any Python IDE to build and deploy this app. I suggest using Atom, Visual Studio Code, or Eclipse. You won’t be able to use a Jupyter Notebook to build the web application. Jupyter Notebooks are built primarily for data analysis, and it isn’t feasible to run a web server with Jupyter.

Once you have a programming IDE set up, you will need the following libraries installed: pandas, NLTK, and Flask.
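The article's model isn't shown in this excerpt, but with NLTK the usual route is the VADER analyzer, whose compound score falls in [-1, 1]. A minimal sketch of the final labeling step follows; the ±0.05 cutoffs are the conventional VADER thresholds, assumed here rather than taken from the article:

```python
def label_sentiment(compound: float, threshold: float = 0.05) -> str:
    """Map a VADER-style compound score in [-1, 1] to a sentiment label.

    VADER convention treats scores >= 0.05 as positive and <= -0.05
    as negative; everything in between counts as neutral.
    """
    if compound >= threshold:
        return "positive"
    if compound <= -threshold:
        return "negative"
    return "neutral"


# In the Flask app, this would be applied to the compound score that
# NLTK's SentimentIntensityAnalyzer returns for the user's sentence.
print(label_sentiment(0.62))
```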


How to Best Use Julia with Jupyter

How to add Julia code to your Jupyter notebooks, and how to use Python and Julia simultaneously in the same notebook

Photo by Markus Spiske from Pexels

Julia is a really exciting high-level, high-performance, dynamic programming language. It has an easy-to-understand syntax and is forecast to be one of the major programming languages for data science in the coming years.

Jupyter is a great multi-language IDE, which I always use as my default environment to explore data and write preliminary code routines, before porting my code to dedicated Python scripts, which are better suited for full production purposes (for example, inside a Docker container running on AWS).


Installing Jupyter Notebook Support in Visual Studio Code

Easily open up .ipynb files by double-clicking on them

If you are a Data Scientist (or working to become one), you would be familiar with Jupyter Notebook. Jupyter Notebook provides a convenient way to combine your Python code (or other languages) together with your Markdown text into a single canvas known as a notebook. The advantage of Jupyter Notebook is that it allows you to selectively run and modify parts of your code easily, without needing to run the program in its entirety. In addition, you can embed formatted text (and figures) into your file, thereby making it easy for others to read and modify your code directly.


Creating Data Science Python Package Using Jupyter Notebook

The Gaussian distribution is important in statistics and is often used in the social sciences to represent real-valued random variables whose distributions are unknown. — Wikipedia

Figure 1 | Wikipedia


The mean of a list of numbers is the sum of all the numbers divided by the number of samples. Mean — Wikipedia

Standard Deviation

This is a measure of variation within the data. — The BMJ

Probability density function

The parameter mu is the mean, while the parameter sigma is the standard deviation; x is a value in the list.

Gaussian Class

We are going to inherit attributes and functions from the parent class Distribution and use Python's magic methods.
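The article's own class isn't shown in this excerpt; a minimal sketch, assuming the parent class Distribution and the mean, standard deviation, and density definitions above (attribute and method names are illustrative), might look like:

```python
import math


class Distribution:
    """Generic distribution holding a mean, a standard deviation, and data."""

    def __init__(self, mu=0.0, sigma=1.0):
        self.mean = mu
        self.stdev = sigma
        self.data = []


class Gaussian(Distribution):
    """Gaussian distribution; inherits its attributes from Distribution."""

    def calculate_mean(self):
        # mean = sum of the numbers divided by the number of samples
        self.mean = sum(self.data) / len(self.data)
        return self.mean

    def calculate_stdev(self, sample=True):
        # standard deviation measures variation within the data
        n = len(self.data) - 1 if sample else len(self.data)
        mean = self.calculate_mean()
        self.stdev = math.sqrt(sum((x - mean) ** 2 for x in self.data) / n)
        return self.stdev

    def pdf(self, x):
        # probability density: (1/(sigma*sqrt(2*pi))) * exp(-0.5*((x-mu)/sigma)**2)
        return (1.0 / (self.stdev * math.sqrt(2 * math.pi))) * \
            math.exp(-0.5 * ((x - self.mean) / self.stdev) ** 2)

    def __add__(self, other):
        # magic method: the sum of two independent Gaussians is Gaussian,
        # with means adding and variances adding
        result = Gaussian()
        result.mean = self.mean + other.mean
        result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
        return result
```

Here `__add__` is one example of the magic methods mentioned: it lets `Gaussian(25, 3) + Gaussian(30, 4)` return a new Gaussian with mean 55 and standard deviation 5.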


Why I’m Using VSCode for Jupyter Notebooks

VSCode is a great Python editor and, as I accidentally discovered, good for Jupyter Notebooks, too


Machine learning data pipeline outfit Splice Machine files for insolvency

California-based ML data pipeline company Splice Machine has begun insolvency proceedings, according to a statement on its website.

The startup – which counted bank Wells Fargo, retailer Kroger, and optical networking company Infinera among its customers – specialised in building a database for feature engineering which it hoped would ease machine learning data pipelines.

Based around Jupyter Notebooks, the product suite featured a native Spark data source, an “enhanced” version of MLflow, and its own relational database. It said its technology would improve the efficiency of machine learning lifecycles.

The outfit’s HQ is a few blocks away from San Francisco’s Oakland Bay Bridge

But a notice on its website dated 28 July said that Splice Machine had entered insolvency proceedings under California state law.


Jupyter Notebook vs PyCharm

  1. Introduction
  2. Jupyter Notebook
  3. PyCharm
  4. Summary
  5. References

As a data scientist still learning in an educational setting, you might use one main tool, while you may focus on another, different one as a professional data scientist. Of course, using multiple tools or platforms is beneficial, but there is a time and place for specific ones. Two beneficial and important tools that many data scientists use are Jupyter Notebook and PyCharm. Each has its own respective functions, but the end goal can be surprisingly similar, which is to organize and execute code for data science processes (referring to just data science for the sake of this article).


ColabCode: Deploying Machine Learning Models From Google Colab

By Kaustubh Gupta, Python Developer

Photo by Niclas Illg on Unsplash

Google Colab is the handiest online IDE for Python and Data Science enthusiasts. Released to the public in 2017, it was initially an internal project used by the Google research team to collaborate on different AI projects. Since then, it has gained a lot of popularity thanks to its easy-to-use interface, its similarity to Jupyter notebooks, and its GPU support.

Most of the popular machine learning libraries, such as numpy, pandas, seaborn, matplotlib, sklearn, and TensorFlow, come pre-installed in this cloud environment, so you don't need to install any prerequisites explicitly.