
Akmal Chaudhri for SingleStore


Quick tip: Using Apache Spark with SingleStore Notebooks

Abstract

SingleStore has provided a cloud portal and a DBaaS offering for some time. It has also offered a Spark Connector, but Apache Spark had to be run externally. The recent addition of notebooks to the cloud portal has significantly improved Data Science capabilities, including the ability to use Apache Spark, which can now be installed in the notebook environment in a few minutes. This article will show how.

The notebook file used in this article is available on GitHub.

Create a SingleStore Cloud account

A previous article showed the steps to create a free SingleStore Cloud account. We'll use the following settings:

  • Workspace Group Name: Spark Demo Group
  • Cloud Provider: AWS
  • Region: US East 1 (N. Virginia)
  • Workspace Name: spark-demo
  • Size: S-00

Create a new notebook

From the left navigation pane in the cloud portal, we'll select DEVELOP > Data Studio.

In the top right of the web page, we'll select New Notebook > New Notebook, as shown in Figure 1.

Figure 1. New Notebook.

We'll call the notebook spark_demo, select a Blank notebook template from the available options, and save it in the Personal location.

Fill out the notebook

Install Apache Spark

We can easily install Java:

!conda install -y --quiet -c conda-forge openjdk=8

and Spark:

!pip install pyspark --quiet
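For a reproducible environment, we could instead pin PySpark to a specific release; 3.5.1 matches the version reported later in this article:

!pip install pyspark==3.5.1 --quiet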

Once the installation is complete, we can check the Java version, as follows:

!java -version

Example output:

openjdk version "1.8.0_382"
OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-linux64) (build 25.382-b05, mixed mode)

Next, let's check the version of PySpark:

import pyspark

print(pyspark.__version__)

Example output:

3.5.1

Finally, we'll check the version of Python:

import sys

print(sys.version)

Example output:

3.11.6 | packaged by conda-forge | (main, Oct  3 2023, 10:40:35) [GCC 12.3.0]

There is a useful Spark Python Supportability Matrix that shows the compatibility of Python with various Spark releases.
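As a quick sanity check against that matrix, we can print both versions side by side. A minimal sketch, reusing the modules imported in the checks above:

import sys
import pyspark

# Report the Python and PySpark versions together
print(f"Python {sys.version_info.major}.{sys.version_info.minor} with PySpark {pyspark.__version__}")

With the versions shown above, this would print Python 3.11 with PySpark 3.5.1.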

Test Apache Spark

Now, let's test the Apache Spark installation.

First, let's create a SparkSession:

from pyspark.sql import SparkSession

# Create a Spark session
spark = SparkSession.builder.appName("Spark Test").getOrCreate()
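By default, getOrCreate() starts Spark in local mode inside the notebook. If we wanted to make that explicit, the builder also accepts a master URL and configuration options. A sketch, where the configuration value is purely illustrative:

# Explicitly run in local mode, using all available cores
spark = (SparkSession.builder
         .master("local[*]")
         .appName("Spark Test")
         .config("spark.sql.shuffle.partitions", "4")  # illustrative, not required
         .getOrCreate())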

Next, let's create a DataFrame:

# Create a DataFrame
data = [("Peter", 27), ("Paul", 28), ("Mary", 29)]
df = spark.createDataFrame(data, ["Name", "Age"])
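We could also inspect the schema that Spark inferred from the Python data:

# Print the inferred schema
df.printSchema()

which should show the Name column as a string and the Age column as a long:

root
 |-- Name: string (nullable = true)
 |-- Age: long (nullable = true)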

Now we'll show the DataFrame:

# Show the content of the DataFrame
df.show()

The output should be as follows:

+-----+---+
| Name|Age|
+-----+---+
|Peter| 27|
| Paul| 28|
| Mary| 29|
+-----+---+
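As a further check that Spark can execute transformations, we could filter the same DataFrame:

# Keep only rows where Age is greater than 27
df.filter(df.Age > 27).show()

which should produce:

+----+---+
|Name|Age|
+----+---+
|Paul| 28|
|Mary| 29|
+----+---+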

Finally, we'll stop the SparkSession:

# Stop the Spark session
spark.stop()
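As an aside, SparkSession also supports Python's context-manager protocol, so a session can be stopped automatically when a block exits. A minimal sketch:

# The session is stopped automatically at the end of the with block
with SparkSession.builder.appName("Spark Test").getOrCreate() as spark:
    spark.createDataFrame([("Peter", 27)], ["Name", "Age"]).show()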

Summary

In this short article, we've seen how to install and use Apache Spark in the SingleStore notebook environment. In future articles, we'll explore Spark's capabilities more extensively and demonstrate how to integrate it with the SingleStore Data Platform for reading and writing data using a database.
