Automating a Spark Node Setup with a Shell Script

salbaroudi - March 18, 2019 - Cloud Computing / Spark / Tools

Apache Spark is a highly complex platform with many interlocking parts. Online web services such as AWS allow you to launch clusters of nodes with Spark already installed.

I chose Digital Ocean as my cloud platform for learning and running Spark. It is a low-cost option with the right balance of ease of use and custom control. I can make a simple Droplet online, connect to it via SSH, and install all the packages I need for a Spark/Data Science setup.

Digital Ocean has preconfigured Droplet images with applications already installed. As of this writing, there does not appear to be one for Data Science/Data Engineering. For the amateur (i.e.: me), it is useful to understand how to install the Spark framework from scratch. Digital Ocean also gives you the option to make your own images (called “Snapshots”) and save them. Since I am tinkering with many different areas of Data Science, it makes sense for me to generate my own prebuilt image and customize this “base” image as I move along.

However, an image file is not enough on its own – starting from a base image is not always desirable. For small, highly customized builds, it is nice to be able to start from the beginning.

When installing and building from scratch, there are many (many...) steps to follow. Along the way, I built a rudimentary shell script to automate the tasks for the next node. And so, the script was born. I have tested it on a fresh Droplet, and it actually works!

The first version of my build script can be seen below; it is split into two parts. Once the Droplet is initiated, I sftp into it and place the two scripts in the root folder. Using the admin account, I run them.

The first script sets up a user account on the Droplet and copies the root SSH keys over to the user's folder.




#Start a new droplet with both user and root keys loaded.


#this will transfer over keys properly, without dealing with permissions.

adduser user

usermod -aG sudo user

rsync --archive --chown=user:user ~/.ssh /home/user


#Now go into /root/.ssh/authorized_keys and delete the user key.

#It is ok if user has root and user key (either is fine).


I have already generated two SSH key pairs (one for the user account, one for root). Digital Ocean has an option to install them properly in your Droplet. Once the SSH keys are copied over, I manually delete the user key from root's authorized_keys file.
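That manual authorized_keys cleanup can also be scripted. A minimal sketch, assuming the user key can be recognized by its comment field (the comment "user@laptop" below is hypothetical; substitute your own):

```shell
# Sketch: remove one key from an authorized_keys file by matching on its
# comment field. "user@laptop" is a hypothetical comment; use your own.
strip_user_key() {
    keyfile="$1"
    comment="$2"
    grep -v "$comment" "$keyfile" > "${keyfile}.tmp" && mv "${keyfile}.tmp" "$keyfile"
    chmod 600 "$keyfile"
}

# On the Droplet this would be:
# strip_user_key /root/.ssh/authorized_keys user@laptop
```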

With the user setup done, it's time to install the system with the second script:



#The rest of this assumes we are using the root account.



#folder for user to do installs.

cd /home/user

apt-get update

apt-get upgrade -y

apt install default-jdk scala -y

apt install git build-essential -y

apt install zlib1g-dev libffi-dev -y

#Droplet comes with python3.6, but doesn’t have these modules.

apt install python3-pip python3-venv -y

apt install r-base -y



#install for python 3.7

apt-get install libsqlite3-dev libssl-dev -y

curl -O

tar xf Python-3.7.2.tar.xz

cd Python-3.7.2

#configure must run before the build.
./configure

make altinstall

cd ..


#create an env folder:

python3.7 -m venv /home/user/env
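(As a quick aside: the new env can be sanity-checked before moving on. This check is mine, not part of the script.)

```shell
# Sketch: confirm the env was created and actually isolates Python.
. /home/user/env/bin/activate
python --version                          # should report Python 3.7.x
python -c 'import sys; print(sys.prefix)' # should point at /home/user/env
deactivate
```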



#install spark

cd /home/user

curl -O

tar xvf spark-2.4.0-bin-hadoop2.7.tgz

#a nicer renaming.

mv ./spark-2.4.0-bin-hadoop2.7 ./spark2.4-hadoop2.7


#remove .tgz

cd /home/user

rm -f *.tgz

rm -f *.xz


#set /home/user folder so it is owned by user!

cd /home

chown -R user:user ./user
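One step the script leaves out is putting Spark on the user's PATH. A hedged sketch that appends the usual variables to the user's profile (the SPARK_HOME path follows the rename above; the rest is an assumption, not part of the script):

```shell
# Sketch: append Spark environment variables to a shell profile.
# SPARK_HOME matches the rename done earlier in the build script.
add_spark_env() {
    profile="$1"
    cat >> "$profile" <<'EOF'
export SPARK_HOME=/home/user/spark2.4-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"
EOF
}

# On the Droplet: add_spark_env /home/user/.profile
```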



Finally, I test from the console that Spark, Python 3.7, venv, R, and Java all work. The Droplet is now ready to go!
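That console check can itself be scripted; a small sketch that reports any tool missing from the PATH (the tool names assume the installs above):

```shell
# Sketch: verify each required tool is on the PATH; report what's missing.
check_tools() {
    missing=0
    for cmd in "$@"; do
        if ! command -v "$cmd" > /dev/null 2>&1; then
            echo "missing: $cmd"
            missing=1
        fi
    done
    return $missing
}

# On the Droplet: check_tools java scala R python3.7
# (spark-shell lives at /home/user/spark2.4-hadoop2.7/bin/spark-shell)
```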





