Deploy Alluxio on Docker
Docker can be used to simplify the deployment and management of Alluxio servers. This document provides a tutorial for running Dockerized Alluxio on a single node with local disk as the under storage. We’ll also discuss more advanced topics and how to troubleshoot.
- Prerequisites
- Load the Docker Image
- Prepare Docker Volume to Persist Data
- Launch Alluxio Containers for Master and Worker
- Verify the Cluster
- Advanced Setup
- Performance Optimization
- Troubleshooting
- FAQ
Prerequisites
- A machine with Docker installed.
- Ports 19998, 19999, 29998, 29999, and 30000 available
If you don’t have access to a machine with Docker installed, you can provision a small AWS EC2 instance (e.g. t2.small) to follow along with the tutorial. When provisioning the instance, set the security group so that the following ports are open to your IP address and the CIDR range of the Alluxio clients (e.g. remote Spark clusters):
- 19998 for the CIDR range of your Alluxio servers and clients: Allow the clients and workers to communicate with Alluxio Master RPC processes.
- 19999 for the IP address of your browser: Allow you to access the Alluxio master web UI.
- 29999 for the CIDR range of your Alluxio servers and clients: Allow the clients to communicate with Alluxio Worker RPC processes.
- 30000 for the IP address of your browser: Allow you to access the Alluxio worker web UI.
To set up Docker after provisioning the instance, which will be referred to as the Docker Host, run
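# Install Docker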
$ sudo yum install -y docker
# Create docker group
$ sudo groupadd docker
# Add the current user to the docker group
$ sudo usermod -a -G docker $(id -u -n)
# Start docker service
$ sudo service docker start
# Log out and log back in again to pick up the group changes
$ exit
Load the Docker Image
Load the Docker image from the provided tar file:
$ docker load --input alluxio-enterprise-2.10.0-3.4-docker.tar
Prepare Docker Volume to Persist Data
By default, all files created inside a container are stored on a writable container layer. The data doesn’t persist when that container no longer exists. Docker volumes are the preferred way to save data outside the containers. The following two types of Docker volumes are used the most:
- Host Volume: You manage where in the Docker host's file system to store and share the containers' data. To create a host volume, include the following when launching your containers:
$ docker run -v /path/on/host:/path/in/container ...
The file or directory is referenced by its full path on the Docker host. It can exist on the Docker host already, or it will be created automatically if it does not yet exist.
- Named Volume: Docker manages where the volume is located, and it is referred to by name. To create a named volume, first run:
$ docker volume create volumeName
Then include the following when launching your containers:
$ docker run -v volumeName:/path/in/container ...
Either a host volume or a named volume can be used for Alluxio containers. For testing purposes, the host volume is recommended, since it is the easiest type of volume to use and very performant. More importantly, you know where to find the data in the host file system, and you can manipulate the files directly and easily outside the containers.
For example, we will use the host volume and mount the host directory /tmp/alluxio_ufs to the container location /opt/alluxio/underFSStorage, which is the default setting for the Alluxio UFS root mount point in the Alluxio Docker image:
$ mkdir -p /tmp/alluxio_ufs
$ docker run -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage ...
Of course, you can choose to mount a different path instead of /tmp/alluxio_ufs.
From version 2.1 on, the Alluxio Docker image runs as user alluxio by default, with UID 1000 and GID 1000. Please make sure the host volume is writable by the user the Docker image is run as.
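For example, on the Docker host you could make the UFS directory writable for that UID and GID before launching any containers (a minimal sketch; adjust the path if you mount a different one):
# Make the host volume writable for UID/GID 1000, the default user of the Alluxio image
$ mkdir -p /tmp/alluxio_ufs
$ sudo chown -R 1000:1000 /tmp/alluxio_ufs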
Launch Alluxio Containers for Master and Worker
The Alluxio clients (local or remote) need to communicate with both Alluxio master and workers. Therefore, it is important to make sure clients can reach both of the following services:
- Master RPC on port 19998
- Worker RPC on port 29999
Within the Alluxio cluster, please also make sure the master and worker containers can reach each other on the ports defined in General requirements.
We are going to launch Alluxio master and worker containers on the same Docker host machine. In order to make sure this works for either local or remote clients, we have to set up the Docker network and expose the required ports correctly.
There are two ways to launch Alluxio Docker containers on the Docker host:
- Option1: Use host network or
- Option2: Use user-defined bridge network
The host network shares the IP address and networking namespace between the container and the Docker host. A user-defined bridge network allows connected containers to communicate with each other, while providing isolation from containers not connected to that bridge network. It is recommended to use the host network, option 1, for testing.
Note: Pass the Alluxio license using the ALLUXIO_LICENSE_BASE64
environment variable.
No matter which option you are using, please include the following line when you start the Alluxio master containers:
-e ALLUXIO_LICENSE_BASE64="$(cat /path/to/license.json | base64)" \
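For reference, a minimal sketch of option 1 (host network) is shown below. The property names alluxio.master.hostname and alluxio.worker.ramdisk.size, the 1G sizes, and the use of localhost are illustrative assumptions, not values mandated by this guide:
# Launch the Alluxio master, persisting UFS data to the host volume
$ docker run -d --net=host \
    --name=alluxio-master \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_LICENSE_BASE64="$(cat /path/to/license.json | base64)" \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.master.hostname=localhost" \
    alluxio/alluxio-enterprise master

# Launch the Alluxio worker on the same host, pointing it at the master
$ docker run -d --net=host \
    --name=alluxio-worker \
    --shm-size=1G \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.worker.ramdisk.size=1G -Dalluxio.master.hostname=localhost" \
    alluxio/alluxio-enterprise worker
If remote clients need to reach the cluster, set alluxio.master.hostname to the Docker host's externally reachable hostname or IP instead of localhost.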
Verify the Cluster
To verify that the services came up, check docker ps. You should see something like:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fef7c714d25 alluxio/alluxio-enterprise "/entrypoint.sh work…" 39 seconds ago Up 38 seconds alluxio-worker
27f92f702ac2 alluxio/alluxio-enterprise "/entrypoint.sh mast…" 44 seconds ago Up 43 seconds 0.0.0.0:19999->19999/tcp alluxio-master
If you don't see the containers, run docker logs on their container IDs to see what happened. The container IDs were printed by the docker run command, and can also be found in docker ps -a.
Visit instance-hostname:19999 to view the Alluxio web UI. You should see one worker connected and providing 1024MB of space.
To run tests, enter the worker container
$ docker exec -it alluxio-worker /bin/bash
Run the tests
$ cd /opt/alluxio
$ ./bin/alluxio runTests
To test remote client access, for example from a Spark cluster (Python 3):
textFile_alluxio_path = "alluxio://{docker_host-ip}:19998/path_to_the_file"
textFile_RDD = sc.textFile(textFile_alluxio_path)
for line in textFile_RDD.collect():
    print(line)
Congratulations, you've deployed a basic Dockerized Alluxio cluster! Read on to learn more about how to manage the cluster and make it production-ready.
Advanced Setup
Launch Alluxio with Java 11
Starting from v2.9.0, Alluxio processes can be launched with Java 11 inside Docker containers by pulling the alluxio/alluxio-jdk11 image from Dockerhub.
To use the Java 11 image, replace alluxio/alluxio with alluxio/alluxio-jdk11 in the command that launches the Alluxio Docker container.
Launch Alluxio with the development image
Starting from v2.6.2, a new Docker image, alluxio-dev, is available on Dockerhub for development usage. Unlike the default alluxio/alluxio image, which only contains the packages needed for the Alluxio service to run, the alluxio-dev image installs more development tools, including gcc, make, async-profiler, etc., making it easier for users to deploy more services in the container along with Alluxio.
To use the development image, simply replace alluxio/alluxio with alluxio/alluxio-dev in the container launching process.
Set server configuration
Configuration changes require stopping the Alluxio Docker images, then re-launching them with the new configuration.
To set an Alluxio configuration property, add it to the Alluxio Java options environment variable with
-e ALLUXIO_JAVA_OPTS="-Dalluxio.property.name=value"
Multiple properties should be space-separated.
If a property value contains spaces, you must escape it using single quotes.
-e ALLUXIO_JAVA_OPTS="-Dalluxio.property1=value1 -Dalluxio.property2='value2 with spaces'"
Alluxio environment variables will be copied to conf/alluxio-env.sh when the image starts. If you are not seeing a property take effect, make sure the property in conf/alluxio-env.sh within the container is spelled correctly. You can check the contents with
$ docker exec ${container_id} cat /opt/alluxio/conf/alluxio-env.sh
Run in High-Availability Mode
A lone Alluxio master is a single point of failure. To guard against this, a production cluster should run multiple Alluxio masters in High Availability mode.
There are two ways to enable HA mode in Alluxio: either with internal leader election and an embedded journal, or with external Zookeeper and shared journal storage. Please read running Alluxio with HA for more details. It is recommended to use the second option for production use cases.
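As a rough sketch of the second option, the relevant properties could be passed to every master and worker container via ALLUXIO_JAVA_OPTS. The ZooKeeper quorum and the shared journal URI below are placeholders, not values from this guide:
# Hypothetical values: substitute your own ZooKeeper quorum and shared journal location
-e ALLUXIO_JAVA_OPTS=" \
   -Dalluxio.zookeeper.enabled=true \
   -Dalluxio.zookeeper.address=zk1:2181,zk2:2181,zk3:2181 \
   -Dalluxio.master.journal.type=UFS \
   -Dalluxio.master.journal.folder=hdfs://namenode:9000/alluxio/journal" \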
Relaunch Alluxio Servers
When relaunching Alluxio master docker containers, use the --no-format flag to avoid re-formatting the journal. The journal should only be formatted the first time the image is run. Formatting the journal deletes all Alluxio metadata and starts the cluster in a fresh state. You can find more details about the Alluxio journal here.
The same applies to Alluxio worker docker containers: use the --no-format flag to avoid re-formatting the worker storage. Formatting the worker storage deletes all the cached blocks. You can find more details about the worker storage here.
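For example, reusing the master launch sketch from above, the flag is appended after the process name. Whether your image's entrypoint accepts additional arguments in exactly this form may vary, so treat this as a sketch:
# Relaunch the master without re-formatting the journal
$ docker run -d --net=host \
    --name=alluxio-master \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_LICENSE_BASE64="$(cat /path/to/license.json | base64)" \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.master.hostname=localhost" \
    alluxio/alluxio-enterprise master --no-format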
Enable POSIX API access
Alluxio POSIX access is implemented via FUSE. There are two options to enable POSIX accesses to Alluxio in a docker environment. POSIX API docs provides more details about how to configure Alluxio POSIX API.
- Option1: Run a standalone Alluxio FUSE container, or
- Option2: Enable FUSE support when running a worker container.
First make sure a directory with the right permissions exists on the host to bind-mount in the Alluxio FUSE container:
$ mkdir -p /tmp/mnt && sudo chmod -R a+rwx /tmp/mnt
See Fuse configuration and Fuse mount options for more details about how to modify the Fuse mount configuration.
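With that directory in place, option 1 (a standalone FUSE container) could look roughly like the sketch below. The fuse process name passed to the entrypoint, the rshared mount propagation, and the capability/device flags are assumptions about a typical FUSE-in-Docker setup rather than values prescribed by this guide:
# Run a standalone FUSE container that exposes Alluxio under /tmp/mnt on the host
$ docker run -d \
    --net=host \
    --name=alluxio-fuse \
    -v /tmp/mnt:/mnt:rshared \
    --cap-add SYS_ADMIN \
    --device /dev/fuse \
    --security-opt apparmor:unconfined \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.master.hostname=localhost" \
    alluxio/alluxio-enterprise fuse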
Set up Alluxio Proxy
To start the Alluxio proxy server inside a Docker container, simply run the following command:
$ docker run -d \
--net=host \
--name=alluxio-proxy \
--security-opt apparmor:unconfined \
-e ALLUXIO_JAVA_OPTS=" \
-Dalluxio.master.hostname=localhost" \
alluxio/alluxio-enterprise proxy
See Properties List for more configuration options for Alluxio proxy server.
Performance Optimization
Enable short-circuit reads and writes
If your application containers will run on the same host as your Alluxio worker containers, performance can be greatly improved by enabling short-circuit reads and writes. This allows applications to read from and write to their local Alluxio worker without going over the loopback network. In dockerized environments, there are two ways to enable short-circuit reads and writes in Alluxio.
- Option1: use domain sockets or
- Option2: use shared volumes.
Using shared volumes is slightly easier and may yield higher performance, but may result in inaccurate resource accounting. Using domain sockets is recommended for production deployment.
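As a rough sketch of option 1, the worker is launched with a domain socket directory shared between the worker container and the application containers. The host path /tmp/domain and the property values below are illustrative assumptions:
# Create a shared directory on the host for the domain socket
$ mkdir /tmp/domain
$ chmod a+w /tmp/domain

# Launch the worker with domain-socket short-circuit I/O enabled
$ docker run -d --net=host \
    --name=alluxio-worker \
    --shm-size=1G \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -v /tmp/domain:/opt/domain \
    -e ALLUXIO_JAVA_OPTS=" \
       -Dalluxio.worker.data.server.domain.socket.address=/opt/domain \
       -Dalluxio.worker.data.server.domain.socket.as.uuid=true \
       -Dalluxio.master.hostname=localhost \
       -Dalluxio.worker.ramdisk.size=1G" \
    alluxio/alluxio-enterprise worker
Application containers that want short-circuit access would mount the same /tmp/domain directory at /opt/domain and set the same two domain socket properties on the client side.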
Troubleshooting
If the Alluxio servers fail to launch, remove the -d from the docker run command so that the processes launch in the foreground and the console output can provide some helpful information.
If the Alluxio servers are launched, their logs can be accessed by running docker logs $container_id.
Usually the logs will give a good indication of what is wrong. If they are not enough to diagnose your issue, you can get help on the user mailing list or github issues.
Logging can also have a performance impact if sufficiently verbose. You can disable or redirect logging to mitigate this problem.
FAQ
AvailableProcessors: returns 0 in docker container
When you execute alluxio fs ls in the Alluxio master container, you may get the following error:
bash-4.4$ alluxio fs ls /
Exception in thread "main" java.lang.ExceptionInInitializerError
...
Caused by: java.lang.IllegalArgumentException: availableProcessors: 0 (expected: > 0)
at io.netty.util.internal.ObjectUtil.checkPositive(ObjectUtil.java:44)
at io.netty.util.NettyRuntime$AvailableProcessorsHolder.setAvailableProcessors(NettyRuntime.java:44)
at io.netty.util.NettyRuntime$AvailableProcessorsHolder.availableProcessors(NettyRuntime.java:70)
at io.netty.util.NettyRuntime.availableProcessors(NettyRuntime.java:98)
at io.grpc.netty.Utils$DefaultEventLoopGroupResource.<init>(Utils.java:394)
at io.grpc.netty.Utils.<clinit>(Utils.java:84)
... 20 more
This error can be fixed by adding -XX:ActiveProcessorCount=4 as a JVM parameter.
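For example, following the configuration pattern shown earlier, the parameter can be passed to the container through ALLUXIO_JAVA_OPTS (the value 4 is just the one used above; adjust it to your environment):
-e ALLUXIO_JAVA_OPTS="-XX:ActiveProcessorCount=4" \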