Deploy Alluxio on Docker


Docker can be used to simplify the deployment and management of Alluxio servers. Using the alluxio/alluxio Docker image available on Docker Hub, you can go from zero to a running Alluxio cluster with a couple of docker run commands. This document provides a tutorial for running Dockerized Alluxio on a single node with local disk as the under storage. We’ll also discuss more advanced topics and how to troubleshoot.

Prerequisites

  • A machine with Docker installed.
  • Ports 19998, 19999, 29998, 29999, and 30000 available

If you don’t have access to a machine with Docker installed, you can provision a small AWS EC2 instance (e.g. t2.small) to follow along with the tutorial. When provisioning the instance, set the security group so that the following ports are open to your IP address and the CIDR range of the Alluxio clients (e.g. remote Spark clusters):

  • 19998 for the CIDR range of your Alluxio servers and clients: Allow the clients and workers to communicate with Alluxio Master RPC processes.
  • 19999 for the IP address of your browser: Allow you to access the Alluxio master web UI.
  • 29999 for the CIDR range of your Alluxio servers and clients: Allow the clients to communicate with Alluxio Worker RPC processes.
  • 30000 for the IP address of your browser: Allow you to access the Alluxio worker web UI.

To set up Docker after provisioning the instance (which will be referred to as the Docker host), run

$ sudo yum install -y docker
$ sudo service docker start
# Add the current user to the docker group
$ sudo usermod -a -G docker $(id -u -n)
# Log out and log back in again to pick up the group changes
$ exit

Prepare Docker Volume to Persist Data

By default all files created inside a container are stored on a writable container layer. The data doesn’t persist when that container no longer exists. Docker volumes are the preferred way to save data outside the containers. The following two types of Docker volumes are used the most:

  • Host Volume: You manage where in the Docker host’s file system to store and share the containers’ data. To create a host volume, run:

    $ docker run -v /path/on/host:/path/in/container ...
    

    The file or directory is referenced by its full path on the Docker host. It can exist on the Docker host already, or it will be created automatically if it does not yet exist.

  • Named Volume: Docker manages where the volume is located on the host. The volume is referred to by a specific name. To create a named volume, run:

    $ docker volume create volumeName
    $ docker run -v volumeName:/path/in/container ...
    

Either a host volume or a named volume can be used for Alluxio containers. For testing purposes, the host volume is recommended, since it is the easiest type of volume to use and performs well. More importantly, you know where the data lives in the host file system, and you can manipulate the files directly and easily outside the containers.

Therefore, we will use the host volume and mount the host directory /tmp/alluxio_ufs to the container location /opt/alluxio/underFSStorage, which is the default setting for the Alluxio UFS root mount point in the Alluxio docker image:

$ mkdir -p /tmp/alluxio_ufs
$ docker run -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage   ...

Of course, you can choose to mount a different path instead of /tmp/alluxio_ufs. From version 2.1 on, the Alluxio Docker image runs as user alluxio by default, with UID 1000 and GID 1000. Make sure the mounted directory is writable by the user the Docker image runs as.
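For example, if you run the image as the default alluxio user (UID 1000, GID 1000), you can make the host directory writable as follows (a sketch; adjust the path if you mounted a different directory):

```shell
# Create the UFS directory on the Docker host and hand ownership
# to UID/GID 1000, which the alluxio user in the container maps to
$ mkdir -p /tmp/alluxio_ufs
$ sudo chown -R 1000:1000 /tmp/alluxio_ufs
```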

Launch Alluxio Containers for Master and Worker

The Alluxio clients (local or remote) need to communicate with both Alluxio master and workers. Therefore it is important to make sure clients can reach both of the following services:

  • Master RPC on port 19998
  • Worker RPC on port 29999

Within the Alluxio cluster, please also make sure the master and worker containers can reach each other on the ports defined in General requirements.

We are going to launch Alluxio master and worker containers on the same Docker host machine. In order to make sure this works for either local or remote clients, we have to set up the Docker network and expose the required ports correctly.

There are two ways to launch Alluxio Docker containers on the Docker host:

  • Option A: Host network. The container shares the IP address and networking namespace with the Docker host.
  • Option B: User-defined bridge network. Connected containers can communicate with each other, while remaining isolated from containers not connected to that bridge network.

It is recommended to use the host network (option A) for testing.
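As a sketch of option A, the master and worker containers can be launched on the host network like this (the property names below, such as alluxio.master.hostname and alluxio.worker.ramdisk.size, follow Alluxio 2.x conventions; verify them against the configuration reference for your version):

```shell
# Launch the Alluxio master, persisting the UFS root on the host
$ docker run -d --rm --net=host --name=alluxio-master \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.master.hostname=localhost" \
    alluxio/alluxio master

# Launch the Alluxio worker, pointing it at the master above
$ docker run -d --rm --net=host --name=alluxio-worker \
    --shm-size=1G \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.worker.ramdisk.size=1G -Dalluxio.master.hostname=localhost" \
    alluxio/alluxio worker
```

With the host network, no port mappings are needed; the master and worker ports are reachable directly on the Docker host.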


Verify the Cluster

To verify that the services came up, check docker ps. You should see something like

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES
1fef7c714d25        alluxio/alluxio     "/entrypoint.sh work…"   39 seconds ago      Up 38 seconds                                  alluxio-worker
27f92f702ac2        alluxio/alluxio     "/entrypoint.sh mast…"   44 seconds ago      Up 43 seconds       0.0.0.0:19999->19999/tcp   alluxio-master

If you don’t see the containers, run docker logs on their container ids to see what happened. The container ids were printed by the docker run command, and can also be found in docker ps -a.

Visit instance-hostname:19999 to view the Alluxio web UI. You should see one worker connected and providing 1024MB of space.

To run tests, enter the worker container

$ docker exec -it alluxio-worker /bin/bash

Run the tests

$ cd /opt/alluxio
$ ./bin/alluxio runTests

To test the remote client access, for example, from the Spark cluster (python 3)

textFile_alluxio_path = "alluxio://{docker_host-ip}:19998/path_to_the_file"
textFile_RDD = sc.textFile(textFile_alluxio_path)

for line in textFile_RDD.collect():
    print(line)

Congratulations, you’ve deployed a basic Dockerized Alluxio cluster! Read on to learn more about how to manage the cluster and make it production-ready.

Advanced Setup

Set server configuration

Configuration changes require stopping the Alluxio Docker images, then re-launching them with the new configuration.

To set an Alluxio configuration property, add it to the Alluxio java options environment variable with

-e ALLUXIO_JAVA_OPTS="-Dalluxio.property.name=value"

Multiple properties should be space-separated.

If a property value contains spaces, you must escape it using single quotes.

-e ALLUXIO_JAVA_OPTS="-Dalluxio.property1=value1 -Dalluxio.property2='value2 with spaces'"

Alluxio environment variables will be copied to conf/alluxio-env.sh when the image starts. If you are not seeing a property take effect, make sure the property in conf/alluxio-env.sh within the container is spelled correctly. You can check the contents with

$ docker exec ${container_id} cat /opt/alluxio/conf/alluxio-env.sh

Run in High-Availability Mode

A lone Alluxio master is a single point of failure. To guard against this, a production cluster should run multiple Alluxio masters in High Availability mode.

There are two ways to enable HA mode in Alluxio: with internal leader election and an embedded journal, or with external ZooKeeper and shared journal storage. Please read running Alluxio with HA for more details. The second option is recommended for production use cases.


Relaunch Alluxio Servers

When relaunching Alluxio masters, use the --no-format flag to avoid re-formatting the journal. The journal should only be formatted the first time the image is run. Formatting the journal deletes all Alluxio metadata, and starts the cluster in a fresh state.
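For example, a master relaunch would look like the following (a sketch; the flags and volumes should match whatever you used in your original launch command):

```shell
# Relaunch the master without re-formatting the journal;
# --no-format preserves existing Alluxio metadata
$ docker run -d --rm --net=host --name=alluxio-master \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.master.hostname=localhost" \
    alluxio/alluxio master --no-format
```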

Enable POSIX API access

Using the alluxio/alluxio-fuse image, you can enable access to Alluxio on the Docker host using the POSIX API.

For example, this following command runs the alluxio-fuse container as a long-running client that presents Alluxio file system through a POSIX interface on the Docker host:

$ docker run --rm \
    --net=host \
    --name=alluxio-fuse \
    -v /tmp/mnt:/mnt:rshared \
    -e "ALLUXIO_JAVA_OPTS=-Dalluxio.master.hostname=localhost" \
    --cap-add SYS_ADMIN \
    --device /dev/fuse \
    alluxio/alluxio-fuse fuse

Notes

  • -v /tmp/mnt:/mnt:rshared binds /mnt/alluxio-fuse, the default directory where Alluxio is mounted through FUSE inside the container, to a mount accessible at /tmp/mnt/alluxio-fuse on the host. To change this path to /foo/bar/alluxio-fuse on the host file system, replace /tmp/mnt with /foo/bar.
  • --cap-add SYS_ADMIN launches the container with SYS_ADMIN capability.
  • --device /dev/fuse shares host device /dev/fuse with the container.
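Once the container is running, Alluxio files appear under the host mount point and can be manipulated with ordinary file system tools. For example (assuming the default paths above):

```shell
# List Alluxio files through the POSIX mount on the Docker host
$ ls /tmp/mnt/alluxio-fuse

# Copy a file into Alluxio using standard tools
$ cp /etc/hosts /tmp/mnt/alluxio-fuse/hosts_copy
```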

Performance Optimization

Enable short-circuit reads and writes

If your application containers will run on the same host as your Alluxio worker containers, performance can be greatly improved by enabling short-circuit reads and writes. This allows applications to read from and write to their local Alluxio worker without going over the loopback network. In dockerized environments, there are two ways to enable short-circuit reads and writes in Alluxio.

Using shared volumes is slightly easier and may yield higher performance, but may result in inaccurate resource accounting. Using domain sockets is recommended for production deployment.
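As a sketch of the domain socket approach, the worker and client containers share a host directory that holds the socket (the property names alluxio.worker.data.server.domain.socket.address and alluxio.worker.data.server.domain.socket.as.uuid come from Alluxio's configuration reference; verify them against your version):

```shell
# Create a host directory to hold the domain socket
$ mkdir /tmp/domain
$ chmod a+w /tmp/domain

# Launch the worker with the domain socket enabled,
# sharing the socket directory with the container
$ docker run -d --rm --net=host --name=alluxio-worker \
    -v /tmp/domain:/opt/domain \
    -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
    -e ALLUXIO_JAVA_OPTS="-Dalluxio.master.hostname=localhost \
       -Dalluxio.worker.data.server.domain.socket.address=/opt/domain \
       -Dalluxio.worker.data.server.domain.socket.as.uuid=true" \
    alluxio/alluxio worker
```

Client containers on the same host must mount the same /tmp/domain directory so they can reach the worker's socket.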


Troubleshooting

Alluxio server logs can be accessed by running docker logs $container_id. Usually the logs will give a good indication of what is wrong. If they are not enough to diagnose your issue, you can get help on the user mailing list.

FAQ

AvailableProcessors: returns 0 in docker container

If you execute alluxio fs ls in the Alluxio master container and get the following error:

bash-4.4$ alluxio fs ls /
Exception in thread "main" java.lang.ExceptionInInitializerError
...
Caused by: java.lang.IllegalArgumentException: availableProcessors: 0 (expected: > 0)
        at io.netty.util.internal.ObjectUtil.checkPositive(ObjectUtil.java:44)
        at io.netty.util.NettyRuntime$AvailableProcessorsHolder.setAvailableProcessors(NettyRuntime.java:44)
        at io.netty.util.NettyRuntime$AvailableProcessorsHolder.availableProcessors(NettyRuntime.java:70)
        at io.netty.util.NettyRuntime.availableProcessors(NettyRuntime.java:98)
        at io.grpc.netty.Utils$DefaultEventLoopGroupResource.<init>(Utils.java:394)
        at io.grpc.netty.Utils.<clinit>(Utils.java:84)
        ... 20 more

This error can be fixed by adding -XX:ActiveProcessorCount=4 as a JVM parameter.
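For example, the flag can be passed through the same ALLUXIO_JAVA_OPTS environment variable used for other settings (a sketch; combine it with any options you already set):

```shell
$ docker run -e ALLUXIO_JAVA_OPTS="-XX:ActiveProcessorCount=4" ...
```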