Running Alluxio on Google Cloud Dataproc
This guide describes how to configure Alluxio to run on Google Cloud Dataproc.
Overview
Google Cloud Dataproc is a managed on-demand service to run Presto, Spark, and Hadoop compute workloads. It manages the deployment of various Hadoop services and allows for hooks into these services for customization. Aside from the added performance benefits of caching, Alluxio enables users to run compute workloads against on-premises storage or a different cloud provider's storage, such as AWS S3 or Azure Blob Store.
Prerequisites
- A project with Cloud Dataproc API and Compute Engine API enabled.
- A GCS Bucket.
A GCS bucket is required if mounting the bucket to the root of the Alluxio namespace. Alternatively, the root UFS can be reconfigured to be HDFS or any other supported under storage. The type of VM instance to use for the Alluxio master and workers depends on the workload characteristics. Generally recommended VM instance types for the Alluxio master are n2-highmem-16 or n2-highmem-32. VM instance types such as n2-standard-16 or n2-standard-32 enable the use of SSDs as the Alluxio worker storage tier.
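As a quick sketch (the bucket name is a placeholder), the required APIs can be enabled and a GCS bucket created from the CLI before provisioning the cluster:
$ gcloud services enable dataproc.googleapis.com compute.googleapis.com
$ gsutil mb gs://<my_bucket>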
Basic Setup
Alluxio can be installed during Dataproc cluster creation using an initialization action.
Create a cluster
There are several properties set as metadata labels which control the Alluxio deployment.
- A required argument is the root UFS address, configured using the metadata key alluxio_root_ufs_uri. If set to LOCAL, the HDFS cluster residing within the same Dataproc cluster is used as Alluxio's root UFS.
- Specify Alluxio site properties using the metadata key alluxio_site_properties. Delimit multiple properties with a semicolon (;), as shown in the snippet after this list.
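For example, several site properties can be passed in a single metadata value; the GCS version property also appears in Example 1 below, while the write type property is only illustrative:
...
--metadata \
alluxio_site_properties="alluxio.underfs.gcs.version=2;alluxio.user.file.writetype.default=CACHE_THROUGH",\
...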
Example 1: Use a Google Cloud Storage bucket as the Alluxio root UFS
$ gcloud dataproc clusters create <cluster_name> \
--initialization-actions gs://alluxio-public/enterprise-dataproc/2.9.0-2.1/alluxio-dataproc.sh \
--metadata \
alluxio_root_ufs_uri=gs://<my_bucket>,\
alluxio_site_properties="alluxio.underfs.gcs.version=2",\
alluxio_license_base64=<license_string>,\
alluxio_download_path=<urlToAccessibleAlluxioTarball>
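Once the cluster is up, one way to confirm that the bucket is mounted at the Alluxio root is to list the mount table from the master node; this is a sketch assuming the defaults used by the initialization action:
$ gcloud compute ssh <cluster_name>-m
$ sudo runuser -l alluxio -c "alluxio fs mount"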
Example 2: Use the Dataproc internal HDFS as the Alluxio root UFS
$ gcloud dataproc clusters create <cluster_name> \
--initialization-actions gs://alluxio-public/dataproc/2.9.0-2.1/alluxio-dataproc.sh \
--metadata \
alluxio_root_ufs_uri="LOCAL",\
alluxio_hdfs_version="2.9",\
alluxio_site_properties="alluxio.master.mount.table.root.option.alluxio.underfs.hdfs.configuration=/etc/hadoop/conf/core-site.xml:/etc/hadoop/conf/hdfs-site.xml",\
alluxio_license_base64=<license_string>,\
alluxio_download_path=<urlToAccessibleAlluxioTarball>
- <license_string> is the base64-encoded license string. Set it as $(cat license.json | base64 | tr -d "\n").
- <urlToAccessibleAlluxioTarball> is an http(s) or gs path to the Alluxio Enterprise tarball, such as <accessible_url>/alluxio-enterprise-2.9.0-2.1-all.tar.gz.
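A hedged example of preparing both values before cluster creation, assuming the license file and the Enterprise tarball have already been obtained and <my_bucket> is a placeholder:
$ export ALLUXIO_LICENSE_BASE64=$(cat license.json | base64 | tr -d "\n")
$ gsutil cp alluxio-enterprise-2.9.0-2.1-all.tar.gz gs://<my_bucket>/
The metadata values then become alluxio_license_base64=${ALLUXIO_LICENSE_BASE64} and alluxio_download_path=gs://<my_bucket>/alluxio-enterprise-2.9.0-2.1-all.tar.gz.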
Customization
The Alluxio deployment on Google Dataproc can be customized for more complex scenarios by passing additional metadata labels to the gcloud dataproc clusters create command.
Active Sync can be enabled on paths in Alluxio for a root HDFS mount point using the metadata key alluxio_sync_list. Specify a list of paths in Alluxio delimited using ;.
...
--metadata \
alluxio_sync_list="/tmp;/user/hadoop",\
...
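To verify which paths are registered for Active Sync once the cluster is running, the sync path list can be queried from the master node; this assumes the Alluxio version in use provides the getSyncPathList command:
$ sudo runuser -l alluxio -c "alluxio fs getSyncPathList"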
Additional files can be downloaded into the Alluxio installation directory at /opt/alluxio/conf using the metadata key alluxio_download_files_list. Specify http(s) or gs URIs delimited using ;.
...
--metadata \
alluxio_download_files_list="gs://<my_bucket>/<my_file>;https://<server>/<file>",\
...
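After the cluster starts, a quick check on a node that the files landed in the directory mentioned above (the file name is a placeholder):
$ ls /opt/alluxio/conf/<my_file>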
The default Alluxio worker memory is set to 1/3 of the physical memory on the instance. If a specific value is desired, set alluxio.worker.ramdisk.size in the provided alluxio-site.properties.
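For instance, a fixed worker memory size could be supplied through the same site-properties metadata key described earlier; the 16GB value is only illustrative:
...
--metadata \
alluxio_site_properties="alluxio.worker.ramdisk.size=16GB",\
...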
Alternatively, when volumes such as Dataproc Local SSDs are mounted, specify the metadata label alluxio_ssd_capacity_usage to configure the percentage of all available SSDs on the virtual machine provisioned as Alluxio worker storage. Memory is not configured as the primary Alluxio storage tier in this case.
Pass additional arguments to the gcloud dataproc clusters create command.
...
--num-worker-local-ssds=1 \
--metadata \
alluxio_ssd_capacity_usage="60",\
...
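Once the cluster is running, the configured worker storage tiers and capacities can be inspected from the master node; the exact report format varies across Alluxio versions:
$ sudo runuser -l alluxio -c "alluxio fsadmin report capacity"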
Next steps
The status of the cluster deployment can be monitored using the CLI.
$ gcloud dataproc clusters list
Identify the master instance name and SSH into it to test the deployment.
$ gcloud compute ssh <cluster_name>-m
Test that Alluxio is running as expected:
$ sudo runuser -l alluxio -c "alluxio runTests"
Alluxio is installed and configured in /opt/alluxio/. Alluxio services are started as the alluxio user.
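The files written by the tests can then be listed through the Alluxio CLI as a minimal follow-up check from the same session:
$ sudo runuser -l alluxio -c "alluxio fs ls /"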
Compute Applications
Spark, Hive, and Presto on Dataproc are pre-configured to connect to Alluxio.
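As a minimal sanity check of the Spark integration from the master node, a bundled example job can read an input path from Alluxio. This sketch assumes the default Alluxio master RPC port of 19998; the examples jar path reflects a typical Dataproc image layout and the input file is a placeholder for data already in Alluxio:
$ spark-submit --class org.apache.spark.examples.JavaWordCount \
    /usr/lib/spark/examples/jars/spark-examples.jar \
    alluxio://<cluster_name>-m:19998/<path_to_input_file>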