List of Configuration Properties


All Alluxio configuration settings fall into one of six categories: Common (shared by Master and Worker), Master specific, Worker specific, User specific, Cluster specific (used for running Alluxio with cluster managers like Mesos and YARN), and Security specific (shared by Master, Worker, and User).
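
Most of these properties are set in `conf/alluxio-site.properties`; a few (noted below as JVM-only, such as alluxio.conf.dir and alluxio.logs.dir) must instead be passed as JVM system properties. The sketch below is a minimal, hypothetical site properties file touching each category; the hostname and values are placeholders, not recommendations.

```properties
# Hypothetical conf/alluxio-site.properties illustrating the categories below.
# Common: where the master runs and where temporary files go
alluxio.master.hostname=master-host.example.com
alluxio.tmp.dirs=/tmp

# Master specific: web UI port
alluxio.master.web.port=19999

# Worker specific: memory capacity of each worker
alluxio.worker.memory.size=4GB

# User specific: default write type for new files
alluxio.user.file.writetype.default=CACHE_THROUGH

# Security specific: authentication mode
alluxio.security.authentication.type=SIMPLE
```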

Common Configuration

The common configuration contains constants shared by different components.

Property Name | Default | Description
alluxio.conf.dir ${alluxio.home}/conf The directory containing files used to configure Alluxio. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.debug false Set to true to enable debug mode which has additional logging and info in the Web UI.
alluxio.extensions.dir ${alluxio.home}/extensions The directory containing Alluxio extensions.
alluxio.fuse.cached.paths.max 500 Maximum number of Alluxio paths to cache for FUSE conversion.
alluxio.fuse.debug.enabled false Run FUSE in debug mode, and have the fuse process log every FS request.
alluxio.fuse.fs.name alluxio-fuse The FUSE file system name.
alluxio.fuse.maxwrite.bytes 128KB Maximum granularity of write operations, capped by the kernel to 128KB max (as of Linux 3.16.0).
alluxio.fuse.user.group.translation.enabled false Whether to translate Alluxio users and groups into Unix users and groups when exposing Alluxio files through the FUSE API. When this property is set to false, the user and group for all FUSE files will match the user who started the alluxio-fuse process.
alluxio.home /opt/alluxio Alluxio installation directory.
alluxio.job.master.bind.host 0.0.0.0 N/A
alluxio.job.master.client.threads 1024 N/A
alluxio.job.master.embedded.journal.addresses A comma-separated list of journal addresses for all job masters in the cluster. The format is 'hostname1:port1,hostname2:port2,...'. Defaults to the journal addresses set for the Alluxio masters (alluxio.master.embedded.journal.addresses), but with the job master embedded journal port.
alluxio.job.master.embedded.journal.port 20003 The port to use for embedded journal communication with other job masters.
alluxio.job.master.finished.job.retention.ms 300000 N/A
alluxio.job.master.hostname ${alluxio.master.hostname} N/A
alluxio.job.master.job.capacity 100000 N/A
alluxio.job.master.lost.worker.interval.ms 1000 N/A
alluxio.job.master.rpc.addresses N/A
alluxio.job.master.rpc.port 20001 N/A
alluxio.job.master.web.bind.host 0.0.0.0 N/A
alluxio.job.master.web.hostname ${alluxio.job.master.hostname} N/A
alluxio.job.master.web.port 20002 N/A
alluxio.job.master.worker.heartbeat.interval.ms 1000 N/A
alluxio.job.master.worker.timeout.ms 60000 N/A
alluxio.job.worker.bind.host 0.0.0.0 N/A
alluxio.job.worker.data.port 30002 N/A
alluxio.job.worker.hostname The hostname of Alluxio job worker.
alluxio.job.worker.rpc.port 30001 N/A
alluxio.job.worker.web.bind.host 0.0.0.0 N/A
alluxio.job.worker.web.port 30003 N/A
alluxio.jvm.monitor.info.threshold 1sec When the JVM monitor thread detects extra sleep time longer than this threshold, it logs an INFO message.
alluxio.jvm.monitor.sleep.interval 1sec The time for the JVM monitor thread to sleep.
alluxio.jvm.monitor.warn.threshold 10sec When the JVM monitor thread detects extra sleep time longer than this threshold, it logs a WARN message.
alluxio.locality.compare.node.ip false Whether to try to resolve the node IP address for locality checking.
alluxio.locality.node Value to use for determining node locality
alluxio.locality.order node,rack Ordering of locality tiers
alluxio.locality.rack Value to use for determining rack locality
alluxio.locality.script alluxio-locality.sh A script to determine tiered identity for locality checking
alluxio.logger.type Console The type of logger.
alluxio.logs.dir ${alluxio.work.dir}/logs The path to store log files. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.logserver.hostname The hostname of Alluxio logserver. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.logserver.logs.dir ${alluxio.work.dir}/logs Default location for remote log files. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.logserver.port 45600 Default port number to receive logs from alluxio servers. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.logserver.threads.max 2048 The maximum number of threads used by logserver to service logging requests.
alluxio.logserver.threads.min 512 The minimum number of threads used by logserver to service logging requests.
alluxio.metrics.conf.file ${alluxio.conf.dir}/metrics.properties The file path of the metrics system configuration file. By default it is `metrics.properties` in the `conf` directory.
alluxio.network.channel.health.check.timeout.ms 5sec Allowed duration for checking health of client connections before being assigned to a client. If a connection does not become active within configured time, it will be shut down and a new connection will be created for the client
alluxio.network.host.resolution.timeout 5sec During startup of the Master and Worker processes Alluxio needs to ensure that they are listening on externally resolvable and reachable host names. To do this, Alluxio will automatically attempt to select an appropriate host name if one was not explicitly specified. This represents the maximum amount of time spent waiting to determine if a candidate host name is resolvable over the network.
alluxio.proxy.s3.deletetype ALLUXIO_AND_UFS Delete type when deleting buckets and objects through S3 API. Valid options are `ALLUXIO_AND_UFS` (delete both in Alluxio and UFS), `ALLUXIO_ONLY` (delete only the buckets or objects in Alluxio namespace).
alluxio.proxy.s3.multipart.temporary.dir.suffix _s3_multipart_tmp Suffix for the directory which holds parts during a multipart upload.
alluxio.proxy.s3.writetype CACHE_THROUGH Write type when creating buckets and objects through S3 API. Valid options are `MUST_CACHE` (write will only go to Alluxio and must be stored in Alluxio), `CACHE_THROUGH` (try to cache, write to UnderFS synchronously), `THROUGH` (no cache, write to UnderFS synchronously).
alluxio.proxy.stream.cache.timeout 1hour The timeout for the input and output streams cache eviction in the proxy.
alluxio.proxy.web.bind.host 0.0.0.0 The hostname that the Alluxio proxy's web server runs on.
alluxio.proxy.web.hostname The hostname Alluxio proxy's web UI binds to.
alluxio.proxy.web.port 39999 The port Alluxio proxy's web UI runs on.
alluxio.site.conf.dir ${alluxio.conf.dir}/,${user.home}/.alluxio/,/etc/alluxio/ Comma-separated search path for alluxio-site.properties. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.test.mode false Flag used only during tests to allow special behavior.
alluxio.tmp.dirs /tmp The path(s) to store Alluxio temporary files, use commas as delimiters. If multiple paths are specified, one will be selected at random per temporary file. Currently, only files to be uploaded to object stores are stored in these paths.
alluxio.underfs.address ${alluxio.work.dir}/underFSStorage Under file storage address. This property is deprecated; use alluxio.master.mount.table.root.ufs instead
alluxio.underfs.allow.set.owner.failure false Whether to allow setting owner in UFS to fail. When set to true, it is possible file or directory owners diverge between Alluxio and UFS.
alluxio.underfs.cleanup.enabled false Whether or not to clean up under file storage periodically. Some UFS operations may not be completed and cleaned up successfully in normal ways and leave some intermediate data that needs periodical cleanup. If enabled, all the mount points will be cleaned up when a leader master starts or the cleanup interval is reached. This should be used sparingly.
alluxio.underfs.cleanup.interval 1day The interval for periodically cleaning all the mounted under file storages.
alluxio.underfs.gcs.directory.suffix / Directories are represented in GCS as zero-byte objects named with the specified suffix.
alluxio.underfs.gcs.owner.id.to.username.mapping Optionally, specify a preset gcs owner id to Alluxio username static mapping in the format "id1=user1;id2=user2". The Google Cloud Storage IDs can be found at the console address https://console.cloud.google.com/storage/settings . Please use the "Owners" one.
alluxio.underfs.hdfs.configuration ${alluxio.conf.dir}/core-site.xml:${alluxio.conf.dir}/hdfs-site.xml Location of the HDFS configuration file.
alluxio.underfs.hdfs.impl org.apache.hadoop.hdfs.DistributedFileSystem The implementation class of the HDFS as the under storage system.
alluxio.underfs.hdfs.prefixes hdfs://,glusterfs:///,maprfs:/// Optionally, specify which prefixes should run through the HDFS implementation of UnderFileSystem. The delimiter is any whitespace and/or ','.
alluxio.underfs.hdfs.remote false Boolean indicating whether or not the under storage worker nodes are remote with respect to Alluxio worker nodes. If set to true, Alluxio will not attempt to discover locality information from the under storage because locality is impossible. This will improve performance. The default value is false.
alluxio.underfs.kodo.connect.timeout 50sec The connect timeout of kodo.
alluxio.underfs.kodo.downloadhost The download domain of Kodo bucket.
alluxio.underfs.kodo.endpoint The endpoint of Kodo bucket.
alluxio.underfs.kodo.requests.max 64 The maximum number of kodo connections.
alluxio.underfs.listing.length 1000 The maximum number of directory entries to list in a single query to under file system. If the total number of entries is greater than the specified length, multiple queries will be issued.
alluxio.underfs.object.store.mount.shared.publicly false Whether or not to share object storage under storage system mounted point with all Alluxio users. Note that this configuration has no effect on HDFS nor local UFS.
alluxio.underfs.object.store.multi.range.chunk.size ${alluxio.user.block.size.bytes.default} Default chunk size for ranged reads from multi-range object input streams.
alluxio.underfs.object.store.read.retry.base.sleep 50ms Block reads from an object store automatically retry for transient errors with an exponential backoff. This property determines the base time in the exponential backoff. Only applicable for S3A.
alluxio.underfs.object.store.read.retry.max.num 20 Block reads from an object store automatically retry for transient errors with an exponential backoff. This property determines the maximum number of retries. Only applicable for S3A.
alluxio.underfs.object.store.read.retry.max.sleep 30sec Block reads from an object store automatically retry for transient errors with an exponential backoff. This property determines the maximum wait time in the backoff. Only applicable for S3A.
alluxio.underfs.object.store.service.threads 20 The number of threads in executor pool for parallel object store UFS operations.
alluxio.underfs.oss.connection.max 1024 The maximum number of OSS connections.
alluxio.underfs.oss.connection.timeout 50sec The timeout when connecting to OSS.
alluxio.underfs.oss.connection.ttl -1 The TTL of OSS connections in ms.
alluxio.underfs.oss.socket.timeout 50sec The timeout of OSS socket.
alluxio.underfs.s3.admin.threads.max 20 The maximum number of threads to use for metadata operations when communicating with S3. These operations may be fairly concurrent and frequent but should not take much time to process.
alluxio.underfs.s3.disable.dns.buckets false Optionally, specify to make all S3 requests path style.
alluxio.underfs.s3.endpoint Optionally, to reduce data latency or visit resources which are separated in different AWS regions, specify a regional endpoint to make AWS requests. An endpoint is a URL that is the entry point for a web service. For example, s3.cn-north-1.amazonaws.com.cn is an entry point for the Amazon S3 service in the Beijing region.
alluxio.underfs.s3.owner.id.to.username.mapping Optionally, specify a preset s3 canonical id to Alluxio username static mapping, in the format "id1=user1;id2=user2". The AWS S3 canonical ID can be found at the console address https://console.aws.amazon.com/iam/home?#security_credential . Please expand the "Account Identifiers" tab and refer to "Canonical User ID".
alluxio.underfs.s3.proxy.host Optionally, specify a proxy host for communicating with S3.
alluxio.underfs.s3.proxy.port Optionally, specify a proxy port for communicating with S3.
alluxio.underfs.s3.threads.max 40 The maximum number of threads to use for communicating with S3 and the maximum number of concurrent connections to S3. Includes both threads for data upload and metadata operations. This number should be at least as large as the max admin threads plus max upload threads.
alluxio.underfs.s3.upload.threads.max 20 The maximum number of threads to use for uploading data to S3 for multipart uploads. These operations can be fairly expensive, so multiple threads are encouraged. However, this also splits the bandwidth between threads, meaning the overall latency for completing an upload will be higher for more threads.
alluxio.underfs.s3a.consistency.timeout 1min The duration to wait for metadata consistency from the under storage. This is only used by internal Alluxio operations which should be successful, but may appear unsuccessful due to eventual consistency.
alluxio.underfs.s3a.default.mode 0700 Mode (in octal notation) for S3 objects if mode cannot be discovered.
alluxio.underfs.s3a.directory.suffix / Directories are represented in S3 as zero-byte objects named with the specified suffix.
alluxio.underfs.s3a.inherit_acl true Optionally disable this to disable inheriting bucket ACLs on objects.
alluxio.underfs.s3a.intermediate.upload.clean.age 3day Streaming uploads may not have been completed/aborted correctly and need periodical ufs cleanup. If ufs cleanup is enabled, intermediate multipart uploads in all non-readonly S3 mount points older than this age will be cleaned. This may impact other ongoing upload operations, so a large clean age is encouraged.
alluxio.underfs.s3a.list.objects.v1 false Whether to use version 1 of GET Bucket (List Objects) API.
alluxio.underfs.s3a.request.timeout 1min The timeout for a single request to S3. Infinity if set to 0. Setting this property to a non-zero value can improve performance by avoiding the long tail of requests to S3. For very slow connections to S3, consider increasing this value or setting it to 0.
alluxio.underfs.s3a.secure.http.enabled false Whether or not to use HTTPS protocol when communicating with S3.
alluxio.underfs.s3a.server.side.encryption.enabled false Whether or not to encrypt data stored in S3.
alluxio.underfs.s3a.signer.algorithm The signature algorithm which should be used to sign requests to the s3 service. This is optional, and if not set, the client will automatically determine it. For interacting with an S3 endpoint which only supports v2 signatures, set this to "S3SignerType".
alluxio.underfs.s3a.socket.timeout 50sec Length of the socket timeout when communicating with S3.
alluxio.underfs.s3a.streaming.upload.enabled false (Experimental) If true, using streaming upload to write to S3A.
alluxio.underfs.s3a.streaming.upload.partition.size 64MB Maximum allowable size of a single buffer file when using S3A streaming upload. When the buffer file reaches the partition size, it will be uploaded and the upcoming data will write to other buffer files. If the partition size is too small, S3A upload speed might be affected.
alluxio.web.file.info.enabled true Whether detailed file information is enabled for the web UI.
alluxio.web.resources ${alluxio.home}/alluxio-ui/ Path to the web application resources.
alluxio.web.temp.path ${alluxio.work.dir}/web/ Path to store temporary web server files.
alluxio.web.threads 1 How many threads to use for the web server.
alluxio.webui.cors.enabled false Set to true to enable Cross-Origin Resource Sharing for RESTful API endpoints.
alluxio.webui.refresh.interval.ms 15000 The amount of time in milliseconds to wait before refreshing the Web UI if it is set to auto refresh.
alluxio.work.dir ${alluxio.home} The directory to use for Alluxio's working directory. By default, the journal, logs, and under file storage data (if using local filesystem) are written here.
alluxio.zookeeper.address Address of ZooKeeper.
alluxio.zookeeper.connection.timeout 15s Connection timeout to use when connecting to ZooKeeper.
alluxio.zookeeper.election.path /election Election directory in ZooKeeper.
alluxio.zookeeper.enabled false If true, setup master fault tolerant mode using ZooKeeper.
alluxio.zookeeper.job.election.path /job_election N/A
alluxio.zookeeper.job.leader.path /job_leader N/A
alluxio.zookeeper.leader.inquiry.retry 10 The number of retries to inquire leader from ZooKeeper.
alluxio.zookeeper.leader.path /leader Leader directory in ZooKeeper.
alluxio.zookeeper.session.timeout 60s Session timeout to use when connecting to ZooKeeper.
aws.accessKeyId The access key of S3 bucket.
aws.secretKey The secret key of S3 bucket.
fs.cos.access.key The access key of COS bucket.
fs.cos.app.id The app id of COS bucket.
fs.cos.connection.max 1024 The maximum number of COS connections.
fs.cos.connection.timeout 50sec The timeout of connecting to COS.
fs.cos.region The region name of COS bucket.
fs.cos.secret.key The secret key of COS bucket.
fs.cos.socket.timeout 50sec The timeout of COS socket.
fs.gcs.accessKeyId The access key of GCS bucket.
fs.gcs.secretAccessKey The secret key of GCS bucket.
fs.kodo.accesskey The access key of Kodo bucket.
fs.kodo.secretkey The secret key of Kodo bucket.
fs.oss.accessKeyId The access key of OSS bucket.
fs.oss.accessKeySecret The secret key of OSS bucket.
fs.oss.endpoint The endpoint key of OSS bucket.
fs.swift.apikey (deprecated) The API key used for user:tenant authentication.
fs.swift.auth.method Choice of authentication method: [tempauth (default), swiftauth, keystone, keystonev3].
fs.swift.auth.url Authentication URL for REST server, e.g., http://server:8090/auth/v1.0.
fs.swift.password The password used for user:tenant authentication.
fs.swift.region Service region when using Keystone authentication.
fs.swift.simulation Whether to simulate a single node Swift backend for testing purposes: true or false (default).
fs.swift.tenant Swift user for authentication.
fs.swift.use.public.url Whether the REST server is in a public domain: true (default) or false.
fs.swift.user Swift tenant for authentication.
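
As an illustration of the common properties above, the snippet below enables ZooKeeper-based master fault tolerance and supplies credentials for an S3 under store. The hostnames and keys are placeholders for the example, not working values.

```properties
# Hypothetical snippet of conf/alluxio-site.properties using common properties.
# Master fault tolerance via ZooKeeper (alluxio.zookeeper.*)
alluxio.zookeeper.enabled=true
alluxio.zookeeper.address=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
alluxio.zookeeper.session.timeout=60s

# Credentials for an S3 bucket used as under storage (aws.*)
aws.accessKeyId=EXAMPLE_ACCESS_KEY
aws.secretKey=EXAMPLE_SECRET_KEY

# Tune S3 client concurrency (alluxio.underfs.s3.*)
alluxio.underfs.s3.threads.max=40
alluxio.underfs.s3.upload.threads.max=20
```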

Master Configuration

The master configuration specifies information regarding the master node, such as the address and the port number.

Property Name | Default | Description
alluxio.master.activesync.batchinterval 1sec Time interval to batch incoming events for active syncing UFS
alluxio.master.activesync.eventrate.interval 60sec The time interval we use to estimate incoming event rate
alluxio.master.activesync.interval 30sec Time interval to periodically actively sync UFS
alluxio.master.activesync.maxactivity 10 Max number of changes in a directory to be considered for active syncing
alluxio.master.activesync.maxage 10 The maximum number of intervals we will wait to find a quiet period before we have to sync the directories
alluxio.master.activesync.polltimeout 10sec Max time to wait before timing out a polling operation
alluxio.master.activesync.retry.timeout 1hour Retry period before active ufs syncer gives up on connecting to the ufs
alluxio.master.activesync.threadpoolsize 3 Max number of threads used to perform active sync
alluxio.master.audit.logging.enabled false Set to true to enable file system master audit. Note: This property must be specified as a JVM property; it is not accepted in alluxio-site.properties.
alluxio.master.audit.logging.queue.capacity 10000 Capacity of the queue used by audit logging.
alluxio.master.backup.directory /alluxio_backups Default directory for writing master metadata backups. This path is an absolute path of the root UFS. For example, if the root ufs directory is hdfs://host:port/alluxio/data, the default backup directory will be hdfs://host:port/alluxio_backups.
alluxio.master.bind.host 0.0.0.0 The hostname that Alluxio master binds to.
alluxio.master.connection.timeout 0 Timeout of connections between master and client. A value of 0 means never timeout
alluxio.master.daily.backup.enabled false Whether or not to enable daily primary master metadata backup.
alluxio.master.daily.backup.files.retained 3 The maximum number of backup files to keep in the backup directory.
alluxio.master.daily.backup.time 05:00 Default UTC time for writing daily master metadata backups. The accepted time format is hour:minute which is based on a 24-hour clock (E.g., 05:30, 06:00, and 22:04). Backing up metadata requires a pause in master metadata changes, so please set this value to an off-peak time to avoid interfering with other users of the system.
alluxio.master.embedded.journal.addresses A comma-separated list of journal addresses for all masters in the cluster. The format is 'hostname1:port1,hostname2:port2,...'. When left unset, Alluxio uses ${alluxio.master.hostname}:${alluxio.master.embedded.journal.port} by default
alluxio.master.embedded.journal.election.timeout 5s The election timeout for the embedded journal. When this period elapses without a master receiving any messages, the master will attempt to become the primary.
alluxio.master.embedded.journal.heartbeat.interval 1s The period between sending heartbeats from the embedded journal primary to followers. This should be less than half of the election timeout (alluxio.master.embedded.journal.election.timeout).
alluxio.master.embedded.journal.port 19200 The port to use for embedded journal communication with other masters.
alluxio.master.embedded.journal.storage.level DISK The storage level for storing embedded journal logs. Use DISK for maximum durability. Use MAPPED for better performance, but some risk of losing state in case of power loss or host failure. Use MEMORY for optimal performance, but no state persistence across cluster restarts.
alluxio.master.file.async.persist.handler alluxio.master.file.async.DefaultAsyncPersistHandler The handler for processing the async persistence requests.
alluxio.master.format.file_prefix _format_ The file prefix of the file generated in the journal directory when the journal is formatted. The master will search for a file with this prefix when determining if the journal is formatted.
alluxio.master.grpc.channel.auth.timeout 30sec Maximum time to wait for gRPC channel to attempt to receive an authentication response.
alluxio.master.grpc.channel.shutdown.timeout 60sec Maximum time to wait for gRPC channel to stop on shutdown
alluxio.master.grpc.server.shutdown.timeout 60sec Maximum time to wait for gRPC server to stop on shutdown
alluxio.master.heartbeat.timeout 10min Timeout between leader master and standby master indicating a lost master.
alluxio.master.hostname The hostname of Alluxio master.
alluxio.master.journal.checkpoint.period.entries 2000000 The number of journal entries to write before creating a new journal checkpoint.
alluxio.master.journal.flush.batch.time 5ms Time to wait for batching journal writes.
alluxio.master.journal.flush.timeout 5min The amount of time to keep retrying journal writes before giving up and shutting down the master.
alluxio.master.journal.folder ${alluxio.work.dir}/journal The path to store master journal logs. When using the UFS journal this could be a URI like hdfs://namenode:port/alluxio/journal. When using the embedded journal this must be a local path
alluxio.master.journal.formatter.class alluxio.master.journalv0.ProtoBufJournalFormatter The class to serialize the journal in a specified format.
alluxio.master.journal.gc.period 2min Frequency with which to scan for and delete stale journal checkpoints.
alluxio.master.journal.gc.threshold 5min Minimum age for garbage collecting checkpoints.
alluxio.master.journal.init.from.backup A uri for a backup to initialize the journal from. When the master becomes primary, if it sees that its journal is freshly formatted, it will restore its state from the backup. When running multiple masters, this property must be configured on all masters since it isn't known during startup which master will become the first primary.
alluxio.master.journal.log.size.bytes.max 10MB If a log file is bigger than this value, it will rotate to next file.
alluxio.master.journal.retry.interval 1sec The amount of time to sleep between retrying journal flushes
alluxio.master.journal.tailer.shutdown.quiet.wait.time 5sec Before the standby master shuts down its tailer thread, there should be no update to the leader master's journal in this specified time period.
alluxio.master.journal.tailer.sleep.time 1sec Time for the standby master to sleep for when it cannot find anything new in leader master's journal.
alluxio.master.journal.temporary.file.gc.threshold 30min Minimum age for garbage collecting temporary checkpoint files.
alluxio.master.journal.type EMBEDDED The type of journal to use. Valid options are UFS (store journal in UFS), EMBEDDED (use a journal embedded in the masters), and NOOP (do not use a journal)
alluxio.master.journal.ufs.option The configuration to use for the journal operations.
alluxio.master.jvm.monitor.enabled false Whether to start the JVM monitor thread on the master.
alluxio.master.keytab.file Kerberos keytab file for Alluxio master.
alluxio.master.lockcache.concurrency.level 100 Maximum concurrency level for the inodelock cache
alluxio.master.lockcache.initsize 1000 Initial inodelock cache size
alluxio.master.lockcache.maxsize 100000 Maximum inodelock cache size
alluxio.master.log.config.report.heartbeat.interval 1h The interval for periodically logging the configuration check report.
alluxio.master.master.heartbeat.interval 2min The interval between Alluxio masters' heartbeats.
alluxio.master.metastore ROCKS The type of metastore to use, either HEAP or ROCKS. The heap metastore keeps all metadata on-heap, while the rocks metastore stores some metadata on heap and some metadata on disk. The rocks metastore has the advantage of being able to support a large namespace (1 billion plus files) without needing a massive heap size.
alluxio.master.metastore.dir ${alluxio.work.dir}/metastore The metastore work directory. Only some metastores need disk.
alluxio.master.metastore.inode.cache.evict.batch.size 1000 The batch size for evicting entries from the inode cache.
alluxio.master.metastore.inode.cache.high.water.mark.ratio 0.85 The high water mark for the inode cache, as a ratio from high water mark to total cache size. If this is 0.85 and the max size is 10 million, the high water mark value is 8.5 million. When the cache reaches the high water mark, the eviction process will evict down to the low water mark.
alluxio.master.metastore.inode.cache.low.water.mark.ratio 0.8 The low water mark for the inode cache, as a ratio from low water mark to total cache size. If this is 0.8 and the max size is 10 million, the low water mark value is 8 million. When the cache reaches the high water mark, the eviction process will evict down to the low water mark.
alluxio.master.metastore.inode.cache.max.size 10000000 The number of inodes to cache on-heap. This only applies to off-heap metastores, e.g. ROCKS. Set this to 0 to disable the on-heap inode cache
alluxio.master.metastore.inode.inherit.owner.and.group true Whether to inherit the owner/group from the parent when creating a new inode path if empty
alluxio.master.metrics.time.series.interval 5min Interval for which the master records metrics information. This affects the granularity of the metrics graphed in the UI.
alluxio.master.mount.table.root.alluxio / Alluxio root mount point.
alluxio.master.mount.table.root.option Configuration for the UFS of Alluxio root mount point.
alluxio.master.mount.table.root.readonly false Whether Alluxio root mount point is readonly.
alluxio.master.mount.table.root.shared true Whether Alluxio root mount point is shared.
alluxio.master.mount.table.root.ufs ${alluxio.underfs.address} The UFS mounted to Alluxio root mount point.
alluxio.master.periodic.block.integrity.check.interval 1hr The period for the block integrity check, disabled if <= 0.
alluxio.master.periodic.block.integrity.check.repair false Whether the system should delete orphaned blocks found during the periodic integrity check. This is an experimental feature.
alluxio.master.persistence.checker.interval.ms 1000 N/A
alluxio.master.persistence.initial.interval.ms 1000 N/A
alluxio.master.persistence.initial.wait.time.ms 5min Time to wait before starting the persistence job. When alluxio.user.file.writetype.default is set to ASYNC_THROUGH, set to a big enough value to avoid conflicts between cache and through job.
alluxio.master.persistence.max.interval.ms 3600000 N/A
alluxio.master.persistence.max.total.wait.time.ms 86400000 N/A
alluxio.master.persistence.scheduler.interval.ms 1000 N/A
alluxio.master.port 19998 The port that Alluxio master node runs on.
alluxio.master.principal Kerberos principal for Alluxio master.
alluxio.master.replication.check.interval.ms 60000 N/A
alluxio.master.rpc.addresses A list of comma-separated host:port RPC addresses where the client should look for masters when using multiple masters without Zookeeper. This property is not used when Zookeeper is enabled, since Zookeeper already stores the master addresses.
alluxio.master.startup.block.integrity.check.enabled true Whether the system should be checked on startup for orphaned blocks (blocks having no corresponding files but still taking system resources due to various system failures). Orphaned blocks will be deleted during master startup if this property is true. This property is available since 1.7.1.
alluxio.master.startup.consistency.check.enabled true Whether the system should be checked for consistency with the underlying storage on startup. During the time the check is running, Alluxio will be in read only mode. Enabled by default.
alluxio.master.tieredstore.global.level0.alias MEM The name of the highest storage tier in the entire system.
alluxio.master.tieredstore.global.level1.alias SSD The name of the second highest storage tier in the entire system.
alluxio.master.tieredstore.global.level2.alias HDD The name of the third highest storage tier in the entire system.
alluxio.master.tieredstore.global.levels 3 The total number of storage tiers in the system.
alluxio.master.ttl.checker.interval 1hour Time interval to periodically delete the files with expired ttl value.
alluxio.master.ufs.block.location.cache.capacity 1000000 The capacity of the UFS block locations cache. This cache caches UFS block locations for files that are persisted but not in Alluxio space, so that listing the status of these files does not need to repeatedly ask UFS for their block locations. If this is set to 0, the cache will be disabled.
alluxio.master.ufs.path.cache.capacity 100000 The capacity of the UFS path cache. This cache is used to approximate the `ONCE` metadata load behavior (see `alluxio.user.file.metadata.load.type`). Larger caches will consume more memory, but will better approximate the `ONCE` behavior.
alluxio.master.ufs.path.cache.threads 64 The maximum size of the thread pool for asynchronously processing paths for the UFS path cache. Greater number of threads will decrease the amount of staleness in the async cache, but may impact performance. If this is set to 0, the cache will be disabled, and `alluxio.user.file.metadata.load.type=ONCE` will behave like `ALWAYS`.
alluxio.master.web.bind.host 0.0.0.0 The hostname Alluxio master web UI binds to.
alluxio.master.web.hostname The hostname of Alluxio Master web UI.
alluxio.master.web.port 19999 The port Alluxio web UI runs on.
alluxio.master.whitelist / A comma-separated list of prefixes of the paths which are cacheable, separated by semi-colons. Alluxio will try to cache the cacheable file when it is read for the first time.
alluxio.master.worker.connect.wait.time 5sec Alluxio master will wait a period of time after start up for all workers to register, before it starts accepting client requests. This property determines the wait time.
alluxio.master.worker.heartbeat.interval 10sec The interval between Alluxio master and worker heartbeats.
alluxio.master.worker.threads.max 512 The maximum number of incoming RPC requests to master that can be handled. This value is used to configure maximum number of threads in gRPC thread pool with master.
alluxio.master.worker.timeout 5min Timeout between master and worker indicating a lost worker.
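
As an example of the master properties above, a highly available deployment with three masters sharing an embedded journal might be configured as sketched below. The hostnames, local journal path, and backup schedule are assumptions for illustration.

```properties
# Hypothetical snippet: three masters sharing an embedded journal.
alluxio.master.journal.type=EMBEDDED
alluxio.master.embedded.journal.addresses=master1.example.com:19200,master2.example.com:19200,master3.example.com:19200
alluxio.master.embedded.journal.port=19200

# Where journal data is kept locally when using the embedded journal
alluxio.master.journal.folder=/opt/alluxio/journal

# Optional: daily metadata backups written under the root UFS
alluxio.master.daily.backup.enabled=true
alluxio.master.backup.directory=/alluxio_backups
alluxio.master.daily.backup.time=05:00
```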

Worker Configuration

The worker configuration specifies information regarding the worker nodes, such as the address and the port number.

Property Name | Default | Description
alluxio.worker.allocator.class alluxio.worker.block.allocator.MaxFreeAllocator The strategy that a worker uses to allocate space among storage directories in certain storage layer. Valid options include: `alluxio.worker.block.allocator.MaxFreeAllocator`, `alluxio.worker.block.allocator.GreedyAllocator`, `alluxio.worker.block.allocator.RoundRobinAllocator`.
alluxio.worker.bind.host 0.0.0.0 The hostname Alluxio's worker node binds to.
alluxio.worker.block.heartbeat.interval 1sec The interval between block workers' heartbeats.
alluxio.worker.block.heartbeat.timeout ${alluxio.worker.master.connect.retry.timeout} The timeout value of block workers' heartbeats. If the worker can't connect to master before this interval expires, the worker will exit.
alluxio.worker.block.master.client.pool.size 11 The block master client pool size on the Alluxio workers.
alluxio.worker.block.threads.max 2048 The maximum number of incoming RPC requests to block worker that can be handled. This value is used to configure maximum number of threads in gRPC thread pool with block worker. This value should be greater than the sum of `alluxio.user.block.worker.client.threads` across concurrent Alluxio clients. Otherwise, the worker connection pool can be drained, preventing new connections from being established.
alluxio.worker.block.threads.min 256 The minimum number of threads used to handle incoming RPC requests to block worker. This value is used to configure minimum number of threads in gRPC thread pool with block worker.
alluxio.worker.data.folder /alluxioworker/ A relative path within each storage directory used as the data folder for Alluxio worker to put data for tiered store.
alluxio.worker.data.folder.permissions rwxrwxrwx The permission set for the worker data folder. If short circuit is used this folder should be accessible by all users (rwxrwxrwx).
alluxio.worker.data.folder.tmp .tmp_blocks A relative path in alluxio.worker.data.folder used to store the temporary data for uncommitted files.
alluxio.worker.data.server.class alluxio.worker.grpc.GrpcDataServer Selects the networking stack to run the worker with. Valid options are: `alluxio.worker.grpc.GrpcDataServer`.
alluxio.worker.data.server.domain.socket.address The path to the domain socket. Short-circuit reads make use of a UNIX domain socket when this is set (non-empty). This is a special path in the file system that allows the client and the AlluxioWorker to communicate. You will need to set a path to this socket. The AlluxioWorker needs to be able to create the path. If alluxio.worker.data.server.domain.socket.as.uuid is set, the path should be the home directory for the domain socket. The full path for the domain socket will be /.
alluxio.worker.data.server.domain.socket.as.uuid false If true, the property alluxio.worker.data.server.domain.socket.address is the path to the home directory for the domain socket and a unique identifier is used as the domain socket name. In addition, clients ignore alluxio.user.hostname while detecting a local worker for short circuit ops. If false, the property is the absolute path to the UNIX domain socket.
alluxio.worker.data.tmp.subdir.max 1024 The maximum number of sub-directories allowed to be created in alluxio.worker.data.tmp.folder.
alluxio.worker.evictor.class alluxio.worker.block.evictor.LRUEvictor The strategy that a worker uses to evict block files when a storage layer runs out of space. Valid options include `alluxio.worker.block.evictor.LRFUEvictor`, `alluxio.worker.block.evictor.GreedyEvictor`, `alluxio.worker.block.evictor.LRUEvictor`.
alluxio.worker.evictor.lrfu.attenuation.factor 2.0 An attenuation factor in [2, INF) to control the behavior of LRFU.
alluxio.worker.evictor.lrfu.step.factor 0.25 A factor in [0, 1] to control the behavior of LRFU: smaller value makes LRFU more similar to LFU; and larger value makes LRFU closer to LRU.
alluxio.worker.file.buffer.size 1MB The buffer size for worker to write data into the tiered storage.
alluxio.worker.file.persist.pool.size 64 The size of the thread pool per worker, in which the thread persists an ASYNC_THROUGH file to under storage.
alluxio.worker.file.persist.rate.limit 2GB The rate limit of asynchronous persistence per second.
alluxio.worker.file.persist.rate.limit.enabled false Whether to enable rate limiting when performing asynchronous persistence.
alluxio.worker.filesystem.heartbeat.interval 1sec The heartbeat interval between the worker and file system master.
alluxio.worker.free.space.timeout 10sec The duration for which a worker will wait for eviction to make space available for a client write request.
alluxio.worker.hostname The hostname of Alluxio worker.
alluxio.worker.jvm.monitor.enabled false Whether to start the JVM monitor thread on the worker.
alluxio.worker.keytab.file Kerberos keytab file for Alluxio worker.
alluxio.worker.master.connect.retry.timeout 1hour Retry period before workers give up on connecting to master and exit.
alluxio.worker.memory.size 2/3 of total system memory, or 1GB if system memory size cannot be determined Memory capacity of each worker node.
alluxio.worker.network.async.cache.manager.threads.max 8 The maximum number of threads used to cache blocks asynchronously in the data server.
alluxio.worker.network.block.reader.threads.max 2048 The maximum number of threads used to read blocks in the data server.
alluxio.worker.network.flowcontrol.window 2MB The HTTP2 flow control window used by worker gRPC connections. Larger value will allow more data to be buffered but will use more memory.
alluxio.worker.network.keepalive.time 30sec The amount of time for data server (for block reads and block writes) to wait for a response before pinging the client to see if it is still alive.
alluxio.worker.network.keepalive.timeout 30sec The maximum time for a data server (for block reads and block writes) to wait for a keepalive response before closing the connection.
alluxio.worker.network.max.inbound.message.size 4MB The max inbound message size used by worker gRPC connections.
alluxio.worker.network.netty.boss.threads 1 How many threads to use for accepting new requests.
alluxio.worker.network.netty.channel EPOLL Netty channel type: NIO or EPOLL. If EPOLL is not available, this will automatically fall back to NIO.
alluxio.worker.network.netty.shutdown.quiet.period 2sec The quiet period. When the netty server is shutting down, it will ensure that no RPCs occur during the quiet period. If an RPC occurs, then the quiet period will restart before shutting down the netty server.
alluxio.worker.network.netty.watermark.high 32KB Determines how many bytes can be in the write queue before switching to non-writable.
alluxio.worker.network.netty.watermark.low 8KB Once the high watermark limit is reached, the queue must be flushed down to the low watermark before switching back to writable.
alluxio.worker.network.netty.worker.threads 0 How many threads to use for processing requests. Zero defaults to #cpuCores * 2.
alluxio.worker.network.reader.max.chunk.size.bytes 2MB When a client reads from a remote worker, the maximum chunk size.
alluxio.worker.network.shutdown.timeout 15sec Maximum amount of time to wait until the worker gRPC server is shutdown (regardless of the quiet period).
alluxio.worker.network.zerocopy.enabled true Whether zero copy is enabled on worker when processing data streams.
alluxio.worker.port 29999 The port Alluxio's worker node runs on.
alluxio.worker.principal Kerberos principal for Alluxio worker.
alluxio.worker.session.timeout 1min Timeout between worker and client connection indicating a lost session connection.
alluxio.worker.storage.checker.enabled true Whether periodic storage health checker is enabled on Alluxio workers.
alluxio.worker.tieredstore.block.lock.readers 1000 The max number of concurrent readers for a block lock.
alluxio.worker.tieredstore.block.locks 1000 Total number of block locks for an Alluxio block worker. Larger value leads to finer locking granularity, but uses more space.
alluxio.worker.tieredstore.level0.alias MEM The alias of the top storage tier on this worker. It must match one of the global storage tiers from the master configuration. We disable placing an alias lower in the global hierarchy before an alias with a higher position on the worker hierarchy. So by default, SSD cannot come before MEM on any worker.
alluxio.worker.tieredstore.level0.dirs.path /mnt/ramdisk on Linux, /Volumes/ramdisk on OSX The path of storage directory for the top storage tier. Note for MacOS the value should be `/Volumes/`.
alluxio.worker.tieredstore.level0.dirs.quota ${alluxio.worker.memory.size} The capacity of the top storage tier.
alluxio.worker.tieredstore.level0.reserved.ratio Fraction of space reserved in the top storage tier. This has been deprecated, please use high and low watermark instead.
alluxio.worker.tieredstore.level0.watermark.high.ratio 0.95 The high watermark of the space in the top storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level0.watermark.low.ratio 0.7 The low watermark of the space in the top storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level1.alias The alias of the second storage tier on this worker.
alluxio.worker.tieredstore.level1.dirs.path The path of storage directory for the second storage tier.
alluxio.worker.tieredstore.level1.dirs.quota The capacity of the second storage tier.
alluxio.worker.tieredstore.level1.reserved.ratio Fraction of space reserved in the second storage tier. This has been deprecated, please use high and low watermark instead.
alluxio.worker.tieredstore.level1.watermark.high.ratio 0.95 The high watermark of the space in the second storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level1.watermark.low.ratio 0.7 The low watermark of the space in the second storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level2.alias The alias of the third storage tier on this worker.
alluxio.worker.tieredstore.level2.dirs.path The path of storage directory for the third storage tier.
alluxio.worker.tieredstore.level2.dirs.quota The capacity of the third storage tier.
alluxio.worker.tieredstore.level2.reserved.ratio Fraction of space reserved in the third storage tier. This has been deprecated, please use high and low watermark instead.
alluxio.worker.tieredstore.level2.watermark.high.ratio 0.95 The high watermark of the space in the third storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level2.watermark.low.ratio 0.7 The low watermark of the space in the third storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.levels 1 The number of storage tiers on the worker.
alluxio.worker.tieredstore.reserver.enabled true Whether to enable tiered store reserver service or not.
alluxio.worker.tieredstore.reserver.interval 1sec The time period of space reserver service, which keeps certain portion of available space on each layer.
alluxio.worker.tieredstore.retry 3 The number of retries that the worker uses to process blocks.
alluxio.worker.ufs.block.open.timeout 5min Timeout to open a block from UFS.
alluxio.worker.ufs.instream.cache.enabled true Enable caching for seekable under storage input stream, so that subsequent seek operations on the same file will reuse the cached input stream. This will improve position read performance as the open operations of some under file system would be expensive. The cached input stream would be stale, when the UFS file is modified without notifying alluxio.
alluxio.worker.ufs.instream.cache.expiration.time 5min Cached UFS instream expiration time.
alluxio.worker.ufs.instream.cache.max.size 5000 The max entries in the UFS instream cache.
alluxio.worker.web.bind.host 0.0.0.0 The hostname Alluxio worker's web server binds to.
alluxio.worker.web.hostname The hostname Alluxio worker's web UI binds to.
alluxio.worker.web.port 30000 The port Alluxio worker's web UI runs on.
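
To illustrate the tiered storage properties above, a worker with a memory tier backed by an SSD tier could be configured as sketched below. The paths and sizes are placeholders and should be adapted to the actual hardware.

```properties
# Hypothetical two-tier worker configuration (paths and sizes are placeholders).
alluxio.worker.memory.size=16GB
alluxio.worker.tieredstore.levels=2

# Level 0: memory tier (must match a global tier alias on the master)
alluxio.worker.tieredstore.level0.alias=MEM
alluxio.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
alluxio.worker.tieredstore.level0.dirs.quota=${alluxio.worker.memory.size}
alluxio.worker.tieredstore.level0.watermark.high.ratio=0.95
alluxio.worker.tieredstore.level0.watermark.low.ratio=0.7

# Level 1: SSD tier
alluxio.worker.tieredstore.level1.alias=SSD
alluxio.worker.tieredstore.level1.dirs.path=/mnt/ssd/alluxio
alluxio.worker.tieredstore.level1.dirs.quota=500GB
alluxio.worker.tieredstore.level1.watermark.high.ratio=0.95
alluxio.worker.tieredstore.level1.watermark.low.ratio=0.7
```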

User Configuration

The user configuration specifies values regarding file system access.

Property Name | Default | Description
alluxio.user.app.id The custom id to use for labeling this client's info, such as metrics. If unset, a random long will be used. This value is displayed in the client logs on initialization. Note that using the same app id will cause client info to be aggregated, so different applications must set their own ids or leave this value unset to use a randomly generated id.
alluxio.user.block.master.client.threads 10 The number of threads used by a block master client pool to talk to the block master.
alluxio.user.block.remote.read.buffer.size.bytes 8MB The size of the file buffer to read data from remote Alluxio worker.
alluxio.user.block.size.bytes.default 512MB Default block size for Alluxio files.
alluxio.user.block.worker.client.pool.gc.threshold 300sec A block worker client is closed if it has been idle for more than this threshold.
alluxio.user.block.worker.client.pool.size 1024 The maximum number of block worker clients cached in the block worker client pool.
alluxio.user.block.worker.client.read.retry 5 The maximum number of workers to retry before the client gives up on reading a block
alluxio.user.conf.cluster.default.enabled true When this property is true, an Alluxio client will load the default values of configuration properties set by Alluxio master.
alluxio.user.date.format.pattern MM-dd-yyyy HH:mm:ss:SSS Display formatted date in cli command and web UI by given date format pattern.
alluxio.user.failed.space.request.limits 3 The number of times to request space from the file system before aborting.
alluxio.user.file.buffer.bytes 8MB The size of the file buffer to use for file system reads/writes.
alluxio.user.file.cache.partially.read.block true This property is deprecated as of 1.7 and has no effect. Use the read type to control caching behavior.
alluxio.user.file.copyfromlocal.write.location.policy.class alluxio.client.file.policy.RoundRobinPolicy The default location policy for choosing workers for writing a file's blocks using copyFromLocal command.
alluxio.user.file.create.ttl -1 Time to live for files created by a user, no ttl by default.
alluxio.user.file.create.ttl.action DELETE The action to perform on a file when its TTL expires; DELETE by default.
alluxio.user.file.delete.unchecked false Whether to check if the UFS contents are in sync with Alluxio before attempting to delete persisted directories recursively.
alluxio.user.file.load.ttl -1 Time to live for files loaded from UFS by a user, no ttl by default.
alluxio.user.file.load.ttl.action FREE The action to perform on a file when its TTL expires; FREE by default.
alluxio.user.file.master.client.threads 10 The number of threads used by a file master client to talk to the file master.
alluxio.user.file.metadata.load.type ONCE The behavior of loading metadata from UFS. When information about a path is requested and the path does not exist in Alluxio, metadata can be loaded from the UFS. Valid options are `ALWAYS`, `NEVER`, and `ONCE`. `ALWAYS` will always access UFS to see if the path exists in the UFS. `NEVER` will never consult the UFS. `ONCE` will access the UFS the "first" time (according to a cache), but not after that. This parameter is ignored if a metadata sync is performed, via the parameter "alluxio.user.file.metadata.sync.interval"
alluxio.user.file.metadata.sync.interval -1 The interval for syncing UFS metadata before invoking an operation on a path. -1 means no sync will occur. 0 means Alluxio will always sync the metadata of the path before an operation. If you specify a time interval, Alluxio will (best effort) not re-sync a path within that time interval. Syncing the metadata for a path must interact with the UFS, so it is an expensive operation. If a sync is performed for an operation, the configuration of "alluxio.user.file.metadata.load.type" will be ignored.
alluxio.user.file.passive.cache.enabled true Whether to cache files to local Alluxio workers when the files are read from remote workers (not UFS).
alluxio.user.file.readtype.default CACHE_PROMOTE Default read type when creating Alluxio files. Valid options are `CACHE_PROMOTE` (move data to highest tier if already in Alluxio storage, write data into highest tier of local Alluxio if data needs to be read from under storage), `CACHE` (write data into highest tier of local Alluxio if data needs to be read from under storage), `NO_CACHE` (no data interaction with Alluxio, if the read is from Alluxio data migration or eviction will not occur).
alluxio.user.file.replication.durable 1 N/A
alluxio.user.file.replication.max -1 N/A
alluxio.user.file.replication.min 0 N/A
alluxio.user.file.seek.buffer.size.bytes 1MB The file seek buffer size. This is only used when alluxio.user.file.cache.partially.read.block is enabled.
alluxio.user.file.ufs.tier.enabled false When workers run out of available memory, whether the client can skip writing data to Alluxio but fallback to write to UFS without stopping the application. This property only works when the write type is ASYNC_THROUGH.
alluxio.user.file.waitcompleted.poll 1sec The time interval to poll a file for its completion status when using waitCompleted.
alluxio.user.file.write.avoid.eviction.policy.reserved.size.bytes 0MB The portion of space reserved in the worker when a user uses the LocalFirstAvoidEvictionPolicy class as the file write location policy.
alluxio.user.file.write.location.policy.class alluxio.client.file.policy.LocalFirstPolicy The default location policy for choosing workers for writing a file's blocks.
alluxio.user.file.write.tier.default 0 The default tier for choosing where to write a block. Valid option is any integer. Non-negative values identify tiers starting from top going down (0 identifies the first tier, 1 identifies the second tier, and so on). If the provided value is greater than the number of tiers, it identifies the last tier. Negative values identify tiers starting from the bottom going up (-1 identifies the last tier, -2 identifies the second to last tier, and so on). If the absolute value of the provided value is greater than the number of tiers, it identifies the first tier.
alluxio.user.file.writetype.default MUST_CACHE Default write type when creating Alluxio files. Valid options are `MUST_CACHE` (write will only go to Alluxio and must be stored in Alluxio), `CACHE_THROUGH` (try to cache, write to UnderFS synchronously), `THROUGH` (no cache, write to UnderFS synchronously).
alluxio.user.heartbeat.interval 1sec The interval between Alluxio workers' heartbeats.
alluxio.user.hostname The hostname to use for the client. Note: this property is deprecated; set alluxio.locality.node instead.
alluxio.user.local.reader.chunk.size.bytes 8MB When a client reads from a local worker, the maximum data chunk size.
alluxio.user.local.writer.chunk.size.bytes 64KB When a client writes to a local worker, the maximum data chunk size.
alluxio.user.metrics.collection.enabled false Enable collecting the client-side metrics and heartbeating them to the master.
alluxio.user.metrics.heartbeat.interval 3sec The time period of the client-to-master heartbeat that sends the client-side metrics.
alluxio.user.network.data.timeout 30sec The maximum time for a data client (for block reads and block writes) to wait for a response from the data server.
alluxio.user.network.flowcontrol.window 2MB The HTTP2 flow control window used by user gRPC connections. Larger value will allow more data to be buffered but will use more memory.
alluxio.user.network.keepalive.time 9223372036854775807 The amount of time for a gRPC client (for block reads and block writes) to wait for a response before pinging the server to see if it is still alive.
alluxio.user.network.keepalive.timeout 30sec The maximum time for a gRPC client (for block reads and block writes) to wait for a keepalive response before closing the connection.
alluxio.user.network.max.inbound.message.size 100MB The max inbound message size used by user gRPC connections.
alluxio.user.network.netty.channel EPOLL Type of netty channels. If EPOLL is not available, this will automatically fall back to NIO.
alluxio.user.network.netty.worker.threads 0 How many threads to use for remote block worker client to read from remote block workers.
alluxio.user.network.reader.buffer.size.messages 16 When a client reads from a remote worker, the maximum number of messages to buffer by the client. A message can be either a command response, a data chunk, or a gRPC stream event such as complete or error.
alluxio.user.network.reader.chunk.size.bytes 1MB When a client reads from a remote worker, the maximum chunk size.
alluxio.user.network.socket.timeout 10min The timeout of a socket created by a user to connect to the master.
alluxio.user.network.writer.buffer.size.messages 16 When a client writes to a remote worker, the maximum number of messages to buffer by the client. A message can be either a command response, a data chunk, or a gRPC stream event such as complete or error.
alluxio.user.network.writer.chunk.size.bytes 1MB When a client writes to a remote worker, the maximum chunk size.
alluxio.user.network.writer.close.timeout 30min The timeout to close a writer client.
alluxio.user.network.writer.flush.timeout 30min The timeout to wait for flush to finish in a data writer.
alluxio.user.network.zerocopy.enabled true Whether zero copy is enabled on client when processing data streams.
alluxio.user.rpc.retry.base.sleep 50ms Alluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the base time in the exponential backoff.
alluxio.user.rpc.retry.max.duration 2min Alluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the maximum duration to retry for before giving up. Note that, this value is set to 5s for fs and fsadmin CLIs.
alluxio.user.rpc.retry.max.num.retry 100 Alluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the maximum number of retries. This property has been deprecated by time-based retry using: alluxio.user.rpc.retry.max.duration
alluxio.user.rpc.retry.max.sleep 3sec Alluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the maximum wait time in the backoff.
alluxio.user.short.circuit.enabled true If set to true, short circuit read/write is enabled, which allows clients to read/write data without going through Alluxio workers when the data is local.
alluxio.user.ufs.block.location.all.fallback.enabled false Whether to return all workers as block location if ufs block locations are not co-located with any Alluxio workers or is empty.
alluxio.user.ufs.block.read.concurrency.max 2147483647 The maximum concurrent readers for one UFS block on one Block Worker.
alluxio.user.ufs.block.read.location.policy alluxio.client.file.policy.LocalFirstPolicy When an Alluxio client reads a file from the UFS, it delegates the read to an Alluxio worker. The client uses this policy to choose which worker to read through. Builtin choices: [alluxio.client.block.policy.DeterministicHashPolicy, alluxio.client.file.policy.LocalFirstAvoidEvictionPolicy, alluxio.client.file.policy.LocalFirstPolicy, alluxio.client.file.policy.MostAvailableFirstPolicy, alluxio.client.file.policy.RoundRobinPolicy, alluxio.client.file.policy.SpecificHostPolicy].
alluxio.user.ufs.block.read.location.policy.deterministic.hash.shards 1 When alluxio.user.ufs.block.read.location.policy is set to alluxio.client.block.policy.DeterministicHashPolicy, this specifies the number of hash shards.
alluxio.user.ufs.delegation.read.buffer.size.bytes 8MB Size of the read buffer when reading from the UFS through the Alluxio worker. Each read request will fetch at least this many bytes, unless the read reaches the end of the file.
alluxio.user.ufs.delegation.write.buffer.size.bytes 2MB Size of the write buffer when writing to the UFS through the Alluxio worker. Each write request will write at least this many bytes, unless the write is at the end of the file.
alluxio.user.worker.list.refresh.interval 2min The interval used to refresh the live worker list on the client
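
As a client-side example, an application could tune read and write behavior with the user properties above. The values below are illustrative rather than recommendations.

```properties
# Hypothetical client-side settings (e.g. in the application's alluxio-site.properties).
# Larger default block size and asynchronous persistence to the UFS
alluxio.user.block.size.bytes.default=256MB
alluxio.user.file.writetype.default=ASYNC_THROUGH

# Cache data read from under storage, but do not promote already-cached blocks
alluxio.user.file.readtype.default=CACHE

# Sync UFS metadata at most once per minute before operations on a path
alluxio.user.file.metadata.sync.interval=1min
```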

Resource Manager Configuration

When running Alluxio with resource managers like Mesos and YARN, Alluxio has additional configuration options.

Property Name | Default | Description
alluxio.integration.master.resource.cpu 1 The number of CPUs to run an Alluxio master for YARN framework.
alluxio.integration.master.resource.mem 1024MB The amount of memory to run an Alluxio master for YARN framework.
alluxio.integration.mesos.alluxio.jar.url http://downloads.alluxio.org/downloads/files/${alluxio.version}/alluxio-${alluxio.version}-bin.tar.gz URL to download an Alluxio distribution from during Mesos deployment.
alluxio.integration.mesos.jdk.path jdk1.8.0_151 If installing Java from a remote URL during Mesos deployment, this must be set to the directory name of the untarred JDK.
alluxio.integration.mesos.jdk.url LOCAL A URL from which to install the JDK during Mesos deployment. Defaults to LOCAL, which tells Mesos to use the local JDK on the system. When using this property, alluxio.integration.mesos.jdk.path must also be set correctly.
alluxio.integration.mesos.master.name AlluxioMaster The name of the master process to use within Mesos.
alluxio.integration.mesos.master.node.count 1 The number of Alluxio master processes to run within Mesos.
alluxio.integration.mesos.principal alluxio The Mesos principal for the Alluxio Mesos Framework.
alluxio.integration.mesos.role * Mesos role for the Alluxio Mesos Framework.
alluxio.integration.mesos.secret Secret token for authenticating with Mesos.
alluxio.integration.mesos.user The Mesos user for the Alluxio Mesos Framework. Defaults to the current user.
alluxio.integration.mesos.worker.name AlluxioWorker The name of the worker process to use within Mesos.
alluxio.integration.worker.resource.cpu 1 The number of CPUs to run an Alluxio worker for YARN framework.
alluxio.integration.worker.resource.mem 1024MB The amount of memory to run an Alluxio worker for YARN framework.
alluxio.integration.yarn.workers.per.host.max 1 The number of workers to run on an Alluxio host for YARN framework.
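
For example, a Mesos deployment might size the Alluxio processes and point at a remote JDK as sketched below. The JDK URL and the resource figures are placeholders for illustration.

```properties
# Hypothetical resource manager settings for a Mesos deployment.
alluxio.integration.master.resource.cpu=2
alluxio.integration.master.resource.mem=2048MB
alluxio.integration.worker.resource.cpu=4
alluxio.integration.worker.resource.mem=8192MB

# Install a JDK from a remote URL instead of using the local one
alluxio.integration.mesos.jdk.url=https://example.com/jdk1.8.0_151.tar.gz
alluxio.integration.mesos.jdk.path=jdk1.8.0_151
```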

Security Configuration

The security configuration specifies information regarding the security features, such as authentication and file permission. Settings for authentication take effect for master, worker, and user. Settings for file permission only take effect for master. See [Security](../advanced/Security.html) for more information about security features.

Property Name | Default | Description
alluxio.security.authentication.custom.provider.class The class to provide customized authentication implementation, when alluxio.security.authentication.type is set to CUSTOM. It must implement the interface 'alluxio.security.authentication.AuthenticationProvider'.
alluxio.security.authentication.type SIMPLE The authentication mode. Currently three modes are supported: NOSASL, SIMPLE, CUSTOM. The default value SIMPLE indicates that a simple authentication is enabled. Server trusts whoever the client claims to be.
alluxio.security.authorization.permission.enabled true Whether to enable access control based on file permission.
alluxio.security.authorization.permission.supergroup supergroup The super group of Alluxio file system. All users in this group have super permission.
alluxio.security.authorization.permission.umask 022 The umask of creating file and directory. The initial creation permission is 777, and the difference between directory and file is 111. So for default umask value 022, the created directory has permission 755 and file has permission 644.
alluxio.security.group.mapping.cache.timeout 1min Time for cached group mapping to expire.
alluxio.security.group.mapping.class alluxio.security.group.provider.ShellBasedUnixGroupsMapping The class to provide the user-to-groups mapping service. The master could get the various group memberships of a given user. It must implement the interface 'alluxio.security.group.GroupMappingService'. The default implementation executes the 'groups' shell command to fetch the group memberships of a given user.
alluxio.security.login.impersonation.username _HDFS_USER_ When alluxio.security.authentication.type is set to SIMPLE or CUSTOM, user application uses this property to indicate the IMPERSONATED user requesting Alluxio service. If it is not set explicitly, or set to _NONE_, impersonation will not be used. A special value of '_HDFS_USER_' can be specified to impersonate the hadoop client user.
alluxio.security.login.username When alluxio.security.authentication.type is set to SIMPLE or CUSTOM, user application uses this property to indicate the user requesting Alluxio service. If it is not set explicitly, the OS login user will be used.
alluxio.security.stale.channel.purge.interval 60min Interval for which client channels that have been inactive will be regarded as unauthenticated. Such channels will reauthenticate with their target master upon being used for new RPCs.
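
Putting the security properties together, a cluster that uses simple authentication, enforces file permissions, and impersonates the Hadoop client user might look like the sketch below; the supergroup and umask are placeholders to be adjusted to local policy.

```properties
# Hypothetical security settings in conf/alluxio-site.properties.
alluxio.security.authentication.type=SIMPLE
alluxio.security.authorization.permission.enabled=true
alluxio.security.authorization.permission.supergroup=supergroup
# umask 022: new directories get permission 755, new files get 644
alluxio.security.authorization.permission.umask=022
# Impersonate the Hadoop client user for requests coming from Hadoop clients
alluxio.security.login.impersonation.username=_HDFS_USER_
```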