易陆发现 Internet Technology Forum

Hadoop Learning Path: Hadoop Cluster Setup and Simple Applications

Posted 2022-11-7 11:18:00

Common Problems in Distributed Clusters

HDFS and YARN both use a one-master, many-workers distributed architecture: the master node is the manager, the slave nodes are the workers.

Question: what happens if the master node (the manager) goes down?

The cluster is left leaderless and becomes unusable. Every one-master, many-workers architecture therefore shares a common problem:

when the master node in the cluster goes down, the whole cluster is unavailable. This phenomenon is called a single point of failure (SPOF).

The SPOF concept covers two cases:

1. If the node that goes down is a slave node, the cluster keeps running and continues to serve requests normally.

2. If the node that goes down is the master node, the whole cluster is effectively down.

The general solution: high availability (HA).

Concept: when the master node currently serving requests goes down, a standby master immediately takes over and serves requests. The switch should be seamless and near-instant.
Common Modes for Setting Up a Cluster

1. Standalone mode
  All the distributed components run on a single machine.

2. Pseudo-distributed mode (set up on a single-node "cluster")
  All of the cluster's roles are assigned to one node.
  The entire cluster is installed on a cluster that has only one node.
  Mainly used for quick trials, to simulate the effect of a distributed deployment.

3. Distributed mode
  The nodes in the cluster are assigned a variety of roles, spread across the cluster.
  Mainly used for learning, testing, and similar scenarios.

4. High-availability (HA) mode
  The cluster has multiple master nodes.
  Note the distinction: only one master node serves requests at any time; all the other masters are in hot standby.
  The master currently serving requests: active. There is exactly one.
  Hot-standby masters: standby. There can be several.
  How it works: 1. At any moment, exactly one master is active, and the active master serves all requests.
         2. At any moment, there should be at least one standby master, waiting to take over when the active one goes down.
  This architecture exists to solve the cluster's common SPOF problem.

  Both the plain distributed and the HA architectures still share one issue: in a master/slave structure, the number of slave nodes can grow very large. The most immediate consequence is that the master becomes overloaded and crashes, which is a vicious cycle.

5. Federation mode
  Both the master and the slave nodes can be numerous.
  1) Master nodes: "many" here means that many masters serve requests at the same time.
     Key point: each master manages one part of the overall cluster.
  2) Slave nodes: there will always be many of these.
  Federation mode still has a problem:
  although the load of a single master is now spread across several masters, each of those masters is itself still a SPOF.
Common Issues When Installing a Hadoop Cluster

1. If the installation fails and you don't know how to fix the error: reinstall.

   Only the steps that differ between a first install and a reinstall need attention:

   1) Everything before "edit the configuration files" can stay untouched.

   2) Check that the configuration files are correct.
      Check the configuration files on one node first; if they are all correct, redistribute them once.

   3) For a distributed install, check and confirm the install directory and the installing user on every node.

   4) Delete the data directories.
      A. Delete the master node's working directory: the namenode's data directory.
         This only needs to be deleted on the master node.
      B. Delete the slave nodes' working directories: the datanode data directories.
         Delete the corresponding data directory on every slave node.
      Once both sets of data are deleted, the cluster holds no historical data at all, so it is effectively a brand-new cluster.

   5) Once the data and the install packages are confirmed good, re-initialize.
      Important: initializing a Hadoop cluster really means initializing the HDFS cluster, and it can only be done on the master node.
      If you only need a YARN cluster, no initialization is required.

   6) Start the cluster.

   7) Verify that the cluster came up successfully.
The Order in Which Linux Environment Variables Are Loaded

Per-user environment variables: used only by the current user — ~/.bashrc, ~/.bash_profile
System-wide environment variables: used by every user on the system — /etc/profile

When an ordinary user logs in, several environment configuration files are loaded, in this order:

1. /etc/profile
2. ~/.bash_profile
3. ~/.bashrc
OP | Posted 2022-11-11 16:27:25
Hadoop is an open-source distributed computing platform under the Apache Software Foundation. With the Hadoop Distributed Filesystem (HDFS) and MapReduce (an open-source implementation of Google's MapReduce) at its core, Hadoop gives users a distributed infrastructure whose low-level details are transparent.
A Hadoop cluster has two classes of roles: Master and Slave. An HDFS cluster consists of one NameNode and a number of DataNodes. The NameNode acts as the master server, managing the filesystem namespace and client access to the filesystem; the DataNodes manage the stored data. The MapReduce framework consists of a single JobTracker running on the master node and a TaskTracker running on each slave node. The master schedules all the tasks that make up a job, distributes them across the slave nodes, monitors their execution, and re-runs failed tasks; the slaves simply execute the tasks the master assigns. When a job is submitted, the JobTracker receives the job and its configuration, distributes the configuration to the slave nodes, then schedules the tasks and monitors the TaskTrackers as they run.
As this shows, HDFS and MapReduce together form the core of Hadoop's distributed architecture. HDFS provides the distributed filesystem across the cluster; MapReduce provides distributed computation and task processing on top of it. HDFS supplies file storage and I/O during MapReduce processing, while MapReduce handles task distribution, tracking, and execution on top of HDFS and collects the results. Working together, they carry out the main work of a Hadoop distributed cluster.
1.2 Environment
My environment is configured in virtual machines. The Hadoop cluster contains 3 nodes: 1 Master and 2 Slaves, connected over a LAN and able to ping each other. The node IP addresses are:

VM OS          Machine name     IP address
Ubuntu 13.04   Master.Hadoop    192.168.1.141
Ubuntu 9.11    Slave1.Hadoop    192.168.1.142
Fedora 17      Slave2.Hadoop    192.168.1.137

The Master machine runs the NameNode and JobTracker roles, managing the distributed data and dispatching the decomposed tasks; the 2 Slave machines run the DataNode and TaskTracker roles, storing the distributed data and executing tasks. Ideally there would also be one more Master machine as a standby, ready to take over immediately if the Master server goes down; a standby Master will be added later once more experience has accumulated (the number of standby machines can be changed in the configuration files).
    Note: Hadoop requires the same deployment directory structure on every machine (at startup the other nodes are started using the same directory as the master) and an identical user account on each. Most documentation suggests creating a hadoop user on all machines and using that account for passwordless authentication. For convenience, a hadoop user is created afresh on each of the three machines here.
1.3 Environment Configuration
Configure the Hadoop cluster as shown in the table in section 1.2. The following shows how to change the machine names and configure the hosts file for convenience.
Note: my VMs use NAT networking with automatically assigned IP addresses, so the auto-assigned addresses are used as-is rather than being changed to particular ones.
(1) Changing the machine name
Suppose we find that a machine's hostname is not what we want.
1) On Ubuntu
Edit the value in /etc/hostname; after the change, run the hostname command to confirm the new name took effect.
       To make sure the hostname also resolves correctly, it is best to update the corresponding entry in /etc/hosts as well.

2) On Fedora
Edit /etc/sysconfig/network and change the value after "HOSTNAME" to the name we want.
Command: vi /etc/sysconfig/network
    Again, to make the hostname resolve correctly, also update the corresponding entry in /etc/hosts.
(2) Configuring the hosts file (required)
/etc/hosts records the [HostName, IP] pairs of the hosts connected on the LAN. When a network connection is made, this file is consulted first to find the IP address for the given hostname.
To test whether two machines can reach each other, we usually "ping <IP>". If "ping <hostname>" fails to find the machine (which is also why it is best to update this file whenever a hostname changes), the fix is to edit /etc/hosts and write the IP address and HostName of every host on the LAN into it, one pair per line.
For example: from "Master.Hadoop: 192.168.1.141", run a ping test against "Slave1.Hadoop: 192.168.1.142". Pinging the IP address directly succeeds, but pinging the hostname fails with "unknown host". Checking the /etc/hosts file on "Master.Hadoop" shows it contains no "192.168.1.142  Slave1.Hadoop" entry, so this machine cannot resolve the hostname "Slave1.Hadoop".
When configuring a Hadoop cluster, the IP and hostname of every machine in the cluster must be added to /etc/hosts, so that Master and the Slaves can reach each other by hostname as well as by IP. Add the following to /etc/hosts on every machine:
192.168.1.141 Master.Hadoop
192.168.1.142 Slave1.Hadoop
192.168.1.137 Slave2.Hadoop
Command: vi /etc/hosts
Now ping "Slave1.Hadoop" by hostname again to see whether the test succeeds.
Once the hostname pings successfully, the entries we added resolve on the LAN; all that remains is to make the same change on the other Slave machines and test again.
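Rather than editing /etc/hosts directly, the mapping can be staged in a scratch file and sanity-checked first. A small sketch (the IPs and hostnames are the ones used in this article):

```shell
#!/bin/sh
# Build the hosts mapping in a scratch file and check that a hostname
# resolves to the expected IP before installing it as /etc/hosts entries.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.1.141 Master.Hadoop
192.168.1.142 Slave1.Hadoop
192.168.1.137 Slave2.Hadoop
EOF

# Each line is "IP hostname"; look up the IP mapped to a given name.
ip=$(awk '$2 == "Slave1.Hadoop" {print $1}' "$hosts")
echo "$ip"   # 192.168.1.142
rm -f "$hosts"
```

Appending the verified lines to /etc/hosts on every node then gives consistent name resolution cluster-wide.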
1.4 Required Software
(1) JDK
    JDK version: jdk-7u25-linux-i586.tar.gz
(2) Hadoop
    Hadoop version: hadoop-1.1.2.tar.gz
2. Passwordless SSH Configuration
Hadoop needs to manage remote Hadoop daemons while it runs: after Hadoop starts, the NameNode starts and stops the daemons on the DataNodes over SSH (Secure Shell). Commands must therefore run between nodes without password prompts, so SSH is configured for passwordless public-key authentication: the NameNode can log in to the DataNodes over SSH without a password to start their processes, and by the same principle the DataNodes can log in to the NameNode without a password.
Note: if SSH is not installed on your Linux machine, install it first.
Install ssh on Ubuntu: sudo apt-get install openssh-server
Install ssh on Fedora: yum install openssh-server
2.1 SSH Basics
1) How SSH works
    SSH is secure because it uses public-key cryptography. The password-login flow is:
(1) The remote host receives the user's login request and sends the user its public key.
(2) The user encrypts the login password with this public key and sends it back.
(3) The remote host decrypts the password with its private key; if the password is correct, the login is accepted.
2) Basic usage
    To log in to a remote host named linux as the user java:
    $ ssh java@linux
    SSH defaults to port 22, i.e. your login request goes to port 22 on the remote host. The -p option changes the port; for example, to use port 88:
    $ ssh -p 88 java@linux
    Note: the error "ssh: Could not resolve hostname linux: Name or service not known" means the host "linux" is not known to this machine's name service; add the host and its IP to /etc/hosts:
    192.168.1.107    linux
2.2 Configure Passwordless Login from Master to All Slaves
1) How passwordless login works
The Master (NameNode | JobTracker), acting as the client, needs passwordless public-key authentication to connect to the Slave servers (DataNode | TaskTracker). A key pair, one public key and one private key, is generated on the Master, and the public key is copied to every Slave. When the Master connects to a Slave over SSH, the Slave generates a random number, encrypts it with the Master's public key, and sends it to the Master. The Master decrypts it with its private key and returns the decrypted number; once the Slave confirms it is correct, the Master is allowed to connect. That is the public-key authentication handshake, and no password is typed by the user at any point.
2) Setting up passwordless login on the Master
a. On the Master node, generate a passphrase-less key pair with ssh-keygen:
ssh-keygen -t rsa -P ''
When asked for a save path, press Enter to accept the default. The generated pair, id_rsa (private key) and id_rsa.pub (public key), is stored under "/home/<username>/.ssh" by default.
Check that "/home/<username>/" contains a ".ssh" folder and that it holds the two freshly generated key files.
b. Still on the Master, append id_rsa.pub to the authorized keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Check the permissions on authorized_keys; if they are wrong, fix them with:
chmod 600 authorized_keys
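The permission step matters more than it looks: sshd refuses public-key login when ~/.ssh or authorized_keys is group- or world-writable, usually without an obvious error. A small sketch, demonstrated on a scratch directory standing in for ~/.ssh:

```shell
#!/bin/sh
# sshd expects ~/.ssh to be 700 and authorized_keys to be 600;
# looser modes make public-key authentication fail silently.
sshdir=$(mktemp -d)          # stand-in for ~/.ssh
keys="$sshdir/authorized_keys"

touch "$keys"
chmod 700 "$sshdir"
chmod 600 "$keys"

# stat -c %a prints the octal permission bits (GNU coreutils).
echo "$(stat -c %a "$sshdir") $(stat -c %a "$keys")"   # 700 600
rm -rf "$sshdir"
```

If passwordless login fails after the keys are in place, checking these two modes is the first thing to try.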
c. As root, edit the SSH daemon configuration "/etc/ssh/sshd_config".
Check that the "#" comment is removed from the following lines:
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile  %h/.ssh/authorized_keys # public key file path
Remember to restart the SSH service afterwards so the changes take effect.
Log out of root and verify as the ordinary user that the setup works.
Once passwordless login to the local machine succeeds, the next step is to copy the public key to the Slave machines.
    Note: testing sometimes fails with the error "Agent admitted failure to sign using the key." The fix is: ssh-add ~/.ssh/id_rsa
    d. Use the ssh-copy-id command to send the public key to the remote host (Slave1.Hadoop here).
e. Test that passwordless login to the other machine succeeds.
With these five steps we have passwordless SSH from "Master.Hadoop" to "Slave1.Hadoop"; repeat them for the remaining Slave server (Slave2.Hadoop). That completes "passwordless login from Master to all Slaves".
Next, configure passwordless login from every Slave to the Master. The principle is the same as Master-to-Slave: append each Slave's public key to "authorized_keys" under the Master's ".ssh" folder. Remember it is an append (>>), not an overwrite.
Note: some problems may come up along the way:
(1) If ssh fails with "ssh: connect to host <host> port 22: Connection refused",
the remote machine probably has no ssh service installed, or the service is installed but not running; test on the Slave host in question.
To fix it once and for all, enable the service at boot: # systemctl enable sshd.service
(2) If ssh-copy-id is missing ("ssh-copy-id: Command not found"), the ssh version may be too old; some Redhat systems have this problem. The workaround is to copy the local pubkey to the remote server manually:
cat ~/.ssh/id_rsa.pub | ssh hadoop@Master.Hadoop 'cat >> ~/.ssh/authorized_keys'
This command is equivalent to the following two:
① on the local machine: scp ~/.ssh/id_rsa.pub hadoop@Master.Hadoop:~
② on the remote machine: cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
3. Installing the Java Environment
The JDK must be installed on every machine; install it on the Master server first, then repeat the same steps on the other servers. Installing the JDK and configuring the environment variables must be done as "root".
3.1 Installing the JDK
Log in to "Master.Hadoop" as root, create a "java" folder under "/usr", copy "jdk-7u25-linux-i586.tar.gz" into "/usr/java", and unpack it there. A new folder named "jdk1.7.0_25" appears under "/usr/java", which means the JDK is installed. Delete the "jdk-7u25-linux-i586.tar.gz" file and move on to configuring the environment variables.
3.2 Configuring Environment Variables
(1) Edit "/etc/profile"
    Edit "/etc/profile" and append "JAVA_HOME", "CLASSPATH", and "PATH" entries for Java as follows:
# set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_25/
export JRE_HOME=/usr/java/jdk1.7.0_25/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Or:
# set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_25/
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
The two are equivalent; we use the first.
(2) Apply the configuration
Save and exit, then run the following so the configuration takes effect immediately:
source /etc/profile   or   . /etc/profile
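That an export takes effect after sourcing can be verified directly. A minimal sketch, using a scratch file in place of /etc/profile (the JAVA_HOME path is the one used in this article):

```shell
#!/bin/sh
# Write the export lines to a scratch file, source it, and confirm
# the variable is visible in the current shell.
prof=$(mktemp)               # stand-in for /etc/profile
cat > "$prof" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_25/
export PATH=$PATH:$JAVA_HOME/bin
EOF

. "$prof"                    # same effect as "source /etc/profile"
echo "$JAVA_HOME"            # /usr/java/jdk1.7.0_25/
rm -f "$prof"
```

Note that sourcing only affects the current shell; other already-open shells must source the file themselves or be restarted.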
3.3 Verifying the Installation
After the configuration takes effect, check with:
java -version
If the expected JDK version is printed, the JDK is installed successfully.
3.4 Installing on the Remaining Machines
Now copy "/usr/java/" to the Slaves with scp as the ordinary hadoop user; what remains is to configure the environment variables on each of the remaining Slave servers following the steps above and verify the installation. Taking Slave1.Hadoop as an example:
scp -r /usr/java hadoop@Slave1.Hadoop:/usr/
Note: some machines have older C libraries and cannot run a newer JDK (some Redhat 9 boxes, for example). You cannot simply install a lower JDK on that one machine, because all machines in the cluster must run the same JDK version (verified by testing). There are two solutions: give up that machine and use another one that can run this JDK version; or choose a lower JDK version and reinstall it on all machines.
4. Installing the Hadoop Cluster
Hadoop must be installed on every machine; install it on the Master server first, then repeat the steps on the other servers. Installing and configuring hadoop must be done as "root".
4.1 Installing hadoop
Log in to "Master.Hadoop" as root and copy the downloaded "hadoop-1.1.2.tar.gz" to the /usr directory. Then, in "/usr", unpack "hadoop-1.1.2.tar.gz" with the commands below, rename the folder to "hadoop", give the ordinary hadoop user ownership of the folder, and delete the "hadoop-1.1.2.tar.gz" archive.
cd /usr
tar -xzvf hadoop-1.1.2.tar.gz
mv hadoop-1.1.2 hadoop
chown -R hadoop:hadoop hadoop   # give the "hadoop" folder to the ordinary hadoop user
rm -rf hadoop-1.1.2.tar.gz
Finally, create a tmp folder under "/usr/hadoop" and add the Hadoop install path to "/etc/profile": append the following lines to the end of the file and apply them (. /etc/profile):
# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
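The point of the PATH line is that commands under $HADOOP_HOME/bin become resolvable by name. A small sketch of the mechanism, using a dummy executable as a stand-in for the real hadoop binary:

```shell
#!/bin/sh
# Create a dummy executable (stand-in for $HADOOP_HOME/bin/hadoop),
# add its directory to PATH, and confirm it resolves by name.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$bindir/hadoop-demo"
chmod +x "$bindir/hadoop-demo"

PATH="$PATH:$bindir"         # same shape as PATH=$PATH:$HADOOP_HOME/bin
command -v hadoop-demo >/dev/null && echo "resolved"   # resolved
rm -rf "$bindir"
```

If "hadoop: command not found" appears after editing /etc/profile, the usual cause is that the file has not been re-sourced in the current shell.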
4.2 Configuring hadoop
(1) Configure hadoop-env.sh
The "hadoop-env.sh" file lives in "/usr/hadoop/conf".
Edit the following line in the file:
export JAVA_HOME=/usr/java/jdk1.7.0_25
Hadoop's configuration files are in the conf directory. Earlier versions used mainly Hadoop-default.xml and Hadoop-site.xml. As Hadoop grew rapidly and its code base was split into core, hdfs, and map/reduce, the configuration was split into three files as well: core-site.xml, hdfs-site.xml, and mapred-site.xml. core-site.xml and hdfs-site.xml configure things from the HDFS side; core-site.xml and mapred-site.xml from the MapReduce side.
(2) Configure core-site.xml
Edit the core configuration file core-site.xml; this sets the address and port of the HDFS master (the namenode):
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <!-- note: create the tmp folder under /usr/hadoop first -->
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.1.141:9000</value>
    </property>
</configuration>
Note: if the hadoop.tmp.dir parameter is not set, the default temporary directory is /tmp/hadoop-hadoop. That directory is wiped on every reboot, so the format would have to be re-run each time, otherwise errors follow.
(3) Configure hdfs-site.xml
Edit the HDFS configuration; the replication factor defaults to 3:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <!-- note: replication is the number of data copies; the default of 3 errors out when there are fewer than 3 slaves -->
    </property>
</configuration>
(4) Configure mapred-site.xml
Edit the MapReduce configuration file; this sets the JobTracker's address and port:
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://192.168.1.141:9001</value>
    </property>
</configuration>
(5) Configure the masters file
There are two options:
    (1) First option
    Change localhost to Master.Hadoop
    (2) Second option
    Remove "localhost" and put in the Master machine's IP: 192.168.1.141
To be on the safe side, use the second option: if "/etc/hosts" is forgotten, LAN name resolution fails and unexpected errors appear, but once the IP is right and the network is up, the host can always be found by IP.
(6) Configure the slaves file (Master host only)
    There are two options:
    (1) First option
    Remove "localhost" and add one hostname per line, filling in all the remaining Slave hostnames.
    For example:
Slave1.Hadoop
Slave2.Hadoop
    (2) Second option
    Remove "localhost" and add the IPs of all Slave machines in the cluster, also one per line.
    For example:
192.168.1.142
192.168.1.137
For the same reason as with the "masters" file, choose the second option.
The Hadoop configuration on the Master machine is now finished; what remains is configuring Hadoop on the Slave machines.
The simplest way is to copy the configured hadoop folder "/usr/hadoop" from the Master to the "/usr" directory of every Slave (the slaves file is not actually needed on the Slave machines, but copying it does no harm), using the command format below. (Note: this can be done as an ordinary user or as root.)
scp -r /usr/hadoop root@<server IP>:/usr/
For example, to copy the configured Hadoop files from "Master.Hadoop" to "Slave1.Hadoop":
scp -r /usr/hadoop root@Slave1.Hadoop:/usr/
The copy runs as root. Whichever user copies: although the hadoop user has rights to "/usr/hadoop" on the Master, the hadoop user on Slave1 has no rights to "/usr" and therefore no permission to create folders there, so the right-hand side is always in the form "root@<machine IP>". And because passwordless SSH was only set up for the ordinary user, running "scp" as root still prompts for the root password of the "Slave1.Hadoop" server.
    Check that a "hadoop" folder now exists under "/usr" on the "Slave1.Hadoop" server to confirm the copy succeeded.
The copied hadoop folder is owned by root, so the hadoop user on "Slave1.Hadoop" must now be given access to "/usr/hadoop".
Log in to "Slave1.Hadoop" as root and run:
chown -R hadoop:hadoop /usr/hadoop   # user:group  folder
Then edit "/etc/profile" on "Slave1.Hadoop", append the following lines, and apply them (source /etc/profile):
# set hadoop environment
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
If in doubt, compare with the "/etc/profile" configuration on the "Master.Hadoop" machine. The Hadoop configuration on this one Slave machine is now done; the rest is following the same recipe to deploy Hadoop on the remaining Slave machines.
4.3 Starting and Verifying
(1) Format the HDFS filesystem
Do this on "Master.Hadoop" as the ordinary hadoop user. (Note: only once; later startups need no re-format, just start-all.sh.)
hadoop namenode -format
The format succeeds, possibly with a warning; according to reports online the warning does not affect hadoop's operation, but there is a fix for it too, covered in the "FAQ" section below.
(2) Start hadoop
Before starting, turn off the firewall on every machine in the cluster, or the datanodes will start and then shut down again. Start with:
start-all.sh
The startup log shows the namenode starting first, then datanode1, datanode2, …, then the secondarynamenode; then the jobtracker, then tasktracker1, tasktracker2, ….
After hadoop starts successfully, a dfs folder appears in the tmp folder on the Master, and both dfs and mapred folders appear in the tmp folder on each Slave.
(3) Verify hadoop
(1) Method one: the "jps" command
On the Master, list the Java processes with jps, a small tool shipped with the JDK.
On Slave2, check the processes with jps as well.
If "DataNode" and "TaskTracker" are not running on a Slave machine, check the logs first: a "namespaceID" mismatch is solved per FAQ 5.2, and a "No route to host" error per FAQ 5.3.
(2) Method two: "hadoop dfsadmin -report"
This command reports the state of the Hadoop cluster.
4.4 Viewing the Cluster in a Browser
(1) Visit "http://192.168.1.141:50030" (JobTracker)
(2) Visit "http://192.168.1.141:50070" (NameNode)
5. FAQ
5.1 About "Warning: $HADOOP_HOME is deprecated."
After installing hadoop, typing a hadoop command prints this warning:
    Warning: $HADOOP_HOME is deprecated.
Inspecting the hadoop-1.1.2/bin/hadoop script and "hadoop-config.sh" shows that the scripts test whether the HADOOP_HOME environment variable is set; setting it is in fact not necessary at all.
Solution one: edit "/etc/profile", remove the HADOOP_HOME variable, and run a hadoop fs command again; the warning is gone.
Solution two: edit "/etc/profile" and add one environment variable, after which the warning disappears:
    export HADOOP_HOME_WARN_SUPPRESS=1
5.2 Fixing "no datanode to stop"
Stopping Hadoop prints:
    no datanode to stop
Cause: every "namenode -format" creates a new namespaceID, but tmp/dfs/data still holds the ID from the previous format. The format clears the namenode's data but not the datanodes', so startup fails. There are two solutions.
First solution:
1) Delete "/usr/hadoop/tmp"
rm -rf /usr/hadoop/tmp
2) Recreate the "/usr/hadoop/tmp" folder
mkdir /usr/hadoop/tmp
3) Delete the files under "/tmp" whose names start with "hadoop"
rm -rf /tmp/hadoop*
4) Re-format hadoop
hadoop namenode -format
5) Start hadoop
start-all.sh
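The five steps above can be sketched as one script. This is a hedged sketch, not the tutorial's own tooling: DATA_DIR stands in for the real /usr/hadoop/tmp, the /tmp glob is narrowed to a demo pattern, and the destructive hadoop calls are guarded, since this wipes all HDFS data and should only run on a disposable cluster.

```shell
#!/bin/sh
# Consolidated reset sketch for FAQ 5.2 (destroys all HDFS data).
DATA_DIR=${DATA_DIR:-$(mktemp -d)/hadoop-tmp}   # stand-in for /usr/hadoop/tmp

rm -rf "$DATA_DIR"            # 1) delete the old working directory
mkdir -p "$DATA_DIR"          # 2) recreate it empty
rm -rf /tmp/hadoop-demo-*     # 3) clear leftovers under /tmp (demo glob)

# 4) and 5) only make sense on a machine with hadoop installed.
if command -v hadoop >/dev/null 2>&1; then
    hadoop namenode -format   # 4) re-initialize HDFS
    start-all.sh              # 5) bring the cluster back up
fi

[ -d "$DATA_DIR" ] && echo "clean"   # clean
```

After this the cluster behaves like a brand-new install, which is exactly why the second solution below is preferred on clusters holding real data.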
The drawback of the first solution is that all the important data previously on the cluster is gone. If the Hadoop cluster has already been running for a while, the second solution is recommended.
Second solution:
1) Change each Slave's namespaceID to match the Master's,
   or
2) change the Master's namespaceID to match the Slaves'.
The "namespaceID" is in the file "/usr/hadoop/tmp/dfs/name/current/VERSION"; the leading part of that path may vary with your setup, but the trailing part generally does not.
For example, inspect the "VERSION" file on the "Master".
The second solution is recommended: it is quick and convenient, and it also avoids accidental deletion.
5.3 A datanode on a Slave server starts and then shuts down
The logs show the following error:
    ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to ... failed on local exception: java.net.NoRouteToHostException: No route to host
Solution: turn off the firewall.
5.4 Uploading files from the local filesystem to HDFS
The following errors appear:
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink
INFO hdfs.DFSClient: Abandoning block blk_-1300529705803292651_37023
WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
Solution:
1) Turn off the firewall.
2) Disable selinux:
    edit "/etc/selinux/config" and set "SELINUX=disabled".
5.5 Errors caused by safe mode
The following error appears:
org.apache.hadoop.dfs.SafeModeException: Cannot delete ..., Name node is in safe mode
When the distributed filesystem starts, it begins in safe mode. While the filesystem is in safe mode, its contents can be neither modified nor deleted, until safe mode ends. Safe mode exists mainly so the system can check the validity of the data blocks on each DataNode at startup and, according to policy, copy or delete blocks as necessary. Safe mode can also be entered by command at runtime. In practice, modifying or deleting files right after startup triggers the safe-mode error; simply waiting a moment is usually enough.
Solution: leave safe mode
hadoop dfsadmin -safemode leave
5.6 Fixing "Exceeded MAX_FAILED_UNIQUE_FETCHES"
The error:
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out
The program needs to open many files for analysis. The system default limit is 1024 (visible with ulimit -a), which is enough for normal use but too few for the program.
Solution: modify two files.
1) "/etc/security/limits.conf"
    vi /etc/security/limits.conf
Add:
    soft nofile 102400
    hard nofile 409600
2) "/etc/pam.d/login"
    vim /etc/pam.d/login
Add:
    session required /lib/security/pam_limits.so
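Whether the new limits are actually in effect can be checked from a shell; limits.conf is applied at login via pam_limits, so an old session still shows the old values. A small sketch:

```shell
#!/bin/sh
# Show the current per-process open-file limits. The soft limit is what
# running processes actually get; the hard limit is the ceiling a
# non-root user may raise the soft limit to.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"

# If soft still reads 1024 after editing limits.conf, log out and back in:
# the file is only applied at login through pam_limits.
```

Checking this after re-login confirms the limits.conf edit before re-running the failing job.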
A correction to the first answer:
This error is caused by the number of failed fetches of completed map output exceeding the limit (default 5) during the shuffle phase of reduce preprocessing. Many things can trigger it, such as abnormal network connections, connection timeouts, poor bandwidth, and blocked ports. With a healthy intra-cluster network this error normally does not occur.
5.7 Fixing "Too many fetch-failures"
This problem is mainly caused by incomplete connectivity between nodes.
Solution:
1) Check "/etc/hosts"
The local IP must map to the server's name,
and the file must contain the IP + name of every server.
2) Check ".ssh/authorized_keys"
It must contain the public keys of all servers, including the machine itself.
5.8 Processing is extremely slow
Maps complete, but reduce stalls, with "reduce=0%" appearing repeatedly.
Solution:
Apply the fixes from 5.7, then set "export HADOOP_HEAPSIZE=4000" in "conf/hadoop-env.sh".
5.9 Fixing hadoop OutOfMemoryError
This exception clearly means the JVM has too little memory.
Solution: increase the JVM heap size on all the datanodes, e.g.:
    java -Xms1024m -Xmx4096m
As a rule of thumb the JVM's maximum heap should be about half of total memory; we have 8 GB of RAM, hence 4096m, though this may still not be the optimal value.

OP | Posted 2022-11-14 13:52:09
<?xml version="1.0"?>. J: R: O. p7 T# f) u
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
; K7 q; t4 I0 `! \1 M9 k5 y<!--) g4 N" e$ v+ n9 D2 C4 |
   Licensed to the Apache Software Foundation (ASF) under one or more' u9 e  E6 t0 @! ?7 I
   contributor license agreements.  See the NOTICE file distributed with
% X+ p4 S) e/ j0 S   this work for additional information regarding copyright ownership.; L7 a) X. [& M5 q5 i6 e% I7 }
   The ASF licenses this file to You under the Apache License, Version 2.0
7 L2 r6 d1 \$ B9 R- Y9 V   (the "License"); you may not use this file except in compliance with
4 S; I) ~" {: F) C0 k) e- k2 O   the License.  You may obtain a copy of the License at
$ i% s; P! _8 K0 r' g       http://www.apache.org/licenses/LICENSE-2.03 |$ x2 z. |% z/ r
   Unless required by applicable law or agreed to in writing, software
% a+ k1 j* r- n' F  u5 A% V   distributed under the License is distributed on an "AS IS" BASIS,8 d6 q0 c% M+ C* n+ ?* q
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.* k$ K3 p1 \; i9 a9 n& M: ~
   See the License for the specific language governing permissions and
. m5 V2 A% G, L# ^* W   limitations under the License.8 F+ n% [# x! G6 l( G) @2 T
-->! z( d3 Z, I- x  k" X- g& @
<!-- Do not modify this file directly.  Instead, copy entries that you -->& }4 x- N4 V6 C5 t' x, c+ d
<!-- wish to modify from this file into core-site.xml and change them -->
5 c  I! ^- b4 d. C9 ]' E) c# D<!-- there.  If core-site.xml does not already exist, create it.      -->3 X) a8 B3 e6 _3 o+ c
<configuration>5 d3 {1 t# `8 |' {/ s4 E
<!--- global properties -->
( u* X1 r. I& c- ?5 h/ u; ?<property>
3 z0 _; @0 k( i  <name>hadoop.common.configuration.version</name>1 d, p) c8 x3 d( [; O3 g' n
  <value>3.0.0</value>
& `0 V0 U9 c1 [2 \5 f, i8 |  <description>version of this configuration file</description>6 w! u7 ?1 S* j1 `# O
</property>
) I0 ~- [! Y9 M/ i2 ]* o2 x0 X<property>6 C5 u8 \: l, I2 ~4 G5 e
  <name>hadoop.tmp.dir</name>4 T! _7 _2 K3 o; g6 S. G# {% B
  <value>/tmp/hadoop-${user.name}</value># O5 `: I6 @2 P0 }( @( I8 T$ |7 X
  <description>A base for other temporary directories.</description>
, U" L. J; P; }1 c, }5 X</property>
5 W( g5 H* {; X& w; g( d7 q<property>5 b7 \) d) c; X5 P$ }0 `& R
  <name>hadoop.http.filter.initializers</name>4 q' Q3 x* Y; O/ ?8 |$ G6 t2 ]
  <value>org.apache.hadoop.http.lib.StaticUserWebFilter</value>4 D0 }& z; ]! Z" _1 M" E+ g
  <description>A comma separated list of class names. Each class in the list" X* m) v: j1 p2 `! d2 Y
  must extend org.apache.hadoop.http.FilterInitializer. The corresponding
6 G& V  U1 @4 D8 f% q* `  Filter will be initialized. Then, the Filter will be applied to all user' n+ g3 i+ l3 z- m
  facing jsp and servlet web pages.  The ordering of the list defines the
  r+ L; {7 U8 B. F  ordering of the filters.</description>
& h8 i8 l0 l8 l. N</property>
  b% b! X" Z5 V7 q; |<!--- security properties -->
: T- U# j2 d, S0 _6 _<property>1 m$ }& e$ @  ]7 ?) `
  <name>hadoop.security.authorization</name>
7 D1 @+ A& K# v$ y' a0 X  <value>false</value>& E6 N; k$ S7 A% `: ^
  <description>Is service-level authorization enabled?</description>
9 g; F) U: t* p9 _& }</property>
/ H8 }5 b$ f, u5 @. K$ A& g<property># x# V# I, w, I% F1 p8 }2 G
  <name>hadoop.security.instrumentation.requires.admin</name>
- L6 {; p  A( |2 s- W  <value>false</value>
+ j( I1 _2 I6 h! ^; v  <description>
) t: X/ ]! Z* {; a8 x5 z    Indicates if administrator ACLs are required to access, K% Y& i* B( ^: g3 t
    instrumentation servlets (JMX, METRICS, CONF, STACKS).
" P7 @! K# i, o  </description>  ]  _0 K. i# A; l' b7 D
</property>0 E. i+ D: }9 V. L
<property>
- o7 V( i% }) N) I  <name>hadoop.security.authentication</name>
9 f8 H0 m( z9 A) h- A7 P  <value>simple</value>
1 `8 j9 U8 c- W' }9 ?1 W! K  <description>Possible values are simple (no authentication), and kerberos
6 E, B2 e1 f. ^' L  </description>
2 ]0 b6 J9 G& B3 X3 M2 C</property>
3 d- K+ u) Z7 ]+ Y( @6 F5 i<property>- X& J$ h- w  A- M9 o' c
  <name>hadoop.security.group.mapping</name>% B6 G, |6 f6 K; z* T/ L6 i6 U$ a  Q4 b/ ^
  <value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value>: u- f& n" m/ B4 W& ~) Y
  <description>
6 Q* @; L- N" f' t+ R    Class for user to group mapping (get groups for a given user) for ACL.7 T4 i! ^8 U2 S
    The default implementation,
8 \: h% d/ M, E  s3 [    org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback,- p* ]+ ]! v. @& z+ K. D
    will determine if the Java Native Interface (JNI) is available. If JNI is1 L$ X1 D0 q  B6 `8 K+ ^, k
    available the implementation will use the API within hadoop to resolve a
  C  z+ z# `) `' U    list of groups for a user. If JNI is not available then the shell9 s( U1 Y% z( w0 Q5 E' J( c8 p
    implementation, ShellBasedUnixGroupsMapping, is used.  This implementation
5 N* m9 D, u6 z& j$ D+ z    shells out to the Linux/Unix environment with the$ Q, q0 B; r, t7 |/ v/ [
    <code>bash -c groups</code> command to resolve a list of groups for a user./ [3 Q- f& j: f8 g! Y
  </description>
; {$ C7 @5 w: D. [9 r8 d$ G6 f& U</property>: J9 c2 i6 E* f$ x, _5 E8 ?1 r1 _. Z
<property>8 o- Q3 g0 _- ^5 @. ]
  <name>hadoop.security.dns.interface</name>: l, M( O2 p0 o3 d3 v' T
  <description>9 z& K8 Z2 d% v% U+ z; @
    The name of the Network Interface from which the service should determine
- u0 n. Y& o- k7 w2 B' i0 ?. w9 B6 W6 c" X    its host name for Kerberos login. e.g. eth2. In a multi-homed environment,
: H- v- ~3 y$ b$ j5 g* M$ v5 ]    the setting can be used to affect the _HOST substitution in the service
1 N  T$ Q" e$ R, K    Kerberos principal. If this configuration value is not set, the service
, a8 K8 u: R0 a: u/ X  w. |    will use its default hostname as returned by
( ]6 ^. B7 h/ x( ?/ [; S    InetAddress.getLocalHost().getCanonicalHostName().
7 {5 \, g, V7 N* J( Y3 F    Most clusters will not require this setting.+ g; S& N! d, S* I, @0 w5 ]
  </description>
2 u" H7 G/ n6 p% m! ]</property># q# Y, c  ]/ N4 f) M& r6 d  D1 `
<property>) V) u2 M: A0 C2 c$ N8 n
  <name>hadoop.security.dns.nameserver</name>  g6 b1 x0 R9 a) F( x$ o
  <description>
& P! g! S$ p2 _5 V/ A    The host name or IP address of the name server (DNS) which a service Node  m+ @+ g1 f* c2 _  o% X
    should use to determine its own host name for Kerberos Login. Requires
& W5 n' Z4 ?9 \% w    hadoop.security.dns.interface.
2 n1 E( X) b. M: B3 M" e7 c    Most clusters will not require this setting.1 T: t3 I9 A" U$ J
  </description>
; }: b: C. ]8 j8 O9 g. Z7 S' L# a</property>+ d- {( K% L1 O' R6 ~4 A
<property>
, I. B+ B* q$ G# P: [% `  <name>hadoop.security.dns.log-slow-lookups.enabled</name>
6 U* R  o$ z# O. A  <value>false</value>
! X- w* X0 g- n7 e  <description>, D' ^0 A5 D, h- r2 R, }
    Time name lookups (via SecurityUtil) and log them if they exceed the
" D6 H* \) ~( W$ i( _4 h2 x9 K    configured threshold.1 O9 u; [9 \2 U  U& M$ d2 |
  </description>0 \# v5 s5 A; @) T6 _% b
</property>
; `- n- X: b+ \3 f6 H+ F<property>3 Z) ?6 q3 U1 C  a. v2 ~
  <name>hadoop.security.dns.log-slow-lookups.threshold.ms</name>
9 @6 J) Q$ T; u* j4 C2 k  <value>1000</value>* Q  Z6 r* r2 B' f. N
  <description>
( a" J+ s) ^8 I/ G0 [    If slow lookup logging is enabled, this threshold is used to decide if a. f- H, r. T( \, _6 C5 u
    lookup is considered slow enough to be logged.- \7 F: R- F% l
  </description>" Z# r' D8 j: b9 v! W1 C9 \
</property>% T( Z1 @: S4 w+ w9 w
<property>
* d9 M: w0 F8 Y% U8 ^  <name>hadoop.security.groups.cache.secs</name>6 l! c; V' d) R! Y) o
  <value>300</value>
: C2 B* N! d. h. f9 f" ?  <description>
% v9 @' f3 r4 o% ^4 Q+ o; C    This is the config controlling the validity of the entries in the cache
* f& v( _. H4 d, ^+ ?) W    containing the user->group mapping. When this duration has expired,
4 H  y4 |8 b( f8 Q- @! X    then the implementation of the group mapping provider is invoked to get
; ^2 F% N" ^+ m; `3 v- J; u    the groups of the user and then cached back.
% M" R7 p3 m! R! L% k( U  </description>$ k  o" t# l) t3 O: T
</property>* p1 |& P7 e  y5 N( G9 F
<property>  r7 Y  t: H; K5 |* S& f) C
  <name>hadoop.security.groups.negative-cache.secs</name>
6 T: B* ^7 R* O. H. g8 H  <value>30</value>. E) @. B4 \! e, l% p
  <description>* c* C9 p- F, ^, h1 G9 |
    Expiration time for entries in the the negative user-to-group mapping
/ a: C( V. A* ~5 o+ C1 _* d5 v9 b    caching, in seconds. This is useful when invalid users are retrying" H; I4 n0 E* j( h* Z+ _
    frequently. It is suggested to set a small value for this expiration, since
! m7 v! }0 U' G, p9 e7 G. I    a transient error in group lookup could temporarily lock out a legitimate
# {; R. h/ f3 f4 M, _    user.! N" T- S* ]* D1 b2 a6 e
    Set this to zero or negative value to disable negative user-to-group caching./ S6 i8 |' t+ `- y1 a- F& P
  </description>
5 w" H2 R& c0 D9 v. d/ k</property>
0 v5 G/ e9 Z9 x% j<property>
  K7 G; e' z2 f: d  <name>hadoop.security.groups.cache.warn.after.ms</name>: B" ?8 [) ~& B
  <value>5000</value>" }0 F  M" M) M) a4 O
  <description>
0 W) J) I) @7 s' W    If looking up a single user to group takes longer than this amount of" q) d; d, R: Q& K( {; J6 a
    milliseconds, we will log a warning message.$ w8 i# l* ~! x2 r; S8 q! U  m* e
  </description>8 h+ W" V) J5 U: U  Z  `- s) i
</property>
: K& q, u9 [; c* u  L<property>9 W' |) D. f- m: D' p$ I
  <name>hadoop.security.groups.cache.background.reload</name>
$ g3 e8 E- t  _9 m0 A' K  <value>false</value># Y3 _* I7 c  h  P- |$ ~* b! Z! I) Y
  <description>+ u4 |3 _" p- ]1 A
    Whether to reload expired user->group mappings using a background thread* O7 d2 D5 d% g( f! u1 \2 p' c6 o
    pool. If set to true, a pool of# Q$ r! ~: B; V0 T
    hadoop.security.groups.cache.background.reload.threads is created to
) i- B, K& S- j6 }    update the cache in the background.
: ~1 n6 C) c) }7 g9 D) j0 @7 a- b+ u/ Y7 G; I  </description>
% F3 h- e: ~1 e+ }</property>
& t' b6 H- U0 J* U8 h<property>
' T3 g( W0 k0 ^/ H2 D5 R  <name>hadoop.security.groups.cache.background.reload.threads</name>
  B  O$ R5 v% M" C  <value>3</value>, k+ }8 y& F! H: W  P
  <description>4 ?$ w- s/ [$ [9 e5 J
    Only relevant if hadoop.security.groups.cache.background.reload is true.
/ }) p( ~5 z4 G1 ?9 H/ ?$ L+ i0 i    Controls the number of concurrent background user->group cache entry  Q; N. [8 `* n5 `7 h
    refreshes. Pending refresh requests beyond this value are queued and& \9 Q+ T* `. u% L( |2 E  _
    processed when a thread is free.3 f2 F; p4 ]# h, }5 `  f2 f
  </description>
7 H% S4 V/ W/ g</property>7 n- T, L5 f" `% H
<property>1 M7 h. X3 }6 F, l' t. A
  <name>hadoop.security.groups.shell.command.timeout</name>
. G, R. O8 J8 t1 J% r; e) q  <value>0s</value>
# c! S& S# q4 t1 I5 w  <description>& _* N# d# J+ r( I$ w: [+ r
    Used by the ShellBasedUnixGroupsMapping class, this property controls how& X6 i! h* ?. l! h$ b
    long to wait for the underlying shell command that is run to fetch groups.
' m9 f( l* Q1 r" L( x+ X    Expressed in seconds (e.g. 10s, 1m, etc.), if the running command takes# u* L& l3 o2 r& [9 g  [4 P1 j
    longer than the value configured, the command is aborted and the groups
& y( [. T. e# Q" ^6 B% ?1 \    resolver would return a result of no groups found. A value of 0s (default)" a/ f6 T! r/ t1 A
    would mean an infinite wait (i.e. wait until the command exits on its own).
0 Q% c) H: x: ~$ t, v  </description>* U* ]; d6 v$ S' e' T3 \
</property>
" Y9 E- P( V4 a; A! t- y<property>
: j5 _5 `  {9 x  <name>hadoop.security.group.mapping.ldap.connection.timeout.ms</name># x, I& k6 w) u% i! U$ C, |) o/ x
  <value>60000</value># D+ {) _4 N- q) u& ?
  <description>; o( ^- C/ a/ m" j8 z2 Q
    This property is the connection timeout (in milliseconds) for LDAP0 R  l2 h& ^. @- Y$ T- G
    operations. If the LDAP provider doesn't establish a connection within the
! ^: f9 v" ]- N( X: d+ T9 i    specified period, it will abort the connect attempt. Non-positive value
: }  n7 b: _4 L' P$ e# N* k; A    means no LDAP connection timeout is specified in which case it waits for the
" t* a4 C/ N  b. s- d: i    connection to establish until the underlying network times out.
/ F3 M- p& Q$ o4 F0 I& ]7 A  </description>
* n1 i4 P6 w, p5 `* k. A8 P</property>
$ b% ], A* V3 g7 ^, s<property>3 e( K2 }. D2 H' L- x: H% B
  <name>hadoop.security.group.mapping.ldap.read.timeout.ms</name>! h9 X9 l' i2 g* c3 r3 B
  <value>60000</value>, i$ y6 ?1 o6 Q: @
  <description>
2 j/ @8 M) M% e% g- N( |" N    This property is the read timeout (in milliseconds) for LDAP
, x! c' a* ~7 Q7 Y8 O& ]6 J% \8 T+ d    operations. If the LDAP provider doesn't get a LDAP response within the
1 K4 f$ w: x0 Q; e6 r# ~    specified period, it will abort the read attempt. Non-positive value% O: G& E/ M0 }( @( |0 g7 e
    means no read timeout is specified in which case it waits for the response1 |& \3 q5 s. ?( z. u9 i
    infinitely.* E( z3 x' c+ H5 l3 E: Q9 _3 B
  </description>
. i) s& d6 Z3 _0 o</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value></value>
  <description>
    The URL of the LDAP server to use for resolving user groups when using
    the LdapGroupsMapping user to group mapping.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl</name>
  <value>false</value>
  <description>
    Whether or not to use SSL when connecting to the LDAP server.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.keystore</name>
  <value></value>
  <description>
    File path to the SSL keystore that contains the SSL certificate required
    by the LDAP server.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.keystore.password.file</name>
  <value></value>
  <description>
    The path to a file containing the password of the LDAP SSL keystore. If
    the password is not configured in credential providers and the property
    hadoop.security.group.mapping.ldap.ssl.keystore.password is not set,
    LDAPGroupsMapping reads the password from the file.
    IMPORTANT: This file should be readable only by the Unix user running
    the daemons and should be a local file.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.keystore.password</name>
  <value></value>
  <description>
    The password of the LDAP SSL keystore. This property name is used as an
    alias to get the password from credential providers. If the password
    cannot be found and hadoop.security.credential.clear-text-fallback is
    true, LDAPGroupsMapping uses the value of this property as the password.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.conversion.rule</name>
  <value>none</value>
  <description>
    The rule is applied on the group names received from LDAP when
    RuleBasedLdapGroupsMapping is configured.
    Supported rules are "to_upper", "to_lower" and "none".
    to_upper: This will convert all the group names to uppercase.
    to_lower: This will convert all the group names to lowercase.
    none: This will retain the source formatting; this is the default value.
  </description>
</property>
<property>
  <name>hadoop.security.credential.clear-text-fallback</name>
  <value>true</value>
  <description>
    true or false to indicate whether or not to fall back to storing the
    credential password as clear text. The default value is true. This
    property only applies when the password cannot be found in the
    credential providers.
  </description>
</property>
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value></value>
  <description>
    A comma-separated list of URLs that indicates the type and
    location of a list of providers that should be consulted.
  </description>
</property>
<property>
  <name>hadoop.security.credstore.java-keystore-provider.password-file</name>
  <value></value>
  <description>
    The path to a file containing the custom password for all keystores
    that may be configured in the provider path.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.truststore</name>
  <value></value>
  <description>
    File path to the SSL truststore that contains the root certificate used to
    sign the LDAP server's certificate. Specify this if the LDAP server's
    certificate is not signed by a well known certificate authority.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.truststore.password.file</name>
  <value></value>
  <description>
    The path to a file containing the password of the LDAP SSL truststore.
    IMPORTANT: This file should be readable only by the Unix user running
    the daemons.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value></value>
  <description>
    The distinguished name of the user to bind as when connecting to the LDAP
    server. This may be left blank if the LDAP server supports anonymous binds.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.password.file</name>
  <value></value>
  <description>
    The path to a file containing the password of the bind user. If
    the password is not configured in credential providers and the property
    hadoop.security.group.mapping.ldap.bind.password is not set,
    LDAPGroupsMapping reads the password from the file.
    IMPORTANT: This file should be readable only by the Unix user running
    the daemons and should be a local file.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.password</name>
  <value></value>
  <description>
    The password of the bind user. This property name is used as an
    alias to get the password from credential providers. If the password
    cannot be found and hadoop.security.credential.clear-text-fallback is
    true, LDAPGroupsMapping uses the value of this property as the password.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.base</name>
  <value></value>
  <description>
    The search base for the LDAP connection. This is a distinguished name,
    and will typically be the root of the LDAP directory.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.userbase</name>
  <value></value>
  <description>
    The search base for the LDAP connection for the user search query. This is
    a distinguished name, and it's the root of the LDAP directory for users.
    If not set, hadoop.security.group.mapping.ldap.base is used.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.groupbase</name>
  <value></value>
  <description>
    The search base for the LDAP connection for the group search. This is a
    distinguished name, and it's the root of the LDAP directory for groups.
    If not set, hadoop.security.group.mapping.ldap.base is used.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.user</name>
  <value>(&amp;(objectClass=user)(sAMAccountName={0}))</value>
  <description>
    An additional filter to use when searching for LDAP users. The default will
    usually be appropriate for Active Directory installations. If connecting to
    an LDAP server with a non-AD schema, this should be replaced with
    (&amp;(objectClass=inetOrgPerson)(uid={0})). {0} is a special string used
    to denote where the username fits into the filter.
    If the LDAP server supports posixGroups, Hadoop can enable the feature by
    setting the value of this property to "posixAccount" and the value of
    the hadoop.security.group.mapping.ldap.search.filter.group property to
    "posixGroup".
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.group</name>
  <value>(objectClass=group)</value>
  <description>
    An additional filter to use when searching for LDAP groups. This should be
    changed when resolving groups against a non-Active Directory installation.
    See the description of hadoop.security.group.mapping.ldap.search.filter.user
    to enable posixGroups support.
  </description>
</property>
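As described above, enabling posixGroups support means overriding both filter properties together. A hypothetical core-site.xml override might look like the fragment below (the values follow the description above; whether your LDAP server actually uses a posix schema is an assumption you must verify):

```xml
<!-- Hypothetical override for an LDAP server with a posix schema. -->
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.user</name>
  <value>posixAccount</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.group</name>
  <value>posixGroup</value>
</property>
```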
<property>
    <name>hadoop.security.group.mapping.ldap.search.attr.memberof</name>
    <value></value>
    <description>
      The attribute of the user object that identifies its group objects. By
      default, Hadoop makes two LDAP queries per user if this value is empty. If
      set, Hadoop will attempt to resolve group names from this attribute,
      instead of making the second LDAP query to get group objects. The value
      should be 'memberOf' for an MS AD installation.
    </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.attr.member</name>
  <value>member</value>
  <description>
    The attribute of the group object that identifies the users that are
    members of the group. The default will usually be appropriate for
    any LDAP installation.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.attr.group.name</name>
  <value>cn</value>
  <description>
    The attribute of the group object that identifies the group name. The
    default will usually be appropriate for all LDAP systems.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</name>
  <value>0</value>
  <description>
    The number of levels to go up the group hierarchy when determining
    which groups a user is part of. A value of 0 means checking just the
    group that the user belongs to. Each additional level will raise the
    time it takes to execute a query by at most
    hadoop.security.group.mapping.ldap.directory.search.timeout.
    The default will usually be appropriate for all LDAP systems.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.posix.attr.uid.name</name>
  <value>uidNumber</value>
  <description>
    The attribute of posixAccount to use when resolving groups for membership.
    Mostly useful for schemas wherein groups have memberUids that use an
    attribute other than uidNumber.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.posix.attr.gid.name</name>
  <value>gidNumber</value>
  <description>
    The attribute of posixAccount indicating the group id.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.directory.search.timeout</name>
  <value>10000</value>
  <description>
    The attribute applied to the LDAP SearchControl properties to set a
    maximum time limit when searching and awaiting a result.
    Set to 0 if an infinite wait period is desired.
    Default is 10 seconds. Units in milliseconds.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.providers</name>
  <value></value>
  <description>
    Comma-separated list of the names of other providers to provide user to
    group mapping. Used by CompositeGroupsMapping.
  </description>
</property>
<property>
  <name>hadoop.security.group.mapping.providers.combined</name>
  <value>true</value>
  <description>
    true or false to indicate whether groups from the providers are combined or
    not. The default value is true. If true, then all the providers will be
    tried to get groups and all the groups are combined to return as the final
    results. Otherwise, providers are tried one by one in the configured list
    order, and if any groups are retrieved from any provider, those groups
    will be returned without trying the remaining ones.
  </description>
</property>
<property>
  <name>hadoop.security.service.user.name.key</name>
  <value></value>
  <description>
    For those cases where the same RPC protocol is implemented by multiple
    servers, this configuration is required for specifying the principal
    name to use for the service when the client wishes to make an RPC call.
  </description>
</property>
  <property>
    <name>fs.azure.user.agent.prefix</name>
    <value>unknown</value>
    <description>
      WASB passes a User-Agent header to the Azure back-end. The default value
      contains the WASB version, Java Runtime version, Azure Client library
      version, and the value of the configuration option
      fs.azure.user.agent.prefix.
    </description>
  </property>
<property>
    <name>hadoop.security.uid.cache.secs</name>
    <value>14400</value>
    <description>
        This is the config controlling the validity of the entries in the cache
        containing the userId to userName and groupId to groupName mapping used
        by NativeIO getFstat().
    </description>
</property>
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
  <description>A comma-separated list of protection values for secured sasl
      connections. Possible values are authentication, integrity and privacy.
      authentication means authentication only and no integrity or privacy;
      integrity implies authentication and integrity are enabled; and privacy
      implies all of authentication, integrity and privacy are enabled.
      hadoop.security.saslproperties.resolver.class can be used to override
      the hadoop.rpc.protection for a connection at the server side.
  </description>
</property>
<property>
  <name>hadoop.security.saslproperties.resolver.class</name>
  <value></value>
  <description>SaslPropertiesResolver used to resolve the QOP used for a
      connection. If not specified, the full set of values specified in
      hadoop.rpc.protection is used while determining the QOP used for the
      connection. If a class is specified, then the QOP values returned by
      the class will be used while determining the QOP used for the connection.
  </description>
</property>
<property>
  <name>hadoop.security.sensitive-config-keys</name>
  <value>
      secret$
      password$
      ssl.keystore.pass$
      fs.s3.*[Ss]ecret.?[Kk]ey
      fs.s3a.*.server-side-encryption.key
      fs.azure.account.key.*
      credential$
      oauth.*token$
      hadoop.security.sensitive-config-keys
  </value>
  <description>A comma-separated or multi-line list of regular expressions to
      match configuration keys that should be redacted where appropriate, for
      example, when logging modified properties during a reconfiguration,
      private credentials should not be logged.
  </description>
</property>
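A minimal sketch of the redaction behaviour the property above describes: each regex is matched against a configuration key, and matching keys have their values hidden. The patterns are copied from the default value above; the `redact` function itself is an illustrative stand-in, not Hadoop's internal redactor:

```python
import re

# Patterns taken from the default value of hadoop.security.sensitive-config-keys.
SENSITIVE_PATTERNS = [
    r"secret$", r"password$", r"ssl.keystore.pass$",
    r"fs.s3.*[Ss]ecret.?[Kk]ey", r"fs.s3a.*.server-side-encryption.key",
    r"fs.azure.account.key.*", r"credential$", r"oauth.*token$",
    r"hadoop.security.sensitive-config-keys",
]

def redact(key, value):
    """Return a placeholder instead of the real value for sensitive keys."""
    for pattern in SENSITIVE_PATTERNS:
        if re.search(pattern, key):
            return "<redacted>"
    return value

print(redact("fs.s3a.secret.key", "AKIA..."))    # <redacted>
print(redact("fs.defaultFS", "hdfs://nn:8020"))  # hdfs://nn:8020
```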
<property>
  <name>hadoop.workaround.non.threadsafe.getpwuid</name>
  <value>true</value>
  <description>Some operating systems or authentication modules are known to
  have broken implementations of getpwuid_r and getpwgid_r, such that these
  calls are not thread-safe. Symptoms of this problem include JVM crashes
  with a stack trace inside these functions. If your system exhibits this
  issue, enable this configuration parameter to include a lock around the
  calls as a workaround.
  An incomplete list of some systems known to have this issue is available
  at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations
  </description>
</property>
<property>
  <name>hadoop.kerberos.kinit.command</name>
  <value>kinit</value>
  <description>Used to periodically renew Kerberos credentials when provided
  to Hadoop. The default setting assumes that kinit is in the PATH of users
  running the Hadoop client. Change this to the absolute path to kinit if this
  is not the case.
  </description>
</property>
<property>
    <name>hadoop.kerberos.min.seconds.before.relogin</name>
    <value>60</value>
    <description>The minimum time between relogin attempts for Kerberos, in
    seconds.
    </description>
</property>
<property>
  <name>hadoop.security.auth_to_local</name>
  <value></value>
  <description>Maps kerberos principals to local user names</description>
</property>
<property>
  <name>hadoop.token.files</name>
  <value></value>
  <description>List of token cache files that have delegation tokens for hadoop service</description>
</property>
<!-- i/o properties -->
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
  <description>The size of buffer for use in sequence files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  io.file.buffer.size.</description>
</property>
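The constraint just stated (io.bytes.per.checksum must not be larger than io.file.buffer.size) is easy to check before deploying a config. This standalone validator is a sketch over a plain dict of overrides, not part of Hadoop:

```python
# Sketch: validate the io.bytes.per.checksum <= io.file.buffer.size rule.
# The conf dict stands in for core-site.xml overrides (an assumption).

DEFAULTS = {"io.file.buffer.size": 4096, "io.bytes.per.checksum": 512}

def check_checksum_size(conf):
    buffer_size = int(conf.get("io.file.buffer.size",
                               DEFAULTS["io.file.buffer.size"]))
    bytes_per_checksum = int(conf.get("io.bytes.per.checksum",
                                      DEFAULTS["io.bytes.per.checksum"]))
    if bytes_per_checksum > buffer_size:
        raise ValueError(
            f"io.bytes.per.checksum ({bytes_per_checksum}) must not be "
            f"larger than io.file.buffer.size ({buffer_size})")
    return bytes_per_checksum

print(check_checksum_size({}))                                 # 512
print(check_checksum_size({"io.bytes.per.checksum": "1024"}))  # 1024
```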
<property>
  <name>io.skip.checksum.errors</name>
  <value>false</value>
  <description>If true, when a checksum error is encountered while
  reading a sequence file, entries are skipped, instead of throwing an
  exception.</description>
</property>
<property>
  <name>io.compression.codecs</name>
  <value></value>
  <description>A comma-separated list of the compression codec classes that can
  be used for compression/decompression. In addition to any classes specified
  with this property (which take precedence), codec classes on the classpath
  are discovered using a Java ServiceLoader.</description>
</property>
<property>
  <name>io.compression.codec.bzip2.library</name>
  <value>system-native</value>
  <description>The native-code library to be used for compression and
  decompression by the bzip2 codec.  This library could be specified
  either by name or the full pathname.  In the former case, the
  library is located by the dynamic linker, usually searching the
  directories specified in the environment variable LD_LIBRARY_PATH.
  The value of "system-native" indicates that the default system
  library should be used.  To indicate that the algorithm should
  operate entirely in Java, specify "java-builtin".</description>
</property>
<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
  <description>A list of serialization classes that can be used for
  obtaining serializers and deserializers.</description>
</property>
<property>
  <name>io.seqfile.local.dir</name>
  <value>${hadoop.tmp.dir}/io/local</value>
  <description>The local directory where sequence file stores intermediate
  data files during merge.  May be a comma-separated list of
  directories on different devices in order to spread disk i/o.
  Directories that do not exist are ignored.
  </description>
</property>
<property>
  <name>io.map.index.skip</name>
  <value>0</value>
  <description>Number of index entries to skip between each entry.
  Zero by default. Setting this to values larger than zero can
  facilitate opening large MapFiles using less memory.</description>
</property>
<property>
  <name>io.map.index.interval</name>
  <value>128</value>
  <description>
    A MapFile consists of two files - a data file (tuples) and an index file
    (keys). For every io.map.index.interval records written in the
    data file, an entry (record-key, data-file-position) is written
    in the index file. This is to allow for doing binary search later
    within the index file to look up records by their keys and get their
    closest positions in the data file.
  </description>
</property>
<property>
  <name>io.erasurecode.codec.rs.rawcoders</name>
  <value>rs_native,rs_java</value>
  <description>
    Comma separated raw coder implementations for the rs codec. The earlier
    factory is preferred; later ones are tried only if creating a raw coder
    with an earlier factory fails.
  </description>
</property>
<property>
  <name>io.erasurecode.codec.rs-legacy.rawcoders</name>
  <value>rs-legacy_java</value>
  <description>
    Comma separated raw coder implementations for the rs-legacy codec. The
    earlier factory is preferred; later ones are tried only if creating a raw
    coder with an earlier factory fails.
  </description>
</property>
<property>
  <name>io.erasurecode.codec.xor.rawcoders</name>
  <value>xor_native,xor_java</value>
  <description>
    Comma separated raw coder implementations for the xor codec. The earlier
    factory is preferred; later ones are tried only if creating a raw coder
    with an earlier factory fails.
  </description>
</property>
  <!-- file system properties -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>file:///</value>
  <description>Deprecated. Use (fs.defaultFS) property
  instead</description>
</property>
<property>
  <name>fs.trash.interval</name>
  <value>0</value>
  <description>Number of minutes after which the checkpoint
  gets deleted.  If zero, the trash feature is disabled.
  This option may be configured both on the server and the
  client. If trash is disabled server side then the client
  side configuration is checked. If trash is enabled on the
  server side then the value configured on the server is
  used and the client configuration value is ignored.
  </description>
</property>
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>0</value>
  <description>Number of minutes between trash checkpoints.
  Should be smaller than or equal to fs.trash.interval. If zero,
  the value is set to the value of fs.trash.interval.
  Every time the checkpointer runs it creates a new checkpoint
  out of the current one and removes checkpoints created more than
  fs.trash.interval minutes ago.
  </description>
</property>
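<!--
  Illustrative example (not part of core-default.xml): enabling trash so deleted
  files are retained for 24 hours (1440 minutes), with a checkpoint rolled every
  60 minutes. The values are placeholders; tune them per cluster.

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>60</value>
  </property>
-->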
<property>
  <name>fs.protected.directories</name>
  <value></value>
  <description>A comma-separated list of directories which cannot
    be deleted even by the superuser unless they are empty. This
    setting can be used to guard important system directories
    against accidental deletion due to administrator error.
  </description>
</property>
<property>
  <name>fs.AbstractFileSystem.file.impl</name>
  <value>org.apache.hadoop.fs.local.LocalFs</value>
  <description>The AbstractFileSystem for file: uris.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.har.impl</name>
  <value>org.apache.hadoop.fs.HarFs</value>
  <description>The AbstractFileSystem for har: uris.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.hdfs.impl</name>
  <value>org.apache.hadoop.fs.Hdfs</value>
  <description>The AbstractFileSystem for hdfs: uris.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.viewfs.impl</name>
  <value>org.apache.hadoop.fs.viewfs.ViewFs</value>
  <description>The AbstractFileSystem for view file system for viewfs: uris
  (i.e. the client-side mount table).</description>
</property>
<property>
  <name>fs.viewfs.rename.strategy</name>
  <value>SAME_MOUNTPOINT</value>
  <description>Allowed rename strategy to rename between multiple mountpoints.
    Allowed values are SAME_MOUNTPOINT, SAME_TARGET_URI_ACROSS_MOUNTPOINT and
    SAME_FILESYSTEM_ACROSS_MOUNTPOINT.
  </description>
</property>
<property>
  <name>fs.AbstractFileSystem.ftp.impl</name>
  <value>org.apache.hadoop.fs.ftp.FtpFs</value>
  <description>The AbstractFileSystem for ftp: uris.</description>
</property>
<property>
  <name>fs.ftp.impl</name>
  <value>org.apache.hadoop.fs.ftp.FTPFileSystem</value>
  <description>The implementation class of the FTP FileSystem</description>
</property>
<property>
  <name>fs.AbstractFileSystem.webhdfs.impl</name>
  <value>org.apache.hadoop.fs.WebHdfs</value>
  <description>The AbstractFileSystem for webhdfs: uris.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.swebhdfs.impl</name>
  <value>org.apache.hadoop.fs.SWebHdfs</value>
  <description>The AbstractFileSystem for swebhdfs: uris.</description>
</property>
<property>
  <name>fs.ftp.host</name>
  <value>0.0.0.0</value>
  <description>FTP filesystem connects to this server</description>
</property>
<property>
  <name>fs.ftp.host.port</name>
  <value>21</value>
  <description>
    FTP filesystem connects to fs.ftp.host on this port
  </description>
</property>
<property>
  <name>fs.ftp.data.connection.mode</name>
  <value>ACTIVE_LOCAL_DATA_CONNECTION_MODE</value>
  <description>Set the FTPClient's data connection mode based on configuration.
    Valid values are ACTIVE_LOCAL_DATA_CONNECTION_MODE,
    PASSIVE_LOCAL_DATA_CONNECTION_MODE and PASSIVE_REMOTE_DATA_CONNECTION_MODE.
  </description>
</property>
<property>
  <name>fs.ftp.transfer.mode</name>
  <value>BLOCK_TRANSFER_MODE</value>
  <description>
    Set FTP's transfer mode based on configuration. Valid values are
    STREAM_TRANSFER_MODE, BLOCK_TRANSFER_MODE and COMPRESSED_TRANSFER_MODE.
  </description>
</property>
<property>
  <name>fs.df.interval</name>
  <value>60000</value>
  <description>Disk usage statistics refresh interval in msec.</description>
</property>
<property>
  <name>fs.du.interval</name>
  <value>600000</value>
  <description>File space usage statistics refresh interval in msec.</description>
</property>
<property>
  <name>fs.swift.impl</name>
  <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  <description>The implementation class of the OpenStack Swift Filesystem</description>
</property>
<property>
  <name>fs.automatic.close</name>
  <value>true</value>
  <description>By default, FileSystem instances are automatically closed at program
  exit using a JVM shutdown hook. Setting this property to false disables this
  behavior. This is an advanced option that should only be used by server applications
  requiring a more carefully orchestrated shutdown sequence.
  </description>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <description>AWS access key ID used by S3A file system. Omit for IAM role-based or provider-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <description>AWS secret key used by S3A file system. Omit for IAM role-based or provider-based authentication.</description>
</property>
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <description>
    Comma-separated class names of credential provider classes which implement
    com.amazonaws.auth.AWSCredentialsProvider.

    These are loaded and queried in sequence for a valid set of credentials.
    Each listed class must implement one of the following means of
    construction, which are attempted in order:
    1. a public constructor accepting java.net.URI and
       org.apache.hadoop.conf.Configuration,
    2. a public static method named getInstance that accepts no
       arguments and returns an instance of
       com.amazonaws.auth.AWSCredentialsProvider, or
    3. a public default constructor.

    Specifying org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider allows
    anonymous access to a publicly accessible S3 bucket without any credentials.
    Please note that allowing anonymous access to an S3 bucket compromises
    security and therefore is unsuitable for most use cases. It can be useful
    for accessing public data sets without requiring AWS credentials.

    If unspecified, then the default list of credential provider classes,
    queried in sequence, is:
    1. org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider: supports static
       configuration of AWS access key ID and secret access key.  See also
       fs.s3a.access.key and fs.s3a.secret.key.
    2. com.amazonaws.auth.EnvironmentVariableCredentialsProvider: supports
       configuration of AWS access key ID and secret access key in
       environment variables named AWS_ACCESS_KEY_ID and
       AWS_SECRET_ACCESS_KEY, as documented in the AWS SDK.
    3. com.amazonaws.auth.InstanceProfileCredentialsProvider: supports use
       of instance profile credentials if running in an EC2 VM.
  </description>
</property>
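<!--
  Illustrative example (not part of core-default.xml): pinning the S3A
  credential chain to environment-variable credentials only, so no access or
  secret keys are stored in the configuration. The class name comes from the
  default provider list documented above.

  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>com.amazonaws.auth.EnvironmentVariableCredentialsProvider</value>
  </property>
-->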
<property>
  <name>fs.s3a.session.token</name>
  <description>Session token, when using org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
    as one of the providers.
  </description>
</property>
<property>
  <name>fs.s3a.security.credential.provider.path</name>
  <value />
  <description>
    Optional comma separated list of credential providers, a list
    which is prepended to that set in hadoop.security.credential.provider.path
  </description>
</property>
<property>
  <name>fs.s3a.assumed.role.arn</name>
  <value />
  <description>
    AWS ARN for the role to be assumed.
    Required if the fs.s3a.aws.credentials.provider contains
    org.apache.hadoop.fs.s3a.AssumedRoleCredentialProvider
  </description>
</property>
<property>
  <name>fs.s3a.assumed.role.session.name</name>
  <value />
  <description>
    Session name for the assumed role, must be valid characters according to
    the AWS APIs.
    Only used if AssumedRoleCredentialProvider is the AWS credential provider.
    If not set, one is generated from the current Hadoop/Kerberos username.
  </description>
</property>
<property>
  <name>fs.s3a.assumed.role.policy</name>
  <value/>
  <description>
    JSON policy to apply to the role.
    Only used if AssumedRoleCredentialProvider is the AWS credential provider.
  </description>
</property>
<property>
  <name>fs.s3a.assumed.role.session.duration</name>
  <value>30m</value>
  <description>
    Duration of assumed roles before a refresh is attempted.
    Only used if AssumedRoleCredentialProvider is the AWS credential provider.
    Range: 15m to 1h
  </description>
</property>
<property>
  <name>fs.s3a.assumed.role.sts.endpoint</name>
  <value/>
  <description>
    AWS Simple Token Service Endpoint. If unset, uses the default endpoint.
    Only used if AssumedRoleCredentialProvider is the AWS credential provider.
  </description>
</property>
<property>
  <name>fs.s3a.assumed.role.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
  <description>
    List of credential providers to authenticate with the STS endpoint and
    retrieve short-lived role credentials.
    Only used if AssumedRoleCredentialProvider is the AWS credential provider.
    If unset, uses "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider".
  </description>
</property>
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>15</value>
  <description>Controls the maximum number of simultaneous connections to S3.</description>
</property>
<property>
  <name>fs.s3a.connection.ssl.enabled</name>
  <value>true</value>
  <description>Enables or disables SSL connections to S3.</description>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <description>AWS S3 endpoint to connect to. An up-to-date list is
    provided in the AWS Documentation: regions and endpoints. Without this
    property, the standard region (s3.amazonaws.com) is assumed.
  </description>
</property>
<property>
  <name>fs.s3a.path.style.access</name>
  <value>false</value>
  <description>Enable S3 path-style access, i.e. disable the default virtual hosting behaviour.
    Useful for S3A-compliant storage providers as it removes the need to set up DNS for virtual hosting.
  </description>
</property>
<property>
  <name>fs.s3a.proxy.host</name>
  <description>Hostname of the (optional) proxy server for S3 connections.</description>
</property>
<property>
  <name>fs.s3a.proxy.port</name>
  <description>Proxy server port. If this property is not set
    but fs.s3a.proxy.host is, port 80 or 443 is assumed (consistent with
    the value of fs.s3a.connection.ssl.enabled).</description>
</property>
<property>
  <name>fs.s3a.proxy.username</name>
  <description>Username for authenticating with proxy server.</description>
</property>
<property>
  <name>fs.s3a.proxy.password</name>
  <description>Password for authenticating with proxy server.</description>
</property>
<property>
  <name>fs.s3a.proxy.domain</name>
  <description>Domain for authenticating with proxy server.</description>
</property>
<property>
  <name>fs.s3a.proxy.workstation</name>
  <description>Workstation for authenticating with proxy server.</description>
</property>
<property>
  <name>fs.s3a.attempts.maximum</name>
  <value>20</value>
  <description>How many times we should retry commands on transient errors.</description>
</property>
<property>
  <name>fs.s3a.connection.establish.timeout</name>
  <value>5000</value>
  <description>Socket connection setup timeout in milliseconds.</description>
</property>
<property>
  <name>fs.s3a.connection.timeout</name>
  <value>200000</value>
  <description>Socket connection timeout in milliseconds.</description>
</property>
<property>
  <name>fs.s3a.socket.send.buffer</name>
  <value>8192</value>
  <description>Socket send buffer hint to amazon connector. Represented in bytes.</description>
</property>
<property>
  <name>fs.s3a.socket.recv.buffer</name>
  <value>8192</value>
  <description>Socket receive buffer hint to amazon connector. Represented in bytes.</description>
</property>
<property>
  <name>fs.s3a.paging.maximum</name>
  <value>5000</value>
  <description>How many keys to request from S3 when doing
     directory listings at a time.</description>
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>10</value>
  <description>The total number of threads available in the filesystem for data
    uploads *or any other queued filesystem operation*.</description>
</property>
<property>
  <name>fs.s3a.threads.keepalivetime</name>
  <value>60</value>
  <description>Number of seconds a thread can be idle before being
    terminated.</description>
</property>
<property>
  <name>fs.s3a.max.total.tasks</name>
  <value>5</value>
  <description>The number of operations which can be queued for execution</description>
</property>
<property>
  <name>fs.s3a.multipart.size</name>
  <value>100M</value>
  <description>How big (in bytes) to split upload or copy operations up into.
    A suffix from the set {K,M,G,T,P} may be used to scale the numeric value.
  </description>
</property>
<property>
  <name>fs.s3a.multipart.threshold</name>
  <value>2147483647</value>
  <description>How big (in bytes) to split upload or copy operations up into.
    This also controls the partition size in renamed files, as rename() involves
    copying the source file(s).
    A suffix from the set {K,M,G,T,P} may be used to scale the numeric value.
  </description>
</property>
<property>
  <name>fs.s3a.multiobjectdelete.enable</name>
  <value>true</value>
  <description>When enabled, multiple single-object delete requests are replaced by
    a single 'delete multiple objects' request, reducing the number of requests.
    Beware: legacy S3-compatible object stores might not support this request.
  </description>
</property>
<property>
  <name>fs.s3a.acl.default</name>
  <description>Set a canned ACL for newly created and copied objects. Value may be Private,
      PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead,
      or BucketOwnerFullControl.</description>
</property>
<property>
  <name>fs.s3a.multipart.purge</name>
  <value>false</value>
  <description>True if you want to purge existing multipart uploads that may not have been
    completed/aborted correctly. The corresponding purge age is defined in
    fs.s3a.multipart.purge.age.
    If set, when the filesystem is instantiated then all outstanding uploads
    older than the purge age will be terminated, across the entire bucket.
    This will impact multipart uploads by other applications and users, so it should
    be used sparingly, with an age value chosen to stop failed uploads, without
    breaking ongoing operations.
  </description>
</property>
<property>
  <name>fs.s3a.multipart.purge.age</name>
  <value>86400</value>
  <description>Minimum age in seconds of multipart uploads to purge
    on startup if "fs.s3a.multipart.purge" is true
  </description>
</property>
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <description>Specify a server-side encryption algorithm for s3a: file system.
    Unset by default.  It supports the following values: 'AES256' (for SSE-S3),
    'SSE-KMS' and 'SSE-C'.
  </description>
</property>
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <description>Specific encryption key to use if fs.s3a.server-side-encryption-algorithm
    has been set to 'SSE-KMS' or 'SSE-C'. In the case of SSE-C, the value of this property
    should be the Base64 encoded key. If you are using SSE-KMS and leave this property empty,
    your default S3 KMS key will be used; otherwise set this property to
    the specific KMS key id.
  </description>
</property>
<property>
  <name>fs.s3a.signing-algorithm</name>
  <description>Override the default signing algorithm so legacy
    implementations can still be used</description>
</property>
<property>
  <name>fs.s3a.block.size</name>
  <value>32M</value>
  <description>Block size to use when reading files using s3a: file system.
    A suffix from the set {K,M,G,T,P} may be used to scale the numeric value.
  </description>
</property>
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>${hadoop.tmp.dir}/s3a</value>
  <description>Comma separated list of directories that will be used to buffer file
    uploads to.</description>
</property>
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <value>disk</value>
  <description>
    The buffering mechanism to use for data being written.
    Values: disk, array, bytebuffer.
    "disk" will use the directories listed in fs.s3a.buffer.dir as
    the location(s) to save data prior to being uploaded.
    "array" uses arrays in the JVM heap.
    "bytebuffer" uses off-heap memory within the JVM.
    Both "array" and "bytebuffer" will consume memory in a single stream up to the number
    of blocks set by:
        fs.s3a.multipart.size * fs.s3a.fast.upload.active.blocks.
    If using either of these mechanisms, keep this value low.
    The total number of threads performing work across all threads is set by
    fs.s3a.threads.max, with fs.s3a.max.total.tasks values setting the number of queued
    work items.
  </description>
</property>
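<!--
  Illustrative example (not part of core-default.xml): switching upload
  buffering to off-heap byte buffers. With the defaults above
  (fs.s3a.multipart.size = 100M, fs.s3a.fast.upload.active.blocks = 4),
  a single output stream may then hold roughly 400M of buffered data.

  <property>
    <name>fs.s3a.fast.upload.buffer</name>
    <value>bytebuffer</value>
  </property>
-->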
<property>
  <name>fs.s3a.fast.upload.active.blocks</name>
  <value>4</value>
  <description>
    Maximum number of blocks a single output stream can have
    active (uploading, or queued to the central FileSystem
    instance's pool of queued operations).
    This stops a single stream overloading the shared thread pool.
  </description>
</property>
<property>
  <name>fs.s3a.readahead.range</name>
  <value>64K</value>
  <description>Bytes to read ahead during a seek() before closing and
  re-opening the S3 HTTP connection. This option will be overridden if
  any call to setReadahead() is made to an open stream.
  A suffix from the set {K,M,G,T,P} may be used to scale the numeric value.
  </description>
</property>
<property>
  <name>fs.s3a.user.agent.prefix</name>
  <value></value>
  <description>
    Sets a custom value that will be prepended to the User-Agent header sent in
    HTTP requests to the S3 back-end by S3AFileSystem.  The User-Agent header
    always includes the Hadoop version number followed by a string generated by
    the AWS SDK.  An example is "User-Agent: Hadoop 2.8.0, aws-sdk-java/1.10.6".
    If this optional property is set, then its value is prepended to create a
    customized User-Agent.  For example, if this configuration property was set
    to "MyApp", then an example of the resulting User-Agent would be
    "User-Agent: MyApp, Hadoop 2.8.0, aws-sdk-java/1.10.6".
  </description>
</property>
<property>
    <name>fs.s3a.metadatastore.authoritative</name>
    <value>false</value>
    <description>
        When true, allow MetadataStore implementations to act as source of
        truth for getting file status and directory listings.  Even if this
        is set to true, MetadataStore implementations may choose not to
        return authoritative results.  If the configured MetadataStore does
        not support being authoritative, this setting will have no effect.
    </description>
</property>
<property>
    <name>fs.s3a.metadatastore.impl</name>
    <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
    <description>
        Fully-qualified name of the class that implements the MetadataStore
        to be used by s3a.  The default class, NullMetadataStore, has no
        effect: s3a will continue to treat the backing S3 service as the one
        and only source of truth for file and directory metadata.
    </description>
</property>
<property>
    <name>fs.s3a.s3guard.cli.prune.age</name>
    <value>86400000</value>
    <description>
        Default age (in milliseconds) after which to prune metadata from the
        metadatastore when the prune command is run.  Can be overridden on the
        command-line.
    </description>
</property>
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The implementation class of the S3A Filesystem</description>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.region</name>
  <value></value>
  <description>
    AWS DynamoDB region to connect to. An up-to-date list is
    provided in the AWS Documentation: regions and endpoints. Without this
    property, S3Guard will operate on a table in the associated S3 bucket region.
  </description>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table</name>
  <value></value>
  <description>
    The DynamoDB table name to operate on. Without this property, the respective
    S3 bucket name will be used.
  </description>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table.create</name>
  <value>false</value>
  <description>
    If true, the S3A client will create the table if it does not already exist.
  </description>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table.capacity.read</name>
  <value>500</value>
  <description>
    Provisioned throughput requirements for read operations in terms of capacity
    units for the DynamoDB table.  This config value will only be used when
    creating a new DynamoDB table, though later you can manually provision by
    increasing or decreasing read capacity as needed for existing tables.
    See DynamoDB documents for more information.
  </description>7 \7 Y6 @+ E  r6 ?+ `# e
</property>
/ V3 d* Y; v5 K( t  I<property>
8 a+ i3 S- l8 i3 G5 |' R: H( L% e; m  <name>fs.s3a.s3guard.ddb.table.capacity.write</name>
, N1 {- ?1 G' H! X* X  <value>100</value>/ r9 a5 j! _. M) b" Z
  <description>% M- ~7 W7 Z; e) o' L& P/ ?5 Z8 ~$ l
    Provisioned throughput requirements for write operations in terms of: i; ~% _. U. x$ G: l- J% A
    capacity units for the DynamoDB table.  Refer to related config
" s- ?1 ~6 T- x  M+ d" D( ?4 K2 d    fs.s3a.s3guard.ddb.table.capacity.read before usage.
  h# s, c, I) C  </description>
* S  @' i& E- z& X" T</property>  E/ h/ G( @( H/ z: U2 b# R4 G& i
<property>
) c0 x. b6 _: W7 \7 t1 T  <name>fs.s3a.s3guard.ddb.max.retries</name>
+ {) [6 j& j% X$ _$ g# k8 w  <value>9</value>
( o; X" {1 m. e, T0 G    <description>
3 [3 d# @" V2 E( Z* E      Max retries on batched DynamoDB operations before giving up and$ x' X# u- |( C% D; w
      throwing an IOException.  Each retry is delayed with an exponential
, u* L" J- ~, M2 q- v% |: v$ k      backoff timer which starts at 100 milliseconds and approximately( o& V' h  Q- C9 _1 v& ^$ e5 e% V
      doubles each time.  The minimum wait before throwing an exception is
1 f3 y3 L, ~8 G+ B9 g# [2 K      sum(100, 200, 400, 800, .. 100*2^N-1 ) == 100 * ((2^N)-1)
! \6 B7 f* B' p9 [) H+ j6 y) ]- ]! v      So N = 9 yields at least 51.1 seconds (51,100) milliseconds of blocking2 N4 v  \* b  l/ [/ F3 d+ N
      before throwing an IOException.8 c# U3 I0 [# _2 h  @
    </description>$ l, x7 s8 |: M
</property>
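The backoff arithmetic in the description above can be sanity-checked with a short sketch (plain Python, independent of Hadoop; the helper name is ours):

```python
# Exponential backoff for fs.s3a.s3guard.ddb.max.retries: retry k waits
# roughly 100 * 2^k ms, so the minimum total wait before the IOException
# is 100 * (2^N - 1) ms.
def min_blocking_ms(max_retries: int, base_ms: int = 100) -> int:
    return sum(base_ms * 2 ** k for k in range(max_retries))

print(min_blocking_ms(9))  # → 51100, i.e. at least 51.1 seconds
```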
<property>
  <name>fs.s3a.s3guard.ddb.background.sleep</name>
  <value>25</value>
  <description>
    Length (in milliseconds) of pause between each batch of deletes when
    pruning metadata.  Prevents prune operations (which can typically be low
    priority background operations) from overly interfering with other I/O
    operations.
  </description>
</property>
<property>
  <name>fs.s3a.retry.limit</name>
  <value>${fs.s3a.attempts.maximum}</value>
  <description>
    Number of times to retry any repeatable S3 client request on failure,
    excluding throttling requests.
  </description>
</property>
<property>
  <name>fs.s3a.retry.interval</name>
  <value>500ms</value>
  <description>
    Interval between attempts to retry operations for any reason other
    than S3 throttle errors.
  </description>
</property>
<property>
  <name>fs.s3a.retry.throttle.limit</name>
  <value>${fs.s3a.attempts.maximum}</value>
  <description>
    Number of times to retry any throttled request.
  </description>
</property>
<property>
  <name>fs.s3a.retry.throttle.interval</name>
  <value>1000ms</value>
  <description>
    Interval between retry attempts on throttled requests.
  </description>
</property>
<property>
  <name>fs.s3a.committer.name</name>
  <value>file</value>
  <description>
    Committer to create for output to S3A, one of:
    "file", "directory", "partitioned", "magic".
  </description>
</property>
<property>
  <name>fs.s3a.committer.magic.enabled</name>
  <value>false</value>
  <description>
    Enable support in the filesystem for the S3 "Magic" committer.
    When working with AWS S3, S3Guard must be enabled for the destination
    bucket, as consistent metadata listings are required.
  </description>
</property>
<property>
  <name>fs.s3a.committer.threads</name>
  <value>8</value>
  <description>
    Number of threads in committers for parallel operations on files
    (upload, commit, abort, delete...)
  </description>
</property>
<property>
  <name>fs.s3a.committer.staging.tmp.path</name>
  <value>tmp/staging</value>
  <description>
    Path in the cluster filesystem for temporary data.
    This is for HDFS, not the local filesystem.
    It is only for the summary data of each file, not the actual
    data being committed.
    Using an unqualified path guarantees that the full path will be
    generated relative to the home directory of the user creating the job,
    hence private (assuming home directory permissions are secure).
  </description>
</property>
<property>
  <name>fs.s3a.committer.staging.unique-filenames</name>
  <value>true</value>
  <description>
    Option for final files to have a unique name through job attempt info,
    or the value of fs.s3a.committer.staging.uuid.
    When writing data with the "append" conflict option, this guarantees
    that new data will not overwrite any existing data.
  </description>
</property>
<property>
  <name>fs.s3a.committer.staging.conflict-mode</name>
  <value>fail</value>
  <description>
    Staging committer conflict resolution policy.
    Supported: "fail", "append", "replace".
  </description>
</property>
<property>
  <name>fs.s3a.committer.staging.abort.pending.uploads</name>
  <value>true</value>
  <description>
    Should the staging committers abort all pending uploads to the destination
    directory?
    Change this if more than one partitioned committer is
    writing to the same destination tree simultaneously; otherwise
    the first job to complete will cancel all outstanding uploads from the
    others. However, it may lead to leaked outstanding uploads from failed
    tasks. If disabled, configure the bucket lifecycle to remove uploads
    after a time period, and/or set up a workflow to explicitly delete
    entries. Otherwise there is a risk that uncommitted uploads may run up
    bills.
  </description>
</property>
<property>
  <name>fs.AbstractFileSystem.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3A</value>
  <description>The implementation class of the S3A AbstractFileSystem.</description>
</property>
<property>
  <name>fs.s3a.list.version</name>
  <value>2</value>
  <description>
    Select which version of the S3 SDK's List Objects API to use.  Currently
    supported: 2 (default) and 1 (older API).
  </description>
</property>
<property>
  <name>fs.s3a.etag.checksum.enabled</name>
  <value>false</value>
  <description>
    Should calls to getFileChecksum() return the etag value of the remote
    object.
    WARNING: if enabled, distcp operations between HDFS and S3 will fail unless
    -skipcrccheck is set.
  </description>
</property>
<!-- Azure file system properties -->
<property>
  <name>fs.wasb.impl</name>
  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
  <description>The implementation class of the Native Azure Filesystem</description>
</property>
<property>
  <name>fs.wasbs.impl</name>
  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure</value>
  <description>The implementation class of the Secure Native Azure Filesystem</description>
</property>
<property>
  <name>fs.azure.secure.mode</name>
  <value>false</value>
  <description>
    Config flag to identify the mode in which fs.azure.NativeAzureFileSystem needs
    to run. Setting it to "true" makes fs.azure.NativeAzureFileSystem use
    SAS keys to communicate with Azure storage.
  </description>
</property>
<property>
  <name>fs.azure.local.sas.key.mode</name>
  <value>false</value>
  <description>
    Works in conjunction with fs.azure.secure.mode. Setting this config to true
    results in fs.azure.NativeAzureFileSystem using local SAS key generation,
    where the SAS keys are generated in the same process as fs.azure.NativeAzureFileSystem.
    If the fs.azure.secure.mode flag is set to false, this flag has no effect.
  </description>
</property>
<property>
  <name>fs.azure.sas.expiry.period</name>
  <value>90d</value>
  <description>
    The default expiration period for generated SAS keys.
    Can use the following suffixes (case insensitive):
    ms(millis), s(sec), m(min), h(hour), d(day)
    to specify the time (such as 2s, 2m, 1h, etc.).
  </description>
</property>
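The suffix table above maps directly to millisecond multipliers. A minimal sketch of how a duration such as 90d or 2s might be parsed (the helper name and implementation are illustrative, not the WASB code):

```python
import re

# Illustrative parser for duration strings like "90d", "2s", "1h" using
# the case-insensitive suffixes listed above. Not the actual Hadoop code.
_UNITS_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}

def parse_duration_ms(text: str) -> int:
    m = re.fullmatch(r"(\d+)\s*(ms|s|m|h|d)", text.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"bad duration: {text!r}")
    return int(m.group(1)) * _UNITS_MS[m.group(2).lower()]

print(parse_duration_ms("90d"))  # → 7776000000, the 90-day default in ms
```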
<property>
  <name>fs.azure.authorization</name>
  <value>false</value>
  <description>
    Config flag to enable authorization support in WASB. Setting it to "true" enables
    authorization support in WASB. Currently WASB authorization requires a remote service
    to provide authorization, which needs to be specified via the
    fs.azure.authorization.remote.service.url configuration.
  </description>
</property>
<property>
  <name>fs.azure.authorization.caching.enable</name>
  <value>true</value>
  <description>
    Config flag to enable caching of authorization results and saskeys in WASB.
    This flag is relevant only when fs.azure.authorization is enabled.
  </description>
</property>
<property>
  <name>fs.azure.saskey.usecontainersaskeyforallaccess</name>
  <value>true</value>
  <description>
    Use the container saskey for access to all blobs within the container.
    Blob-specific saskeys are not used when this setting is enabled.
    This setting provides better performance compared to blob-specific saskeys.
  </description>
</property>
<property>
  <name>io.seqfile.compress.blocksize</name>
  <value>1000000</value>
  <description>The minimum block size for compression in block compressed
          SequenceFiles.
  </description>
</property>
<property>
  <name>io.mapfile.bloom.size</name>
  <value>1048576</value>
  <description>The size of BloomFilter-s used in BloomMapFile. Each time this many
  keys are appended, the next BloomFilter will be created (inside a DynamicBloomFilter).
  Larger values minimize the number of filters, which slightly increases performance,
  but may waste too much space if the total number of keys is usually much smaller
  than this number.
  </description>
</property>
<property>
  <name>io.mapfile.bloom.error.rate</name>
  <value>0.005</value>
  <description>The rate of false positives in BloomFilter-s used in BloomMapFile.
  As this value decreases, the size of BloomFilter-s increases exponentially. This
  value is the probability of encountering false positives (default is 0.5%).
  </description>
</property>
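The growth mentioned in the last description follows the standard Bloom filter sizing formula (general math, not taken from the Hadoop source): bits per key scale with -ln(p), so each factor-of-ten reduction in the error rate costs a fixed extra number of bits per key.

```python
import math

# Standard Bloom filter sizing: the optimal number of bits per key for a
# target false-positive rate p is -ln(p) / (ln 2)^2.
def bits_per_key(p: float) -> float:
    return -math.log(p) / (math.log(2) ** 2)

# The default rate of 0.005 needs roughly 11 bits per key.
print(round(bits_per_key(0.005), 1))  # → 11.0
```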
<property>
  <name>hadoop.util.hash.type</name>
  <value>murmur</value>
  <description>The default implementation of Hash. Currently this can take one of the
  two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash.
  </description>
</property>
<!-- ipc properties -->
<property>
  <name>ipc.client.idlethreshold</name>
  <value>4000</value>
  <description>Defines the threshold number of connections after which
               connections will be inspected for idleness.
  </description>
</property>
<property>
  <name>ipc.client.kill.max</name>
  <value>10</value>
  <description>Defines the maximum number of clients to disconnect in one go.
  </description>
</property>
<property>
  <name>ipc.client.connection.maxidletime</name>
  <value>10000</value>
  <description>The maximum time in msec after which a client will bring down the
               connection to the server.
  </description>
</property>
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>10</value>
  <description>Indicates the number of retries a client will make to establish
               a server connection.
  </description>
</property>
<property>
  <name>ipc.client.connect.retry.interval</name>
  <value>1000</value>
  <description>Indicates the number of milliseconds a client will wait
    before retrying to establish a server connection.
  </description>
</property>
<property>
  <name>ipc.client.connect.timeout</name>
  <value>20000</value>
  <description>Indicates the number of milliseconds a client will wait for the
               socket to establish a server connection.
  </description>
</property>
<property>
  <name>ipc.client.connect.max.retries.on.timeouts</name>
  <value>45</value>
  <description>Indicates the number of retries a client will make on socket timeout
               to establish a server connection.
  </description>
</property>
<property>
  <name>ipc.client.tcpnodelay</name>
  <value>true</value>
  <description>Use TCP_NODELAY flag to bypass Nagle's algorithm transmission delays.
  </description>
</property>
<property>
  <name>ipc.client.low-latency</name>
  <value>false</value>
  <description>Use low-latency QoS markers for IPC connections.
  </description>
</property>
<property>
  <name>ipc.client.ping</name>
  <value>true</value>
  <description>Send a ping to the server when timing out on reading the response,
  if set to true. If no failure is detected, the client retries until at least
  a byte is read or the time given by ipc.client.rpc-timeout.ms has passed.
  </description>
</property>
<property>
  <name>ipc.ping.interval</name>
  <value>60000</value>
  <description>Timeout on waiting for a response from the server, in milliseconds.
  The client will send a ping when the interval has passed without receiving bytes,
  if ipc.client.ping is set to true.
  </description>
</property>
<property>
  <name>ipc.client.rpc-timeout.ms</name>
  <value>0</value>
  <description>Timeout on waiting for a response from the server, in milliseconds.
  If ipc.client.ping is set to true and this rpc-timeout is greater than
  the value of ipc.ping.interval, the effective value of the rpc-timeout is
  rounded up to a multiple of ipc.ping.interval.
  </description>
</property>
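The rounding rule in the last description can be sketched as follows (illustrative helper, not the Hadoop implementation):

```python
import math

# When ipc.client.ping is true, an rpc-timeout above ipc.ping.interval is
# rounded up to the next multiple of the ping interval; smaller values
# are used as-is.
def effective_rpc_timeout(rpc_timeout_ms: int, ping_interval_ms: int = 60_000) -> int:
    if rpc_timeout_ms > ping_interval_ms:
        return math.ceil(rpc_timeout_ms / ping_interval_ms) * ping_interval_ms
    return rpc_timeout_ms

print(effective_rpc_timeout(90_000))  # → 120000, i.e. two ping intervals
```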
<property>, F" q. ?5 @& _& d3 o6 V! R: w
  <name>ipc.server.listen.queue.size</name>! H' b3 f3 C- N- K0 `% B: L
  <value>128</value>
5 \* @4 c% i) k  a2 d9 Y  <description>Indicates the length of the listen queue for servers accepting6 k6 y7 g6 |, K7 f
               client connections.# a$ N. U( _& X2 n- H( O4 l
  </description>
; I7 o/ h' S1 ?; Q</property>% @/ u  G! |( Q
<property>' l( @8 E4 K, h  n- |
    <name>ipc.server.log.slow.rpc</name>2 ?0 E; \, A7 y5 ^/ F& S% K2 i
    <value>false</value>3 X0 Q" c5 |2 O6 [3 h
    <description>This setting is useful to troubleshoot performance issues for
) V- c3 v- v2 U: Y. i% s     various services. If this value is set to true then we log requests that7 j7 P( R) r4 P+ u
     fall into 99th percentile as well as increment RpcSlowCalls counter.
! F" {: M- `3 F# j    </description>0 I; k. b4 A* W
</property>
. Z) H2 r1 o$ {) H<property>
- M+ \7 ~7 U) L5 n1 V* D  <name>ipc.maximum.data.length</name>  j0 K! s6 P) f, k
  <value>67108864</value>
7 K5 V* e  `( L- M0 O  <description>This indicates the maximum IPC message length (bytes) that can be4 E+ o; P4 u$ c( N% a3 `
    accepted by the server. Messages larger than this value are rejected by the
" U# ], F8 C2 e# d" N6 ^: R    immediately to avoid possible OOMs. This setting should rarely need to be2 `; R/ M1 M  P5 j4 S" y9 z! }
    changed.3 C% a7 O; L1 k: r" r
  </description>* j( t; U* f5 v1 x3 y/ M* x+ }
</property>
  ^, r# s2 a' o1 n<property>  o3 l( T' S$ U  ]# y  G3 S
  <name>ipc.maximum.response.length</name>) H! O, T7 K& b. a- x: n/ u$ L
  <value>134217728</value>  p9 A$ {$ Z( G/ z; M  x
  <description>This indicates the maximum IPC message length (bytes) that can be2 I1 K9 S3 X: R: X: h- q- [8 c* q
    accepted by the client. Messages larger than this value are rejected  I! h. h. u9 S, Y# Z  w7 R: |
    immediately to avoid possible OOMs. This setting should rarely need to be
, x. d1 n" \* H; N. U& p& U" B3 M    changed.  Set to 0 to disable.7 ?- z" d  F' ~$ G0 g3 n' w# v
  </description>
/ r' `+ I9 ^0 I- s. q. V</property>
<!-- Proxy Configuration -->
<property>
  <name>hadoop.security.impersonation.provider.class</name>
  <value></value>
  <description>A class which implements the ImpersonationProvider interface, used to
       authorize whether one user can impersonate a specific user.
       If not specified, the DefaultImpersonationProvider will be used.
       If a class is specified, then that class will be used to determine
       the impersonation capability.
  </description>
</property>
<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <description> Default SocketFactory to use. This parameter is expected to be
    formatted as "package.FactoryClassName".
  </description>
</property>
<property>
  <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
  <value></value>
  <description> SocketFactory to use to connect to a DFS. If null or empty, use
    hadoop.rpc.socket.class.default. This socket factory is also used by
    DFSClient to create sockets to DataNodes.
  </description>
</property>
<property>
  <name>hadoop.socks.server</name>
  <value></value>
  <description> Address (host:port) of the SOCKS server to be used by the
    SocksSocketFactory.
  </description>
</property>
<!-- Topology Configuration -->
<property>
  <name>net.topology.node.switch.mapping.impl</name>
  <value>org.apache.hadoop.net.ScriptBasedMapping</value>
  <description> The default implementation of the DNSToSwitchMapping. It
    invokes a script specified in net.topology.script.file.name to resolve
    node names. If the value for net.topology.script.file.name is not set, the
    default value of DEFAULT_RACK is returned for all node names.
  </description>
</property>
<property>
  <name>net.topology.impl</name>
  <value>org.apache.hadoop.net.NetworkTopology</value>
  <description> The default implementation of NetworkTopology, which is the
    classic three-layer one.
  </description>
</property>
<property>
  <name>net.topology.script.file.name</name>
  <value></value>
  <description> The script name that should be invoked to resolve DNS names to
    NetworkTopology names. Example: the script would take host.foo.bar as an
    argument, and return /rack1 as the output.
  </description>
</property>
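A resolution script can be any executable that prints one rack path per argument. A minimal sketch in Python (the rack table and host addresses are hypothetical):

```python
#!/usr/bin/env python3
# Hypothetical net.topology.script.file.name script: each command-line
# argument is a host name or IP; print the matching rack path for each,
# falling back to /default-rack for unknown hosts.
import sys

RACKS = {"10.0.1.11": "/rack1", "10.0.1.12": "/rack1", "10.0.2.21": "/rack2"}

def resolve(hosts):
    return [RACKS.get(h, "/default-rack") for h in hosts]

if __name__ == "__main__":
    print(" ".join(resolve(sys.argv[1:])))
```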
<property>
  <name>net.topology.script.number.args</name>
  <value>100</value>
  <description> The max number of args that the script configured with
    net.topology.script.file.name should be run with. Each arg is an
    IP address.
  </description>
</property>
<property>
  <name>net.topology.table.file.name</name>
  <value></value>
  <description> The file name for a topology file, which is used when the
    net.topology.node.switch.mapping.impl property is set to
    org.apache.hadoop.net.TableMapping. The file format is a two column text
    file, with columns separated by whitespace. The first column is a DNS or
    IP address and the second column specifies the rack where the address maps.
    If no entry corresponding to a host in the cluster is found, then
    /default-rack is assumed.
  </description>
</property>
1 \/ t+ ^" z7 X0 l" l2 O/ X( E7 N3 I<!-- Local file system -->
6 _3 f! i5 {- y! r; ?" X<property>' S( D2 n- M6 H& `# F2 Y( X
  <name>file.stream-buffer-size</name>2 J2 m$ C$ f$ Q/ R# p2 D; c' b9 E
  <value>4096</value>
1 _/ E6 U" y5 ]: J' u; K$ W/ N  <description>The size of buffer to stream files.4 r6 T, l( w  Q
  The size of this buffer should probably be a multiple of hardware
/ }' j7 T9 t% A( I$ M: _- x  page size (4096 on Intel x86), and it determines how much data is9 {9 s( O: s- m) k; O
  buffered during read and write operations.</description>
: ]+ I& h1 Q# @  l; B' `  z</property>
" J5 s1 Y: b8 Y2 B- ~5 u- u3 o<property>6 C5 c+ U/ B3 q$ r+ A1 p% C6 @5 G4 I
  <name>file.bytes-per-checksum</name>
. A4 f8 x4 s/ g- @4 N: \  <value>512</value>. j) |3 H& c0 q1 H; u) C5 V
  <description>The number of bytes per checksum.  Must not be larger than
3 |9 n2 \1 d" t) L" {7 C  file.stream-buffer-size</description>
8 K0 M- ]* b( u7 Y, {</property>
% l: P" B' f  w7 g" ^<property>% C  r* m6 S5 t( ~8 i
  <name>file.client-write-packet-size</name>4 H4 W6 g5 D; E
  <value>65536</value>
# n0 O9 f+ H( y7 m  <description>Packet size for clients to write</description>
. z0 B2 _8 z* m$ Q( \7 O* M3 ]</property>
$ p4 Y: ^  d# D( v8 n8 U4 ?<property>% w. ~9 H& n! V& `+ z) a" M. t- P
  <name>file.blocksize</name>
0 T6 }. y" P9 ]0 R  <value>67108864</value>
3 @1 Z8 S, W+ w1 G+ J' ]  <description>Block size</description>
4 P- {: }9 U; R  L</property>
9 E+ a! L" W  P, E2 s<property>9 u; n0 _4 g9 A, i/ d# Z
  <name>file.replication</name>
4 g5 N  l0 h. g  <value>1</value>% K5 m1 }$ k* [7 O& l; K. K
  <description>Replication factor</description>& V9 B% d& E* G# H1 z* ~
</property>$ {0 d, P0 c4 b2 ]
<!-- FTP file system -->
<property>
  <name>ftp.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>ftp.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  ftp.stream-buffer-size</description>
</property>
<property>
  <name>ftp.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>ftp.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>ftp.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
<!-- Tfile -->
<property>
  <name>tfile.io.chunk.size</name>
  <value>1048576</value>
  <description>
    Value chunk size in bytes. Defaults to
    1MB. Values shorter than the chunk size are
    guaranteed to have known value length in read time (See also
    TFile.Reader.Scanner.Entry.isValueLengthKnown()).
  </description>
</property>
<property>
  <name>tfile.fs.output.buffer.size</name>
  <value>262144</value>
  <description>
    Buffer size used for FSDataOutputStream in bytes.
  </description>
</property>
<property>
  <name>tfile.fs.input.buffer.size</name>
  <value>262144</value>
  <description>
    Buffer size used for FSDataInputStream in bytes.
  </description>
</property>
<!-- HTTP web-consoles Authentication -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
  <description>
    Defines authentication used for the Hadoop HTTP web-consoles.
    Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
  </description>
</property>
<property>
  <name>hadoop.http.authentication.token.validity</name>
  <value>36000</value>
  <description>
    Indicates how long (in seconds) an authentication token is valid before it has
    to be renewed.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>${user.home}/hadoop-http-auth-signature-secret</value>
  <description>
    The signature secret for signing the authentication tokens.
    The same secret should be used for JT/NN/DN/TT configurations.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.cookie.domain</name>
  <value></value>
  <description>
    The domain to use for the HTTP cookie that stores the authentication token.
    In order for authentication to work correctly across all Hadoop nodes' web-consoles
    the domain must be correctly set.
    IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings.
    For this setting to work properly all nodes in the cluster must be configured
    to generate URLs with hostname.domain names in them.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>true</value>
  <description>
    Indicates if anonymous requests are allowed when using 'simple' authentication.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@LOCALHOST</value>
  <description>
    Indicates the Kerberos principal to be used for HTTP endpoint.
    The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
  </description>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>${user.home}/hadoop.keytab</value>
  <description>
    Location of the keytab file with the credentials for the principal.
  </description>
</property>
<!-- HTTP CORS support -->
<property>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>false</value>
  <description>Enable/disable the cross-origin (CORS) filter.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-origins</name>
  <value>*</value>
  <description>Comma separated list of origins that are allowed for web services
    needing cross-origin (CORS) support. If a value in the list contains an
    asterisk (*), a regex pattern, escaping any dots ('.' -> '\.') and replacing
    the asterisk such that it captures any characters ('*' -> '.*'), is generated.
    Values prefixed with 'regex:' are interpreted directly as regular expressions,
    e.g. use the expression 'regex:https?:\/\/foo\.bar:([0-9]+)?' to allow any
    origin using the 'http' or 'https' protocol in the domain 'foo.bar' on any
    port. The use of simple wildcards ('*') is discouraged, and only available for
    backward compatibility.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-methods</name>
  <value>GET,POST,HEAD</value>
  <description>Comma separated list of methods that are allowed for web
    services needing cross-origin (CORS) support.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
  <description>Comma separated list of headers that are allowed for web
    services needing cross-origin (CORS) support.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.max-age</name>
  <value>1800</value>
  <description>The number of seconds a pre-flighted request can be cached
    for web services needing cross-origin (CORS) support.</description>
</property>
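<!-- Illustrative example (not part of the defaults): to actually enable CORS,
     the properties in this section would typically be overridden together in
     core-site.xml; the hostname below is a placeholder:

     <property>
       <name>hadoop.http.cross-origin.enabled</name>
       <value>true</value>
     </property>
     <property>
       <name>hadoop.http.cross-origin.allowed-origins</name>
       <value>regex:https?:\/\/ui\.example\.com(:[0-9]+)?</value>
     </property>
-->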
<property>
  <name>dfs.ha.fencing.methods</name>
  <value></value>
  <description>
    List of fencing methods to use for service fencing. May contain
    builtin methods (eg shell and sshfence) or user-defined method.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
  <description>
    SSH connection timeout, in milliseconds, to use with the builtin
    sshfence fencer.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value></value>
  <description>
    The SSH private key files to use with the builtin sshfence fencer.
  </description>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <description>
    A list of ZooKeeper server addresses, separated by commas, that are
    to be used by the ZKFailoverController in automatic failover.
  </description>
</property>
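<!-- Illustrative example (not part of the defaults): ha.zookeeper.quorum has
     no default value and must be set explicitly for automatic failover, e.g.
     in core-site.xml; the hostnames below are placeholders:

     <property>
       <name>ha.zookeeper.quorum</name>
       <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
     </property>
-->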
<property>
  <name>ha.zookeeper.session-timeout.ms</name>
  <value>10000</value>
  <description>
    The session timeout to use when the ZKFC connects to ZooKeeper.
    Setting this value to a lower value implies that server crashes
    will be detected more quickly, but risks triggering failover too
    aggressively in the case of a transient error or network blip.
  </description>
</property>
<property>
  <name>ha.zookeeper.parent-znode</name>
  <value>/hadoop-ha</value>
  <description>
    The ZooKeeper znode under which the ZK failover controller stores
    its information. Note that the nameservice ID is automatically
    appended to this znode, so it is not normally necessary to
    configure this, even in a federated environment.
  </description>
</property>
<property>
  <name>ha.zookeeper.acl</name>
  <value>world:anyone:rwcda</value>
  <description>
    A comma-separated list of ZooKeeper ACLs to apply to the znodes
    used by automatic failover. These ACLs are specified in the same
    format as used by the ZooKeeper CLI.
    If the ACL itself contains secrets, you may instead specify a
    path to a file, prefixed with the '@' symbol, and the value of
    this configuration will be loaded from within.
  </description>
</property>
<property>
  <name>ha.zookeeper.auth</name>
  <value></value>
  <description>
    A comma-separated list of ZooKeeper authentications to add when
    connecting to ZooKeeper. These are specified in the same format
    as used by the "addauth" command in the ZK CLI. It is
    important that the authentications specified here are sufficient
    to access znodes with the ACL specified in ha.zookeeper.acl.
    If the auths contain secrets, you may instead specify a
    path to a file, prefixed with the '@' symbol, and the value of
    this configuration will be loaded from within.
  </description>
</property>
<!-- Static Web User Filter properties. -->
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>dr.who</value>
  <description>
    The user name to filter as, on static web filters
    while rendering content. An example use is the HDFS
    web UI (user to be used for browsing files).
  </description>
</property>
<!-- SSLFactory configuration -->
<property>
  <name>hadoop.ssl.keystores.factory.class</name>
  <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
  <description>
    The keystores factory to use for retrieving certificates.
  </description>
</property>
<property>
  <name>hadoop.ssl.require.client.cert</name>
  <value>false</value>
  <description>Whether client certificates are required</description>
</property>
<property>
  <name>hadoop.ssl.hostname.verifier</name>
  <value>DEFAULT</value>
  <description>
    The hostname verifier to provide for HttpsURLConnections.
    Valid values are: DEFAULT, STRICT, STRICT_IE6, DEFAULT_AND_LOCALHOST and
    ALLOW_ALL
  </description>
</property>
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
  <description>
    Resource file from which ssl server keystore information will be extracted.
    This file is looked up in the classpath, typically it should be in Hadoop
    conf/ directory.
  </description>
</property>
<property>
  <name>hadoop.ssl.client.conf</name>
  <value>ssl-client.xml</value>
  <description>
    Resource file from which ssl client keystore information will be extracted.
    This file is looked up in the classpath, typically it should be in Hadoop
    conf/ directory.
  </description>
</property>
<property>
  <name>hadoop.ssl.enabled</name>
  <value>false</value>
  <description>
    Deprecated. Use dfs.http.policy and yarn.http.policy instead.
  </description>
</property>
<property>
  <name>hadoop.ssl.enabled.protocols</name>
  <value>TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2</value>
  <description>
    The supported SSL protocols.
  </description>
</property>
<property>
  <name>hadoop.jetty.logs.serve.aliases</name>
  <value>true</value>
  <description>
    Enable/Disable aliases serving from jetty
  </description>
</property>
<property>
  <name>fs.permissions.umask-mode</name>
  <value>022</value>
  <description>
    The umask used when creating files and directories.
    Can be in octal or in symbolic. Examples are:
    "022" (octal for u=rwx,g=r-x,o=r-x in symbolic),
    or "u=rwx,g=rwx,o=" (symbolic for 007 in octal).
  </description>
</property>
<!-- ha properties -->
<property>
  <name>ha.health-monitor.connect-retry-interval.ms</name>
  <value>1000</value>
  <description>
    How often to retry connecting to the service.
  </description>
</property>
<property>
  <name>ha.health-monitor.check-interval.ms</name>
  <value>1000</value>
  <description>
    How often to check the service.
  </description>
</property>
<property>
  <name>ha.health-monitor.sleep-after-disconnect.ms</name>
  <value>1000</value>
  <description>
    How long to sleep after an unexpected RPC error.
  </description>
</property>
<property>
  <name>ha.health-monitor.rpc-timeout.ms</name>
  <value>45000</value>
  <description>
    Timeout for the actual monitorHealth() calls.
  </description>
</property>
<property>
  <name>ha.failover-controller.new-active.rpc-timeout.ms</name>
  <value>60000</value>
  <description>
    Timeout that the FC waits for the new active to become active
  </description>
</property>
<property>
  <name>ha.failover-controller.graceful-fence.rpc-timeout.ms</name>
  <value>5000</value>
  <description>
    Timeout that the FC waits for the old active to go to standby
  </description>
</property>
<property>
  <name>ha.failover-controller.graceful-fence.connection.retries</name>
  <value>1</value>
  <description>
    FC connection retries for graceful fencing
  </description>
</property>
<property>
  <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
  <value>20000</value>
  <description>
    Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState
  </description>
</property>
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>false</value>
  <description>
    When a client is configured to attempt a secure connection, but attempts to
    connect to an insecure server, that server may instruct the client to
    switch to SASL SIMPLE (unsecure) authentication. This setting controls
    whether or not the client will accept this instruction from the server.
    When false (the default), the client will not allow the fallback to SIMPLE
    authentication, and will abort the connection.
  </description>
</property>
<property>
  <name>fs.client.resolve.remote.symlinks</name>
  <value>true</value>
  <description>
    Whether to resolve symlinks when accessing a remote Hadoop filesystem.
    Setting this to false causes an exception to be thrown upon encountering
    a symlink. This setting does not apply to local filesystems, which
    automatically resolve local symlinks.
  </description>
</property>
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>* rw</value>
  <description>
    By default, the export can be mounted by any client. The value string
    contains machine name and access privilege, separated by whitespace
    characters. The machine name format can be a single host, a Java regular
    expression, or an IPv4 address. The access privilege uses rw or ro to
    specify read/write or read-only access of the machines to exports. If the
    access privilege is not provided, the default is read-only. Entries are separated by ";".
    For example: "192.168.0.0/22 rw ; host.*\.example\.com ; host1.test.org ro;".
    Only the NFS gateway needs to restart after this property is updated.
  </description>
</property>
<property>
  <name>hadoop.user.group.static.mapping.overrides</name>
  <value>dr.who=;</value>
  <description>
    Static mapping of user to groups. This will override the groups if
    available in the system for the specified user. In other words, groups
    look-up will not happen for these users, instead groups mapped in this
    configuration will be used.
    Mapping should be in this format:
    user1=group1,group2;user2=;user3=group2;
    The default, "dr.who=;", will consider "dr.who" as a user without groups.
  </description>
</property>
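<!-- Illustrative example (not part of the defaults): following the format
     described above, a core-site.xml override pinning two users to fixed
     groups could look like this; the user and group names are placeholders:

     <property>
       <name>hadoop.user.group.static.mapping.overrides</name>
       <value>dr.who=;hdfs=hadoop,supergroup;alice=analysts</value>
     </property>
-->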
<property>
  <name>rpc.metrics.quantile.enable</name>
  <value>false</value>
  <description>
    Setting this property to true and rpc.metrics.percentiles.intervals
    to a comma-separated list of the granularity in seconds, the
    50/75/90/95/99th percentile latency for rpc queue/processing time in
    milliseconds are added to rpc metrics.
  </description>
</property>
<property>
  <name>rpc.metrics.percentiles.intervals</name>
  <value></value>
  <description>
    A comma-separated list of the granularity in seconds for the metrics which
    describe the 50/75/90/95/99th percentile latency for rpc queue/processing
    time. The metrics are outputted if rpc.metrics.quantile.enable is set to
    true.
  </description>
</property>
<property>
  <name>hadoop.security.crypto.codec.classes.EXAMPLECIPHERSUITE</name>
  <value></value>
  <description>
    The prefix for a given crypto codec, contains a comma-separated
    list of implementation classes for a given crypto codec (eg EXAMPLECIPHERSUITE).
    The first implementation will be used if available, others are fallbacks.
  </description>
</property>
<property>
  <name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name>
  <value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, org.apache.hadoop.crypto.JceAesCtrCryptoCodec</value>
  <description>
    Comma-separated list of crypto codec implementations for AES/CTR/NoPadding.
    The first implementation will be used if available, others are fallbacks.
  </description>
</property>
<property>
  <name>hadoop.security.crypto.cipher.suite</name>
  <value>AES/CTR/NoPadding</value>
  <description>
    Cipher suite for crypto codec.
  </description>
</property>
<property>
  <name>hadoop.security.crypto.jce.provider</name>
  <value></value>
  <description>
    The JCE provider name used in CryptoCodec.
  </description>
</property>
<property>
  <name>hadoop.security.crypto.jceks.key.serialfilter</name>
  <description>
    Enhanced KeyStore Mechanisms in JDK 8u171 introduced jceks.key.serialFilter.
    If jceks.key.serialFilter is configured, the JCEKS KeyStore uses it during
    the deserialization of the encrypted Key object stored inside a
    SecretKeyEntry.
    If jceks.key.serialFilter is not configured it will cause an error when
    recovering keystore file in KeyProviderFactory when recovering key from
    keystore file using JDK 8u171 or newer. The filter pattern uses the same
    format as jdk.serialFilter.
    The value of this property will be used as the following:
    1. The value of jceks.key.serialFilter system property takes precedence
    over the value of this property.
    2. In the absence of jceks.key.serialFilter system property the value of
    this property will be set as the value of jceks.key.serialFilter.
    3. If the value of this property and jceks.key.serialFilter system
    property has not been set, org.apache.hadoop.crypto.key.KeyProvider
    sets a default value for jceks.key.serialFilter.
  </description>
</property>
<property>
  <name>hadoop.security.crypto.buffer.size</name>
  <value>8192</value>
  <description>
    The buffer size used by CryptoInputStream and CryptoOutputStream.
  </description>
</property>
<property>
  <name>hadoop.security.java.secure.random.algorithm</name>
  <value>SHA1PRNG</value>
  <description>
    The java secure random algorithm.
  </description>
</property>
<property>
  <name>hadoop.security.secure.random.impl</name>
  <value></value>
  <description>
    Implementation of secure random.
  </description>
</property>
<property>
  <name>hadoop.security.random.device.file.path</name>
  <value>/dev/urandom</value>
  <description>
    OS security random device file path.
  </description>
</property>
<property>
  <name>hadoop.security.key.provider.path</name>
  <description>
    The KeyProvider to use when managing zone keys, and interacting with
    encryption keys when reading and writing to an encryption zone.
    For hdfs clients, the provider path will be same as namenode's
    provider path.
  </description>
</property>
<property>
  <name>hadoop.security.key.default.bitlength</name>
  <value>128</value>
  <description>
    The length (bits) of keys we want the KeyProvider to produce. Key length
    defines the upper-bound on an algorithm's security; ideally, it would
    coincide with the lower-bound on an algorithm's security.
  </description>
</property>
<property>
  <name>hadoop.security.key.default.cipher</name>
  <value>AES/CTR/NoPadding</value>
  <description>
    This indicates the algorithm to be used by KeyProvider for generating
    keys, and will be converted to a CipherSuite when creating an encryption zone.
  </description>
</property>
<property>
  <name>fs.har.impl.disable.cache</name>
  <value>true</value>
  <description>Don't cache 'har' filesystem instances.</description>
</property>
<!-- KMSClientProvider configurations -->
<property>
  <name>hadoop.security.kms.client.authentication.retry-count</name>
  <value>1</value>
  <description>
    Number of times to retry connecting to KMS on authentication failure
  </description>
</property>
<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.size</name>
  <value>500</value>
  <description>
    Size of the EncryptedKeyVersion cache Queue for each key.
  </description>
</property>
<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.low-watermark</name>
  <value>0.3f</value>
  <description>
    If the size of the EncryptedKeyVersion cache Queue falls below the
    low watermark, this cache queue will be scheduled for a refill.
  </description>
</property>
<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.num.refill.threads</name>
  <value>2</value>
  <description>
    Number of threads to use for refilling depleted EncryptedKeyVersion
    cache Queues.
  </description>
</property>
<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
  <value>43200000</value>
  <description>
    Cache expiry time for a Key, after which the cache Queue for this
    key will be dropped. Default = 12 hrs.
  </description>
</property>
<property>
  <name>hadoop.security.kms.client.timeout</name>
  <value>60</value>
  <description>
    Sets the value for the KMS client connection timeout and the read timeout
    to KMS servers.
  </description>
</property>
<property>
  <name>hadoop.security.kms.client.failover.sleep.base.millis</name>
  <value>100</value>
  <description>
    Expert only. The time to wait, in milliseconds, between failover
    attempts increases exponentially as a function of the number of
    attempts made so far, with a random factor of +/- 50%. This option
    specifies the base value used in the failover calculation. The
    first failover will retry immediately. The 2nd failover attempt
    will delay at least hadoop.security.client.failover.sleep.base.millis
    milliseconds. And so on.
  </description>
</property>
- d" j& \% y/ d. V7 N5 b# P<property>
1 G$ f* q( ?2 j: d  <name>hadoop.security.kms.client.failover.sleep.max.millis</name>$ `4 b3 s3 W) z  w" Z! u1 f# S
  <value>2000</value>2 |* z& {7 ~5 ^! ]* k
  <description>, E: K3 p8 c* {
    Expert only. The time to wait, in milliseconds, between failover0 S1 \# v' e- L0 P2 s2 N
    attempts increases exponentially as a function of the number of
2 A9 _. U$ P, w" a( e: Y' O  S    attempts made so far, with a random factor of +/- 50%. This option: x3 F4 r( x) u; {! p
    specifies the maximum value to wait between failovers.! O  l8 F; Y4 m- @+ g# S9 c& `% A
    Specifically, the time between two failover attempts will not
# \* K& g; W2 u% f" e1 t' m0 O, G    exceed +/- 50% of hadoop.security.client.failover.sleep.max.millis
3 Z3 C; L, a, f) b9 z: c    milliseconds.
  s7 r! L. f( n% I7 ~+ x  </description>
' _* F6 S5 a" S( x/ ~</property>
<property>
  <name>ipc.server.max.connections</name>
  <value>0</value>
  <description>The maximum number of concurrent connections a server is allowed
    to accept. If this limit is exceeded, incoming connections will first fill
    the listen queue and then may go to an OS-specific listen overflow queue.
    The client may fail or timeout, but the server can avoid running out of file
    descriptors using this feature. 0 means no limit.
  </description>
</property>
  <!-- YARN registry -->
  <property>
    <name>hadoop.registry.rm.enabled</name>
    <value>false</value>
    <description>
      Is the registry enabled in the YARN Resource Manager?
      If true, the YARN RM will, as needed,
      create the user and system paths, and purge
      service records when containers, application attempts
      and applications complete.
      If false, the paths must be created by other means,
      and no automatic cleanup of service records will take place.
    </description>
  </property>
% a" u0 B% \+ c! `" |/ \* H; A! K  V  <property>
7 m' h$ ]- B# z8 ^/ A    <name>hadoop.registry.zk.root</name>
/ z3 {* l& `5 a# L% M. u1 B    <value>/registry</value>
+ A  f/ A6 I( ?; l    <description>0 A2 J: g' a; M) O$ I( u# g, }4 u# w
      The root zookeeper node for the registry4 k0 i/ G9 q/ _! W4 }' `6 N2 Z  T
    </description>
+ z% m7 _; S, J% p3 t. R! _  </property>) k' Z: L3 P. `; N
  <property>
    <name>hadoop.registry.zk.session.timeout.ms</name>
    <value>60000</value>
    <description>
      Zookeeper session timeout in milliseconds.
    </description>
  </property>
  <property>
    <name>hadoop.registry.zk.connection.timeout.ms</name>
    <value>15000</value>
    <description>
      Zookeeper connection timeout in milliseconds.
    </description>
  </property>
  <property>
    <name>hadoop.registry.zk.retry.times</name>
    <value>5</value>
    <description>
      Zookeeper connection retry count before failing.
    </description>
  </property>
  <property>
    <name>hadoop.registry.zk.retry.interval.ms</name>
    <value>1000</value>
    <description>
      Zookeeper retry interval in milliseconds.
    </description>
  </property>
  <property>
    <name>hadoop.registry.zk.retry.ceiling.ms</name>
    <value>60000</value>
    <description>
      Zookeeper retry limit in milliseconds, during
      exponential backoff.
      This places a limit even
      if the retry times and interval limit, combined
      with the backoff policy, result in a long retry
      period.
    </description>
  </property>
  <property>
    <name>hadoop.registry.zk.quorum</name>
    <value>localhost:2181</value>
    <description>
      List of hostname:port pairs defining the
      zookeeper quorum binding for the registry.
    </description>
  </property>
  <property>
    <name>hadoop.registry.secure</name>
    <value>false</value>
    <description>
      Key to set if the registry is secure. Turning it on
      changes the permissions policy from "open access"
      to restrictions on kerberos with the option of
      a user adding one or more auth key pairs down their
      own tree.
    </description>
  </property>
  <property>
    <name>hadoop.registry.system.acls</name>
    <value>sasl:yarn@, sasl:mapred@, sasl:hdfs@</value>
    <description>
      A comma separated list of Zookeeper ACL identifiers with
      system access to the registry in a secure cluster.
      These are given full access to all entries.
      If there is an "@" at the end of a SASL entry it
      instructs the registry client to append the default kerberos domain.
    </description>
  </property>
  <property>
    <name>hadoop.registry.kerberos.realm</name>
    <value></value>
    <description>
      The kerberos realm: used to set the realm of
      system principals which do not declare their realm,
      and any other accounts that need the value.
      If empty, the default realm of the running process
      is used.
      If neither are known and the realm is needed, then the registry
      service/client will fail.
    </description>
  </property>
  <property>
    <name>hadoop.registry.jaas.context</name>
    <value>Client</value>
    <description>
      Key to define the JAAS context. Used in secure
      mode.
    </description>
  </property>
  <property>
    <name>hadoop.shell.missing.defaultFs.warning</name>
    <value>false</value>
    <description>
      Enable hdfs shell commands to display warnings if the fs.defaultFS
      property is not set.
    </description>
  </property>
  <property>
    <name>hadoop.shell.safely.delete.limit.num.files</name>
    <value>100</value>
    <description>Used by the -safely option of the hadoop fs shell -rm command
      to avoid accidental deletion of large directories. When enabled, the -rm
      command requires confirmation if the number of files to be deleted is
      greater than this limit. The default limit is 100 files. The warning is
      disabled if the limit is 0 or -safely is not specified in the -rm command.
    </description>
  </property>
  <property>
    <name>fs.client.htrace.sampler.classes</name>
    <value></value>
    <description>The class names of the HTrace Samplers to use for Hadoop
      filesystem clients.
    </description>
  </property>
  <property>
    <name>hadoop.htrace.span.receiver.classes</name>
    <value></value>
    <description>The class names of the Span Receivers to use for Hadoop.
    </description>
  </property>
  <property>
    <name>hadoop.http.logs.enabled</name>
    <value>true</value>
    <description>
      Enable the "/logs" endpoint on all Hadoop daemons, which serves local
      logs, but may be considered a security risk due to it listing the contents
      of a directory.
    </description>
  </property>
  <property>
    <name>fs.client.resolve.topology.enabled</name>
    <value>false</value>
    <description>Whether the client machine will use the class specified by
      property net.topology.node.switch.mapping.impl to compute the network
      distance between itself and remote machines of the FileSystem. Additional
      properties might need to be configured depending on the class specified
      in net.topology.node.switch.mapping.impl. For example, if
      org.apache.hadoop.net.ScriptBasedMapping is used, a valid script file
      needs to be specified in net.topology.script.file.name.
    </description>
  </property>
  <!-- Azure Data Lake File System Configurations -->
  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>
  <property>
    <name>adl.feature.ownerandgroup.enableupn</name>
    <value>false</value>
    <description>
      When true: User and Group in FileStatus/AclStatus response is
      represented as a user friendly name as per the Azure AD profile.
      When false (default): User and Group in FileStatus/AclStatus
      response is represented by the unique identifier from the Azure AD
      profile (Object ID as GUID).
      For optimal performance, false is recommended.
    </description>
  </property>
  <property>
    <name>fs.adl.oauth2.access.token.provider.type</name>
    <value>ClientCredential</value>
    <description>
      Defines the Azure Active Directory OAuth2 access token provider type.
      Supported types are ClientCredential, RefreshToken, MSI, DeviceCode,
      and Custom.
      The ClientCredential type requires properties fs.adl.oauth2.client.id,
      fs.adl.oauth2.credential, and fs.adl.oauth2.refresh.url.
      The RefreshToken type requires properties fs.adl.oauth2.client.id and
      fs.adl.oauth2.refresh.token.
      The MSI type reads the optional property fs.adl.oauth2.msi.port, if
      specified.
      The DeviceCode type requires property
      fs.adl.oauth2.devicecode.clientapp.id.
      The Custom type requires property fs.adl.oauth2.access.token.provider.
    </description>
  </property>
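As a concrete illustration of the ClientCredential type described above, an override file could set the three required properties together. This is only a sketch: the client id, credential, and token endpoint below are placeholders, not working values.

```xml
<!-- Hypothetical ClientCredential setup; every value is a placeholder. -->
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>
<property>
  <name>fs.adl.oauth2.client.id</name>
  <value>00000000-0000-0000-0000-000000000000</value>
</property>
<property>
  <name>fs.adl.oauth2.credential</name>
  <value>REPLACE-WITH-ACCESS-KEY</value>
</property>
<property>
  <name>fs.adl.oauth2.refresh.url</name>
  <value>https://login.microsoftonline.com/TENANT-ID/oauth2/token</value>
</property>
```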
  <property>
    <name>fs.adl.oauth2.client.id</name>
    <value></value>
    <description>The OAuth2 client id.</description>
  </property>
  <property>
    <name>fs.adl.oauth2.credential</name>
    <value></value>
    <description>The OAuth2 access key.</description>
  </property>
  <property>
    <name>fs.adl.oauth2.refresh.url</name>
    <value></value>
    <description>The OAuth2 token endpoint.</description>
  </property>
  <property>
    <name>fs.adl.oauth2.refresh.token</name>
    <value></value>
    <description>The OAuth2 refresh token.</description>
  </property>
9 L" S6 g9 t- Q/ a" Y; y  <property>
& r0 y4 Q" y: @- F7 e/ q    <name>fs.adl.oauth2.access.token.provider</name>. ^( G4 y: b+ n% f! y( i0 E
    <value></value>
7 {# b3 O2 h0 t    <description>  I4 o1 \7 e$ z9 E' }9 F, {7 j
      The class name of the OAuth2 access token provider.! g4 B  F1 }5 L& J
    </description>1 q0 v0 h' X3 K9 A
  </property># s1 w8 H; d& A
  <property>
5 U2 C" B; Z5 }. H    <name>fs.adl.oauth2.msi.port</name>3 i, f! I; e$ d, g5 f( p
    <value></value>
# g8 g5 Y' {# U    <description>1 ?- W# B4 b+ g' i
      The localhost port for the MSI token service. This is the port specified8 y" `/ H: y7 P# C8 a* l" b
      when creating the Azure VM. The default, if this setting is not specified,) |9 u5 X7 u8 h
      is 50342.
% X& n2 T% n$ P1 D1 F' B      Used by MSI token provider.
: i3 B% \$ b! t/ z; j% @8 ^    </description>! ?" b" W9 d7 t
  </property>3 f4 S0 w. I; R$ q1 s
  <property>6 R& |/ _1 q# o5 z$ K; g
    <name>fs.adl.oauth2.devicecode.clientapp.id</name>
# p0 X) k+ y5 F6 X4 i    <value></value>. _0 X8 V6 `  I% [1 a
    <description>
' L% X' i5 Z; b2 v& y      The app id of the AAD native app in whose context the auth request3 T# J3 q1 I( ?2 Y7 _" `
      should be made.* m7 Z7 x( c$ L
      Used by DeviceCode token provider.* u" ]! R0 q2 K: _
    </description>
+ E( |1 X6 @5 ]4 D  n  </property>+ r; z4 o' _1 w/ G
  <!-- Azure Data Lake File System Configurations Ends Here -->
  <property>
    <name>hadoop.caller.context.enabled</name>
    <value>false</value>
    <description>When the feature is enabled, additional fields are written into
      name-node audit log records for auditing coarse granularity operations.
    </description>
  </property>
  <property>
    <name>hadoop.caller.context.max.size</name>
    <value>128</value>
    <description>The maximum bytes a caller context string can have. If the
      passed caller context is longer than this maximum, the client will
      truncate it before sending to the server. Note that the server may have a
      different maximum size, and will truncate the caller context to the
      maximum size it allows.
    </description>
  </property>
  <property>
    <name>hadoop.caller.context.signature.max.size</name>
    <value>40</value>
    <description>
      The caller's signature (optional) is for offline validation. If the
      signature exceeds the maximum allowed bytes in the server, the caller
      context will be abandoned, in which case the caller context will not be
      recorded in audit logs.
    </description>
  </property>
<!-- SequenceFile's Sorter properties -->
  <property>
    <name>seq.io.sort.mb</name>
    <value>100</value>
    <description>
      The total amount of buffer memory to use while sorting files,
      while using SequenceFile.Sorter, in megabytes. By default,
      gives each merge stream 1MB, which should minimize seeks.
    </description>
  </property>
  <property>
    <name>seq.io.sort.factor</name>
    <value>100</value>
    <description>
      The number of streams to merge at once while sorting
      files using SequenceFile.Sorter.
      This determines the number of open file handles.
    </description>
  </property>
  <property>
    <name>hadoop.zk.address</name>
    <!--value>127.0.0.1:2181</value-->
    <description>Host:Port of the ZooKeeper server to be used.
    </description>
  </property>
  <property>
    <name>hadoop.zk.num-retries</name>
    <value>1000</value>
    <description>Number of tries to connect to ZooKeeper.</description>
  </property>
  <property>
    <name>hadoop.zk.retry-interval-ms</name>
    <value>1000</value>
    <description>Retry interval in milliseconds when connecting to ZooKeeper.
    </description>
  </property>
  <property>
    <name>hadoop.zk.timeout-ms</name>
    <value>10000</value>
    <description>ZooKeeper session timeout in milliseconds. Session expiration
    is managed by the ZooKeeper cluster itself, not by the client. This value is
    used by the cluster to determine when the client's session expires.
    Expiration happens when the cluster does not hear from the client within
    the specified session timeout period (i.e. no heartbeat).</description>
  </property>
  <property>
    <name>hadoop.zk.acl</name>
    <value>world:anyone:rwcda</value>
    <description>ACLs to be used for ZooKeeper znodes.</description>
  </property>
  <property>
    <name>hadoop.zk.auth</name>
    <description>
        Specify the auths to be used for the ACLs specified in hadoop.zk.acl.
        This takes a comma-separated list of authentication mechanisms, each of
        the form 'scheme:auth' (the same syntax used for the 'addAuth' command
        in the ZK CLI).
    </description>
  </property>
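To see how hadoop.zk.acl and hadoop.zk.auth work together, here is a sketch using ZooKeeper's standard digest scheme. The user name, password, and digest below are placeholders; the ACL id would be the base64-encoded SHA-1 digest that ZooKeeper computes for the user:password pair.

```xml
<!-- Placeholder example: restrict Hadoop's znodes to one digest user. -->
<property>
  <name>hadoop.zk.acl</name>
  <value>digest:hadoop-user:BASE64-SHA1-DIGEST=:rwcda</value>
</property>
<property>
  <name>hadoop.zk.auth</name>
  <value>digest:hadoop-user:hadoop-pass</value>
</property>
```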
  <property>
    <name>hadoop.system.tags</name>
    <value>YARN,HDFS,NAMENODE,DATANODE,REQUIRED,SECURITY,KERBEROS,PERFORMANCE,CLIENT
      ,SERVER,DEBUG,DEPRICATED,COMMON,OPTIONAL</value>
    <description>
      System tags to group related properties together.
    </description>
  </property>
  <property>
    <name>ipc.client.bind.wildcard.addr</name>
    <value>false</value>
    <description>When set to true, clients will bind sockets to the wildcard
      address (i.e. 0.0.0.0).
    </description>
  </property>
</configuration>
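The defaults above are not meant to be edited in place; site-specific overrides go into core-site.xml, which Hadoop merges over core-default.xml at load time. A minimal sketch, with hypothetical ZooKeeper hostnames:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Hypothetical override: point Hadoop at a real ZooKeeper ensemble
       instead of the commented-out localhost default. -->
  <property>
    <name>hadoop.zk.address</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>
</configuration>
```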
2. hdfs-default.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!-- Do not modify this file directly.  Instead, copy entries that you -->
<!-- wish to modify from this file into hdfs-site.xml and change them -->
<!-- there.  If hdfs-site.xml does not already exist, create it.      -->
<configuration>
<property>
  <name>hadoop.hdfs.configuration.version</name>
  <value>1</value>
  <description>version of this configuration file</description>
</property>
<property>
  <name>dfs.namenode.rpc-address</name>
  <value></value>
  <description>
    RPC address that handles all client requests. In the case of HA/Federation
    where multiple namenodes exist, the name service id is added to the name,
    e.g. dfs.namenode.rpc-address.ns1 or
    dfs.namenode.rpc-address.EXAMPLENAMESERVICE.
    The value of this property will take the form of nn-host1:rpc-port. The
    NameNode's default RPC port is 8020.
  </description>
</property>
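For the HA/Federation naming convention described above, a two-NameNode nameservice would be configured along these lines in hdfs-site.xml; the nameservice id ns1 and the hostnames are hypothetical:

```xml
<!-- Hypothetical HA layout: nameservice ns1 with NameNodes nn1 and nn2. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
```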
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value></value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.servicerpc-address</name>
  <value></value>
  <description>
    RPC address for HDFS Services communication. BackupNode, Datanodes and all
    other services should be connecting to this address if it is configured.
    In the case of HA/Federation where multiple namenodes exist, the name
    service id is added to the name, e.g. dfs.namenode.servicerpc-address.ns1 or
    dfs.namenode.servicerpc-address.EXAMPLENAMESERVICE.
    The value of this property will take the form of nn-host1:rpc-port.
    If the value of this property is unset, the value of
    dfs.namenode.rpc-address will be used as the default.
  </description>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value></value>
  <description>
    The actual address the service RPC server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    dfs.namenode.servicerpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.lifeline.rpc-address</name>
  <value></value>
  <description>
    NameNode RPC lifeline address.  This is an optional separate RPC address
    that can be used to isolate health checks and liveness to protect against
    resource exhaustion in the main RPC handler pool.  In the case of
    HA/Federation where multiple NameNodes exist, the name service ID is added
    to the name e.g. dfs.namenode.lifeline.rpc-address.ns1.  The value of this
    property will take the form of nn-host1:rpc-port.  If this property is not
    defined, then the NameNode will not start a lifeline RPC server.  By
    default, the property is not defined.
  </description>
</property>
) G5 I" |- Z: k<property>
3 K" [- G7 f' w* N* o  <name>dfs.namenode.lifeline.rpc-bind-host</name>
$ d* [5 M% `2 F) c! i8 L  <value></value>
  n1 w5 I1 J( C0 ]6 c4 m( z, I: P  <description>
4 y0 P* a+ D8 I8 l8 b    The actual address the lifeline RPC server will bind to.  If this optional' ~8 i+ `& Z  V+ b- R/ m- o
    address is set, it overrides only the hostname portion of1 \$ H& D$ b& j; O/ }; q- E+ i
    dfs.namenode.lifeline.rpc-address.  It can also be specified per name node
/ h. h# p9 O- t    or name service for HA/Federation.  This is useful for making the name node
3 O, x5 g$ w6 s5 m! F  E    listen on all interfaces by setting it to 0.0.0.0.
" n1 M. e4 f8 m, N5 q  </description>- h- D; o) B! \" U. M6 F- m1 J% h
</property>9 B* l2 e8 [3 Y! {" i4 X* p' V
<property>
7 E, ?. R) S" ]; T4 y" Q  <name>dfs.namenode.secondary.http-address</name>+ v& r. d9 S/ X0 A" R7 W5 o, o
  <value>0.0.0.0:9868</value>
5 Z, D$ B; s' F2 L- N( V- y  <description>
- s) c& @1 Q) o    The secondary namenode http server address and port.* i0 \/ G3 u5 c/ T' W! S, R  y
  </description>; X3 T3 F' D# z+ m- l- l! d
</property>$ k* }8 F& R! Y4 J
<property>
7 V. b% G: B; K/ A, B3 M  <name>dfs.namenode.secondary.https-address</name>, C% H2 P9 E; w5 W9 @6 V" b% ~
  <value>0.0.0.0:9869</value>, ^2 x# t$ G) X% o, L
  <description>
) M1 O5 ]9 w9 w: C6 j    The secondary namenode HTTPS server address and port.
4 U  ]: m( B* R) R5 N  </description>
$ o; z/ l# A( |* Z7 l</property>
, O0 f4 {% x% P* q<property>
# F) P9 E5 f9 R  <name>dfs.datanode.address</name>
& x6 N7 K! ]% B/ T  f- K) r  <value>0.0.0.0:9866</value>
" s6 n7 {" S5 }; O  <description>% k( ~* T8 O" G6 V% t# C
    The datanode server address and port for data transfer.- _+ p7 n3 I0 L& f# f# q
  </description>* g8 h9 i6 h; B& n" [! T$ y
</property>
1 D! {- S/ |0 B' I: I( D+ e<property>
* q' w3 E& E5 z# x- X+ W  <name>dfs.datanode.http.address</name>- u) f- J+ x+ \. ^+ e2 O
  <value>0.0.0.0:9864</value>
0 p. K6 H9 }' v  <description>
' f# \" o. S. v5 Y4 V    The datanode http server address and port.: W# x" O+ Z- q
  </description>
" e1 \* A6 j8 |7 d& C</property>& H. h5 v8 `( L* x9 @! T, x
<property>
6 e+ ~$ U2 k6 [& g9 _  <name>dfs.datanode.ipc.address</name>. k, X" r7 {) ~! P8 J
  <value>0.0.0.0:9867</value>; l$ h9 h3 m0 C6 }2 F% L
  <description>' j1 k1 A( q2 w. H& d0 C. }" N
    The datanode ipc server address and port.. c; }* ]& ~) I" N4 |& x8 D
  </description>% I% e2 ?8 I# b" Z! V4 p$ D
</property>
  F: }, o3 N& @( [7 j1 x3 x<property>
: U' Q5 y( L1 j- p& n  <name>dfs.datanode.http.internal-proxy.port</name>
$ \  p1 g7 V# q  W! _  d3 r2 ]  <value>0</value>
0 K* V7 g1 a+ H2 y: G# b: u  s  <description>) B1 [$ A  b5 E2 _$ m- r' t
    The datanode's internal web proxy port.
( q, z% r/ ~4 e6 K9 @    By default it selects a random port available in runtime.
$ _7 F: U, Y3 N; ~  </description>
: N/ v4 @! f/ a</property>6 s1 m9 ?* i1 w, ]) W  J
<property>
  N# R, |3 r6 U# ]  <name>dfs.datanode.handler.count</name># g# O, X: s8 Y! I) u% G, A* b3 U
  <value>10</value>
% K6 K" c, E9 a# v1 m: N0 s  <description>The number of server threads for the datanode.</description>
" ]4 N. b+ F& e1 y</property>
/ @! X9 E& }4 y. x" E* s/ f<property>5 J' g) W2 U9 B7 B
  <name>dfs.namenode.http-address</name>" ^; v7 w) i5 x  Z4 t3 C: F- g7 z3 {
  <value>0.0.0.0:9870</value>' o8 I2 L6 b1 t5 F6 u/ t6 S+ _
  <description>
6 ^$ p5 e% k9 C6 R! n- j$ v1 U    The address and the base port where the dfs namenode web ui will listen on.
. h' c$ V+ T4 e" i+ `8 |' Y  </description>. n! E1 ~3 q/ ~: x$ D& _& X6 a) l
</property>, n( {: }- y) _+ v5 \& F
<property>0 q5 o6 b' z/ U4 z7 H
  <name>dfs.namenode.http-bind-host</name>5 _/ d# l, F3 Y2 b3 I
  <value></value>
# x3 d# E; F, E( i  <description>
. R( [+ u$ T& l' T    The actual address the HTTP server will bind to. If this optional address
9 @4 k& |- Q1 ~$ x( a8 p- c% e    is set, it overrides only the hostname portion of dfs.namenode.http-address.6 }% J! y0 k' {2 o; r0 X
    It can also be specified per name node or name service for HA/Federation.
- }% z3 p3 j" n1 y+ _% l2 k    This is useful for making the name node HTTP server listen on all) }! {( e' _9 h7 x: p
    interfaces by setting it to 0.0.0.0.
# w) {, J6 }; j  </description>
, H+ }6 m* [4 c! y3 x</property>+ |2 y! t, N' F5 O
<property>9 E, e; q0 S8 T$ i
  <name>dfs.namenode.heartbeat.recheck-interval</name>- V- S: B& n4 H! n( j8 f
  <value>300000</value>2 f4 n+ h( y) {
  <description>
" I- y- n! T* }: L) Y9 `7 h! t    This time decides the interval to check for expired datanodes.! G% s3 z$ {$ L6 L/ c
    With this value and dfs.heartbeat.interval, the interval of
+ u# X, V/ |5 x    deciding the datanode is stale or not is also calculated.
6 @7 n6 k5 `% _6 U6 w/ w    The unit of this configuration is millisecond.
4 M3 x$ X+ V' n9 `  </description>3 ?; P, K) ^8 J9 i, r$ O% \6 U
</property>
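The dead-node timeout derived from this setting and dfs.heartbeat.interval can be sketched as follows. This is an illustration only (the function name is made up); the 2-recheck-plus-10-heartbeats formula matches how the NameNode computes the expiry window with these defaults:

```python
# Sketch: how the NameNode derives the datanode expiry window from
# dfs.namenode.heartbeat.recheck-interval (milliseconds) and
# dfs.heartbeat.interval (seconds).
def heartbeat_expire_interval_ms(recheck_interval_ms=300000, heartbeat_interval_s=3):
    # A datanode is declared dead after 2 recheck intervals plus 10 heartbeats.
    return 2 * recheck_interval_ms + 10 * 1000 * heartbeat_interval_s

print(heartbeat_expire_interval_ms())  # 630000 ms, i.e. 10.5 minutes
```

With the default values this works out to 630000 ms, which is why a datanode only disappears from the live list roughly ten minutes after its last heartbeat.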
<property>
  <name>dfs.http.policy</name>
  <value>HTTP_ONLY</value>
  <description>Decide if HTTPS(SSL) is supported on HDFS
    This configures the HTTP endpoint for HDFS daemons:
      The following values are supported:
      - HTTP_ONLY : Service is provided only on http
      - HTTPS_ONLY : Service is provided only on https
      - HTTP_AND_HTTPS : Service is provided both on http and https
  </description>
</property>
<property>
  <name>dfs.client.https.need-auth</name>
  <value>false</value>
  <description>Whether SSL client certificate authentication is required
  </description>
</property>
<property>
  <name>dfs.client.cached.conn.retry</name>
  <value>3</value>
  <description>The number of times the HDFS client will pull a socket from the
   cache.  Once this number is exceeded, the client will try to create a new
   socket.
  </description>
</property>
<property>
  <name>dfs.https.server.keystore.resource</name>
  <value>ssl-server.xml</value>
  <description>Resource file from which ssl server keystore
  information will be extracted
  </description>
</property>
<property>
  <name>dfs.client.https.keystore.resource</name>
  <value>ssl-client.xml</value>
  <description>Resource file from which ssl client keystore
  information will be extracted
  </description>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:9865</value>
  <description>The datanode secure http server address and port.</description>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>0.0.0.0:9871</value>
  <description>The namenode secure http server address and port.</description>
</property>
<property>
  <name>dfs.namenode.https-bind-host</name>
  <value></value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.https-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTPS server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
<property>
   <name>dfs.datanode.dns.interface</name>
   <value>default</value>
   <description>
     The name of the Network Interface from which a data node should
     report its IP address. e.g. eth2. This setting may be required for some
     multi-homed nodes where the DataNodes are assigned multiple hostnames
     and it is desirable for the DataNodes to use a non-default hostname.
     Prefer using hadoop.security.dns.interface over
     dfs.datanode.dns.interface.
   </description>
</property>
<property>
  <name>dfs.datanode.dns.nameserver</name>
  <value>default</value>
  <description>
    The host name or IP address of the name server (DNS) which a DataNode
    should use to determine its own host name.
    Prefer using hadoop.security.dns.nameserver over
    dfs.datanode.dns.nameserver.
  </description>
</property>
<property>
  <name>dfs.namenode.backup.address</name>
  <value>0.0.0.0:50100</value>
  <description>
    The backup node server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>dfs.namenode.backup.http-address</name>
  <value>0.0.0.0:50105</value>
  <description>
    The backup node http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>dfs.namenode.redundancy.considerLoad</name>
  <value>true</value>
  <description>Decide if chooseTarget considers the target's load or not
  </description>
</property>
  <property>
    <name>dfs.namenode.redundancy.considerLoad.factor</name>
    <value>2.0</value>
    <description>The factor by which a node's load can exceed the average
      before being rejected for writes, only if considerLoad is true.
    </description>
  </property>
<property>
  <name>dfs.default.chunk.view.size</name>
  <value>32768</value>
  <description>The number of bytes to view for a file on the browser.
  </description>
</property>
<property>
  <name>dfs.datanode.du.reserved.calculator</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReservedSpaceCalculator$ReservedSpaceCalculatorAbsolute</value>
  <description>Determines the class of ReservedSpaceCalculator to be used for
    calculating disk space reserved for non-HDFS data. The default calculator is
    ReservedSpaceCalculatorAbsolute which will use dfs.datanode.du.reserved
    for a static reserved number of bytes. ReservedSpaceCalculatorPercentage
    will use dfs.datanode.du.reserved.pct to calculate the reserved number
    of bytes based on the size of the storage. ReservedSpaceCalculatorConservative and
    ReservedSpaceCalculatorAggressive will use their combination, Conservative will use
    maximum, Aggressive minimum. For more details see ReservedSpaceCalculator.
  </description>
</property>
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.
      Specific storage type based reservation is also supported. The property can be followed with
      corresponding storage types ([ssd]/[disk]/[archive]/[ram_disk]) for cluster with heterogeneous storage.
      For example, reserved space for RAM_DISK storage can be configured using property
      'dfs.datanode.du.reserved.ram_disk'. If specific storage type reservation is not configured
      then dfs.datanode.du.reserved will be used.
  </description>
</property>
<property>
  <name>dfs.datanode.du.reserved.pct</name>
  <value>0</value>
  <description>Reserved space in percentage. Read dfs.datanode.du.reserved.calculator to see
    when this takes effect. The actual number of bytes reserved will be calculated by using the
    total capacity of the data directory in question. Specific storage type based reservation
    is also supported. The property can be followed with corresponding storage types
    ([ssd]/[disk]/[archive]/[ram_disk]) for cluster with heterogeneous storage.
    For example, reserved percentage space for RAM_DISK storage can be configured using property
    'dfs.datanode.du.reserved.pct.ram_disk'. If specific storage type reservation is not configured
    then dfs.datanode.du.reserved.pct will be used.
  </description>
</property>
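The percentage-based reservation described above reduces to a one-line calculation over the volume's total capacity. A minimal sketch (the function name is hypothetical; the real logic lives in ReservedSpaceCalculatorPercentage):

```python
# Sketch: reserve pct% of a volume's total capacity for non-HDFS data,
# as the percentage-based calculator does.
def reserved_bytes_pct(total_capacity_bytes, reserved_pct):
    return int(total_capacity_bytes * reserved_pct / 100)

# A 4 TB volume with dfs.datanode.du.reserved.pct = 10 reserves 400 GB:
print(reserved_bytes_pct(4_000_000_000_000, 10))  # 400000000000
```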
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table(fsimage).  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
</property>
<property>
  <name>dfs.namenode.name.dir.restore</name>
  <value>false</value>
  <description>Set to true to enable NameNode to attempt recovering a
      previously failed dfs.namenode.name.dir. When enabled, a recovery of any
      failed directory is attempted during checkpoint.</description>
</property>
<property>
  <name>dfs.namenode.fs-limits.max-component-length</name>
  <value>255</value>
  <description>Defines the maximum number of bytes in UTF-8 encoding in each
      component of a path.  A value of 0 will disable the check.</description>
</property>
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1048576</value>
  <description>Defines the maximum number of items that a directory may
      contain. Cannot set the property to a value less than 1 or more than
      6400000.</description>
</property>
<property>
  <name>dfs.namenode.fs-limits.min-block-size</name>
  <value>1048576</value>
  <description>Minimum block size in bytes, enforced by the Namenode at create
      time. This prevents the accidental creation of files with tiny block
      sizes (and thus many blocks), which can degrade
      performance.</description>
</property>
<property>
    <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
    <value>10000</value>
    <description>Maximum number of blocks per file, enforced by the Namenode on
        write. This prevents the creation of extremely large files which can
        degrade performance.</description>
</property>
<property>
  <name>dfs.namenode.edits.dir</name>
  <value>${dfs.namenode.name.dir}</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the transaction (edits) file. If this is a comma-delimited list
      of directories then the transaction file is replicated in all of the
      directories, for redundancy. Default value is same as dfs.namenode.name.dir
  </description>
</property>
<property>
  <name>dfs.namenode.edits.dir.required</name>
  <value></value>
  <description>This should be a subset of dfs.namenode.edits.dir,
      to ensure that the transaction (edits) file
      in these places is always up-to-date.
  </description>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value></value>
  <description>A directory on shared storage between the multiple namenodes
  in an HA cluster. This directory will be written by the active and read
  by the standby in order to keep the namespaces synchronized. This directory
  does not need to be listed in dfs.namenode.edits.dir above. It should be
  left empty in a non-HA cluster.
  </description>
</property>
<property>
  <name>dfs.namenode.edits.journal-plugin.qjournal</name>
  <value>org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager</value>
</property>
<property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>supergroup</value>
  <description>The name of the group of super-users.
    The value should be a single group name.
  </description>
</property>
<property>
   <name>dfs.cluster.administrators</name>
   <value></value>
   <description>ACL for the admins, this configuration is used to control
     who can access the default servlets in the namenode, etc. The value
     should be a comma separated list of users and groups. The user list
     comes first and is separated by a space followed by the group list,
     e.g. "user1,user2 group1,group2". Both users and groups are optional,
     so "user1", " group1", "", "user1 group1", "user1,user2 group1,group2"
     are all valid (note the leading space in " group1"). '*' grants access
     to all users and groups, e.g. '*', '* ' and ' *' are all valid.
   </description>
</property>
<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>false</value>
  <description>
    Set to true to enable support for HDFS ACLs (Access Control Lists).  By
    default, ACLs are disabled.  When ACLs are disabled, the NameNode rejects
    all RPCs related to setting or getting ACLs.
  </description>
</property>
  <property>
    <name>dfs.namenode.posix.acl.inheritance.enabled</name>
    <value>true</value>
    <description>
      Set to true to enable POSIX style ACL inheritance. When it is enabled
      and the create request comes from a compatible client, the NameNode
      will apply default ACLs from the parent directory to the create mode
      and ignore the client umask. If no default ACL is found, it will apply
      the client umask.
    </description>
  </property>
<property>
  <name>dfs.namenode.lazypersist.file.scrub.interval.sec</name>
  <value>300</value>
  <description>
    The NameNode periodically scans the namespace for LazyPersist files with
    missing blocks and unlinks them from the namespace. This configuration key
    controls the interval between successive scans. If this value is set to 0,
    the file scrubber is disabled.
  </description>
</property>
<property>
  <name>dfs.block.access.token.enable</name>
  <value>false</value>
  <description>
    If "true", access tokens are used as capabilities for accessing datanodes.
    If "false", no access tokens are checked on accessing datanodes.
  </description>
</property>
<property>
  <name>dfs.block.access.key.update.interval</name>
  <value>600</value>
  <description>
    Interval in minutes at which namenode updates its access keys.
  </description>
</property>
<property>
  <name>dfs.block.access.token.lifetime</name>
  <value>600</value>
  <description>The lifetime of access tokens in minutes.</description>
</property>
<property>
  <name>dfs.block.access.token.protobuf.enable</name>
  <value>false</value>
  <description>
    If "true", block tokens are written using Protocol Buffers.
    If "false", block tokens are written using Legacy format.
  </description>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/data</value>
  <description>Determines where on the local filesystem a DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices. The directories should be tagged
  with corresponding storage types ([SSD]/[DISK]/[ARCHIVE]/[RAM_DISK]) for HDFS
  storage policies. The default storage type will be DISK if the directory does
  not have a storage type tagged explicitly. Directories that do not exist will
  be created if local filesystem permission allows.
  </description>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
  <description>Permissions for the directories on the local filesystem where
  the DFS data node stores its blocks. The permissions can either be octal or
  symbolic.</description>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
<property>
  <name>dfs.replication.max</name>
  <value>512</value>
  <description>Maximal block replication.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
  <description>Minimal block replication.
  </description>
</property>
<property>
  <name>dfs.namenode.maintenance.replication.min</name>
  <value>1</value>
  <description>Minimal live block replication in existence of maintenance mode.
  </description>
</property>
<property>
  <name>dfs.namenode.safemode.replication.min</name>
  <value></value>
  <description>
      a separate minimum replication factor for calculating safe block count.
      This is an expert level setting.
      Setting this lower than the dfs.namenode.replication.min
      is not recommended and/or dangerous for production setups.
      When it's not set it takes value from dfs.namenode.replication.min
  </description>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
  <description>
      The default block size for new files, in bytes.
      You can use the following suffix (case insensitive):
      k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.),
      Or provide complete size in bytes (such as 134217728 for 128 MB).
  </description>
</property>
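An illustrative override for this property in a site's hdfs-site.xml, using the suffix form described above instead of a raw byte count (the 256m value here is just an example, not a recommendation):

```xml
<!-- Example: set the default block size to 256 MB using the "m" suffix,
     equivalent to the raw byte value 268435456. -->
<property>
  <name>dfs.blocksize</name>
  <value>256m</value>
</property>
```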
<property>
  <name>dfs.client.block.write.retries</name>
  <value>3</value>
  <description>The number of retries for writing blocks to the data nodes,
  before we signal failure to the application.
  </description>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
  <description>
    If there is a datanode/network failure in the write pipeline,
    DFSClient will try to remove the failed datanode from the pipeline
    and then continue writing with the remaining datanodes. As a result,
    the number of datanodes in the pipeline is decreased.  The feature is
    to add new datanodes to the pipeline.
    This is a site-wide property to enable/disable the feature.
    When the cluster size is extremely small, e.g. 3 nodes or less, cluster
    administrators may want to set the policy to NEVER in the default
    configuration file or disable this feature.  Otherwise, users may
    experience an unusually high rate of pipeline failures since it is
    impossible to find new datanodes for replacement.
    See also dfs.client.block.write.replace-datanode-on-failure.policy
  </description>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
  <description>
    This property is used only if the value of
    dfs.client.block.write.replace-datanode-on-failure.enable is true.
    ALWAYS: always add a new datanode when an existing datanode is removed.
    NEVER: never add a new datanode.
    DEFAULT:
      Let r be the replication number.
      Let n be the number of existing datanodes.
      Add a new datanode only if r is greater than or equal to 3 and either
      (1) floor(r/2) is greater than or equal to n; or
      (2) r is greater than n and the block is hflushed/appended.
  </description>
</property>
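The DEFAULT policy condition above can be sketched directly from its description (the function name is hypothetical; r is the replication factor, n the datanodes still in the pipeline):

```python
# Sketch of the DEFAULT replace-datanode-on-failure policy:
# request a replacement datanode only when the pipeline has shrunk enough
# to matter for a file that is actually replicated.
def should_add_replacement(r, n, is_hflushed_or_appended):
    if r < 3:
        return False
    # (1) half or more of the replicas are gone, or
    # (2) the pipeline is short of r and data was already hflushed/appended.
    return (r // 2 >= n) or (r > n and is_hflushed_or_appended)

# Replication 3 with a single surviving datanode triggers a replacement:
print(should_add_replacement(3, 1, False))  # True
# Replication 3 with 2 survivors and no hflush does not:
print(should_add_replacement(3, 2, False))  # False
```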
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>false</value>
  <description>
    This property is used only if the value of
    dfs.client.block.write.replace-datanode-on-failure.enable is true.
    Best effort means that the client will try to replace a failed datanode
    in the write pipeline (provided that the policy is satisfied); however, it
    continues the write operation in case the datanode replacement also
    fails.
    Suppose the datanode replacement fails.
    false: An exception should be thrown so that the write will fail.
    true : The write should be resumed with the remaining datanodes.
    Note that setting this property to true allows writing to a pipeline
    with a smaller number of datanodes.  As a result, it increases the
    probability of data loss.
  </description>
</property>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.min-replication</name>
    <value>0</value>
    <description>
      The minimum number of replications that are needed to avoid failing
      the write pipeline if new datanodes can not be found to replace
      failed datanodes (could be due to network failure) in the write pipeline.
      If the number of the remaining datanodes in the write pipeline is greater
      than or equal to this property value, continue writing to the remaining nodes.
      Otherwise throw an exception.
      If this is set to 0, an exception will be thrown when a replacement
      can not be found.
      See also dfs.client.block.write.replace-datanode-on-failure.policy
    </description>
  </property>
<property>
  <name>dfs.blockreport.intervalMsec</name>
  <value>21600000</value>
  <description>Determines block reporting interval in milliseconds.</description>
</property>
<property>
  <name>dfs.blockreport.initialDelay</name>
  <value>0s</value>
  <description>
    Delay for first block report in seconds. Support multiple time unit
    suffix(case insensitive), as described in dfs.heartbeat.interval.
  </description>
</property>
<property>
    <name>dfs.blockreport.split.threshold</name>
    <value>1000000</value>
    <description>If the number of blocks on the DataNode is below this
    threshold then it will send block reports for all Storage Directories
    in a single message.
    If the number of blocks exceeds this threshold then the DataNode will
    send block reports for each Storage Directory in separate messages.
    Set to zero to always split.
    </description>
</property>
<property>
  <name>dfs.namenode.max.full.block.report.leases</name>
  <value>6</value>
  <description>The maximum number of leases for full block reports that the
    NameNode will issue at any given time.  This prevents the NameNode from
    being flooded with full block reports that use up all the RPC handler
    threads.  This number should never be more than the number of RPC handler
    threads or less than 1.
  </description>
</property>
<property>
  <name>dfs.namenode.full.block.report.lease.length.ms</name>
  <value>300000</value>
  <description>
    The number of milliseconds that the NameNode will wait before invalidating
    a full block report lease.  This prevents a crashed DataNode from
    permanently using up a full block report lease.
  </description>
</property>
<property>
  <name>dfs.datanode.directoryscan.interval</name>
  <value>21600s</value>
  <description>Interval in seconds for Datanode to scan data directories and
  reconcile the difference between blocks in memory and on the disk.
  Support multiple time unit suffix(case insensitive), as described
  in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.datanode.directoryscan.threads</name>
  <value>1</value>
  <description>The number of threads in the threadpool used to compile
  reports for volumes in parallel.
  </description>
</property>
<property>
  <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
  <value>1000</value>
  <description>The report compilation threads are limited to only running for
  a given number of milliseconds per second, as configured by the
  property. The limit is taken per thread, not in aggregate, e.g. setting
  a limit of 100ms for 4 compiler threads will result in each thread being
  limited to 100ms, not 25ms.
  Note that the throttle does not interrupt the report compiler threads, so the
  actual running time of the threads per second will typically be somewhat
  higher than the throttle limit, usually by no more than 20%.
  Setting this limit to 1000 disables compiler thread throttling. Only
  values between 1 and 1000 are valid. Setting an invalid value will result
  in the throttle being disabled and an error message being logged. 1000 is
  the default setting.
  </description>
</property>
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3s</value>
  <description>
    Determines datanode heartbeat interval in seconds.
    Can use the following suffix (case insensitive):
    ms(millis), s(sec), m(min), h(hour), d(day)
    to specify the time (such as 2s, 2m, 1h, etc.).
    Or provide complete number in seconds (such as 30 for 30 seconds).
  </description>
</property>
<property>
  <name>dfs.datanode.lifeline.interval.seconds</name>
  <value></value>
  <description>
    Sets the interval in seconds between sending DataNode Lifeline Protocol
    messages from the DataNode to the NameNode.  The value must be greater than
    the value of dfs.heartbeat.interval.  If this property is not defined, then
    the default behavior is to calculate the interval as 3x the value of
    dfs.heartbeat.interval.  Note that normal heartbeat processing may cause the
    DataNode to postpone sending lifeline messages if they are not required.
    Under normal operations with speedy heartbeat processing, it is possible
    that no lifeline messages will need to be sent at all.  This property has no
    effect if dfs.namenode.lifeline.rpc-address is not defined.
  </description>
</property>
<property>
  <name>dfs.namenode.handler.count</name>
  <value>10</value>
  <description>The number of Namenode RPC server threads that listen to
  requests from clients.
  If dfs.namenode.servicerpc-address is not configured then
  Namenode RPC server threads listen to requests from all nodes.
  </description>
</property>
<property>
  <name>dfs.namenode.service.handler.count</name>
  <value>10</value>
  <description>The number of Namenode RPC server threads that listen to
  requests from DataNodes and from all other non-client nodes.
  dfs.namenode.service.handler.count will be valid only if
  dfs.namenode.servicerpc-address is configured.
  </description>
</property>
<property>
  <name>dfs.namenode.lifeline.handler.ratio</name>
  <value>0.10</value>
  <description>
    A ratio applied to the value of dfs.namenode.handler.count, which then
    provides the number of RPC server threads the NameNode runs for handling the
    lifeline RPC server.  For example, if dfs.namenode.handler.count is 100, and
    dfs.namenode.lifeline.handler.ratio is 0.10, then the NameNode starts
    100 * 0.10 = 10 threads for handling the lifeline RPC server.  It is common
    to tune the value of dfs.namenode.handler.count as a function of the number
    of DataNodes in a cluster.  Using this property allows for the lifeline RPC
    server handler threads to be tuned automatically without needing to touch a
    separate property.  Lifeline message processing is lightweight, so it is
    expected to require many fewer threads than the main NameNode RPC server.
    This property is not used if dfs.namenode.lifeline.handler.count is defined,
    which sets an absolute thread count.  This property has no effect if
    dfs.namenode.lifeline.rpc-address is not defined.
  </description>
</property>
<property>
  <name>dfs.namenode.lifeline.handler.count</name>
  <value></value>
  <description>
    Sets an absolute number of RPC server threads the NameNode runs for handling
    the DataNode Lifeline Protocol and HA health check requests from ZKFC.  If
    this property is defined, then it overrides the behavior of
    dfs.namenode.lifeline.handler.ratio.  By default, it is not defined.  This
    property has no effect if dfs.namenode.lifeline.rpc-address is not defined.
  </description>
</property>
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
  <description>
    Specifies the percentage of blocks that should satisfy
    the minimal replication requirement defined by dfs.namenode.replication.min.
    Values less than or equal to 0 mean not to wait for any particular
    percentage of blocks before exiting safemode.
    Values greater than 1 will make safe mode permanent.
  </description>
</property>
<property>
  <name>dfs.namenode.safemode.min.datanodes</name>
  <value>0</value>
  <description>
    Specifies the number of datanodes that must be considered alive
    before the name node exits safemode.
    Values less than or equal to 0 mean not to take the number of live
    datanodes into account when deciding whether to remain in safe mode
    during startup.
    Values greater than the number of datanodes in the cluster
    will make safe mode permanent.
  </description>
</property>
<property>
  <name>dfs.namenode.safemode.extension</name>
  <value>30000</value>
  <description>
    Determines extension of safe mode in milliseconds after the threshold level
    is reached.  Support multiple time unit suffix (case insensitive), as
    described in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.namenode.resource.check.interval</name>
  <value>5000</value>
  <description>
    The interval in milliseconds at which the NameNode resource checker runs.
    The checker calculates the number of the NameNode storage volumes whose
    available spaces are more than dfs.namenode.resource.du.reserved, and
    enters safemode if the number becomes lower than the minimum value
    specified by dfs.namenode.resource.checked.volumes.minimum.
  </description>
</property>
<property>
  <name>dfs.namenode.resource.du.reserved</name>
  <value>104857600</value>
  <description>
    The amount of space to reserve/require for a NameNode storage directory
    in bytes. The default is 100MB.
  </description>
</property>
<property>
  <name>dfs.namenode.resource.checked.volumes</name>
  <value></value>
  <description>
    A list of local directories for the NameNode resource checker to check in
    addition to the local edits directories.
  </description>
</property>
<property>
  <name>dfs.namenode.resource.checked.volumes.minimum</name>
  <value>1</value>
  <description>
    The minimum number of redundant NameNode storage volumes required.
  </description>
</property>
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>10m</value>
  <description>
        Specifies the maximum amount of bandwidth that each datanode
        can utilize for the balancing purpose in terms of
        the number of bytes per second. You can use the following
        suffix (case insensitive):
        k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size
        (such as 128k, 512m, 1g, etc.).
        Or provide complete size in bytes (such as 134217728 for 128 MB).
  </description>
</property>
<property>
  <name>dfs.hosts</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  permitted to connect to the namenode. The full pathname of the file
  must be specified.  If the value is empty, all hosts are
  permitted.</description>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value></value>
  <description>Names a file that contains a list of hosts that are
  not permitted to connect to the namenode.  The full pathname of the
  file must be specified.  If the value is empty, no hosts are
  excluded.</description>
</property>
<property>
  <name>dfs.namenode.max.objects</name>
  <value>0</value>
  <description>The maximum number of files, directories and blocks
  dfs supports. A value of zero indicates no limit to the number
  of objects that dfs supports.
  </description>
</property>
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>true</value>
  <description>
    If true (the default), then the namenode requires that a connecting
    datanode's address must be resolved to a hostname.  If necessary, a reverse
    DNS lookup is performed.  All attempts to register a datanode from an
    unresolvable address are rejected.
    It is recommended that this setting be left on to prevent accidental
    registration of datanodes listed by hostname in the excludes file during a
    DNS outage.  Only set this to false in environments where there is no
    infrastructure to support reverse DNS lookup.
  </description>
</property>
<property>
  <name>dfs.namenode.decommission.interval</name>
  <value>30s</value>
  <description>Namenode periodicity in seconds to check if
    decommission or maintenance is complete. Support multiple time unit
    suffix(case insensitive), as described in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.namenode.decommission.blocks.per.interval</name>
  <value>500000</value>
  <description>The approximate number of blocks to process per decommission
    or maintenance interval, as defined in dfs.namenode.decommission.interval.
  </description>
</property>
<property>
  <name>dfs.namenode.decommission.max.concurrent.tracked.nodes</name>
  <value>100</value>
  <description>
    The maximum number of decommission-in-progress or
    entering-maintenance datanodes that will be tracked at one time by
    the namenode. Tracking these datanodes consumes additional NN memory
    proportional to the number of blocks on the datanode. Having a conservative
    limit reduces the potential impact of decommissioning or maintenance of
    a large number of nodes at once.
    A value of 0 means no limit will be enforced.
  </description>
</property>
<property>
  <name>dfs.namenode.redundancy.interval.seconds</name>
  <value>3s</value>
  <description>The periodicity in seconds with which the namenode computes
  low redundancy work for datanodes. Support multiple time unit suffix(case insensitive),
  as described in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for an HDFS file is precise up to this value.
               The default value is 1 hour. Setting a value of 0 disables
               access times for HDFS.
  </description>
</property>
<property>
  <name>dfs.datanode.plugins</name>
  <value></value>
  <description>Comma-separated list of datanode plug-ins to be activated.
  </description>
</property>
<property>
  <name>dfs.namenode.plugins</name>
  <value></value>
  <description>Comma-separated list of namenode plug-ins to be activated.
  </description>
</property>
<property>
  <name>dfs.namenode.block-placement-policy.default.prefer-local-node</name>
  <value>true</value>
  <description>Controls how the default block placement policy places
  the first replica of a block. When true, it will prefer the node where
  the client is running.  When false, it will prefer a node in the same rack
  as the client. Setting to false avoids situations where entire copies of
  large files end up on a single node, thus creating hotspots.
  </description>
</property>
<property>
  <name>dfs.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>dfs.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  dfs.stream-buffer-size.</description>
</property>
<property>
  <name>dfs.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write.</description>
</property>
<property>
  <name>dfs.client.write.exclude.nodes.cache.expiry.interval.millis</name>
  <value>600000</value>
  <description>The maximum period to keep a DN in the excluded nodes list
  at a client. After this period, in milliseconds, the previously excluded node(s) will
  be removed automatically from the cache and will be considered good for block allocations
  again. Useful to lower or raise in situations where you keep a file open for very long
  periods (such as a Write-Ahead-Log (WAL) file) to make the writer tolerant to cluster maintenance
  restarts. Defaults to 10 minutes.</description>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
  <description>Determines where on the local filesystem the DFS secondary
      name node should store the temporary images to merge.
      If this is a comma-delimited list of directories then the image is
      replicated in all of the directories for redundancy.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.edits.dir</name>
  <value>${dfs.namenode.checkpoint.dir}</value>
  <description>Determines where on the local filesystem the DFS secondary
      name node should store the temporary edits to merge.
      If this is a comma-delimited list of directories then the edits are
      replicated in all of the directories for redundancy.
      Default value is same as dfs.namenode.checkpoint.dir
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600s</value>
  <description>
    The number of seconds between two periodic checkpoints.
    Support multiple time unit suffix(case insensitive), as described
    in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value>
  <description>The Secondary NameNode or CheckpointNode will create a checkpoint
  of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless
  of whether 'dfs.namenode.checkpoint.period' has expired.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.check.period</name>
  <value>60s</value>
  <description>The SecondaryNameNode and CheckpointNode will poll the NameNode
  every 'dfs.namenode.checkpoint.check.period' seconds to query the number
  of uncheckpointed transactions. Support multiple time unit suffix(case insensitive),
  as described in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.max-retries</name>
  <value>3</value>
  <description>The SecondaryNameNode retries failed checkpointing. If the
  failure occurs while loading fsimage or replaying edits, the number of
  retries is limited by this variable.
  </description>
</property>
<property>
  <name>dfs.namenode.checkpoint.check.quiet-multiplier</name>
  <value>1.5</value>
  <description>
    Used to calculate the amount of time between retries when in the 'quiet' period
    for creating checkpoints (active namenode already has an up-to-date image from another
    checkpointer), so we wait a multiplier of the dfs.namenode.checkpoint.check.period before
    retrying the checkpoint because another node likely is already managing the checkpoints,
    allowing us to save bandwidth to transfer checkpoints that don't need to be used.
  </description>
</property>
<property>
  <name>dfs.namenode.num.checkpoints.retained</name>
  <value>2</value>
  <description>The number of image checkpoint files (fsimage_*) that will be retained by
  the NameNode and Secondary NameNode in their storage directories. All edit
  logs (stored on edits_* files) necessary to recover an up-to-date namespace from the oldest retained
  checkpoint will also be retained.
  </description>
</property>
<property>
  <name>dfs.namenode.num.extra.edits.retained</name>
  <value>1000000</value>
  <description>The number of extra transactions which should be retained
  beyond what is minimally necessary for a NN restart.
  It does not translate directly to file's age, or the number of files kept,
  but to the number of transactions (here "edits" means transactions).
  One edit file may contain several transactions (edits).
  During checkpoint, NameNode will identify the total number of edits to retain as extra by
  checking the latest checkpoint transaction value, subtracted by the value of this property.
  Then, it scans edits files to identify the older ones that don't include the computed range of
  retained transactions that are to be kept around, and purges them subsequently.
  Retaining extra edits can be useful for audit purposes or for an HA setup where a remote Standby Node
  may have been offline for some time and needs a longer backlog of retained
  edits in order to start again.
  Typically each edit is on the order of a few hundred bytes, so the default
  of 1 million edits should be on the order of hundreds of MBs or low GBs.
  NOTE: Fewer extra edits may be retained than the value specified for this setting
  if doing so would mean that more segments would be retained than the number
  configured by dfs.namenode.max.extra.edits.segments.retained.
  </description>
</property>
<property>
  <name>dfs.namenode.max.extra.edits.segments.retained</name>
  <value>10000</value>
  <description>The maximum number of extra edit log segments which should be retained
  beyond what is minimally necessary for a NN restart. When used in conjunction with
  dfs.namenode.num.extra.edits.retained, this configuration property serves to cap
  the number of extra edits files to a reasonable value.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.key.update-interval</name>
  <value>86400000</value>
  <description>The update interval for master key for delegation tokens
       in the namenode in milliseconds.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.token.max-lifetime</name>
  <value>604800000</value>
  <description>The maximum lifetime in milliseconds for which a delegation
      token is valid.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.token.renew-interval</name>
  <value>86400000</value>
  <description>The renewal interval for delegation token in milliseconds.
  </description>
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
  <description>The number of volumes that are allowed to
  fail before a datanode stops offering service. By default
  any volume failure will cause a datanode to shutdown.
  </description>
</property>
<property>
  <name>dfs.image.compress</name>
  <value>false</value>
  <description>Should the dfs image be compressed?
  </description>
</property>
<property>
  <name>dfs.image.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the dfs image is compressed, how should they be compressed?
               This has to be a codec defined in io.compression.codecs.
  </description>
</property>
<property>
  <name>dfs.image.transfer.timeout</name>
  <value>60000</value>
  <description>
        Socket timeout for the HttpURLConnection instance used in the image
        transfer. This is measured in milliseconds.
        This timeout prevents client hangs if the connection is idle
        for this configured timeout, during image transfer.
  </description>
</property>
<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <value>0</value>
  <description>
        Maximum bandwidth used for regular image transfers (instead of
        bootstrapping the standby namenode), in bytes per second.
        This can help keep normal namenode operations responsive during
        checkpointing.
        A default value of 0 indicates that throttling is disabled.
        The maximum bandwidth used for bootstrapping standby namenode is
        configured with dfs.image.transfer-bootstrap-standby.bandwidthPerSec.
  </description>
</property>
  <property>
    <name>dfs.image.transfer-bootstrap-standby.bandwidthPerSec</name>
    <value>0</value>
    <description>
      Maximum bandwidth used for transferring image to bootstrap standby
      namenode, in bytes per second.
      A default value of 0 indicates that throttling is disabled. This default
      value should be used in most cases, to ensure timely HA operations.
      The maximum bandwidth used for regular image transfers is configured
      with dfs.image.transfer.bandwidthPerSec.
    </description>
  </property>
<property>
  <name>dfs.image.transfer.chunksize</name>
  <value>65536</value>
  <description>
        Chunksize in bytes to upload the checkpoint.
        Chunked streaming is used to avoid internal buffering of contents
        of image file of huge size.
  </description>
</property>
<property>
  <name>dfs.edit.log.transfer.timeout</name>
  <value>30000</value>
  <description>
    Socket timeout for edit log transfer in milliseconds. This timeout
    should be configured such that normal edit log transfer for journal
    node syncing can complete successfully.
  </description>
</property>
<property>
  <name>dfs.edit.log.transfer.bandwidthPerSec</name>
  <value>0</value>
  <description>
    Maximum bandwidth used for transferring edit log between journal nodes
    for syncing, in bytes per second.
    A default value of 0 indicates that throttling is disabled.
  </description>
</property>
<property>
  <name>dfs.namenode.support.allow.format</name>
  <value>true</value>
  <description>Does HDFS namenode allow itself to be formatted?
               You may consider setting this to false for any production
               cluster, to avoid any possibility of formatting a running DFS.
  </description>
</property>
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
  <description>
        Specifies the maximum number of threads to use for transferring data
        in and out of the DN.
  </description>
</property>
<property>
  <name>dfs.datanode.scan.period.hours</name>
  <value>504</value>
  <description>
        If this is positive, the DataNode will not scan any
        individual block more than once in the specified scan period.
        If this is negative, the block scanner is disabled.
        If this is set to zero, then the default value of 504 hours
        or 3 weeks is used. Prior versions of HDFS incorrectly documented
        that setting this key to zero will disable the block scanner.
  </description>
</property>
<property>
  <name>dfs.block.scanner.volume.bytes.per.second</name>
  <value>1048576</value>
  <description>
        If this is 0, the DataNode's block scanner will be disabled.  If this
        is positive, this is the number of bytes per second that the DataNode's
        block scanner will try to scan from each volume.
  </description>
</property>
<property>
  <name>dfs.datanode.readahead.bytes</name>
  <value>4194304</value>
  <description>
        While reading block files, if the Hadoop native libraries are available,
        the datanode can use the posix_fadvise system call to explicitly
        page data into the operating system buffer cache ahead of the current
        reader's position. This can improve performance especially when
        disks are highly contended.
        This configuration specifies the number of bytes ahead of the current
        read position which the datanode will attempt to read ahead. This
        feature may be disabled by configuring this property to 0.
        If the native libraries are not available, this configuration has no
        effect.
  </description>
</property>
<property>
  <name>dfs.datanode.drop.cache.behind.reads</name>
  <value>false</value>
  <description>
        In some workloads, the data read from HDFS is known to be significantly
        large enough that it is unlikely to be useful to cache it in the
        operating system buffer cache. In this case, the DataNode may be
        configured to automatically purge all data from the buffer cache
        after it is delivered to the client. This behavior is automatically
        disabled for workloads which read only short sections of a block
        (e.g. HBase random-IO workloads).
        This may improve performance for some workloads by freeing buffer
        cache space usage for more cacheable data.
        If the Hadoop native libraries are not available, this configuration
        has no effect.
  </description>
</property>
<property>
  <name>dfs.datanode.drop.cache.behind.writes</name>
  <value>false</value>
  <description>
        In some workloads, the data written to HDFS is known to be significantly
        large enough that it is unlikely to be useful to cache it in the
        operating system buffer cache. In this case, the DataNode may be
        configured to automatically purge all data from the buffer cache
        after it is written to disk.
        This may improve performance for some workloads by freeing buffer
        cache space usage for more cacheable data.
        If the Hadoop native libraries are not available, this configuration
        has no effect.
  </description>
</property>
<property>
  <name>dfs.datanode.sync.behind.writes</name>
  <value>false</value>
  <description>
        If this configuration is enabled, the datanode will instruct the
        operating system to enqueue all written data to the disk immediately
        after it is written. This differs from the usual OS policy which
        may wait for up to 30 seconds before triggering writeback.
        This may improve performance for some workloads by smoothing the
        IO profile for data written to disk.
        If the Hadoop native libraries are not available, this configuration
        has no effect.
  </description>
</property>
<property>
  <name>dfs.client.failover.max.attempts</name>
  <value>15</value>
  <description>
    Expert only. The number of client failover attempts that should be
    made before the failover is considered failed.
  </description>
</property>
<property>
  <name>dfs.client.failover.sleep.base.millis</name>
  <value>500</value>
  <description>
    Expert only. The time to wait, in milliseconds, between failover
    attempts increases exponentially as a function of the number of
    attempts made so far, with a random factor of +/- 50%. This option
    specifies the base value used in the failover calculation. The
    first failover will retry immediately. The 2nd failover attempt
    will delay at least dfs.client.failover.sleep.base.millis
    milliseconds. And so on.
  </description>
</property>
<property>
  <name>dfs.client.failover.sleep.max.millis</name>
  <value>15000</value>
  <description>
    Expert only. The time to wait, in milliseconds, between failover
    attempts increases exponentially as a function of the number of
    attempts made so far, with a random factor of +/- 50%. This option
    specifies the maximum value to wait between failovers.
    Specifically, the time between two failover attempts will not
    exceed +/- 50% of dfs.client.failover.sleep.max.millis
    milliseconds.
  </description>
</property>
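<!--
  Example backoff schedule (illustrative, using the default values of the
  two sleep properties above): with sleep.base.millis=500 and
  sleep.max.millis=15000, the first failover retries immediately, the
  second waits about 500 ms, the third about 1000 ms, doubling on each
  attempt until the 15000 ms cap, with every delay jittered by a random
  factor of +/- 50%.
-->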
<property>
  <name>dfs.client.failover.connection.retries</name>
  <value>0</value>
  <description>
    Expert only. Indicates the number of retries a failover IPC client
    will make to establish a server connection.
  </description>
</property>
<property>
  <name>dfs.client.failover.connection.retries.on.timeouts</name>
  <value>0</value>
  <description>
    Expert only. The number of retry attempts a failover IPC client
    will make on socket timeout when establishing a server connection.
  </description>
</property>
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30s</value>
  <description>
    Expert only. The time to wait, in seconds, from reception of a
    datanode shutdown notification for quick restart, until declaring
    the datanode dead and invoking the normal recovery mechanisms.
    The notification is sent by a datanode when it is being shutdown
    using the shutdownDatanode admin command with the upgrade option.
    Supports multiple time unit suffixes (case insensitive), as described
    in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.nameservices</name>
  <value></value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>
<property>
  <name>dfs.nameservice.id</name>
  <value></value>
  <description>
    The ID of this nameservice. If the nameservice ID is not
    configured or more than one nameservice is configured for
    dfs.nameservices it is determined automatically by
    matching the local node's address with the configured address.
  </description>
</property>
<property>
  <name>dfs.internal.nameservices</name>
  <value></value>
  <description>
    Comma-separated list of nameservices that belong to this cluster.
    Datanode will report to all the nameservices in this list. By default
    this is set to the value of dfs.nameservices.
  </description>
</property>
<property>
  <name>dfs.ha.namenodes.EXAMPLENAMESERVICE</name>
  <value></value>
  <description>
    The prefix for a given nameservice, contains a comma-separated
    list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).
    Unique identifiers for each NameNode in the nameservice, delimited by
    commas. This will be used by DataNodes to determine all the NameNodes
    in the cluster. For example, if you used "mycluster" as the nameservice
    ID previously, and you wanted to use "nn1" and "nn2" as the individual
    IDs of the NameNodes, you would configure a property
    dfs.ha.namenodes.mycluster, and its value "nn1,nn2".
  </description>
</property>
<property>
  <name>dfs.ha.namenode.id</name>
  <value></value>
  <description>
    The ID of this namenode. If the namenode ID is not configured it
    is determined automatically by matching the local node's address
    with the configured address.
  </description>
</property>
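<!--
  Example (illustrative, matching the mycluster/nn1/nn2 names used in the
  description above): for a nameservice ID of "mycluster" with NameNode IDs
  "nn1" and "nn2", the resulting property is:

  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
-->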
<property>
  <name>dfs.ha.log-roll.period</name>
  <value>120s</value>
  <description>
    How often, in seconds, the StandbyNode should ask the active to
    roll edit logs. Since the StandbyNode only reads from finalized
    log segments, the StandbyNode will only be as up-to-date as how
    often the logs are rolled. Note that failover triggers a log roll
    so the StandbyNode will be up to date before it becomes active.
    Supports multiple time unit suffixes (case insensitive), as described
    in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.ha.tail-edits.period</name>
  <value>60s</value>
  <description>
    How often, in seconds, the StandbyNode should check for new
    finalized log segments in the shared edits log.
    Supports multiple time unit suffixes (case insensitive), as described
    in dfs.heartbeat.interval.
  </description>
</property>
<property>
  <name>dfs.ha.tail-edits.namenode-retries</name>
  <value>3</value>
  <description>
    Number of retries to use when contacting the namenode when tailing the log.
  </description>
</property>
<property>
  <name>dfs.ha.tail-edits.rolledits.timeout</name>
  <value>60</value>
  <description>The timeout in seconds of calling rollEdits RPC on Active NN.
  </description>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
  <description>
    Whether automatic failover is enabled. See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>false</value>
  <description>Whether clients should use datanode hostnames when
    connecting to datanodes.
  </description>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
  <description>Whether datanodes should use datanode hostnames when
    connecting to other datanodes for data transfer.
  </description>
</property>
<property>
  <name>dfs.client.local.interfaces</name>
  <value></value>
  <description>A comma separated list of network interface names to use
    for data transfer between the client and datanodes. When creating
    a connection to read from or write to a datanode, the client
    chooses one of the specified interfaces at random and binds its
    socket to the IP of that interface. Individual names may be
    specified as either an interface name (eg "eth0"), a subinterface
    name (eg "eth0:0"), or an IP address (which may be specified using
    CIDR notation to match a range of IPs).
  </description>
</property>
<property>
  <name>dfs.datanode.shared.file.descriptor.paths</name>
  <value>/dev/shm,/tmp</value>
  <description>
    A comma-separated list of paths to use when creating file descriptors that
    will be shared between the DataNode and the DFSClient.  Typically we use
    /dev/shm, so that the file descriptors will not be written to disk.
    Systems that don't have /dev/shm will fall back to /tmp by default.
  </description>
</property>
<property>
  <name>dfs.short.circuit.shared.memory.watcher.interrupt.check.ms</name>
  <value>60000</value>
  <description>
    The length of time in milliseconds that the short-circuit shared memory
    watcher will go between checking for java interruptions sent from other
    threads.  This is provided mainly for unit tests.
  </description>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value></value>
  <description>
    The NameNode service principal. This is typically set to
    nn/_HOST@REALM.TLD. Each NameNode will substitute _HOST with its
    own fully qualified hostname at startup. The _HOST placeholder
    allows using the same configuration setting on both NameNodes
    in an HA setup.
  </description>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value></value>
  <description>
    The keytab file used by each NameNode daemon to login as its
    service principal. The principal name is configured with
    dfs.namenode.kerberos.principal.
  </description>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value></value>
  <description>
    The DataNode service principal. This is typically set to
    dn/_HOST@REALM.TLD. Each DataNode will substitute _HOST with its
    own fully qualified hostname at startup. The _HOST placeholder
    allows using the same configuration setting on all DataNodes.
  </description>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value></value>
  <description>
    The keytab file used by each DataNode daemon to login as its
    service principal. The principal name is configured with
    dfs.datanode.kerberos.principal.
  </description>
</property>
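<!--
  Example (hypothetical host and realm): with dfs.namenode.kerberos.principal
  set to nn/_HOST@EXAMPLE.COM, a NameNode running on nn1.example.com logs in
  at startup as nn/nn1.example.com@EXAMPLE.COM, while a second NameNode on
  nn2.example.com resolves the same setting to its own hostname.
-->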
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value></value>
  <description>
    The JournalNode service principal. This is typically set to
    jn/_HOST@REALM.TLD. Each JournalNode will substitute _HOST with its
    own fully qualified hostname at startup. The _HOST placeholder
    allows using the same configuration setting on all JournalNodes.
  </description>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value></value>
  <description>
    The keytab file used by each JournalNode daemon to login as its
    service principal. The principal name is configured with
    dfs.journalnode.kerberos.principal.
  </description>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
  <description>
    The server principal used by the NameNode for web UI SPNEGO
    authentication when Kerberos security is enabled. This is
    typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal
    begins with the prefix HTTP/ by convention.
    If the value is '*', the web server will attempt to login with
    every principal specified in the keytab file
    dfs.web.authentication.kerberos.keytab.
  </description>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value></value>
  <description>
    The server principal used by the JournalNode HTTP Server for
    SPNEGO authentication when Kerberos security is enabled. This is
    typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal
    begins with the prefix HTTP/ by convention.
    If the value is '*', the web server will attempt to login with
    every principal specified in the keytab file
    dfs.web.authentication.kerberos.keytab.
    For most deployments this can be set to ${dfs.web.authentication.kerberos.principal},
    i.e. use the value of dfs.web.authentication.kerberos.principal.
  </description>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>${dfs.web.authentication.kerberos.principal}</value>
  <description>
    The server principal used by the Secondary NameNode for web UI SPNEGO
    authentication when Kerberos security is enabled. Like all other
    Secondary NameNode settings, it is ignored in an HA setup.
    If the value is '*', the web server will attempt to login with
    every principal specified in the keytab file
    dfs.web.authentication.kerberos.keytab.
  </description>
</property>
( y/ i( q6 m8 X% T<property>" D+ `8 U4 u) j* I% V, b
  <name>dfs.web.authentication.kerberos.principal</name>: \- t# Q, G* ^! {
  <value></value>  P4 o' Z$ u/ b2 B  n; L
  <description>; }. [2 J$ Y3 E9 M: Y5 x
    The server principal used by the NameNode for WebHDFS SPNEGO, E% n  p, _. C
    authentication.
! @5 |  K7 W; B" ~6 x& A    Required when WebHDFS and security are enabled. In most secure clusters this- |# X( j. i* T7 |- X, b$ i
    setting is also used to specify the values for
" G1 Z6 [& v! s& r& `" V1 e7 H! P    dfs.namenode.kerberos.internal.spnego.principal and" D# C' R" i" K. `7 @1 d
    dfs.journalnode.kerberos.internal.spnego.principal.. f1 u' b; i& E. m2 s" T  C7 }
  </description>
2 M  w- F$ M& F' p# b</property>; u' E8 X' U1 ^) h# j9 c
<property>5 W, X& p& X: m, ?7 m: H
  <name>dfs.web.authentication.kerberos.keytab</name>
+ L$ b9 c: D1 y7 E" Z  <value></value>
! C4 D, G2 ^  r% k  <description>3 e7 u( O3 `' A: M% V" f
    The keytab file for the principal corresponding to
1 n' k0 n8 y$ x. |: K# J1 c    dfs.web.authentication.kerberos.principal./ C* Z/ N. M5 P( a# Q
  </description># x: ^& L/ L4 j' _
</property>6 j  L& k$ A4 W) o! n7 \9 q
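<!-- Example (illustrative only, for a site's hdfs-site.xml; the realm EXAMPLE.COM
     and the keytab path below are assumptions, not defaults): a typical secure
     setup defines the SPNEGO principal and keytab once, and the internal SPNEGO
     principals above pick them up through variable expansion.

     <property>
       <name>dfs.web.authentication.kerberos.principal</name>
       <value>HTTP/_HOST@EXAMPLE.COM</value>
     </property>
     <property>
       <name>dfs.web.authentication.kerberos.keytab</name>
       <value>/etc/security/keytab/spnego.service.keytab</value>
     </property>
-->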
<property>
  <name>dfs.namenode.kerberos.principal.pattern</name>
  <value>*</value>
  <description>
    A client-side RegEx that can be configured to control
    allowed realms to authenticate with (useful in cross-realm env.)
  </description>
</property>
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>false</value>
  <description>
    Indicate whether or not to avoid reading from "stale" datanodes whose
    heartbeat messages have not been received by the namenode
    for more than a specified time interval. Stale datanodes will be
    moved to the end of the node list returned for reading. See
    dfs.namenode.avoid.write.stale.datanode for a similar setting for writes.
  </description>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>false</value>
  <description>
    Indicate whether or not to avoid writing to "stale" datanodes whose
    heartbeat messages have not been received by the namenode
    for more than a specified time interval. Writes will avoid using
    stale datanodes unless more than a configured ratio
    (dfs.namenode.write.stale.datanode.ratio) of datanodes are marked as
    stale. See dfs.namenode.avoid.read.stale.datanode for a similar setting
    for reads.
  </description>
</property>
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
  <description>
    Default time interval in milliseconds for marking a datanode as "stale",
    i.e., if the namenode has not received a heartbeat message from a datanode for
    more than this time interval, the datanode will be marked and treated
    as "stale" by default. The stale interval cannot be too small since
    otherwise this may cause too frequent change of stale states.
    We thus set a minimum stale interval value (the default value is 3 times
    the heartbeat interval) and guarantee that the stale interval cannot be less
    than the minimum value. A stale data node is avoided during lease/block
    recovery. It can be conditionally avoided for reads (see
    dfs.namenode.avoid.read.stale.datanode) and for writes (see
    dfs.namenode.avoid.write.stale.datanode).
  </description>
</property>
<property>
  <name>dfs.namenode.write.stale.datanode.ratio</name>
  <value>0.5f</value>
  <description>
    When the ratio of stale datanodes to total datanodes marked
    is greater than this ratio, stop avoiding writing to stale nodes so
    as to prevent causing hotspots.
  </description>
</property>
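<!-- Example (illustrative only, for hdfs-site.xml; the values are assumptions,
     not recommendations): enabling stale-node avoidance for both reads and
     writes, while keeping the default 30s interval and 0.5f write ratio above.

     <property>
       <name>dfs.namenode.avoid.read.stale.datanode</name>
       <value>true</value>
     </property>
     <property>
       <name>dfs.namenode.avoid.write.stale.datanode</name>
       <value>true</value>
     </property>
-->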
<property>
  <name>dfs.namenode.invalidate.work.pct.per.iteration</name>
  <value>0.32f</value>
  <description>
    *Note*: Advanced property. Change with caution.
    This determines the percentage amount of block
    invalidations (deletes) to do over a single DN heartbeat
    deletion command. The final deletion count is determined by applying this
    percentage to the number of live nodes in the system.
    The resultant number is the number of blocks from the deletion list
    chosen for proper invalidation over a single heartbeat of a single DN.
    Value should be a positive, non-zero percentage in float notation (X.Yf),
    with 1.0f meaning 100%.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>2</value>
  <description>
    *Note*: Advanced property. Change with caution.
    This determines the total amount of block transfers to begin in
    parallel at a DN, for replication, when such a command list is being
    sent over a DN heartbeat by the NN. The actual number is obtained by
    multiplying this multiplier with the total number of live nodes in the
    cluster. The result number is the number of blocks to begin transfers
    immediately for, per DN heartbeat. This number can be any positive,
    non-zero integer.
  </description>
</property>
<property>
  <name>nfs.server.port</name>
  <value>2049</value>
  <description>
    Specify the port number used by Hadoop NFS.
  </description>
</property>
<property>
  <name>nfs.mountd.port</name>
  <value>4242</value>
  <description>
    Specify the port number used by Hadoop mount daemon.
  </description>
</property>
<property>
  <name>nfs.dump.dir</name>
  <value>/tmp/.hdfs-nfs</value>
  <description>
    This directory is used to temporarily save out-of-order writes before
    writing to HDFS. For each file, the out-of-order writes are dumped after
    they are accumulated to exceed certain threshold (e.g., 1MB) in memory.
    One needs to make sure the directory has enough space.
  </description>
</property>
<property>
  <name>nfs.rtmax</name>
  <value>1048576</value>
  <description>This is the maximum size in bytes of a READ request
    supported by the NFS gateway. If you change this, make sure you
    also update the nfs mount's rsize (add rsize=# of bytes to the
    mount directive).
  </description>
</property>
<property>
  <name>nfs.wtmax</name>
  <value>1048576</value>
  <description>This is the maximum size in bytes of a WRITE request
    supported by the NFS gateway. If you change this, make sure you
    also update the nfs mount's wsize (add wsize=# of bytes to the
    mount directive).
  </description>
</property>
<property>
  <name>nfs.keytab.file</name>
  <value></value>
  <description>
    *Note*: Advanced property. Change with caution.
    This is the path to the keytab file for the hdfs-nfs gateway.
    This is required when the cluster is kerberized.
  </description>
</property>
<property>
  <name>nfs.kerberos.principal</name>
  <value></value>
  <description>
    *Note*: Advanced property. Change with caution.
    This is the name of the kerberos principal. This is required when
    the cluster is kerberized. It must be of this format:
    nfs-gateway-user/nfs-gateway-host@kerberos-realm
  </description>
</property>
<property>
  <name>nfs.allow.insecure.ports</name>
  <value>true</value>
  <description>
    When set to false, client connections originating from unprivileged ports
    (those above 1023) will be rejected. This is to ensure that clients
    connecting to this NFS Gateway must have had root privilege on the machine
    where they're connecting from.
  </description>
</property>
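<!-- Example (illustrative only; the gateway host and mount point are
     assumptions): mounting the NFS gateway from a client with rsize/wsize
     matched to the nfs.rtmax/nfs.wtmax values above, as the descriptions
     suggest.

     mount -t nfs -o vers=3,proto=tcp,nolock,rsize=1048576,wsize=1048576 nfs-gateway-host:/ /mnt/hdfs
-->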
<property>
  <name>hadoop.fuse.connection.timeout</name>
  <value>300</value>
  <description>
    The minimum number of seconds that we'll cache libhdfs connection objects
    in fuse_dfs. Lower values will result in lower memory consumption; higher
    values may speed up access by avoiding the overhead of creating new
    connection objects.
  </description>
</property>
<property>
  <name>hadoop.fuse.timer.period</name>
  <value>5</value>
  <description>
    The number of seconds between cache expiry checks in fuse_dfs. Lower values
    will result in fuse_dfs noticing changes to Kerberos ticket caches more
    quickly.
  </description>
</property>
<property>
  <name>dfs.namenode.metrics.logger.period.seconds</name>
  <value>600</value>
  <description>
    This setting controls how frequently the NameNode logs its metrics. The
    logging configuration must also define one or more appenders for
    NameNodeMetricsLog for the metrics to be logged.
    NameNode metrics logging is disabled if this value is set to zero or
    less than zero.
  </description>
</property>
<property>
  <name>dfs.datanode.metrics.logger.period.seconds</name>
  <value>600</value>
  <description>
    This setting controls how frequently the DataNode logs its metrics. The
    logging configuration must also define one or more appenders for
    DataNodeMetricsLog for the metrics to be logged.
    DataNode metrics logging is disabled if this value is set to zero or
    less than zero.
  </description>
</property>
<property>
  <name>dfs.metrics.percentiles.intervals</name>
  <value></value>
  <description>
    Comma-delimited set of integers denoting the desired rollover intervals
    (in seconds) for percentile latency metrics on the Namenode and Datanode.
    By default, percentile latency metrics are disabled.
  </description>
</property>
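<!-- Example (illustrative only; the interval values are assumptions): enabling
     percentile latency metrics with 60-second and 5-minute rollover windows.

     <property>
       <name>dfs.metrics.percentiles.intervals</name>
       <value>60,300</value>
     </property>
-->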
<property>
  <name>dfs.datanode.peer.stats.enabled</name>
  <value>false</value>
  <description>
    A switch to turn on/off tracking DataNode peer statistics.
  </description>
</property>
<property>
  <name>dfs.datanode.outliers.report.interval</name>
  <value>30m</value>
  <description>
    This setting controls how frequently DataNodes will report their peer
    latencies to the NameNode via heartbeats.  This setting supports
    multiple time unit suffixes as described in dfs.heartbeat.interval.
    If no suffix is specified then milliseconds is assumed.
    It is ignored if dfs.datanode.peer.stats.enabled is false.
  </description>
</property>
<property>
  <name>dfs.datanode.fileio.profiling.sampling.percentage</name>
  <value>0</value>
  <description>
    This setting controls the percentage of file I/O events which will be
    profiled for DataNode disk statistics. The default value of 0 disables
    disk statistics. Set to an integer value between 1 and 100 to enable disk
    statistics.
  </description>
</property>
<property>
  <name>hadoop.user.group.metrics.percentiles.intervals</name>
  <value></value>
  <description>
    A comma-separated list of the granularity in seconds for the metrics
    which describe the 50/75/90/95/99th percentile latency for group resolution
    in milliseconds.
    By default, percentile latency metrics are disabled.
  </description>
</property>
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>false</value>
  <description>
    Whether or not actual block data that is read/written from/to HDFS should
    be encrypted on the wire. This only needs to be set on the NN and DNs,
    clients will deduce this automatically. It is possible to override this setting
    per connection by specifying custom logic via dfs.trustedchannel.resolver.class.
  </description>
</property>
<property>
  <name>dfs.encrypt.data.transfer.algorithm</name>
  <value></value>
  <description>
    This value may be set to either "3des" or "rc4". If nothing is set, then
    the configured JCE default on the system is used (usually 3DES.) It is
    widely believed that 3DES is more cryptographically secure, but RC4 is
    substantially faster.
    Note that if AES is supported by both the client and server then this
    encryption algorithm will only be used to initially transfer keys for AES.
    (See dfs.encrypt.data.transfer.cipher.suites.)
  </description>
</property>
<property>
  <name>dfs.encrypt.data.transfer.cipher.suites</name>
  <value></value>
  <description>
    This value may be either undefined or AES/CTR/NoPadding.  If defined, then
    dfs.encrypt.data.transfer uses the specified cipher suite for data
    encryption.  If not defined, then only the algorithm specified in
    dfs.encrypt.data.transfer.algorithm is used.  By default, the property is
    not defined.
  </description>
</property>
<property>
  <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
  <value>128</value>
  <description>
    The key bitlength negotiated by dfsclient and datanode for encryption.
    This value may be set to either 128, 192 or 256.
  </description>
</property>
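<!-- Example (illustrative only, for hdfs-site.xml; the 256-bit key length is an
     assumption, not a recommendation): enabling wire encryption with AES as the
     descriptions above explain. This only needs to be set on the NN and DNs;
     clients deduce it automatically.

     <property>
       <name>dfs.encrypt.data.transfer</name>
       <value>true</value>
     </property>
     <property>
       <name>dfs.encrypt.data.transfer.cipher.suites</name>
       <value>AES/CTR/NoPadding</value>
     </property>
     <property>
       <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
       <value>256</value>
     </property>
-->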
<property>
  <name>dfs.trustedchannel.resolver.class</name>
  <value></value>
  <description>
    TrustedChannelResolver is used to determine whether a channel
    is trusted for plain data transfer. The TrustedChannelResolver is
    invoked on both client and server side. If the resolver indicates
    that the channel is trusted, then the data transfer will not be
    encrypted even if dfs.encrypt.data.transfer is set to true. The
    default implementation returns false indicating that the channel
    is not trusted.
  </description>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value></value>
  <description>
    A comma-separated list of SASL protection values used for secured
    connections to the DataNode when reading or writing block data.  Possible
    values are authentication, integrity and privacy.  authentication means
    authentication only and no integrity or privacy; integrity implies
    authentication and integrity are enabled; and privacy implies all of
    authentication, integrity and privacy are enabled.  If
    dfs.encrypt.data.transfer is set to true, then it supersedes the setting for
    dfs.data.transfer.protection and enforces that all connections must use a
    specialized encrypted SASL handshake.  This property is ignored for
    connections to a DataNode listening on a privileged port.  In this case, it
    is assumed that the use of a privileged port establishes sufficient trust.
  </description>
</property>
<property>
  <name>dfs.data.transfer.saslproperties.resolver.class</name>
  <value></value>
  <description>
    SaslPropertiesResolver used to resolve the QOP used for a connection to the
    DataNode when reading or writing block data. If not specified, the value of
    hadoop.security.saslproperties.resolver.class is used as the default value.
  </description>
</property>
<property>
  <name>dfs.journalnode.rpc-address</name>
  <value>0.0.0.0:8485</value>
  <description>
    The JournalNode RPC server address and port.
  </description>
</property>
<property>
  <name>dfs.journalnode.rpc-bind-host</name>
  <value></value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.journalnode.rpc-address.
    This is useful for making the JournalNode listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.journalnode.http-address</name>
  <value>0.0.0.0:8480</value>
  <description>
    The address and port the JournalNode HTTP server listens on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>dfs.journalnode.http-bind-host</name>
  <value></value>
  <description>
    The actual address the HTTP server will bind to. If this optional address
    is set, it overrides only the hostname portion of
    dfs.journalnode.http-address. This is useful for making the JournalNode
    HTTP server listen on all interfaces by setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.journalnode.https-address</name>
  <value>0.0.0.0:8481</value>
  <description>
    The address and port the JournalNode HTTPS server listens on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
<property>
  <name>dfs.journalnode.https-bind-host</name>
  <value></value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of
    dfs.journalnode.https-address. This is useful for making the JournalNode
    HTTPS server listen on all interfaces by setting it to 0.0.0.0.
  </description>
</property>
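<!-- Example (illustrative only; the JournalNode hostnames and the nameservice
     id "mycluster" are assumptions): in an HA setup the NameNodes reach the
     JournalNodes through the RPC port configured above (8485 by default) via a
     qjournal URI for the shared edits directory.

     <property>
       <name>dfs.namenode.shared.edits.dir</name>
       <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
     </property>
-->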
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>default</value>
  <description>
    List of classes implementing audit loggers that will receive audit events.
    These should be implementations of org.apache.hadoop.hdfs.server.namenode.AuditLogger.
    The special value "default" can be used to reference the default audit
    logger, which uses the configured log system. Installing custom audit loggers
    may affect the performance and stability of the NameNode. Refer to the custom
    logger's documentation for more details.
  </description>
</property>
<property>
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name>
  <value>10737418240</value> <!-- 10 GB -->
  <description>
    Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to
    org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy.
    This setting controls how much DN volumes are allowed to differ in terms of
    bytes of free disk space before they are considered imbalanced. If the free
    space of all the volumes are within this range of each other, the volumes
    will be considered balanced and block assignments will be done on a pure
    round robin basis.
  </description>
</property>
<property>
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction</name>
  <value>0.75f</value>
  <description>
    Only used when the dfs.datanode.fsdataset.volume.choosing.policy is set to
    org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy.
    This setting controls what percentage of new block allocations will be sent
    to volumes with more available disk space than others. This setting should
    be in the range 0.0 - 1.0, though in practice 0.5 - 1.0, since there should
    be no reason to prefer that volumes with less available disk space receive
    more block allocations.
  </description>
</property>
<property>
  <name>dfs.namenode.edits.noeditlogchannelflush</name>
  <value>false</value>
  <description>
    Specifies whether to flush edit log file channel. When set, expensive
    FileChannel#force calls are skipped and synchronous disk writes are
    enabled instead by opening the edit log file with RandomAccessFile("rws")
    flags. This can significantly improve the performance of edit log writes
    on the Windows platform.
    Note that the behavior of the "rws" flags is platform and hardware specific
    and might not provide the same level of guarantees as FileChannel#force.
    For example, the write will skip the disk-cache on SAS and SCSI devices
    while it might not on SATA devices. This is an expert level setting,
    change with caution.
  </description>
</property>
<property>
  <name>dfs.client.cache.drop.behind.writes</name>
  <value></value>
  <description>
    Just like dfs.datanode.drop.cache.behind.writes, this setting causes the
    page cache to be dropped behind HDFS writes, potentially freeing up more
    memory for other uses.  Unlike dfs.datanode.drop.cache.behind.writes, this
    is a client-side setting rather than a setting for the entire datanode.
    If present, this setting will override the DataNode default.
    If the native libraries are not available to the DataNode, this
    configuration has no effect.
  </description>
</property>
<property>
  <name>dfs.client.cache.drop.behind.reads</name>
  <value></value>
  <description>
    Just like dfs.datanode.drop.cache.behind.reads, this setting causes the
    page cache to be dropped behind HDFS reads, potentially freeing up more
    memory for other uses.  Unlike dfs.datanode.drop.cache.behind.reads, this
    is a client-side setting rather than a setting for the entire datanode.  If
    present, this setting will override the DataNode default.
    If the native libraries are not available to the DataNode, this
    configuration has no effect.
  </description>
</property>
<property>
  <name>dfs.client.cache.readahead</name>
  <value></value>
  <description>
    When using remote reads, this setting causes the datanode to
    read ahead in the block file using posix_fadvise, potentially decreasing
    I/O wait times.  Unlike dfs.datanode.readahead.bytes, this is a client-side
    setting rather than a setting for the entire datanode.  If present, this
    setting will override the DataNode default.
    When using local reads, this setting determines how much readahead we do in
    BlockReaderLocal.
    If the native libraries are not available to the DataNode, this
    configuration has no effect.
  </description>
</property>
<property>
  <name>dfs.client.server-defaults.validity.period.ms</name>
  <value>3600000</value>
  <description>
    The number of milliseconds after which cached server defaults are updated.
    By default this parameter is set to 1 hour.
  </description>
</property>
<property>
  <name>dfs.namenode.enable.retrycache</name>
  <value>true</value>
  <description>
    This enables the retry cache on the namenode. The namenode tracks the
    corresponding response for non-idempotent requests. If a client retries
    such a request, the response from the retry cache is sent. Such operations
    are tagged with the annotation @AtMostOnce in namenode protocols. It is
    recommended that this flag be set to true. Setting it to false will result
    in clients getting failure responses to retried requests. This flag must
    be enabled in an HA setup for transparent fail-overs.
    Entries in the cache have an expiration time configurable
    using dfs.namenode.retrycache.expirytime.millis.
  </description>
</property>
<property>
  <name>dfs.namenode.retrycache.expirytime.millis</name>
  <value>600000</value>
  <description>
    The time in milliseconds for which retry cache entries are retained.
  </description>
</property>
<property>
  <name>dfs.namenode.retrycache.heap.percent</name>
  <value>0.03f</value>
  <description>
    This parameter configures the heap size allocated for the retry cache
    (excluding the cached response). This corresponds to approximately
    4096 entries for every 64MB of namenode process java heap size.
    Assuming a retry cache entry expiration time (configured using
    dfs.namenode.retrycache.expirytime.millis) of 10 minutes, this
    enables the retry cache to support 7 operations per second sustained
    for 10 minutes. As the heap size is increased, the operation rate
    linearly increases.
  </description>
</property>
<property>
  <name>dfs.client.mmap.enabled</name>
  <value>true</value>
  <description>
    If this is set to false, the client won't attempt to perform memory-mapped reads.
  </description>
</property>
<property>
  <name>dfs.client.mmap.cache.size</name>
  <value>256</value>
  <description>
    When zero-copy reads are used, the DFSClient keeps a cache of recently used
    memory mapped regions.  This parameter controls the maximum number of
    entries that we will keep in that cache.
    The larger this number is, the more file descriptors we will potentially
    use for memory-mapped files.  mmapped files also use virtual address space.
    You may need to increase your ulimit virtual address space limits before
    increasing the client mmap cache size.
    Note that you can still do zero-copy reads when this size is set to 0.
  </description>
</property>
<property>
  <name>dfs.client.mmap.cache.timeout.ms</name>
  <value>3600000</value>
  <description>
    The minimum length of time that we will keep an mmap entry in the cache
    between uses.  If an entry is in the cache longer than this, and nobody
    uses it, it will be removed by a background thread.
  </description>
</property>
<property>
  <name>dfs.client.mmap.retry.timeout.ms</name>
  <value>300000</value>
  <description>
    The minimum amount of time that we will wait before retrying a failed mmap
    operation.
  </description>
</property>
<property>
  <name>dfs.client.short.circuit.replica.stale.threshold.ms</name>
  <value>1800000</value>
  <description>
    The maximum amount of time that we will consider a short-circuit replica to
    be valid, if there is no communication from the DataNode.  After this time
    has elapsed, we will re-fetch the short-circuit replica even if it is in
    the cache.
  </description>
</property>
<property>
  <name>dfs.namenode.path.based.cache.block.map.allocation.percent</name>
  <value>0.25</value>
  <description>
    The percentage of the Java heap which we will allocate to the cached blocks
    map.  The cached blocks map is a hash map which uses chained hashing.
    Smaller maps may be accessed more slowly if the number of cached blocks is
    large; larger maps will consume more memory.
  </description>
</property>
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <value>0</value>
  <description>
    The amount of memory in bytes to use for caching of block replicas in
    memory on the datanode. The datanode's maximum locked memory soft ulimit
    (RLIMIT_MEMLOCK) must be set to at least this value, else the datanode
    will abort on startup.
    By default, this parameter is set to 0, which disables in-memory caching.
    If the native libraries are not available to the DataNode, this
    configuration has no effect.
  </description>
</property>
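As a sketch of how this property is used in practice, an hdfs-site.xml override like the following would enable in-memory caching with a 1 GB budget per DataNode (the 1 GB figure is illustrative, not a recommendation; the value is in bytes, and the DataNode host's locked-memory ulimit must allow at least as much or startup aborts):

```xml
<!-- Illustrative hdfs-site.xml override: cache up to 1 GB (1073741824 bytes)
     of block replicas in memory on each DataNode.  RLIMIT_MEMLOCK
     ("ulimit -l") on the DataNode host must permit at least this many
     bytes, otherwise the DataNode will abort on startup as described. -->
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <value>1073741824</value>
</property>
```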
<property>
  <name>dfs.namenode.list.cache.directives.num.responses</name>
  <value>100</value>
  <description>
    This value controls the number of cache directives that the NameNode will
    send over the wire in response to a listDirectives RPC.
  </description>
</property>
<property>
  <name>dfs.namenode.list.cache.pools.num.responses</name>
  <value>100</value>
  <description>
    This value controls the number of cache pools that the NameNode will
    send over the wire in response to a listPools RPC.
  </description>
</property>
<property>
  <name>dfs.namenode.path.based.cache.refresh.interval.ms</name>
  <value>30000</value>
  <description>
    The number of milliseconds between subsequent path cache rescans.  Path
    cache rescans are when we calculate which blocks should be cached, and on
    what datanodes.
    By default, this parameter is set to 30 seconds.
  </description>
</property>
<property>
  <name>dfs.namenode.path.based.cache.retry.interval.ms</name>
  <value>30000</value>
  <description>
    When the NameNode needs to uncache something that is cached, or cache
    something that is not cached, it must direct the DataNodes to do so by
    sending a DNA_CACHE or DNA_UNCACHE command in response to a DataNode
    heartbeat.  This parameter controls how frequently the NameNode will
    resend these commands.
  </description>
</property>
<property>
  <name>dfs.datanode.fsdatasetcache.max.threads.per.volume</name>
  <value>4</value>
  <description>
    The maximum number of threads per volume to use for caching new data
    on the datanode. These threads consume both I/O and CPU. This can affect
    normal datanode operations.
  </description>
</property>
<property>
  <name>dfs.cachereport.intervalMsec</name>
  <value>10000</value>
  <description>
    Determines the cache reporting interval in milliseconds.  After this amount
    of time, the DataNode sends a full report of its cache state to the NameNode.
    The NameNode uses the cache report to update its map of cached blocks to
    DataNode locations.
    This configuration has no effect if in-memory caching has been disabled by
    setting dfs.datanode.max.locked.memory to 0 (which is the default).
    If the native libraries are not available to the DataNode, this
    configuration has no effect.
  </description>
</property>
<property>
  <name>dfs.namenode.edit.log.autoroll.multiplier.threshold</name>
  <value>2.0</value>
  <description>
    Determines when an active namenode will roll its own edit log.
    The actual threshold (in number of edits) is determined by multiplying
    this value by dfs.namenode.checkpoint.txns.
    This prevents extremely large edit files from accumulating on the active
    namenode, which can cause timeouts during namenode startup and pose an
    administrative hassle. This behavior is intended as a failsafe for when
    the standby or secondary namenode fails to roll the edit log by the normal
    checkpoint threshold.
  </description>
</property>
<property>
  <name>dfs.namenode.edit.log.autoroll.check.interval.ms</name>
  <value>300000</value>
  <description>
    How often an active namenode will check if it needs to roll its edit log,
    in milliseconds.
  </description>
</property>
<property>
  <name>dfs.webhdfs.user.provider.user.pattern</name>
  <value>^[A-Za-z_][A-Za-z0-9._-]*[$]?$</value>
  <description>
    Valid pattern for user and group names for webhdfs; it must be a valid java regex.
  </description>
</property>
<property>
  <name>dfs.webhdfs.acl.provider.permission.pattern</name>
  <value>^(default:)?(user|group|mask|other):[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?(,(default:)?(user|group|mask|other):[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?)*$</value>
  <description>
    Valid pattern for user and group names in webhdfs acl operations; it must be a valid java regex.
  </description>
</property>
<property>
  <name>dfs.webhdfs.socket.connect-timeout</name>
  <value>60s</value>
  <description>
    Socket timeout for connecting to WebHDFS servers. This prevents a
    WebHDFS client from hanging if the server hostname is
    misconfigured, or the server does not respond before the timeout
    expires. Value is followed by a unit specifier: ns, us, ms, s, m,
    h, d for nanoseconds, microseconds, milliseconds, seconds,
    minutes, hours, days respectively. Values should provide units,
    but milliseconds are assumed.
  </description>
</property>
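The unit-suffix convention described above can be exercised with a simple override; as an illustrative (not recommended) example, a 30-second connect timeout expressed with the "s" specifier in hdfs-site.xml would look like:

```xml
<!-- Illustrative hdfs-site.xml override: a 30-second WebHDFS connect
     timeout written with the "s" unit specifier.  A bare number such as
     30000 would be read as milliseconds, per the description above. -->
<property>
  <name>dfs.webhdfs.socket.connect-timeout</name>
  <value>30s</value>
</property>
```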
<property>
  <name>dfs.webhdfs.socket.read-timeout</name>
  <value>60s</value>
  <description>
    Socket timeout for reading data from WebHDFS servers. This
    prevents a WebHDFS client from hanging if the server stops sending
    data. Value is followed by a unit specifier: ns, us, ms, s, m, h,
    d for nanoseconds, microseconds, milliseconds, seconds, minutes,
    hours, days respectively. Values should provide units,
    but milliseconds are assumed.
  </description>
</property>
<property>
  <name>dfs.client.context</name>
  <value>default</value>
  <description>
    The name of the DFSClient context that we should use.  Clients that share
    a context share a socket cache and short-circuit cache, among other things.
    You should only change this if you don't want to share with another set of
    threads.
  </description>
</property>
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>false</value>
  <description>
    This configuration parameter turns on short-circuit local reads.
  </description>
</property>
<property>
  <name>dfs.client.socket.send.buffer.size</name>
  <value>0</value>
  <description>
    Socket send buffer size for a write pipeline on the DFSClient side.
    This may affect TCP connection throughput.
    If it is set to zero or a negative value,
    no buffer size will be set explicitly,
    thus enabling TCP auto-tuning on some systems.
    The default value is 0.
  </description>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value></value>
  <description>
    Optional.  This is a path to a UNIX domain socket that will be used for
    communication between the DataNode and local HDFS clients.
    If the string "_PORT" is present in this path, it will be replaced by the
    TCP port of the DataNode.
  </description>
</property>
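A minimal sketch of how these two properties work together: short-circuit local reads are switched on with dfs.client.read.shortcircuit, and the client and DataNode then rendezvous over the domain socket configured here. The socket path below is illustrative only, not a default:

```xml
<!-- Illustrative hdfs-site.xml overrides: enable short-circuit local
     reads.  The socket path is an example; "_PORT" is replaced by the
     DataNode's TCP port as described above. -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket._PORT</value>
</property>
```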
<property>
  <name>dfs.domain.socket.disable.interval.seconds</name>
  <value>600</value>
  <description>
    The interval that a DataNode is disabled for future Short-Circuit Reads,
    after an error happens during a Short-Circuit Read. Setting this to 0 will
    not disable Short-Circuit Reads at all after errors happen. Negative values
    are invalid.
  </description>
</property>
<property>
  <name>dfs.client.read.shortcircuit.skip.checksum</name>
  <value>false</value>
  <description>
    If this configuration parameter is set,
    short-circuit local reads will skip checksums.
    This is normally not recommended,
    but it may be useful for special setups.
    You might consider using this
    if you are doing your own checksumming outside of HDFS.
  </description>
</property>
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <value>256</value>
  <description>
    The DFSClient maintains a cache of recently opened file descriptors.
    This parameter controls the maximum number of file descriptors in the cache.
    Setting this higher will use more file descriptors,
    but potentially provide better performance on workloads
    involving lots of seeks.
  </description>
</property>
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
  <value>300000</value>
  <description>
    This controls the minimum amount of time
    file descriptors need to sit in the client cache context
    before they can be closed for being inactive for too long.
  </description>
</property>
<property>
  <name>dfs.datanode.shared.file.descriptor.paths</name>
  <value>/dev/shm,/tmp</value>
  <description>
    A comma-separated list of paths to directories in which
    shared memory segments are created.
    The client and the DataNode exchange information via
    this shared memory segment.
    Paths are tried in order until creation of a shared memory segment succeeds.
  </description>
</property>
<property>
  <name>dfs.namenode.audit.log.debug.cmdlist</name>
  <value></value>
  <description>
    A comma separated list of NameNode commands that are written to the HDFS
    namenode audit log only if the audit log level is debug.
  </description>
</property>
<property>
  <name>dfs.client.use.legacy.blockreader.local</name>
  <value>false</value>
  <description>
    The legacy short-circuit reader implementation based on HDFS-2246 is used
    if this configuration parameter is true.
    This is for platforms other than Linux
    where the new implementation based on HDFS-347 is not available.
  </description>
</property>
<property>
  <name>dfs.block.local-path-access.user</name>
  <value></value>
  <description>
    Comma separated list of the users allowed to open block files
    on legacy short-circuit local read.
  </description>
</property>
<property>
  <name>dfs.client.domain.socket.data.traffic</name>
  <value>false</value>
  <description>
    This controls whether we will try to pass normal data traffic
    over a UNIX domain socket rather than over a TCP socket
    for node-local data transfer.
    This is currently experimental and turned off by default.
  </description>
</property>
<property>
  <name>dfs.namenode.reject-unresolved-dn-topology-mapping</name>
  <value>false</value>
  <description>
    If the value is set to true, then the namenode will reject datanode
    registration if the topology mapping for a datanode is not resolved and
    NULL is returned (the script defined by net.topology.script.file.name fails
    to execute). Otherwise, the datanode will be registered and the default rack
    will be assigned as the topology path. Topology paths are important for
    data resiliency, since they define fault domains. Thus it may be unwanted
    behavior to allow datanode registration with the default rack if
    resolving the topology failed.
  </description>
</property>
<property>
  <name>dfs.namenode.xattrs.enabled</name>
  <value>true</value>
  <description>
    Whether support for extended attributes is enabled on the NameNode.
  </description>
</property>
<property>
  <name>dfs.namenode.fs-limits.max-xattrs-per-inode</name>
  <value>32</value>
  <description>
    Maximum number of extended attributes per inode.
  </description>
</property>
<property>
  <name>dfs.namenode.fs-limits.max-xattr-size</name>
  <value>16384</value>
  <description>
    The maximum combined size of the name and value of an extended attribute
    in bytes. It should be larger than 0, and less than or equal to the
    maximum size hard limit, which is 32768.
  </description>
</property>
<property>
  <name>dfs.client.slow.io.warning.threshold.ms</name>
  <value>30000</value>
  <description>The threshold in milliseconds at which we will log a slow
    io warning in a dfsclient. By default, this parameter is set to 30000
    milliseconds (30 seconds).
  </description>
</property>
<property>
  <name>dfs.datanode.slow.io.warning.threshold.ms</name>
  <value>300</value>
  <description>The threshold in milliseconds at which we will log a slow
    io warning in a datanode. By default, this parameter is set to 300
    milliseconds.
  </description>
</property>
<property>
  <name>dfs.namenode.lease-recheck-interval-ms</name>
  <value>2000</value>
  <description>The interval in milliseconds at which the namenode's lease
    manager wakes up and checks for expired leases to release.
  </description>
</property>
<property>
  <name>dfs.namenode.max-lock-hold-to-release-lease-ms</name>
  <value>25</value>
  <description>During the release of leases, a lock is held that blocks
    any other operations on the namenode. To avoid blocking them for
    too long, lease release stops once the lock has been held for this
    many milliseconds.
  </description>
</property>
<property>
  <name>dfs.namenode.write-lock-reporting-threshold-ms</name>
  <value>5000</value>
  <description>When a write lock is held on the namenode for a long time,
    this will be logged as the lock is released. This sets how long the
    lock must be held for logging to occur.
  </description>
</property>
<property>
  <name>dfs.namenode.read-lock-reporting-threshold-ms</name>
  <value>5000</value>
  <description>When a read lock is held on the namenode for a long time,
    this will be logged as the lock is released. This sets how long the
    lock must be held for logging to occur.
  </description>
</property>
<property>
  <name>dfs.namenode.lock.detailed-metrics.enabled</name>
  <value>false</value>
  <description>If true, the namenode will keep track of how long various
    operations hold the Namesystem lock for and emit this as metrics. These
    metrics have names of the form FSN(Read|Write)LockNanosOperationName,
    where OperationName denotes the name of the operation that initiated the
    lock hold (this will be OTHER for certain uncategorized operations) and
    they export the hold time values in nanoseconds.
  </description>
</property>
<property>
  <name>dfs.namenode.fslock.fair</name>
  <value>true</value>
  <description>If this is true, the FS Namesystem lock will be used in Fair mode,
    which will help to prevent writer threads from being starved, but can provide
    lower lock throughput. See java.util.concurrent.locks.ReentrantReadWriteLock
    for more information on fair/non-fair locks.
  </description>
</property>
<property>
  <name>dfs.namenode.startup.delay.block.deletion.sec</name>
  <value>0</value>
  <description>The delay in seconds for which block deletion is paused
    after Namenode startup. By default it's disabled.
    In the case where a directory containing a large number of directories
    and files is deleted, the suggested delay is one hour to give the
    administrator enough time to notice a large number of pending deletion
    blocks and take corrective action.
  </description>
</property>
<property>
  <name>dfs.datanode.block.id.layout.upgrade.threads</name>
  <value>12</value>
  <description>The number of threads to use when creating hard links from
    current to previous blocks during upgrade of a DataNode to block ID-based
    block layout (see HDFS-6482 for details on the layout).</description>
</property>
<property>
  <name>dfs.namenode.list.encryption.zones.num.responses</name>
  <value>100</value>
  <description>When listing encryption zones, the maximum number of zones
    that will be returned in a batch. Fetching the list incrementally in
    batches improves namenode performance.
  </description>
</property>
<property>
  <name>dfs.namenode.list.reencryption.status.num.responses</name>
  <value>100</value>
  <description>When listing re-encryption status, the maximum number of zones
    that will be returned in a batch. Fetching the list incrementally in
    batches improves namenode performance.
  </description>
</property>
+ F3 D. o! @& H! q  <property>
) {7 s( y. B) |) X    <name>dfs.namenode.list.openfiles.num.responses</name>
) S: m4 H( p9 D3 ]# t3 g3 I    <value>1000</value>- b& X  j8 \5 @9 ^
    <description>
/ p+ Z/ Z' k, ^, G      When listing open files, the maximum number of open files that will be
/ v# k# u& ?, E9 a, _# |: L6 o      returned in a single batch. Fetching the list incrementally in batches
3 |- R' Z$ O$ z9 _2 a      improves namenode performance., j9 c! I! a: A3 s) g
    </description>
# L$ c1 ~6 w/ P, Y  </property>$ V0 N: }9 y0 \- S
<property>
  <name>dfs.namenode.edekcacheloader.interval.ms</name>
  <value>1000</value>
  <description>When KeyProvider is configured, the interval at which to warm
    up the edek cache after the NN starts up / becomes active. All edeks will
    be loaded from the KMS into the provider cache. The edek cache loader will
    keep trying to warm up the cache until it succeeds or the NN leaves the
    active state.
  </description>
</property>
<property>
  <name>dfs.namenode.edekcacheloader.initial.delay.ms</name>
  <value>3000</value>
  <description>When KeyProvider is configured, the delay before the first
    attempt to warm up the edek cache after the NN starts up / becomes active.
  </description>
</property>
<property>
  <name>dfs.namenode.reencrypt.sleep.interval</name>
  <value>1m</value>
  <description>Interval the re-encrypt EDEK thread sleeps in the main loop. The
    interval accepts units. If none given, millisecond is assumed.
  </description>
</property>
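<!-- Illustration (not part of the shipped defaults): since the interval
     accepts units, "1m", "60s" and a bare "60000" (milliseconds assumed)
     all denote the same one-minute sleep. -->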
<property>
  <name>dfs.namenode.reencrypt.batch.size</name>
  <value>1000</value>
  <description>How many EDEKs should the re-encrypt thread process in one batch.
  </description>
</property>
<property>
  <name>dfs.namenode.reencrypt.throttle.limit.handler.ratio</name>
  <value>1.0</value>
  <description>Throttling ratio for the re-encryption, indicating what fraction
    of time should the re-encrypt handler thread work under NN read lock.
    Larger than 1.0 values are interpreted as 1.0. Negative value or 0 are
    invalid values and will fail NN startup.
  </description>
</property>
<property>
  <name>dfs.namenode.reencrypt.throttle.limit.updater.ratio</name>
  <value>1.0</value>
  <description>Throttling ratio for the re-encryption, indicating what fraction
    of time should the re-encrypt updater thread work under NN write lock.
    Larger than 1.0 values are interpreted as 1.0. Negative value or 0 are
    invalid values and will fail NN startup.
  </description>
</property>
<property>
  <name>dfs.namenode.reencrypt.edek.threads</name>
  <value>10</value>
  <description>Maximum number of re-encrypt threads to contact the KMS
    and re-encrypt the edeks.
  </description>
</property>
<property>
  <name>dfs.namenode.inotify.max.events.per.rpc</name>
  <value>1000</value>
  <description>Maximum number of events that will be sent to an inotify client
    in a single RPC response. The default value attempts to amortize away
    the overhead for this RPC while avoiding huge memory requirements for the
    client and NameNode (1000 events should consume no more than 1 MB.)
  </description>
</property>
<property>
  <name>dfs.user.home.dir.prefix</name>
  <value>/user</value>
  <description>The directory to prepend to the user name to get the user's
    home directory.
  </description>
</property>
<property>
  <name>dfs.datanode.cache.revocation.timeout.ms</name>
  <value>900000</value>
  <description>When the DFSClient reads from a block file which the DataNode is
    caching, the DFSClient can skip verifying checksums.  The DataNode will
    keep the block file in cache until the client is done.  If the client takes
    an unusually long time, though, the DataNode may need to evict the block
    file from the cache anyway.  This value controls how long the DataNode will
    wait for the client to release a replica that it is reading without
    checksums.
  </description>
</property>
<property>
  <name>dfs.datanode.cache.revocation.polling.ms</name>
  <value>500</value>
  <description>How often the DataNode should poll to see if the clients have
    stopped using a replica that the DataNode wants to uncache.
  </description>
</property>
<property>
  <name>dfs.storage.policy.enabled</name>
  <value>true</value>
  <description>
    Allow users to change the storage policy on files and directories.
  </description>
</property>
<property>
  <name>dfs.namenode.legacy-oiv-image.dir</name>
  <value></value>
  <description>Determines where to save the namespace in the old fsimage format
    during checkpointing by standby NameNode or SecondaryNameNode. Users can
    dump the contents of the old format fsimage by oiv_legacy command. If
    the value is not specified, old format fsimage will not be saved in
    checkpoint.
  </description>
</property>
<property>
  <name>dfs.namenode.top.enabled</name>
  <value>true</value>
  <description>Enable nntop: reporting top users on namenode
  </description>
</property>
<property>
  <name>dfs.namenode.top.window.num.buckets</name>
  <value>10</value>
  <description>Number of buckets in the rolling window implementation of nntop
  </description>
</property>
<property>
  <name>dfs.namenode.top.num.users</name>
  <value>10</value>
  <description>Number of top users returned by the top tool
  </description>
</property>
<property>
  <name>dfs.namenode.top.windows.minutes</name>
  <value>1,5,25</value>
  <description>comma separated list of nntop reporting periods in minutes
  </description>
</property>
<property>
    <name>dfs.webhdfs.ugi.expire.after.access</name>
    <value>600000</value>
    <description>How long in milliseconds after the last access
      the cached UGI will expire. With 0, never expire.
    </description>
</property>
<property>
  <name>dfs.namenode.blocks.per.postponedblocks.rescan</name>
  <value>10000</value>
  <description>Number of blocks to rescan for each iteration of
    postponedMisreplicatedBlocks.
  </description>
</property>
<property>
  <name>dfs.datanode.block-pinning.enabled</name>
  <value>false</value>
  <description>Whether to pin blocks on the favored DataNode.</description>
</property>
<property>
  <name>dfs.client.block.write.locateFollowingBlock.initial.delay.ms</name>
  <value>400</value>
  <description>The initial delay (unit is ms) for locateFollowingBlock;
    the delay time will increase exponentially (doubling) for each retry.
  </description>
</property>
<property>
  <name>dfs.ha.zkfc.nn.http.timeout.ms</name>
  <value>20000</value>
  <description>
    The HTTP connection and read timeout value (unit is ms) when DFS ZKFC
    tries to get local NN thread dump after local NN becomes
    SERVICE_NOT_RESPONDING or SERVICE_UNHEALTHY.
    If it is set to zero, DFS ZKFC won't get local NN thread dump.
  </description>
</property>
<property>
  <name>dfs.ha.tail-edits.in-progress</name>
  <value>false</value>
  <description>
    Whether to enable the standby namenode to tail in-progress edit logs.
    Clients might want to turn it on when they want the Standby NN to have
    more up-to-date data.
  </description>
</property>
<property>
  <name>dfs.namenode.ec.system.default.policy</name>
  <value>RS-6-3-1024k</value>
  <description>The default erasure coding policy name will be used
    on the path if no policy name is passed.
  </description>
</property>
<property>
  <name>dfs.namenode.ec.policies.max.cellsize</name>
  <value>4194304</value>
  <description>The maximum cell size of erasure coding policy. Default is 4MB.
  </description>
</property>
<property>
  <name>dfs.datanode.ec.reconstruction.stripedread.timeout.millis</name>
  <value>5000</value>
  <description>Datanode striped read timeout in milliseconds.
  </description>
</property>
<property>
  <name>dfs.datanode.ec.reconstruction.stripedread.buffer.size</name>
  <value>65536</value>
  <description>Datanode striped read buffer size.
  </description>
</property>
<property>
  <name>dfs.datanode.ec.reconstruction.threads</name>
  <value>8</value>
  <description>
    Number of threads used by the Datanode for background
    reconstruction work.
  </description>
</property>
<property>
  <name>dfs.datanode.ec.reconstruction.xmits.weight</name>
  <value>0.5</value>
  <description>
    Datanode uses xmits weight to calculate the relative cost of EC recovery
    tasks compared to replicated block recovery, of which xmits is always 1.
    Namenode then uses xmits reported from datanode to throttle recovery tasks
    for EC and replicated blocks.
    The xmits of an erasure coding recovery task is calculated as the maximum
    value between the number of read streams and the number of write streams.
  </description>
</property>
<property>
  <name>dfs.namenode.quota.init-threads</name>
  <value>4</value>
  <description>
    The number of concurrent threads to be used in quota initialization. The
    speed of quota initialization also affects the namenode fail-over latency.
    If the size of the name space is big, try increasing this.
  </description>
</property>
<property>
  <name>dfs.datanode.transfer.socket.send.buffer.size</name>
  <value>0</value>
  <description>
    Socket send buffer size for DataXceiver (mirroring packets to downstream
    in pipeline). This may affect TCP connection throughput.
    If it is set to zero or a negative value, no buffer size will be set
    explicitly, thus enabling TCP auto-tuning on some systems.
    The default value is 0.
  </description>
</property>
<property>
  <name>dfs.datanode.transfer.socket.recv.buffer.size</name>
  <value>0</value>
  <description>
    Socket receive buffer size for DataXceiver (receiving packets from client
    during block writing). This may affect TCP connection throughput.
    If it is set to zero or a negative value, no buffer size will be set
    explicitly, thus enabling TCP auto-tuning on some systems.
    The default value is 0.
  </description>
</property>
<property>
  <name>dfs.namenode.upgrade.domain.factor</name>
  <value>${dfs.replication}</value>
  <description>
    This is valid only when the block placement policy is set to
    BlockPlacementPolicyWithUpgradeDomain. It defines the number of
    unique upgrade domains any block's replicas should have.
    When the number of replicas is less than or equal to this value, the policy
    ensures each replica has a unique upgrade domain. When the number of
    replicas is greater than this value, the policy ensures the number of
    unique domains is at least this value.
  </description>
</property>
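<!-- Worked example (illustrative): with dfs.replication=3 the factor
     defaults to 3, so each of a block's three replicas must be placed in a
     distinct upgrade domain; a file with replication 5 is only guaranteed
     at least 3 distinct upgrade domains among its five replicas. -->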
<property>
  <name>dfs.ha.zkfc.port</name>
  <value>8019</value>
  <description>
    RPC port for Zookeeper Failover Controller.
  </description>
</property>
<property>
  <name>dfs.datanode.bp-ready.timeout</name>
  <value>20s</value>
  <description>
    The maximum wait time for datanode to be ready before failing the
    received request. Setting this to 0 fails requests right away if the
    datanode is not yet registered with the namenode. This wait time
    reduces initial request failures after datanode restart.
    Supports multiple time unit suffixes (case insensitive), as described
    in dfs.heartbeat.interval.
  </description>
</property>
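<!-- Illustration (not part of the shipped defaults): "20s", "20S" and
     "20000ms" all denote the same 20-second wait, since time unit
     suffixes are case insensitive. -->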
<property>
  <name>dfs.datanode.cached-dfsused.check.interval.ms</name>
  <value>600000</value>
  <description>
    The interval at which to check whether the DU_CACHE_FILE in each volume
    can still be loaded. During rolling upgrade operations the dfsUsed cache
    file of each volume usually expires, forcing the datanode to redo the du
    operations, which makes the datanode start slowly. Adjusting this property
    can keep the cache file valid for as long as you want.
  </description>
</property>
<property>
  <name>dfs.webhdfs.rest-csrf.enabled</name>
  <value>false</value>
  <description>
    If true, then enables WebHDFS protection against cross-site request forgery
    (CSRF).  The WebHDFS client also uses this property to determine whether or
    not it needs to send the custom CSRF prevention header in its HTTP requests.
  </description>
</property>
<property>
  <name>dfs.webhdfs.rest-csrf.custom-header</name>
  <value>X-XSRF-HEADER</value>
  <description>
    The name of a custom header that HTTP requests must send when protection
    against cross-site request forgery (CSRF) is enabled for WebHDFS by setting
    dfs.webhdfs.rest-csrf.enabled to true.  The WebHDFS client also uses this
    property to determine whether or not it needs to send the custom CSRF
    prevention header in its HTTP requests.
  </description>
</property>
<property>
  <name>dfs.webhdfs.rest-csrf.methods-to-ignore</name>
  <value>GET,OPTIONS,HEAD,TRACE</value>
  <description>
    A comma-separated list of HTTP methods that do not require HTTP requests to
    include a custom header when protection against cross-site request forgery
    (CSRF) is enabled for WebHDFS by setting dfs.webhdfs.rest-csrf.enabled to
    true.  The WebHDFS client also uses this property to determine whether or
    not it needs to send the custom CSRF prevention header in its HTTP requests.
  </description>
</property>
<property>
  <name>dfs.webhdfs.rest-csrf.browser-useragents-regex</name>
  <value>^Mozilla.*,^Opera.*</value>
  <description>
    A comma-separated list of regular expressions used to match against an HTTP
    request's User-Agent header when protection against cross-site request
    forgery (CSRF) is enabled for WebHDFS by setting
    dfs.webhdfs.rest-csrf.enabled to true.  If the incoming User-Agent matches
    any of these regular expressions, then the request is considered to be sent
    by a browser, and therefore CSRF prevention is enforced.  If the request's
    User-Agent does not match any of these regular expressions, then the request
    is considered to be sent by something other than a browser, such as scripted
    automation.  In this case, CSRF is not a potential attack vector, so
    the prevention is not enforced.  This helps achieve backwards-compatibility
    with existing automation that has not been updated to send the CSRF
    prevention header.
  </description>
</property>
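<!-- Usage sketch (hypothetical host and port; adjust to your cluster): when
     dfs.webhdfs.rest-csrf.enabled is true, a browser-like client must add
     the custom header (any value) on methods not listed in
     dfs.webhdfs.rest-csrf.methods-to-ignore, e.g.
       curl -X PUT -H 'X-XSRF-HEADER: ""' 'http://nn:9870/webhdfs/v1/tmp/d?op=MKDIRS'
     while plain GET requests need no extra header. -->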
  <property>
    <name>dfs.xframe.enabled</name>
    <value>true</value>
    <description>
      If true, then enables protection against clickjacking by returning
      the X-FRAME-OPTIONS header value set to SAMEORIGIN.
      Clickjacking protection prevents an attacker from using transparent or
      opaque layers to trick a user into clicking on a button
      or link on another page.
    </description>
  </property>
  <property>
    <name>dfs.xframe.value</name>
    <value>SAMEORIGIN</value>
    <description>
      This configuration value allows the user to specify the value for the
      X-FRAME-OPTIONS header. The possible values for this field are
      DENY, SAMEORIGIN and ALLOW-FROM. Any other value will throw an
      exception when namenode and datanodes are starting up.
    </description>
  </property>
<property>
  <name>dfs.balancer.keytab.enabled</name>
  <value>false</value>
  <description>
    Set to true to enable login using a keytab for Kerberized Hadoop.
  </description>
</property>
<property>
  <name>dfs.balancer.address</name>
  <value>0.0.0.0:0</value>
  <description>
    The hostname used for a keytab based Kerberos login. Keytab based login
    can be enabled with dfs.balancer.keytab.enabled.
  </description>
</property>
<property>
  <name>dfs.balancer.keytab.file</name>
  <value></value>
  <description>
    The keytab file used by the Balancer to login as its
    service principal. The principal name is configured with
    dfs.balancer.kerberos.principal. Keytab based login can be
    enabled with dfs.balancer.keytab.enabled.
  </description>
</property>
<property>
  <name>dfs.balancer.kerberos.principal</name>
  <value></value>
  <description>
    The Balancer principal. This is typically set to
    balancer/_HOST@REALM.TLD. The Balancer will substitute _HOST with its
    own fully qualified hostname at startup. The _HOST placeholder
    allows using the same configuration setting on different servers.
    Keytab based login can be enabled with dfs.balancer.keytab.enabled.
  </description>
</property>
<property>
  <name>dfs.http.client.retry.policy.enabled</name>
  <value>false</value>
  <description>
    If "true", enable the retry policy of WebHDFS client.
    If "false", retry policy is turned off.
    Enabling the retry policy can be quite useful while using WebHDFS to
    copy large files between clusters that could time out, or
    copy files between HA clusters that could fail over during the copy.
  </description>
</property>
<property>
  <name>dfs.http.client.retry.policy.spec</name>
  <value>10000,6,60000,10</value>
  <description>
    Specify a policy of multiple linear random retry for WebHDFS client,
    e.g. given pairs of number of retries and sleep time (n0, t0), (n1, t1),
    ..., the first n0 retries sleep t0 milliseconds on average,
    the following n1 retries sleep t1 milliseconds on average, and so on.
  </description>
</property>
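<!-- Worked example (illustrative): the spec above is read as
     (sleep time in ms, number of retries) pairs, so "10000,6,60000,10"
     means roughly 6 retries sleeping ~10 s each, then 10 retries sleeping
     ~60 s each, i.e. about 11 minutes of retrying before giving up. -->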
<property>
  <name>dfs.http.client.failover.max.attempts</name>
  <value>15</value>
  <description>
    Specify the max number of failover attempts for WebHDFS client
    in case of network exception.
  </description>
</property>
<property>
  <name>dfs.http.client.retry.max.attempts</name>
  <value>10</value>
  <description>
    Specify the max number of retry attempts for WebHDFS client;
    if the difference between retry attempts and failover attempts is
    larger than the max number of retry attempts, there will be no more
    retries.
  </description>
</property>
<property>
  <name>dfs.http.client.failover.sleep.base.millis</name>
  <value>500</value>
  <description>
    Specify the base amount of time in milliseconds upon which the
    exponentially increased sleep time between retries or failovers
    is calculated for WebHDFS client.
  </description>
</property>
<property>
  <name>dfs.http.client.failover.sleep.max.millis</name>
  <value>15000</value>
  <description>
    Specify the upper bound of sleep time in milliseconds between
    retries or failovers for WebHDFS client.
  </description>
</property>
<property>
  <name>dfs.namenode.hosts.provider.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager</value>
  <description>
    The class that provides access to host files.
    org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager is used
    by default, which loads files specified by dfs.hosts and dfs.hosts.exclude.
    If org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager is
    used, it will load the JSON file defined in dfs.hosts.
    To change the class name, an NN restart is required. "dfsadmin -refreshNodes"
    only refreshes the configuration files used by the class.
  </description>
</property>
<property>
  <name>datanode.https.port</name>
  <value>50475</value>
  <description>
    HTTPS port for DataNode.
  </description>
</property>
<property>
  <name>dfs.balancer.dispatcherThreads</name>
  <value>200</value>
  <description>
    Size of the thread pool for the HDFS balancer block mover.
    dispatchExecutor
  </description>
</property>
<property>
  <name>dfs.balancer.movedWinWidth</name>
  <value>5400000</value>
  <description>
    Window of time in ms for the HDFS balancer tracking blocks and their
    locations.
  </description>
</property>
<property>
  <name>dfs.balancer.moverThreads</name>
  <value>1000</value>
  <description>
    Thread pool size for executing block moves.
    moverThreadAllocator
  </description>
</property>
<property>
  <name>dfs.balancer.max-size-to-move</name>
  <value>10737418240</value>
  <description>
    Maximum number of bytes that can be moved by the balancer in a single
    thread.
  </description>
</property>
<property>
  <name>dfs.balancer.getBlocks.min-block-size</name>
  <value>10485760</value>
  <description>
    Minimum block threshold size in bytes to ignore when fetching a source's
    block list.
  </description>
</property>
<property>
  <name>dfs.balancer.getBlocks.size</name>
  <value>2147483648</value>
  <description>
    Total size in bytes of Datanode blocks to get when fetching a source's
    block list.
  </description>
</property>
<property>4 f/ s% l6 C3 B  j" ]3 `
  <name>dfs.balancer.block-move.timeout</name>
/ j3 B% l( Z, L! O  j  <value>0</value># U: T$ T, H2 y9 q& W/ S
  <description>- x7 m) W4 ]5 r8 o  h2 `6 {; ?$ d& K
    Maximum amount of time in milliseconds for a block to move. If this is set5 u( o( z* j  W; C) q3 |5 A
    greater than 0, Balancer will stop waiting for a block move completion0 [  G6 W* y9 K2 z" n/ g. {
    after this time. In typical clusters, a 3 to 5 minute timeout is reasonable.% `5 j) N6 U6 d% z
    If timeout happens to a large proportion of block moves, this needs to be8 D/ t' P% r, |% \' \
    increased. It could also be that too much work is dispatched and many nodes5 M- n# p) s8 b9 u! K
    are constantly exceeding the bandwidth limit as a result. In that case,
' I6 W) _1 N( v- V1 \, }) `    other balancer parameters might need to be adjusted.
# \2 G3 i3 H- x    It is disabled (0) by default.  s# A  C$ @" t; ]; S$ Y0 P- l0 D
  </description>! p  B* [* }& a! q# P" I: B
</property>' i) |2 Q% p: b6 f& W! C! Y
<property>. t' w( n- G8 N$ n
  <name>dfs.balancer.max-no-move-interval</name>$ D& l$ g4 |; _  b0 H
  <value>60000</value>7 o( a/ G; D0 C, J
  <description>. O" N2 i& |' u
    If this specified amount of time has elapsed and no block has been moved
8 w* X% V: A* w    out of a source DataNode, on more effort will be made to move blocks out of
7 {4 }* Y. N7 Z! J) b    this DataNode in the current Balancer iteration." j: R8 o4 o# i
  </description>
$ n' Q( ]/ i/ h" H/ V+ f: x& v</property>5 w- |  G) t+ f# Y; H. ^( V5 s
<property>
% J4 C! q+ _7 i* B; L1 i. N* _8 v  <name>dfs.balancer.max-iteration-time</name>9 M! [" X# q* I+ {( j
  <value>1200000</value>
" u& _# u2 C: `, i1 X( i9 y  <description>
8 _) M; c1 @7 X( O7 f" h    Maximum amount of time while an iteration can be run by the Balancer. After7 A" w2 D6 G  y, m+ h$ h+ I
    this time the Balancer will stop the iteration, and reevaluate the work
9 |; c' B4 A. r7 _    needs to be done to Balance the cluster. The default value is 20 minutes.. m  {2 H1 w' b
  </description>8 _& l5 i5 {; c- y+ R
</property>3 ]! D8 B; Y' T. ]( v4 ?2 ?  n
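In practice these balancer defaults are overridden in hdfs-site.xml rather than edited in hdfs-default.xml. A minimal sketch of such an override — the values below are illustrative assumptions, not recommendations:

```xml
<property>
  <name>dfs.balancer.block-move.timeout</name>
  <!-- Illustrative: give up on a stuck move after 5 minutes instead of waiting forever -->
  <value>300000</value>
</property>
<property>
  <name>dfs.balancer.moverThreads</name>
  <!-- Illustrative: halve the mover thread pool on a small cluster -->
  <value>500</value>
</property>
```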
<property>
  <name>dfs.block.invalidate.limit</name>
  <value>1000</value>
  <description>
    The maximum number of invalidate blocks sent by the namenode to a datanode
    per heartbeat deletion command. This property works with
    "dfs.namenode.invalidate.work.pct.per.iteration" to throttle block
    deletions.
  </description>
</property>
<property>
  <name>dfs.block.misreplication.processing.limit</name>
  <value>10000</value>
  <description>
    Maximum number of blocks to process for initializing replication queues.
  </description>
</property>
<property>
  <name>dfs.block.placement.ec.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
  <description>
    Placement policy class for striped files.
    Defaults to BlockPlacementPolicyRackFaultTolerant.class
  </description>
</property>
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault</value>
  <description>
    Class representing the block placement policy for non-striped files.
    There are four block placement policies currently supported:
    BlockPlacementPolicyDefault, BlockPlacementPolicyWithNodeGroup,
    BlockPlacementPolicyRackFaultTolerant and BlockPlacementPolicyWithUpgradeDomain.
    BlockPlacementPolicyDefault chooses the desired number of targets
    for placing block replicas in a default way. BlockPlacementPolicyWithNodeGroup
    places block replicas in an environment with a node-group layer. BlockPlacementPolicyRackFaultTolerant
    spreads the replicas across more racks.
    BlockPlacementPolicyWithUpgradeDomain places block replicas honoring the upgrade domain policy.
    The details of placing replicas are documented in the javadoc of the corresponding policy classes.
    The default policy is BlockPlacementPolicyDefault, and the corresponding class is
    org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.
  </description>
</property>
<property>
  <name>dfs.blockreport.incremental.intervalMsec</name>
  <value>0</value>
  <description>
    If set to a positive integer, the value in ms to wait between sending
    incremental block reports from the Datanode to the Namenode.
  </description>
</property>
<property>
  <name>dfs.checksum.type</name>
  <value>CRC32C</value>
  <description>
    Checksum type
  </description>
</property>
<property>
  <name>dfs.checksum.combine.mode</name>
  <value>MD5MD5CRC</value>
  <description>
    Defines how lower-level chunk/block checksums are combined into file-level
    checksums; the original MD5MD5CRC mode is not comparable between files
    with different block layouts, while modes like COMPOSITE_CRC are
    comparable independently of block layout.
  </description>
</property>
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <value>5</value>
  <description>
    Number of retries to use when finding the next block during HDFS writes.
  </description>
</property>
<property>
  <name>dfs.client.failover.proxy.provider</name>
  <value></value>
  <description>
    The prefix (plus a required nameservice ID) for the class name of the
    configured Failover proxy provider for the host.  For more detailed
    information, please consult the "Configuration Details" section of
    the HDFS High Availability documentation.
  </description>
</property>
<property>
  <name>dfs.client.failover.random.order</name>
  <value>false</value>
  <description>
    Determines if the failover proxies are picked in random order instead of the
    configured order. The prefix can be used with an optional nameservice ID
    (of the form dfs.client.failover.random.order[.nameservice]) in case multiple
    nameservices exist and random order should be enabled for specific
    nameservices.
  </description>
</property>
<property>
  <name>dfs.client.key.provider.cache.expiry</name>
  <value>864000000</value>
  <description>
    DFS client security key cache expiration in milliseconds.
  </description>
</property>
<property>
  <name>dfs.client.max.block.acquire.failures</name>
  <value>3</value>
  <description>
    Maximum failures allowed when trying to get block information from a specific datanode.
  </description>
</property>
<property>
  <name>dfs.client.read.prefetch.size</name>
  <value></value>
  <description>
    The number of bytes the DFSClient will fetch from the Namenode
    during a read operation.  Defaults to 10 * ${dfs.blocksize}.
  </description>
</property>
<property>
  <name>dfs.client.read.short.circuit.replica.stale.threshold.ms</name>
  <value>1800000</value>
  <description>
    Threshold in milliseconds for read entries during short-circuit local reads.
  </description>
</property>
<property>
  <name>dfs.client.read.shortcircuit.buffer.size</name>
  <value>1048576</value>
  <description>
    Buffer size in bytes for short-circuit local reads.
  </description>
</property>
<property>
  <name>dfs.client.read.striped.threadpool.size</name>
  <value>18</value>
  <description>
    The maximum number of threads used for parallel reading
    in striped layout.
  </description>
</property>
<property>
  <name>dfs.client.replica.accessor.builder.classes</name>
  <value></value>
  <description>
    Comma-separated classes for building ReplicaAccessor.  If the classes
    are specified, the client will use an external BlockReader that uses the
    ReplicaAccessor built by the builder.
  </description>
</property>
<property>
  <name>dfs.client.retry.interval-ms.get-last-block-length</name>
  <value>4000</value>
  <description>
    Retry interval in milliseconds to wait between retries in getting
    block lengths from the datanodes.
  </description>
</property>
<property>
  <name>dfs.client.retry.max.attempts</name>
  <value>10</value>
  <description>
    Max retry attempts for DFSClient talking to namenodes.
  </description>
</property>
<property>
  <name>dfs.client.retry.policy.enabled</name>
  <value>false</value>
  <description>
    If true, turns on the DFSClient retry policy.
  </description>
</property>
<property>
  <name>dfs.client.retry.policy.spec</name>
  <value>10000,6,60000,10</value>
  <description>
    Set to pairs of timeouts and retries for DFSClient.
  </description>
</property>
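The spec string is read as (sleep-time-ms, number-of-retries) pairs. A small sketch of that parsing — the helper name is ours, not Hadoop's:

```python
def parse_retry_policy_spec(spec: str) -> list:
    """Parse a dfs.client.retry.policy.spec-style string into
    (sleep_time_ms, number_of_retries) pairs."""
    fields = [int(f) for f in spec.split(",")]
    if len(fields) % 2 != 0:
        raise ValueError("spec must contain an even number of fields")
    # Even positions are sleep times, odd positions are retry counts.
    return list(zip(fields[0::2], fields[1::2]))

# The default "10000,6,60000,10" reads as: retry 6 times sleeping 10 s,
# then 10 more times sleeping 60 s.
print(parse_retry_policy_spec("10000,6,60000,10"))  # [(10000, 6), (60000, 10)]
```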
<property>
  <name>dfs.client.retry.times.get-last-block-length</name>
  <value>3</value>
  <description>
    Number of retries for calls to fetchLocatedBlocksAndGetLastBlockLength().
  </description>
</property>
<property>
  <name>dfs.client.retry.window.base</name>
  <value>3000</value>
  <description>
    Base time window in ms for DFSClient retries.  For each retry attempt,
    this value is extended linearly (e.g. 3000 ms for first attempt and
    first retry, 6000 ms for second retry, 9000 ms for third retry, etc.).
  </description>
</property>
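A simplified model of that linear extension — the real client also picks a random point inside the window, so treat this sketch as the window's upper bound only:

```python
def retry_window_ms(base_ms: int, failures: int) -> int:
    """Width of the retry wait window after `failures` prior failures:
    the base window extended linearly per retry (simplified model)."""
    return base_ms * (failures + 1)

# With the default base of 3000 ms, window widths grow 3000, 6000, 9000, ...
print([retry_window_ms(3000, f) for f in range(3)])  # [3000, 6000, 9000]
```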
<property>
  <name>dfs.client.socket-timeout</name>
  <value>60000</value>
  <description>
    Default timeout value in milliseconds for all sockets.
  </description>
</property>
<property>
  <name>dfs.client.socketcache.capacity</name>
  <value>16</value>
  <description>
    Socket cache capacity (in entries) for short-circuit reads.
  </description>
</property>
<property>
  <name>dfs.client.socketcache.expiryMsec</name>
  <value>3000</value>
  <description>
    Socket cache expiration for short-circuit reads in msec.
  </description>
</property>
<property>
  <name>dfs.client.test.drop.namenode.response.number</name>
  <value>0</value>
  <description>
    The number of Namenode responses dropped by DFSClient for each RPC call.  Used
    for testing the NN retry cache.
  </description>
</property>
<property>
  <name>dfs.client.hedged.read.threadpool.size</name>
  <value>0</value>
  <description>
    Support 'hedged' reads in DFSClient. To enable this feature, set the parameter
    to a positive number. The threadpool size is how many threads to dedicate
    to the running of these 'hedged', concurrent reads in your client.
  </description>
</property>
<property>
  <name>dfs.client.hedged.read.threshold.millis</name>
  <value>500</value>
  <description>
    Configure 'hedged' reads in DFSClient. This is the number of milliseconds
    to wait before starting up a 'hedged' read.
  </description>
</property>
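The hedged-read idea can be sketched with a thread pool: start the primary read, and if it has not returned within the threshold, race a second read against it and take whichever finishes first. This illustrates the strategy only, not DFSClient's implementation:

```python
import concurrent.futures as cf
import time

def hedged_read(primary, backup, threshold_ms: int):
    """Run `primary`; if it has not finished within threshold_ms,
    also launch `backup` and return the first result to arrive."""
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(primary)]
        done, _ = cf.wait(futures, timeout=threshold_ms / 1000.0)
        if not done:
            # Threshold elapsed: hedge with a second, concurrent read.
            futures.append(pool.submit(backup))
            done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()

slow = lambda: (time.sleep(0.5), "slow")[1]   # stand-in for a slow replica
fast = lambda: "fast"                          # stand-in for a healthy replica
print(hedged_read(slow, fast, threshold_ms=50))  # typically "fast": the hedge wins
```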
<property>
  <name>dfs.client.write.byte-array-manager.count-limit</name>
  <value>2048</value>
  <description>
    The maximum number of arrays allowed for each array length.
  </description>
</property>
<property>
  <name>dfs.client.write.byte-array-manager.count-reset-time-period-ms</name>
  <value>10000</value>
  <description>
    The time period in milliseconds that the allocation count for each array length is
    reset to zero if there is no increment.
  </description>
</property>
<property>
  <name>dfs.client.write.byte-array-manager.count-threshold</name>
  <value>128</value>
  <description>
    The count threshold for each array length so that a manager is created only after the
    allocation count exceeds the threshold. In other words, the particular array length
    is not managed until the allocation count exceeds the threshold.
  </description>
</property>
<property>
  <name>dfs.client.write.byte-array-manager.enabled</name>
  <value>false</value>
  <description>
    If true, enables the byte array manager used by DFSOutputStream.
  </description>
</property>
<property>
  <name>dfs.client.write.max-packets-in-flight</name>
  <value>80</value>
  <description>
    The maximum number of DFSPackets allowed in flight.
  </description>
</property>
<property>
  <name>dfs.content-summary.limit</name>
  <value>5000</value>
  <description>
    The maximum content summary counts allowed in one locking period. 0 or a negative number
    means no limit (i.e. no yielding).
  </description>
</property>
<property>
  <name>dfs.content-summary.sleep-microsec</name>
  <value>500</value>
  <description>
    The length of time in microseconds to put the thread to sleep, between reacquiring the locks
    in content summary computation.
  </description>
</property>
<property>
  <name>dfs.data.transfer.client.tcpnodelay</name>
  <value>true</value>
  <description>
    If true, set TCP_NODELAY on sockets for transferring data from the DFS client.
  </description>
</property>
<property>
  <name>dfs.data.transfer.server.tcpnodelay</name>
  <value>true</value>
  <description>
    If true, set TCP_NODELAY on sockets for transferring data between Datanodes.
  </description>
</property>
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>50</value>
  <description>
    Maximum number of threads for Datanode balancer pending moves.  This
    value is reconfigurable via the "dfsadmin -reconfig" command.
  </description>
</property>
<property>
  <name>dfs.datanode.fsdataset.factory</name>
  <value></value>
  <description>
    The class name for the underlying storage that stores replicas for a
    Datanode.  Defaults to
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.
  </description>
</property>
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value></value>
  <description>
    The class name of the policy for choosing volumes in the list of
    directories.  Defaults to
    org.apache.hadoop.hdfs.server.datanode.fsdataset.RoundRobinVolumeChoosingPolicy.
    If you would like to take into account available disk space, set the
    value to
    "org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy".
  </description>
</property>
<property>
  <name>dfs.datanode.hostname</name>
  <value></value>
  <description>
    Optional.  The hostname for the Datanode containing this
    configuration file.  Will be different for each machine.
    Defaults to current hostname.
  </description>
</property>
<property>
  <name>dfs.datanode.lazywriter.interval.sec</name>
  <value>60</value>
  <description>
    Interval in seconds for Datanodes for lazy persist writes.
  </description>
</property>
<property>
  <name>dfs.datanode.network.counts.cache.max.size</name>
  <value>2147483647</value>
  <description>
    The maximum number of entries the datanode per-host network error
    count cache may contain.
  </description>
</property>
<property>
  <name>dfs.datanode.oob.timeout-ms</name>
  <value>1500,0,0,0</value>
  <description>
    Timeout value when sending an OOB response for each OOB type, which are
    OOB_RESTART, OOB_RESERVED1, OOB_RESERVED2, and OOB_RESERVED3,
    respectively.  Currently, only OOB_RESTART is used.
  </description>
</property>
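Since the value is positional, it can be decoded into a per-type map. A small sketch — the helper name is ours, for illustration:

```python
OOB_TYPES = ["OOB_RESTART", "OOB_RESERVED1", "OOB_RESERVED2", "OOB_RESERVED3"]

def parse_oob_timeouts(value: str) -> dict:
    """Map each OOB type to its timeout in ms, by position in the list."""
    timeouts = [int(v) for v in value.split(",")]
    return dict(zip(OOB_TYPES, timeouts))

print(parse_oob_timeouts("1500,0,0,0")["OOB_RESTART"])  # 1500
```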
<property>
  <name>dfs.datanode.parallel.volumes.load.threads.num</name>
  <value></value>
  <description>
    Maximum number of threads to use for upgrading data directories.
    The default value is the number of storage directories in the
    DataNode.
  </description>
</property>
<property>
  <name>dfs.datanode.ram.disk.replica.tracker</name>
  <value></value>
  <description>
    Name of the class implementing the RamDiskReplicaTracker interface.
    Defaults to
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker.
  </description>
</property>
<property>
  <name>dfs.datanode.restart.replica.expiration</name>
  <value>50</value>
  <description>
    During shutdown for restart, the amount of time in seconds budgeted for
    datanode restart.
  </description>
</property>
<property>
  <name>dfs.datanode.socket.reuse.keepalive</name>
  <value>4000</value>
  <description>
    The window of time in ms before the DataXceiver closes a socket for a
    single request.  If a second request occurs within that window, the
    socket can be reused.
  </description>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
  <description>
    Timeout in ms for clients' socket writes to DataNodes.
  </description>
</property>
<property>
  <name>dfs.datanode.sync.behind.writes.in.background</name>
  <value>false</value>
  <description>
    If set to true, then the sync_file_range() system call will occur
    asynchronously.  This property is only valid when the property
    dfs.datanode.sync.behind.writes is true.
  </description>
</property>
<property>
  <name>dfs.datanode.transferTo.allowed</name>
  <value>true</value>
  <description>
    If false, break block transfers on 32-bit machines greater than
    or equal to 2GB into smaller chunks.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value></value>
  <description>
    A list of scripts or Java classes which will be used to fence
    the Active NameNode during a failover.  See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>
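For reference, a common shape for this setting (described in the HDFS HA documentation) tries SSH-based fencing first and falls back to a no-op shell command; the key path below is an assumption for illustration:

```xml
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <!-- Illustrative path; use the key readable by the ZKFC user -->
  <value>/home/hdfs/.ssh/id_rsa</value>
</property>
```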
<property>
  <name>dfs.ha.standby.checkpoints</name>
  <value>true</value>
  <description>
    If true, a NameNode in Standby state periodically takes a checkpoint
    of the namespace, saves it to its local storage, and then uploads it to
    the remote NameNode.
  </description>
</property>
<property>
  <name>dfs.ha.zkfc.port</name>
  <value>8019</value>
  <description>
    The port number that the zookeeper failover controller RPC
    server binds to.
  </description>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/tmp/hadoop/dfs/journalnode/</value>
  <description>
    The directory where the journal edit files are stored.
  </description>
</property>
<property>
  <name>dfs.journalnode.enable.sync</name>
  <value>true</value>
  <description>
    If true, the journal nodes will sync with each other. The journal nodes
    will periodically gossip with other journal nodes to compare edit log
    manifests and if they detect any missing log segment, they will download
    it from the other journal nodes.
  </description>
</property>
<property>
  <name>dfs.journalnode.sync.interval</name>
  <value>120000</value>
  <description>
    Time interval, in milliseconds, between two Journal Node syncs.
    This configuration takes effect only if the journalnode sync is enabled
    by setting the configuration parameter dfs.journalnode.enable.sync to true.
  </description>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value></value>
  <description>
    Kerberos SPNEGO principal name used by the journal node.
  </description>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value></value>
  <description>
    Kerberos principal name for the journal node.
  </description>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value></value>
  <description>
    Kerberos keytab file for the journal node.
  </description>
</property>
<property>
  <name>dfs.ls.limit</name>
  <value>1000</value>
  <description>
    Limit the number of files printed by ls. If less than or equal to
    zero, at most DFS_LIST_LIMIT_DEFAULT (= 1000) will be printed.
  </description>
</property>
<property>
  <name>dfs.mover.movedWinWidth</name>
  <value>5400000</value>
  <description>
    The minimum time interval, in milliseconds, that a block can be
    moved to another location again.
  </description>
</property>
<property>
  <name>dfs.mover.moverThreads</name>
  <value>1000</value>
  <description>
    Configure the balancer's mover thread pool size.
  </description>
</property>
<property>
  <name>dfs.mover.retry.max.attempts</name>
  <value>10</value>
  <description>
    The maximum number of retries before the mover considers the
    move failed.
  </description>
</property>
<property>
  <name>dfs.mover.keytab.enabled</name>
  <value>false</value>
  <description>
    Set to true to enable login using a keytab for Kerberized Hadoop.
  </description>
</property>
<property>
  <name>dfs.mover.address</name>
  <value>0.0.0.0:0</value>
  <description>
    The hostname used for a keytab based Kerberos login. Keytab based login
    can be enabled with dfs.mover.keytab.enabled.
  </description>
</property>
<property>
  <name>dfs.mover.keytab.file</name>
  <value></value>
  <description>
    The keytab file used by the Mover to login as its
    service principal. The principal name is configured with
    dfs.mover.kerberos.principal. Keytab based login can be
    enabled with dfs.mover.keytab.enabled.
  </description>
</property>
<property>
  <name>dfs.mover.kerberos.principal</name>
  <value></value>
  <description>
    The Mover principal. This is typically set to
    mover/_HOST@REALM.TLD. The Mover will substitute _HOST with its
    own fully qualified hostname at startup. The _HOST placeholder
    allows using the same configuration setting on different servers.
    Keytab based login can be enabled with dfs.mover.keytab.enabled.
  </description>
</property>
<property>
  <name>dfs.mover.max-no-move-interval</name>
  <value>60000</value>
  <description>
    If this specified amount of time has elapsed and no block has been moved
    out of a source DataNode, no more effort will be made to move blocks out of
    this DataNode in the current Mover iteration.
  </description>
</property>
<property>
  <name>dfs.namenode.audit.log.async</name>
  <value>false</value>
  <description>
    If true, enables asynchronous audit log.
  </description>
</property>
<property>
  <name>dfs.namenode.audit.log.token.tracking.id</name>
  <value>false</value>
  <description>
    If true, adds a tracking ID for all audit log events.
  </description>
</property>
<property>
  <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
  <value>0.6</value>
  <description>
    Only used when the dfs.block.replicator.classname is set to
    org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy.
    Special value between 0 and 1, noninclusive.  Increases chance of
    placing blocks on Datanodes with less disk space used.
  </description>
</property>
<property>
  <name>dfs.namenode.backup.dnrpc-address</name>
  <value></value>
  <description>
    Service RPC address for the backup Namenode.
  </description>
</property>
<property>
  <name>dfs.namenode.delegation.token.always-use</name>
  <value>false</value>
  <description>
    For testing.  Setting to true always allows the DT secret manager
    to be used, even if security is disabled.
  </description>
</property>
<property>
  <name>dfs.namenode.edits.asynclogging</name>
  <value>true</value>
  <description>
    If set to true, enables asynchronous edit logs in the Namenode.  If set
    to false, the Namenode uses the traditional synchronous edit logs.
  </description>
</property>
<property>
  <name>dfs.namenode.edits.dir.minimum</name>
  <value>1</value>
  <description>
    dfs.namenode.edits.dir includes both required directories
    (specified by dfs.namenode.edits.dir.required) and optional directories.
    The number of usable optional directories must be greater than or equal
    to this property.  If the number of usable optional directories falls
    below dfs.namenode.edits.dir.minimum, HDFS will issue an error.
    This property defaults to 1.
  </description>
</property>
<property>
  <name>dfs.namenode.edits.journal-plugin</name>
  <value></value>
  <description>
    When FSEditLog is creating JournalManagers from dfs.namenode.edits.dir,
    and it encounters a URI with a schema different from "file", it loads the
    name of the implementing class from
    "dfs.namenode.edits.journal-plugin.[schema]". This class must implement
    JournalManager and have a constructor which takes (Configuration, URI).
  </description>
</property>
<property>
  <name>dfs.namenode.file.close.num-committed-allowed</name>
  <value>0</value>
  <description>
    Normally a file can only be closed when all its blocks are committed.
    When this value is set to a positive integer N, a file can be closed
    when N blocks are committed and the rest complete.
  </description>
</property>
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value></value>
  <description>
    Name of class to use for delegating HDFS authorization.
  </description>
</property>
<property>
  <name>dfs.namenode.inode.attributes.provider.bypass.users</name>
  <value></value>
  <description>
    A list of user principals (in secure cluster) or user names (in insecure
    cluster) for whom the external attributes provider will be bypassed for all
    operations. This means file attributes stored in HDFS instead of the
    external provider will be used for permission checking and be returned when
    requested.
  </description>
</property>
<property>
  <name>dfs.namenode.max-num-blocks-to-log</name>
  <value>1000</value>
  <description>
    Puts a limit on the number of blocks printed to the log by the Namenode
    after a block report.
  </description>
</property>
<property>
  <name>dfs.namenode.max.op.size</name>
  <value>52428800</value>
  <description>
    Maximum opcode size in bytes.
  </description>
</property>
<property>
  <name>dfs.namenode.missing.checkpoint.periods.before.shutdown</name>
  <value>3</value>
  <description>
    The number of checkpoint period windows (as defined by the property
    dfs.namenode.checkpoint.period) allowed by the Namenode to perform
    saving the namespace before shutdown.
  </description>
</property>
<property>
  <name>dfs.namenode.name.cache.threshold</name>
  <value>10</value>
  <description>
    Frequently accessed files that are accessed more times than this
    threshold are cached in the FSDirectory nameCache.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.max-streams</name>
  <value>2</value>
  <description>
    Hard limit for the number of highest-priority replication streams.
  </description>
</property>
<property>
  <name>dfs.namenode.replication.max-streams-hard-limit</name>
  <value>4</value>
  <description>
    Hard limit for all replication streams.
  </description>
</property>
<property>
  <name>dfs.namenode.reconstruction.pending.timeout-sec</name>
  <value>300</value>
  <description>
    Timeout in seconds for block reconstruction.  If this value is 0 or less,
    then it will default to 5 minutes.
  </description>
</property>
<property>
  <name>dfs.namenode.stale.datanode.minimum.interval</name>
  <value>3</value>
  <description>
    Minimum number of missed heartbeat intervals for a datanode to
    be marked stale by the Namenode.  The actual interval is calculated as
    (dfs.namenode.stale.datanode.minimum.interval * dfs.heartbeat.interval)
    in seconds.  If this value is greater than the property
    dfs.namenode.stale.datanode.interval, then the calculated value above
    is used.
  </description>
</property>
<property>
  <name>dfs.namenode.storageinfo.defragment.timeout.ms</name>
  <value>4</value>
  <description>
    Timeout value in ms for the StorageInfo compaction run.
  </description>
</property>
<property>
  <name>dfs.namenode.storageinfo.defragment.interval.ms</name>
  <value>600000</value>
  <description>
    The thread for checking the StorageInfo for defragmentation will
    run periodically.  The time between runs is determined by this
    property.
  </description>
</property>
<property>
  <name>dfs.namenode.storageinfo.defragment.ratio</name>
  <value>0.75</value>
  <description>
    The defragmentation threshold for the StorageInfo.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshot.capture.openfiles</name>
  <value>false</value>
  <description>
    If true, snapshots taken will have an immutable shared copy of
    the open files that have valid leases. Even after the open files
    grow or shrink in size, the snapshot will always have the previous
    point-in-time version of the open files, just like all other
    closed files. Default is false.
    Note: The file length captured for open files in the snapshot is
    what's recorded in the NameNode at the time of the snapshot and may
    be shorter than what the client has written until then. In order
    to capture the latest length, the client can call hflush/hsync
    with the flag SyncFlag.UPDATE_LENGTH on the open file handles.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshot.skip.capture.accesstime-only-change</name>
  <value>false</value>
  <description>
    If the accessTime of a file/directory changed but there is no other
    modification made to the file/directory, the changed accesstime will
    not be captured in the next snapshot. However, if there is other
    modification made to the file/directory, the latest access time will be
    captured together with the modification in the next snapshot.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshotdiff.allow.snap-root-descendant</name>
  <value>true</value>
  <description>
    If enabled, the snapshotDiff command can be run for any descendant
    directory under a snapshot root directory and the diff calculation will
    be scoped to the given descendant directory. Otherwise, the snapshot diff
    command can only be run for a snapshot root directory.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshotdiff.listing.limit</name>
  <value>1000</value>
  <description>
    Limit the number of entries generated by getSnapshotDiffReportListing
    within one rpc call to the namenode. If less than or equal to zero, at
    most DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT (= 1000) will be
    sent across to the client within one rpc call.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshot.max.limit</name>
  <value>65536</value>
  <description>
    Limits the maximum number of snapshots allowed per snapshottable
    directory. If the configuration is not set, the default limit
    for the maximum number of snapshots allowed is 65536.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshot.skiplist.max.levels</name>
  <value>0</value>
  <description>
    Maximum number of skip levels to be maintained in the skip list for
    storing directory snapshot diffs. By default, it is set to 0 and a linear
    list will be used to store the directory snapshot diffs.
  </description>
</property>
<property>
  <name>dfs.namenode.snapshot.skiplist.interval</name>
  <value>10</value>
  <description>
    The interval after which the skip levels will be formed in the skip list
    for storing directory snapshot diffs. By default, the value is set to 10.
  </description>
</property>
<property>
  <name>dfs.pipeline.ecn</name>
  <value>false</value>
  <description>
    If true, allows ECN (explicit congestion notification) from the
    Datanode.
  </description>
</property>
<property>
  <name>dfs.qjournal.accept-recovery.timeout.ms</name>
  <value>120000</value>
  <description>
    Quorum timeout in milliseconds during the accept phase of
    recovery/synchronization for a specific segment.
  </description>
</property>
<property>
  <name>dfs.qjournal.finalize-segment.timeout.ms</name>
  <value>120000</value>
  <description>
    Quorum timeout in milliseconds during finalizing for a specific
    segment.
  </description>
</property>
<property>
  <name>dfs.qjournal.get-journal-state.timeout.ms</name>
  <value>120000</value>
  <description>
    Timeout in milliseconds when calling getJournalState() on
    JournalNodes.
  </description>
</property>
<property>
  <name>dfs.qjournal.new-epoch.timeout.ms</name>
  <value>120000</value>
  <description>
    Timeout in milliseconds when getting an epoch number for write
    access to JournalNodes.
  </description>
</property>
<property>
  <name>dfs.qjournal.prepare-recovery.timeout.ms</name>
  <value>120000</value>
  <description>
    Quorum timeout in milliseconds during the preparation phase of
    recovery/synchronization for a specific segment.
  </description>
</property>
<property>
  <name>dfs.qjournal.queued-edits.limit.mb</name>
  <value>10</value>
  <description>
    Queue size in MB for quorum journal edits.
  </description>
</property>
<property>
  <name>dfs.qjournal.select-input-streams.timeout.ms</name>
  <value>20000</value>
  <description>
    Timeout in milliseconds for accepting streams from JournalManagers.
  </description>
</property>
<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>20000</value>
  <description>
    Quorum timeout in milliseconds for starting a log segment.
  </description>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>20000</value>
  <description>
    Write timeout in milliseconds when writing to a quorum of remote
    journals.
  </description>
</property>
<property>
  <name>dfs.quota.by.storage.type.enabled</name>
  <value>true</value>
  <description>
    If true, enables quotas based on storage type.
  </description>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value></value>
  <description>
    Kerberos principal name for the Secondary NameNode.
  </description>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value></value>
  <description>
    Kerberos keytab file for the Secondary NameNode.
  </description>
</property>
<property>
  <name>dfs.web.authentication.filter</name>
  <value>org.apache.hadoop.hdfs.web.AuthFilter</value>
  <description>
    Authentication filter class used for WebHDFS.
  </description>
</property>
<property>
  <name>dfs.web.authentication.simple.anonymous.allowed</name>
  <value></value>
  <description>
    If true, allow anonymous user to access WebHDFS. Set to
    false to disable anonymous authentication.
  </description>
</property>
<property>
  <name>dfs.web.ugi</name>
  <value></value>
  <description>
    dfs.web.ugi is deprecated. Use hadoop.http.staticuser.user instead.
  </description>
</property>
<property>
  <name>dfs.webhdfs.netty.high.watermark</name>
  <value>65535</value>
  <description>
    High watermark configuration to Netty for Datanode WebHdfs.
  </description>
</property>
<property>
  <name>dfs.webhdfs.netty.low.watermark</name>
  <value>32768</value>
  <description>
    Low watermark configuration to Netty for Datanode WebHdfs.
  </description>
</property>
<property>
  <name>dfs.webhdfs.oauth2.access.token.provider</name>
  <value></value>
  <description>
    Access token provider class for WebHDFS using OAuth2.
    Defaults to org.apache.hadoop.hdfs.web.oauth2.ConfCredentialBasedAccessTokenProvider.
  </description>
</property>
<property>
  <name>dfs.webhdfs.oauth2.client.id</name>
  <value></value>
  <description>
    Client id used to obtain access token with either credential or
    refresh token.
  </description>
</property>
<property>
  <name>dfs.webhdfs.oauth2.enabled</name>
  <value>false</value>
  <description>
    If true, enables OAuth2 in WebHDFS.
  </description>
</property>
<property>
  <name>dfs.webhdfs.oauth2.refresh.url</name>
  <value></value>
  <description>
    URL against which to post for obtaining bearer token with
    either credential or refresh token.
  </description>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value></value>
  <description>
    Keystore key password for HTTPS SSL configuration
  </description>
</property>
<property>
  <name>ssl.server.keystore.location</name>
  <value></value>
  <description>
    Keystore location for HTTPS SSL configuration
  </description>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value></value>
  <description>
    Keystore password for HTTPS SSL configuration
  </description>
</property>
<property>
  <name>ssl.server.truststore.location</name>
  <value></value>
  <description>
    Truststore location for HTTPS SSL configuration
  </description>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value></value>
  <description>
    Truststore password for HTTPS SSL configuration
  </description>
</property>
<!--Disk balancer properties-->
  <property>
    <name>dfs.disk.balancer.max.disk.throughputInMBperSec</name>
    <value>10</value>
    <description>Maximum disk bandwidth used by the diskbalancer
      during read from a source disk. The unit is MB/sec.
    </description>
  </property>
  <property>
    <name>dfs.disk.balancer.block.tolerance.percent</name>
    <value>10</value>
    <description>
      When a disk balancer copy operation is proceeding, the datanode is still
      active, so it might not be possible to move exactly the specified
      amount of data. This tolerance allows us to define a percentage which
      defines a good-enough move.
    </description>
  </property>
  <property>
    <name>dfs.disk.balancer.max.disk.errors</name>
    <value>5</value>
    <description>
      During a block move from a source to destination disk, we might
      encounter various errors. This defines how many errors we can tolerate
      before we declare a move between 2 disks (or a step) has failed.
    </description>
  </property>
  <property>
    <name>dfs.disk.balancer.plan.valid.interval</name>
    <value>1d</value>
    <description>
      Maximum amount of time a disk balancer plan is valid. This setting
      supports multiple time unit suffixes as described in
      dfs.heartbeat.interval. If no suffix is specified then milliseconds
      is assumed.
    </description>
  </property>
  <property>
    <name>dfs.disk.balancer.enabled</name>
    <value>true</value>
    <description>
      This enables the diskbalancer feature on a cluster. By default, disk
      balancer is enabled.
    </description>
  </property>
  <property>
    <name>dfs.disk.balancer.plan.threshold.percent</name>
    <value>10</value>
    <description>
      The percentage threshold value for volume Data Density in a plan.
      If the absolute value of a volume's Data Density exceeds the
      threshold value on a node, the volumes corresponding to those
      disks should be balanced in the plan. The default value is 10.
    </description>
  </property>
  <property>
    <name>dfs.namenode.provided.enabled</name>
    <value>false</value>
    <description>
      Enables the Namenode to handle provided storages.
    </description>
  </property>
  <property>
    <name>dfs.provided.storage.id</name>
    <value>DS-PROVIDED</value>
    <description>
      The storage ID used for provided stores.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.class</name>
    <value>org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap</value>
    <description>
      The class that is used to specify the input format of the blocks on
      provided storages. The default is
      org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap which uses
      file regions to describe blocks. The file regions are specified as a
      delimited text file. Each file region is a 6-tuple containing the
      block id, remote file path, offset into file, length of block, the
      block pool id containing the block, and the generation stamp of the
      block.
    </description>' B7 `( b. g0 X
  </property>0 O" @1 S, K% W- E. J# L
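As a sketch of what one region in such a delimited text file might look like, using the default comma delimiter from dfs.provided.aliasmap.text.delimiter — the concrete values here (block id, path, block pool id, generation stamp) are made up for illustration:

```text
# block id, remote file path, offset, length, block pool id, generation stamp
1073741825,/data/remote/part-00000,0,134217728,BP-1234567890-10.0.0.1-1500000000000,1001
```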
  <property>
    <name>dfs.provided.aliasmap.inmemory.batch-size</name>
    <value>500</value>
    <description>
      The batch size when iterating over the database backing the aliasmap
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.inmemory.dnrpc-address</name>
    <value>0.0.0.0:50200</value>
    <description>
      The address where the aliasmap server will be running
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.inmemory.leveldb.dir</name>
    <value>/tmp</value>
    <description>
      The directory where the leveldb files will be kept
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.inmemory.enabled</name>
    <value>false</value>
    <description>
      Don't use the aliasmap by default. Some tests will fail
      because they try to start the namenode twice with the
      same parameters if you turn it on.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.text.delimiter</name>
    <value>,</value>
    <description>
        The delimiter used when the provided block map is specified as
        a text file.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.text.read.file</name>
    <value></value>
    <description>
        The path specifying the provided block map as a text file, specified as
        a URI.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.text.codec</name>
    <value></value>
    <description>
        The codec used to de-compress the provided block map.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.text.write.dir</name>
    <value></value>
    <description>
        The path to which the provided block map should be written as a text
        file, specified as a URI.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.leveldb.path</name>
    <value></value>
    <description>
      The read/write path for the leveldb-based alias map
      (org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap).
      The path has to be explicitly configured when this alias map is used.
    </description>
  </property>
  <property>
    <name>dfs.provided.aliasmap.load.retries</name>
    <value>0</value>
    <description>
      The number of retries on the Datanode to load the provided aliasmap;
      defaults to 0.
    </description>
  </property>
  <property>
    <name>dfs.lock.suppress.warning.interval</name>
    <value>10s</value>
    <description>Instrumentation reporting long critical sections will suppress
      consecutive warnings within this interval.</description>
  </property>
  <property>
    <name>httpfs.buffer.size</name>
    <value>4096</value>
    <description>
      The size of the buffer to be used when creating or opening httpfs filesystem IO streams.
    </description>
  </property>
  <property>
    <name>dfs.webhdfs.use.ipc.callq</name>
    <value>true</value>
    <description>Enables routing of webhdfs calls through the rpc
      call queue</description>
  </property>
  <property>
    <name>dfs.datanode.disk.check.min.gap</name>
    <value>15m</value>
    <description>
      The minimum gap between two successive checks of the same DataNode
      volume. This setting supports multiple time unit suffixes as described
      in dfs.heartbeat.interval. If no suffix is specified then milliseconds
      is assumed.
    </description>
  </property>
  <property>
    <name>dfs.datanode.disk.check.timeout</name>
    <value>10m</value>
    <description>
      Maximum allowed time for a disk check to complete during DataNode
      startup. If the check does not complete within this time interval
      then the disk is declared as failed. This setting supports
      multiple time unit suffixes as described in dfs.heartbeat.interval.
      If no suffix is specified then milliseconds is assumed.
    </description>
  </property>
  <property>
    <name>dfs.use.dfs.network.topology</name>
    <value>true</value>
    <description>
      Enables DFSNetworkTopology to choose nodes for placing replicas.
      When enabled, NetworkTopology will be instantiated as the class defined
      in property dfs.net.topology.impl, otherwise NetworkTopology will be
      instantiated as the class defined in property net.topology.impl.
    </description>
  </property>
  <property>
    <name>dfs.net.topology.impl</name>
    <value>org.apache.hadoop.hdfs.net.DFSNetworkTopology</value>
    <description>
      The implementation class of NetworkTopology used in HDFS. By default,
      the class org.apache.hadoop.hdfs.net.DFSNetworkTopology is specified and
      used in block placement.
      This property only works when dfs.use.dfs.network.topology is true.
    </description>
  </property>
  <property>
    <name>dfs.qjm.operations.timeout</name>
    <value>60s</value>
    <description>
      Common key to set timeout for related operations in
      QuorumJournalManager. This setting supports multiple time unit suffixes
      as described in dfs.heartbeat.interval.
      If no suffix is specified then milliseconds is assumed.
    </description>
  </property>
  <property>
    <name>dfs.reformat.disabled</name>
    <value>false</value>
    <description>
      Disable reformat of NameNode. If its value is set to "true"
      and metadata directories already exist, then an attempt to format the
      NameNode will throw a NameNodeFormatException.
    </description>
  </property>
</configuration>
3. mapred-default.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!-- Do not modify this file directly.  Instead, copy entries that you -->
<!-- wish to modify from this file into mapred-site.xml and change them -->
<!-- there.  If mapred-site.xml does not already exist, create it.      -->
<configuration>
<property>
  <name>mapreduce.job.hdfs-servers</name>
  <value>${fs.defaultFS}</value>
</property>
<property>
  <name>mapreduce.job.committer.setup.cleanup.needed</name>
  <value>true</value>
  <description> true, if job needs job-setup and job-cleanup.
                false, otherwise
  </description>
</property>
<!-- i/o properties -->
<property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>10</value>
  <description>The number of streams to merge at once while sorting
  files.  This determines the number of open file handles.</description>
</property>
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
  <description>The total amount of buffer memory to use while sorting
  files, in megabytes.  By default, gives each merge stream 1MB, which
  should minimize seeks.</description>
</property>
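As the header comment says, these defaults are never edited in place: you copy the entry you want to change into mapred-site.xml and set the new value there. For example, to raise the sort buffer above the 100 MB default (the value 400 here is only an illustration, sized to your task heap):

```xml
<!-- mapred-site.xml: entries here override mapred-default.xml by name -->
<configuration>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>400</value>
  </property>
</configuration>
```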
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value>
  <description>The soft limit in the serialization buffer. Once reached, a
  thread will begin to spill the contents to disk in the background. Note that
  collection will not block if this threshold is exceeded while a spill is
  already in progress, so spills may be larger than this threshold when it is
  set to less than .5</description>
</property>
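With the defaults above (mapreduce.task.io.sort.mb = 100, mapreduce.map.sort.spill.percent = 0.80), the background spill kicks in once the serialization buffer holds roughly 80 MB. A quick check of that arithmetic:

```python
# Spill threshold implied by the defaults above: the soft limit is
# io.sort.mb megabytes scaled by the spill percent.
io_sort_mb = 100        # mapreduce.task.io.sort.mb
spill_percent = 0.80    # mapreduce.map.sort.spill.percent

spill_threshold_mb = io_sort_mb * spill_percent
print(spill_threshold_mb)
```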
<property>
  <name>mapreduce.job.local-fs.single-disk-limit.bytes</name>
  <value>-1</value>
  <description>Enable an in-task monitor thread to watch for single disk
    consumption by jobs. When this is set to a number of bytes, the task will
    fail fast once that limit is reached. This is a per-disk
    configuration.</description>
</property>
<property>
  <name>mapreduce.job.local-fs.single-disk-limit.check.interval-ms</name>
  <value>5000</value>
  <description>Interval of disk limit check to run in ms.</description>
</property>
<property>
  <name>mapreduce.job.local-fs.single-disk-limit.check.kill-limit-exceed</name>
  <value>true</value>
  <description>Whether the task should be killed or only logged when
    mapreduce.job.local-fs.single-disk-limit.bytes is triggered. If false, the
    intent to kill the task is only logged in the container logs.</description>
</property>
<property>
  <name>mapreduce.job.maps</name>
  <value>2</value>
  <description>The default number of map tasks per job.
  Ignored when mapreduce.framework.name is "local".
  </description>
</property>
<property>
  <name>mapreduce.job.reduces</name>
  <value>1</value>
  <description>The default number of reduce tasks per job. Typically set to 99%
  of the cluster's reduce capacity, so that if a node fails the reduces can
  still be executed in a single wave.
  Ignored when mapreduce.framework.name is "local".
  </description>
</property>
<property>
  <name>mapreduce.job.running.map.limit</name>
  <value>0</value>
  <description>The maximum number of simultaneous map tasks per job.
  There is no limit if this value is 0 or negative.
  </description>
</property>
<property>
  <name>mapreduce.job.running.reduce.limit</name>
  <value>0</value>
  <description>The maximum number of simultaneous reduce tasks per job.
  There is no limit if this value is 0 or negative.
  </description>
</property>
<property>
  <name>mapreduce.job.max.map</name>
  <value>-1</value>
  <description>Limit on the number of map tasks allowed per job.
  There is no limit if this value is negative.
  </description>
</property>
  <property>
    <name>mapreduce.job.reducer.preempt.delay.sec</name>
    <value>0</value>
    <description>The threshold (in seconds) after which an unsatisfied
      mapper request triggers reducer preemption when there is no anticipated
      headroom. If set to 0 or a negative value, the reducer is preempted as
      soon as lack of headroom is detected. Default is 0.
    </description>
  </property>
  <property>
    <name>mapreduce.job.reducer.unconditional-preempt.delay.sec</name>
    <value>300</value>
    <description>The threshold (in seconds) after which an unsatisfied
      mapper request triggers a forced reducer preemption irrespective of the
      anticipated headroom. By default, it is set to 5 mins. Setting it to 0
      leads to immediate reducer preemption. Setting to -1 disables this
      preemption altogether.
    </description>
  </property>
  <property>
    <name>mapreduce.job.max.split.locations</name>
    <value>10</value>
    <description>The max number of block locations to store for each split for
    locality calculation.
    </description>
</property>
<property>
  <name>mapreduce.job.split.metainfo.maxsize</name>
  <value>10000000</value>
  <description>The maximum permissible size of the split metainfo file.
  The MapReduce ApplicationMaster won't attempt to read submitted split metainfo
  files bigger than this configured value.
  No limits if set to -1.
  </description>
</property>
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>4</value>
  <description>Expert: The maximum number of attempts per map task.
  In other words, the framework will try to execute a map task this many
  times before giving up on it.
  </description>
</property>
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>4</value>
  <description>Expert: The maximum number of attempts per reduce task.
  In other words, the framework will try to execute a reduce task this many
  times before giving up on it.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.fetch.retry.enabled</name>
  <value>${yarn.nodemanager.recovery.enabled}</value>
  <description>Set to enable fetch retry during host restart.</description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.fetch.retry.interval-ms</name>
  <value>1000</value>
  <description>The interval at which the fetcher retries fetching when a
  non-fatal failure happens because of events like an NM restart.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.fetch.retry.timeout-ms</name>
  <value>30000</value>
  <description>Timeout value for the fetcher to keep retrying when a
  non-fatal failure happens because of events like an NM restart.</description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.retry-delay.max.ms</name>
  <value>60000</value>
  <description>The maximum number of ms the reducer will delay before retrying
  to download map data.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.parallelcopies</name>
  <value>5</value>
  <description>The default number of parallel transfers run by reduce
  during the copy(shuffle) phase.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.connect.timeout</name>
  <value>180000</value>
  <description>Expert: The maximum amount of time (in milliseconds) a reduce
  task spends in trying to connect to a remote node for getting map output.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.read.timeout</name>
  <value>180000</value>
  <description>Expert: The maximum amount of time (in milliseconds) a reduce
  task waits for map output data to be available for reading after obtaining
  connection.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.listen.queue.size</name>
  <value>128</value>
  <description>The length of the shuffle server listen queue.</description>
</property>
<property>
  <name>mapreduce.shuffle.connection-keep-alive.enable</name>
  <value>false</value>
  <description>Set to true to support keep-alive connections.</description>
</property>
<property>
  <name>mapreduce.shuffle.connection-keep-alive.timeout</name>
  <value>5</value>
  <description>The number of seconds a shuffle client attempts to retain
   an HTTP connection. Refer to the "Keep-Alive: timeout=" header in the
   HTTP specification.
  </description>
</property>
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
  <description>The number of milliseconds before a task will be
  terminated if it neither reads an input, writes an output, nor
  updates its status string.  A value of 0 disables the timeout.
  </description>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the scheduler for each
    map task. If this is not specified or is non-positive, it is inferred from
    mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio.
    If java-opts are also not specified, we set it to 1024.
  </description>
</property>
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>1</value>
  <description>The number of virtual cores to request from the scheduler for
  each map task.
  </description>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the scheduler for each
    reduce task. If this is not specified or is non-positive, it is inferred
    from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio.
    If java-opts are also not specified, we set it to 1024.
  </description>
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>1</value>
  <description>The number of virtual cores to request from the scheduler for
  each reduce task.
  </description>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value></value>
  <description>Java opts for the task processes.
  The following symbol, if present, will be interpolated: @taskid@ is replaced
  by current TaskID. Any other occurrences of '@' will go unchanged.
  For example, to enable verbose gc logging to a file named for the taskid in
  /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
        -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
  Usage of -Djava.library.path can cause programs to no longer function if
  hadoop native libraries are used. These values should instead be set as part
  of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and
  mapreduce.reduce.env config settings.
  If -Xmx is not set, it is inferred from mapreduce.{map|reduce}.memory.mb and
  mapreduce.job.heap.memory-mb.ratio.
  </description>
</property>
+ u/ l0 w3 M5 f. M* e2 `+ M6 e<!-- This is commented out so that it won't override mapred.child.java.opts.0 l# a# I, q0 o+ o* r  Q
<property>  D& q, N1 Q/ z. n. L
  <name>mapreduce.map.java.opts</name>+ T9 @  E" ~# B% p
  <value></value>- d2 t! ~7 {! j) R# X: Y+ b
  <description>Java opts only for the child processes that are maps. If set,
! _( g; C2 P1 u. p  Z# ]  this will be used instead of mapred.child.java.opts. If -Xmx is not set,/ F1 N( F7 [0 x
  it is inferred from mapreduce.map.memory.mb and$ M6 S- v4 z# d# I( ~  E, G
  mapreduce.job.heap.memory-mb.ratio.6 _" T5 t' ?/ x
  </description>4 _4 \2 _" L3 m% e
</property>4 \: x0 Y0 g* U& C- Z
-->1 c8 w: s4 K2 z9 e
<!-- This is commented out so that it won't override mapred.child.java.opts.
# E  ]4 W+ K; E$ J5 ~<property>
- S, d1 {+ Y3 J! N  <name>mapreduce.reduce.java.opts</name>( H+ Q  ?  c1 n8 Y% |( Z6 p- l
  <value></value>0 m4 w4 E1 A* G0 d! I; a$ M
  <description>Java opts only for the child processes that are reduces. If set,& ?/ e! _. q* R' l
  this will be used instead of mapred.child.java.opts. If -Xmx is not set,
# q* l/ B  {5 x/ R, G  it is inferred from mapreduce.reduce.memory.mb and: _/ B/ L" h* I/ k- U9 p3 D
  mapreduce.job.heap.memory-mb.ratio.; p& u3 b/ \0 Q! U2 C; M2 x
  </description>
8 b5 @7 }0 z2 r+ T: H</property>. a1 Y1 }) i! T
-->
" a  z3 z, E3 u2 ]4 F<property>9 I* c( g  z1 `3 ^" Z' l% I/ U
  <name>mapred.child.env</name>4 v: ?$ ^6 O: u- M+ p+ S
  <value></value>
6 x6 F6 e0 E+ s( M) f" }  <description>User added environment variables for the task processes.! R" i7 `: ]8 p; Y
  Example :) F4 ?1 G) A5 F3 D. E
  1) A=foo  This will set the env variable A to foo3 J( m6 U7 p8 `
  2) B=$B:c This is inherit nodemanager's B env variable on Unix.
5 _- J, L. n" C* L  3) B=%B%;c This is inherit nodemanager's B env variable on Windows.
& i  c$ y! e0 G/ j7 b  </description>; s3 o" x4 G. B% `  j
</property>) Z1 X1 s: J* w* K! @) K1 M  A- z& Z
<!-- This is commented out so that it won't override mapred.child.env.& p/ K: Y1 O5 F5 s; X& n7 @4 Y- S
<property>% }' f7 h: S. w) @, P
  <name>mapreduce.map.env</name>
2 r$ j) F1 H# z$ q# e! ]  <value></value>
7 x' \( h- I" \% g, G9 Z  <description>User added environment variables for the map task processes.5 u7 f2 I% c  a( J- y2 V
  </description>
* D' M( x* U+ p. J; r/ i</property>, ]5 q) F; Y5 h; R6 @  ]8 K
-->: N0 ~! e( x8 ?: O; z$ o7 B1 v
<!-- This is commented out so that it won't override mapred.child.env.
7 u: F& k& F  K<property>+ G) d+ E- N4 K; D4 l
  <name>mapreduce.reduce.env</name>" h; ^3 ^% o  T
  <value></value>
' T, h: s4 G5 d* ]  <description>User added environment variables for the reduce task processes.
. O; v7 q; c/ f9 U  </description>% q* E7 T6 V( O& \% J! f
</property>
0 y. T+ ]0 H+ n% ?. Q( H-->. Q# G5 D6 i! n4 k' z. v. v
<property>
  <name>mapreduce.admin.user.env</name>
  <value></value>
  <description>
  Expert: Additional execution environment entries for
  map and reduce task processes. This is not an additive property.
  You must preserve the original value if you want your map and
  reduce tasks to have access to native libraries (compression, etc).
  When this value is empty, the command to set the execution
  environment will be OS dependent:
  For linux, use LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native.
  For windows, use PATH = %PATH%;%HADOOP_COMMON_HOME%\\bin.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.log.level</name>
  <value>INFO</value>
  <description>The logging level for the MR ApplicationMaster. The allowed
  levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
  The setting here could be overridden if "mapreduce.job.log4j-properties-file"
  is set.
  </description>
</property>
<property>
  <name>mapreduce.map.log.level</name>
  <value>INFO</value>
  <description>The logging level for the map task. The allowed levels are:
  OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
  The setting here could be overridden if "mapreduce.job.log4j-properties-file"
  is set.
  </description>
</property>
<property>
  <name>mapreduce.reduce.log.level</name>
  <value>INFO</value>
  <description>The logging level for the reduce task. The allowed levels are:
  OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
  The setting here could be overridden if "mapreduce.job.log4j-properties-file"
  is set.
  </description>
</property>
<property>
  <name>mapreduce.reduce.merge.inmem.threshold</name>
  <value>1000</value>
  <description>The threshold, in terms of the number of files,
  for the in-memory merge process. When we accumulate the threshold number of
  files, we initiate the in-memory merge and spill to disk. A value of 0 or
  less means no threshold is applied and the merge depends only on the ramfs's
  memory consumption to trigger it.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.merge.percent</name>
  <value>0.66</value>
  <description>The usage threshold at which an in-memory merge will be
  initiated, expressed as a percentage of the total memory allocated to
  storing in-memory map outputs, as defined by
  mapreduce.reduce.shuffle.input.buffer.percent.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.70</value>
  <description>The percentage of memory to be allocated from the maximum heap
  size to storing map outputs during the shuffle.
  </description>
</property>
<property>
  <name>mapreduce.reduce.input.buffer.percent</name>
  <value>0.0</value>
  <description>The percentage of memory, relative to the maximum heap size, to
  retain map outputs during the reduce. When the shuffle is concluded, any
  remaining map outputs in memory must consume less than this threshold before
  the reduce can begin.
  </description>
</property>
<property>
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.25</value>
  <description>Expert: Maximum percentage of the in-memory limit that a
  single shuffle can consume. Range of valid values is [0.0, 1.0]. If the value
  is 0.0 map outputs are shuffled directly to disk.</description>
</property>
<property>
  <name>mapreduce.shuffle.ssl.enabled</name>
  <value>false</value>
  <description>
    Whether to use SSL for the Shuffle HTTP endpoints.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.ssl.file.buffer.size</name>
  <value>65536</value>
  <description>Buffer size for reading spills from file when using SSL.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.max.connections</name>
  <value>0</value>
  <description>Max allowed connections for the shuffle.  Set to 0 (zero)
               to indicate no limit on the number of connections.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.max.threads</name>
  <value>0</value>
  <description>Max allowed threads for serving shuffle connections. Set to zero
  to indicate the default of 2 times the number of available
  processors (as reported by Runtime.availableProcessors()). Netty is used to
  serve requests, so a thread is not needed for each connection.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.transferTo.allowed</name>
  <value></value>
  <description>This option can enable/disable using nio transferTo method in
  the shuffle phase. NIO transferTo does not perform well on windows in the
  shuffle phase. Thus, with this configuration property it is possible to
  disable it, in which case a custom transfer method will be used. Recommended
  value is false when running Hadoop on Windows. For Linux, it is recommended
  to set it to true. If nothing is set then the default value is false for
  Windows, and true for Linux.
  </description>
</property>
<property>
  <name>mapreduce.shuffle.transfer.buffer.size</name>
  <value>131072</value>
  <description>This property is used only if
  mapreduce.shuffle.transferTo.allowed is set to false. In that case,
  this property defines the size of the buffer used in the buffer copy code
  for the shuffle phase. The size of this buffer determines the size of the IO
  requests.
  </description>
</property>
<property>
  <name>mapreduce.reduce.markreset.buffer.percent</name>
  <value>0.0</value>
  <description>The percentage of memory, relative to the maximum heap size, to
  be used for caching values when using the mark-reset functionality.
  </description>
</property>
<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>
  <description>If true, then multiple instances of some map tasks
               may be executed in parallel.</description>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>true</value>
  <description>If true, then multiple instances of some reduce tasks
               may be executed in parallel.</description>
</property>
<property>
  <name>mapreduce.job.speculative.speculative-cap-running-tasks</name>
  <value>0.1</value>
  <description>The max percent (0-1) of running tasks that
  can be speculatively re-executed at any time.</description>
</property>
<property>
  <name>mapreduce.job.speculative.speculative-cap-total-tasks</name>
  <value>0.01</value>
  <description>The max percent (0-1) of all tasks that
  can be speculatively re-executed at any time.</description>
</property>
<property>
  <name>mapreduce.job.speculative.minimum-allowed-tasks</name>
  <value>10</value>
  <description>The minimum allowed tasks that
  can be speculatively re-executed at any time.</description>
</property>
<property>
  <name>mapreduce.job.speculative.retry-after-no-speculate</name>
  <value>1000</value>
  <description>The waiting time (ms) before the next round of speculation
  if there is no task speculated in this round.</description>
</property>
<property>
  <name>mapreduce.job.speculative.retry-after-speculate</name>
  <value>15000</value>
  <description>The waiting time (ms) before the next round of speculation
  if there are tasks speculated in this round.</description>
</property>
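As the descriptions above note, the speculation properties are job-level knobs. A minimal sketch of a per-job override (the choice to disable speculation is illustrative, not a recommended default; it is common for jobs with side effects in their tasks):

```xml
<!-- Illustrative override in a job's configuration: turn speculation off -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
```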
<property>
  <name>mapreduce.job.map.output.collector.class</name>
  <value>org.apache.hadoop.mapred.MapTask$MapOutputBuffer</value>
  <description>
    The MapOutputCollector implementation(s) to use. This may be a comma-separated
    list of class names, in which case the map task will try to initialize each
    of the collectors in turn. The first to successfully initialize will be used.
  </description>
</property>
<property>
  <name>mapreduce.job.speculative.slowtaskthreshold</name>
  <value>1.0</value>
  <description>The number of standard deviations by which a task's
  average progress rate must be lower than the average of all running tasks'
  for the task to be considered too slow.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>false</value>
  <description>Whether to enable the small-jobs "ubertask" optimization,
  which runs "sufficiently small" jobs sequentially within a single JVM.
  "Small" is defined by the following maxmaps, maxreduces, and maxbytes
  settings. Note that configurations for application masters also affect
  the "Small" definition - yarn.app.mapreduce.am.resource.mb must be
  larger than both mapreduce.map.memory.mb and mapreduce.reduce.memory.mb,
  and yarn.app.mapreduce.am.resource.cpu-vcores must be larger than
  both mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores to enable
  ubertask. Users may override this value.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value>
  <description>Threshold for number of maps, beyond which job is considered
  too big for the ubertasking optimization.  Users may override this value,
  but only downward.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
  <description>Threshold for number of reduces, beyond which job is considered
  too big for the ubertasking optimization.  CURRENTLY THE CODE CANNOT SUPPORT
  MORE THAN ONE REDUCE and will ignore larger values.  (Zero is a valid max,
  however.)  Users may override this value, but only downward.
  </description>
</property>
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value></value>
  <description>Threshold for number of input bytes, beyond which job is
  considered too big for the ubertasking optimization.  If no value is
  specified, dfs.block.size is used as a default.  Be sure to specify a
  default value in mapred-site.xml if the underlying filesystem is not HDFS.
  Users may override this value, but only downward.
  </description>
</property>
<property>
    <name>mapreduce.job.emit-timeline-data</name>
    <value>false</value>
    <description>Specifies if the Application Master should emit timeline data
    to the timeline server. Individual jobs can override this value.
    </description>
</property>
<property>
  <name>mapreduce.job.sharedcache.mode</name>
  <value>disabled</value>
  <description>
    A comma delimited list of resource categories to submit to the shared cache.
    The valid categories are: jobjar, libjars, files, archives.
    If "disabled" is specified then the job submission code will not use
    the shared cache.
  </description>
</property>
<property>
  <name>mapreduce.input.fileinputformat.split.minsize</name>
  <value>0</value>
  <description>The minimum size chunk that map input should be split
  into.  Note that some file formats may have minimum split sizes that
  take priority over this setting.</description>
</property>
<property>
  <name>mapreduce.input.fileinputformat.list-status.num-threads</name>
  <value>1</value>
  <description>The number of threads to use to list and fetch block locations
  for the specified input paths. Note: multiple threads should not be used
  if a custom non thread-safe path filter is used.
  </description>
</property>
<property>
  <name>mapreduce.input.lineinputformat.linespermap</name>
  <value>1</value>
  <description>When using NLineInputFormat, the number of lines of input data
  to include in each split.</description>
</property>
<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>10</value>
  <description>The replication level for submitted job files.  This
  should be around the square root of the number of nodes.
  </description>
</property>
<property>
  <name>mapreduce.task.files.preserve.failedtasks</name>
  <value>false</value>
  <description>Should the files for failed tasks be kept. This should only be
               used on jobs that are failing, because the storage is never
               reclaimed. It also prevents the map outputs from being erased
               from the reduce directory as they are consumed.</description>
</property>
<!--
  <property>
  <name>mapreduce.task.files.preserve.filepattern</name>
  <value>.*_m_123456_0</value>
  <description>Keep all files from tasks whose task names match the given
               regular expression. Defaults to none.</description>
  </property>
-->
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>false</value>
  <description>Should the job outputs be compressed?
  </description>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>RECORD</value>
  <description>If the job outputs are to be compressed as SequenceFiles, how
               should they be compressed? Should be one of NONE, RECORD or BLOCK.
  </description>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the job outputs are compressed, how should they be compressed?
  </description>
</property>
<property>
  <name>mapreduce.map.output.compress</name>
  <value>false</value>
  <description>Should the outputs of the maps be compressed before being
               sent across the network. Uses SequenceFile compression.
  </description>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the map outputs are compressed, how should they be
               compressed?
  </description>
</property>
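The map-output compression settings above are usually overridden as a pair: the boolean switch plus a codec. A minimal sketch that enables intermediate compression (the Snappy codec is an illustrative choice; any CompressionCodec installed on the cluster works, and the default remains DefaultCodec):

```xml
<!-- Illustrative: compress map outputs to reduce shuffle network traffic -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```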
<property>
  <name>map.sort.class</name>
  <value>org.apache.hadoop.util.QuickSort</value>
  <description>The default sort class for sorting keys.
  </description>
</property>
<property>
  <name>mapreduce.task.userlog.limit.kb</name>
  <value>0</value>
  <description>The maximum size of user-logs of each task in KB. 0 disables the cap.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.container.log.limit.kb</name>
  <value>0</value>
  <description>The maximum size of the MRAppMaster attempt container logs in KB.
    0 disables the cap.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.task.container.log.backups</name>
  <value>0</value>
  <description>Number of backup files for task logs when using
    ContainerRollingLogAppender (CRLA). See
    org.apache.log4j.RollingFileAppender.maxBackupIndex. By default,
    ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA
    is enabled for tasks when both mapreduce.task.userlog.limit.kb and
    yarn.app.mapreduce.task.container.log.backups are greater than zero.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.container.log.backups</name>
  <value>0</value>
  <description>Number of backup files for the ApplicationMaster logs when using
    ContainerRollingLogAppender (CRLA). See
    org.apache.log4j.RollingFileAppender.maxBackupIndex. By default,
    ContainerLogAppender (CLA) is used, and container logs are not rolled. CRLA
    is enabled for the ApplicationMaster when both
    yarn.app.mapreduce.am.container.log.limit.kb and
    yarn.app.mapreduce.am.container.log.backups are greater than zero.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.shuffle.log.separate</name>
  <value>true</value>
  <description>If enabled ('true') logging generated by the client-side shuffle
    classes in a reducer will be written in a dedicated log file
    'syslog.shuffle' instead of 'syslog'.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.shuffle.log.limit.kb</name>
  <value>0</value>
  <description>Maximum size of the syslog.shuffle file in kilobytes
    (0 for no limit).
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.shuffle.log.backups</name>
  <value>0</value>
  <description>If yarn.app.mapreduce.shuffle.log.limit.kb and
    yarn.app.mapreduce.shuffle.log.backups are greater than zero
    then a ContainerRollingLogAppender is used instead of ContainerLogAppender
    for syslog.shuffle. See
    org.apache.log4j.RollingFileAppender.maxBackupIndex
  </description>
</property>
<property>
  <name>mapreduce.job.maxtaskfailures.per.tracker</name>
  <value>3</value>
  <description>The number of task-failures on a node manager of a given job
               after which new tasks of that job aren't assigned to it. It
               MUST be less than mapreduce.map.maxattempts and
               mapreduce.reduce.maxattempts otherwise the failed task will
               never be tried on a different node.
  </description>
</property>
<property>
  <name>mapreduce.client.output.filter</name>
  <value>FAILED</value>
  <description>The filter for controlling the output of the task's userlogs sent
               to the console of the JobClient.
               The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and
               ALL.
  </description>
</property>
  <property>
    <name>mapreduce.client.completion.pollinterval</name>
    <value>5000</value>
    <description>The interval (in milliseconds) at which the JobClient
    polls the MapReduce ApplicationMaster for updates about job status. You may want to
    set this to a lower value to make tests run faster on a single node system. Adjusting
    this value in production may lead to unwanted client-server traffic.
    </description>
  </property>
  <property>
    <name>mapreduce.client.progressmonitor.pollinterval</name>
    <value>1000</value>
    <description>The interval (in milliseconds) at which the JobClient
    reports status to the console and checks for job completion. You may want to set this
    to a lower value to make tests run faster on a single node system. Adjusting
    this value in production may lead to unwanted client-server traffic.
    </description>
  </property>
  <property>
    <name>mapreduce.client.libjars.wildcard</name>
    <value>true</value>
    <description>
        Whether the libjars cache files should be localized using
        a wildcarded directory instead of naming each archive independently.
        Using wildcards reduces the space needed for storing the job
        information in the case of a highly available resource manager
        configuration.
        This property should only be set to false for specific
        jobs which are highly sensitive to the details of the archive
        localization.  Having this property set to true will cause the archives
        to all be localized to the same local cache location.  If false, each
        archive will be localized to its own local cache location.  In both
        cases a symbolic link will be created to every archive from the job's
        working directory.
    </description>
  </property>
1 k# _. a  w& Q  C+ p- J9 d  <property>
: u$ [9 V4 o1 D( G" `8 U    <name>mapreduce.task.profile</name>
" d0 H/ O) b. u8 U4 S, L: K    <value>false</value>
+ ]0 b2 O! @7 f    <description>To set whether the system should collect profiler: `$ U  ~, c3 ~
     information for some of the tasks in this job? The information is stored% D- F+ x9 o( r( c1 @6 @6 ?6 J
     in the user log directory. The value is "true" if task profiling5 Y3 P4 ~% z+ T% R2 e
     is enabled.</description>( L, k  `1 b( W
  </property>, r" t0 T! e1 S( t
  <property>
3 Y2 R2 \& j' [  c9 U    <name>mapreduce.task.profile.maps</name>
; s6 S! x7 A4 @8 W2 }4 [    <value>0-2</value>. q2 |' \5 Z- I
    <description> To set the ranges of map tasks to profile.
    mapreduce.task.profile has to be set to true for the value to be accounted.
    </description>
  </property>
  <property>
    <name>mapreduce.task.profile.reduces</name>
    <value>0-2</value>
    <description> To set the ranges of reduce tasks to profile.
    mapreduce.task.profile has to be set to true for the value to be accounted.
    </description>
  </property>
  <property>
    <name>mapreduce.task.profile.params</name>
    <value>-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s</value>
    <description>JVM profiler parameters used to profile map and reduce task
      attempts. This string may contain a single format specifier %s that will
      be replaced by the path to profile.out in the task attempt log directory.
      To specify different profiling options for map tasks and reduce tasks,
      more specific parameters mapreduce.task.profile.map.params and
      mapreduce.task.profile.reduce.params should be used.</description>
  </property>
  <property>
    <name>mapreduce.task.profile.map.params</name>
    <value>${mapreduce.task.profile.params}</value>
    <description>Map-task-specific JVM profiler parameters. See
      mapreduce.task.profile.params</description>
  </property>
  <property>
    <name>mapreduce.task.profile.reduce.params</name>
    <value>${mapreduce.task.profile.params}</value>
    <description>Reduce-task-specific JVM profiler parameters. See
      mapreduce.task.profile.params</description>
  </property>
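  <!-- Illustrative example (not part of mapred-default.xml): the ranges above
       only take effect once mapreduce.task.profile is enabled. A user's
       mapred-site.xml or job configuration might set, for instance:
         <property><name>mapreduce.task.profile</name><value>true</value></property>
         <property><name>mapreduce.task.profile.maps</name><value>0-1</value></property>
       which profiles only the first two map task attempts, using the hprof
       parameters defined by mapreduce.task.profile.params. -->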
  <property>
    <name>mapreduce.task.skip.start.attempts</name>
    <value>2</value>
    <description> The number of Task attempts AFTER which skip mode
    will be kicked off. When skip mode is kicked off, the
    task reports the range of records which it will process
    next, to the MR ApplicationMaster. So that on failures, the MR AM
    knows which ones are possibly the bad records. On further executions,
    those are skipped.
    </description>
  </property>
  <property>
    <name>mapreduce.job.skip.outdir</name>
    <value></value>
    <description> If no value is specified here, the skipped records are
    written to the output directory at _logs/skip.
    User can stop writing skipped records by giving the value "none".
    </description>
  </property>
  <property>
    <name>mapreduce.map.skip.maxrecords</name>
    <value>0</value>
    <description> The number of acceptable skip records surrounding the bad
    record PER bad record in mapper. The number includes the bad record as well.
    To turn the feature of detection/skipping of bad records off, set the
    value to 0.
    The framework tries to narrow down the skipped range by retrying
    until this threshold is met OR all attempts get exhausted for this task.
    Set the value to Long.MAX_VALUE to indicate that framework need not try to
    narrow down. Whatever records(depends on application) get skipped are
    acceptable.
    </description>
  </property>
  <property>
    <name>mapreduce.map.skip.proc-count.auto-incr</name>
    <value>true</value>
    <description>The flag which if set to true,
    SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by
    MapRunner after invoking the map function. This value must be set
    to false for applications which process the records asynchronously
    or buffer the input records. For example streaming. In such cases
    applications should increment this counter on their own.
    </description>
  </property>
  <property>
    <name>mapreduce.reduce.skip.maxgroups</name>
    <value>0</value>
    <description> The number of acceptable skip groups surrounding the bad
    group PER bad group in reducer. The number includes the bad group as well.
    To turn the feature of detection/skipping of bad groups off, set the
    value to 0.
    The framework tries to narrow down the skipped range by retrying
    until this threshold is met OR all attempts get exhausted for this task.
    Set the value to Long.MAX_VALUE to indicate that framework need not try to
    narrow down. Whatever groups(depends on application) get skipped are
    acceptable.
    </description>
  </property>
  <property>
    <name>mapreduce.reduce.skip.proc-count.auto-incr</name>
    <value>true</value>
    <description>The flag which if set to true,
    SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework
    after invoking the reduce function. This value must be set to false for
    applications which process the records asynchronously or buffer the input
    records. For example streaming. In such cases applications should increment
    this counter on their own.
    </description>
  </property>
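  <!-- Illustrative sketch (the values below are assumptions, not defaults):
       to skip up to ten records around each bad record once a task has failed
       twice, a job configuration could set:
         <property><name>mapreduce.task.skip.start.attempts</name><value>2</value></property>
         <property><name>mapreduce.map.skip.maxrecords</name><value>10</value></property>
       After the second failed attempt the framework retries the task, narrowing
       the skipped range until it is within 10 records or attempts are exhausted. -->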
  <property>
    <name>mapreduce.ifile.readahead</name>
    <value>true</value>
    <description>Configuration key to enable/disable IFile readahead.
    </description>
  </property>
  <property>
    <name>mapreduce.ifile.readahead.bytes</name>
    <value>4194304</value>
    <description>Configuration key to set the IFile readahead length in bytes.
    </description>
  </property>
<property>
  <name>mapreduce.job.queuename</name>
  <value>default</value>
  <description> Queue to which a job is submitted. This must match one of the
    queues defined in mapred-queues.xml for the system. Also, the ACL setup
    for the queue must allow the current user to submit a job to the queue.
    Before specifying a queue, ensure that the system is configured with
    the queue, and access is allowed for submitting jobs to the queue.
  </description>
</property>
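<!-- Illustrative example (the queue name "analytics" and jar/class names are
     hypothetical): for a job driver that uses ToolRunner, the queue can also
     be chosen at submission time, e.g.
       hadoop jar wordcount.jar WordCount -Dmapreduce.job.queuename=analytics in out
     provided that queue exists and its ACLs allow the submitting user. -->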
  <property>
. \$ O0 ]" r+ K0 l( }    <name>mapreduce.job.tags</name>8 g$ Z# q+ F6 R( R, O
    <value></value>, E' T2 I3 y5 O6 d# A! s
    <description> Tags for the job that will be passed to YARN at submission
# G* C- q% d$ \+ {: F. Y+ J      time. Queries to YARN for applications can filter on these tags.6 \5 m4 ]9 D  I: ?$ R, X
      If these tags are intended to be used with The YARN Timeline Service v.2,
$ G- |+ F: T  e! q. S6 b4 r. v+ {7 X      prefix them with the appropriate tag names for flow name, flow version and* G+ `( r# ^( S* G" E, e: }
      flow run id. Example:
, S. i; y% G+ z6 U# o& v. w) P' F+ W      timeline_flow_name_tag:foo,
5 `, E0 ^2 F) H; O      timeline_flow_version_tag:3df8b0d6100530080d2e0decf9e528e57c42a90a,+ Y" a; y3 ~. c
      timeline_flow_run_id_tag:1465246348599
. O. ^. t9 G5 c0 ]# W    </description>2 W. U& d. z" X) S- e
  </property>
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>${hadoop.tmp.dir}/mapred/local</value>
  <description>
      The local directory where MapReduce stores intermediate
      data files.  May be a comma-separated list of
      directories on different devices in order to spread disk i/o.
      Directories that do not exist are ignored.
  </description>
</property>
<property>
  <name>mapreduce.cluster.acls.enabled</name>
  <value>false</value>
  <description> Specifies whether ACLs should be checked
    for authorization of users for doing various queue and job level operations.
    ACLs are disabled by default. If enabled, access control checks are made by
    MapReduce ApplicationMaster when requests are made by users for queue
    operations like submit job to a queue and kill a job in the queue and job
    operations like viewing the job-details (See mapreduce.job.acl-view-job)
    or for modifying the job (See mapreduce.job.acl-modify-job) using
    Map/Reduce APIs, RPCs or via the console and web user interfaces.
    For enabling this flag, set to true in mapred-site.xml file of all
    MapReduce clients (MR job submitting nodes).
  </description>
</property>
<property>
  <name>mapreduce.job.acl-modify-job</name>
  <value> </value>
  <description> Job specific access-control list for 'modifying' the job. It
    is only used if authorization is enabled in Map/Reduce by setting the
    configuration property mapreduce.cluster.acls.enabled to true.
    This specifies the list of users and/or groups who can do modification
    operations on the job. For specifying a list of users and groups the
    format to use is "user1,user2 group1,group2". If set to '*', it allows all
    users/groups to modify this job. If set to ' '(i.e. space), it allows
    none. This configuration is used to guard all the modifications with respect
    to this job and takes care of all the following operations:
      o killing this job
      o killing a task of this job, failing a task of this job
      o setting the priority of this job
    Each of these operations are also protected by the per-queue level ACL
    "acl-administer-jobs" configured via mapred-queues.xml. So a caller should
    have the authorization to satisfy either the queue-level ACL or the
    job-level ACL.
    Irrespective of this ACL configuration, (a) job-owner, (b) the user who
    started the cluster, (c) members of an admin configured supergroup
    configured via mapreduce.cluster.permissions.supergroup and (d) queue
    administrators of the queue to which this job was submitted to configured
    via acl-administer-jobs for the specific queue in mapred-queues.xml can
    do all the modification operations on a job.
    By default, nobody else besides job-owner, the user who started the cluster,
    members of supergroup and queue administrators can perform modification
    operations on a job.
  </description>
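<!-- Illustrative example (the user and group names are hypothetical): to let
     users alice and bob, plus members of the ops group, modify a job, its
     configuration could set:
       <property><name>mapreduce.job.acl-modify-job</name><value>alice,bob ops</value></property>
     The user list and the group list are separated by a single space; entries
     within each list are comma-separated. -->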
<property>
  <name>mapreduce.job.acl-view-job</name>
  <value> </value>
  <description> Job specific access-control list for 'viewing' the job. It is
    only used if authorization is enabled in Map/Reduce by setting the
    configuration property mapreduce.cluster.acls.enabled to true.
    This specifies the list of users and/or groups who can view private details
    about the job. For specifying a list of users and groups the
    format to use is "user1,user2 group1,group2". If set to '*', it allows all
    users/groups to view this job. If set to ' '(i.e. space), it allows
    none. This configuration is used to guard some of the job-views and at
    present only protects APIs that can return possibly sensitive information
    of the job-owner like
      o job-level counters
      o task-level counters
      o tasks' diagnostic information
      o task-logs displayed on the HistoryServer's web-UI and
      o job.xml showed by the HistoryServer's web-UI
    Every other piece of information of jobs is still accessible by any other
    user, for e.g., JobStatus, JobProfile, list of jobs in the queue, etc.
    Irrespective of this ACL configuration, (a) job-owner, (b) the user who
    started the cluster, (c) members of an admin configured supergroup
    configured via mapreduce.cluster.permissions.supergroup and (d) queue
    administrators of the queue to which this job was submitted to configured
    via acl-administer-jobs for the specific queue in mapred-queues.xml can
    do all the view operations on a job.
    By default, nobody else besides job-owner, the user who started the
    cluster, members of supergroup and queue administrators can perform
    view operations on a job.
  </description>
</property>
<property>
  <name>mapreduce.job.finish-when-all-reducers-done</name>
  <value>true</value>
  <description>Specifies whether the job should complete once all reducers
     have finished, regardless of whether there are still running mappers.
  </description>
</property>
<property>
  <name>mapreduce.job.token.tracking.ids.enabled</name>
  <value>false</value>
  <description>Whether to write tracking ids of tokens to
    job-conf. When true, the configuration property
    "mapreduce.job.token.tracking.ids" is set to the token-tracking-ids of
    the job</description>
</property>
<property>
  <name>mapreduce.job.token.tracking.ids</name>
  <value></value>
  <description>When mapreduce.job.token.tracking.ids.enabled is
    set to true, this is set by the framework to the
    token-tracking-ids used by the job.</description>
</property>
<property>
  <name>mapreduce.task.merge.progress.records</name>
  <value>10000</value>
  <description> The number of records to process during merge before
   sending a progress notification to the MR ApplicationMaster.
  </description>
</property>
<property>
  <name>mapreduce.task.combine.progress.records</name>
  <value>10000</value>
  <description> The number of records to process during combine output collection
   before sending a progress notification.
  </description>
</property>
<property>
  <name>mapreduce.job.reduce.slowstart.completedmaps</name>
  <value>0.05</value>
  <description>Fraction of the number of maps in the job which should be
  complete before reduces are scheduled for the job.
  </description>
</property>
<property>
  <name>mapreduce.job.complete.cancel.delegation.tokens</name>
  <value>true</value>
  <description> if false - do not unregister/cancel delegation tokens from
    renewal, because same tokens may be used by spawned jobs
  </description>
</property>
<property>
  <name>mapreduce.shuffle.port</name>
  <value>13562</value>
  <description>Default port that the ShuffleHandler will run on. ShuffleHandler
   is a service run at the NodeManager to facilitate transfers of intermediate
   Map outputs to requesting Reducers.
  </description>
</property>
<property>
  <name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
  <value>org.apache.hadoop.mapreduce.task.reduce.Shuffle</value>
  <description>
  Name of the class whose instance will be used
  to send shuffle requests by reduce tasks of this job.
  The class must be an instance of org.apache.hadoop.mapred.ShuffleConsumerPlugin.
  </description>
</property>
<!-- MR YARN Application properties -->
<property>
  <name>mapreduce.job.node-label-expression</name>
  <description>All the containers of the Map Reduce job will be run with this
  node label expression. If the node-label-expression for job is not set, then
  it will use queue's default-node-label-expression for all job's containers.
  </description>
</property>
<property>
  <name>mapreduce.job.am.node-label-expression</name>
  <description>This is node-label configuration for Map Reduce Application Master
  container. If not configured it will make use of
  mapreduce.job.node-label-expression and if job's node-label expression is not
  configured then it will use queue's default-node-label-expression.
  </description>
</property>
<property>
  <name>mapreduce.map.node-label-expression</name>
  <description>This is node-label configuration for Map task containers. If not
  configured it will use mapreduce.job.node-label-expression and if job's
  node-label expression is not configured then it will use queue's
  default-node-label-expression.
  </description>
</property>
<property>
  <name>mapreduce.reduce.node-label-expression</name>
  <description>This is node-label configuration for Reduce task containers. If
  not configured it will use mapreduce.job.node-label-expression and if job's
  node-label expression is not configured then it will use queue's
  default-node-label-expression.
  </description>
</property>
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>120</value>
  <description>Limit on the number of user counters allowed per job.
  </description>
</property>
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>
  <description>The runtime framework for executing MapReduce jobs.
  Can be one of local, classic or yarn.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/tmp/hadoop-yarn/staging</value>
  <description>The staging dir used while submitting jobs.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.staging-dir.erasurecoding.enabled</name>
  <value>false</value>
  <description>Whether Erasure Coding should be enabled for
  files that are copied to the MR staging area. This is a job-level
  setting.
  </description>
</property>
<property>
  <name>mapreduce.am.max-attempts</name>
  <value>2</value>
  <description>The maximum number of application attempts. It is an
  application-specific setting. It should not be larger than the global number
  set by the resourcemanager. Otherwise, it will be overridden. The default
  is set to 2, to allow at least one retry for the AM.</description>
</property>
<!-- Job Notification Configuration -->
<property>
  <name>mapreduce.job.end-notification.url</name>
  <!--<value>http://localhost:8080/jobstatus.php?jobId=$jobId&jobStatus=$jobStatus</value>-->
  <description>Indicates url which will be called on completion of job to inform
              end status of job.
              User can give at most 2 variables with URI : $jobId and $jobStatus.
              If they are present in URI, then they will be replaced by their
              respective values.
  </description>
</property>
<property>
  <name>mapreduce.job.end-notification.retry.attempts</name>
  <value>0</value>
  <description>The number of times the submitter of the job wants to retry job
    end notification if it fails. This is capped by
    mapreduce.job.end-notification.max.attempts</description>
</property>
<property>
  <name>mapreduce.job.end-notification.retry.interval</name>
  <value>1000</value>
  <description>The number of milliseconds the submitter of the job wants to
    wait before job end notification is retried if it fails. This is capped by
    mapreduce.job.end-notification.max.retry.interval</description>
</property>
<property>
  <name>mapreduce.job.end-notification.max.attempts</name>
  <value>5</value>
  <final>true</final>
  <description>The maximum number of times a URL will be read for providing job
    end notification. Cluster administrators can set this to limit how long
    after end of a job, the Application Master waits before exiting. Must be
    marked as final to prevent users from overriding this.
  </description>
</property>
  <property>
    <name>mapreduce.job.log4j-properties-file</name>
    <value></value>
    <description>Used to override the default settings of log4j in container-log4j.properties
    for NodeManager. Like container-log4j.properties, it requires certain
    framework appenders properly defined in this overridden file. The file on the
    path will be added to distributed cache and classpath. If no scheme is given
    in the path, it defaults to point to a log4j file on the local FS.
    </description>
  </property>
<property>
  <name>mapreduce.job.end-notification.max.retry.interval</name>
  <value>5000</value>
  <final>true</final>
  <description>The maximum amount of time (in milliseconds) to wait before
     retrying job end notification. Cluster administrators can set this to
     limit how long the Application Master waits before exiting. Must be marked
     as final to prevent users from overriding this.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value></value>
  <description>User added environment variables for the MR App Master
  processes. Example :
  1) A=foo  This will set the env variable A to foo
  2) B=$B:c This is to inherit the tasktracker's B env variable.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.admin.user.env</name>
  <value></value>
  <description> Environment variables for the MR App Master
  processes for admin purposes. These values are set first and can be
  overridden by the user env (yarn.app.mapreduce.am.env) Example :
  1) A=foo  This will set the env variable A to foo
  2) B=$B:c This is to inherit the app master's B env variable.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1024m</value>
  <description>Java opts for the MR App Master processes.
  The following symbol, if present, will be interpolated: @taskid@ is replaced
  by current TaskID. Any other occurrences of '@' will go unchanged.
  For example, to enable verbose gc logging to a file named for the taskid in
  /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
        -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
  Usage of -Djava.library.path can cause programs to no longer function if
  hadoop native libraries are used. These values should instead be set as part
  of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and
  mapreduce.reduce.env config settings.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.admin-command-opts</name>
  <value></value>
  <description>Java opts for the MR App Master processes for admin purposes.
  It will appear before the opts set by yarn.app.mapreduce.am.command-opts and
  thus its options can be overridden by the user.
  Usage of -Djava.library.path can cause programs to no longer function if
  hadoop native libraries are used. These values should instead be set as part
  of LD_LIBRARY_PATH in the map / reduce JVM env using the mapreduce.map.env and
  mapreduce.reduce.env config settings.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.task.listener.thread-count</name>
  <value>30</value>
  <description>The number of threads used to handle RPC calls in the
    MR AppMaster from remote tasks</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.client.port-range</name>
  <value></value>
  <description>Range of ports that the MapReduce AM can use when binding.
    Leave blank if you want all possible ports.
    For example 50000-50050,50100-50200</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.webapp.port-range</name>
  <value></value>
  <description>Range of ports that the MapReduce AM can use for its webapp when binding.
    Leave blank if you want all possible ports.
    For example 50000-50050,50100-50200</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.committer.cancel-timeout</name>
  <value>60000</value>
  <description>The amount of time in milliseconds to wait for the output
    committer to cancel an operation if the job is killed</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.job.committer.commit-window</name>
  <value>10000</value>
  <description>Defines a time window in milliseconds for output commit
  operations.  If contact with the RM has occurred within this window then
  commits are allowed, otherwise the AM will not allow output commits until
  contact with the RM has been re-established.</description>
</property>
<property>
  <name>mapreduce.fileoutputcommitter.algorithm.version</name>
  <value>2</value>
  <description>The file output committer algorithm version
  valid algorithm version number: 1 or 2
  default to 2; algorithm version 1 is the original algorithm
  In algorithm version 1,
  1. commitTask will rename directory
  $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/
  to
  $joboutput/_temporary/$appAttemptID/$taskID/
  2. recoverTask will also do a rename
  $joboutput/_temporary/$appAttemptID/$taskID/
  to
  $joboutput/_temporary/($appAttemptID + 1)/$taskID/
  3. commitJob will merge every task output file in
  $joboutput/_temporary/$appAttemptID/$taskID/
  to
  $joboutput/, then it will delete $joboutput/_temporary/
  and write $joboutput/_SUCCESS
  It has a performance regression, which is discussed in MAPREDUCE-4815.
  If a job generates many files to commit then the commitJob
  method call at the end of the job can take minutes.
  The commit is single-threaded and waits until all
  tasks have completed before commencing.
  Algorithm version 2 will change the behavior of commitTask,
  recoverTask, and commitJob.
  1. commitTask will rename all files in
  $joboutput/_temporary/$appAttemptID/_temporary/$taskAttemptID/
  to $joboutput/
  2. recoverTask actually doesn't need to do anything, but for the
  upgrade from version 1 to version 2 case, it will check if there
  are any files in
  $joboutput/_temporary/($appAttemptID - 1)/$taskID/
  and rename them to $joboutput/
  3. commitJob can simply delete $joboutput/_temporary and write
  $joboutput/_SUCCESS
  This algorithm will reduce the output commit time for
  large jobs by having the tasks commit directly to the final
  output directory as they complete, leaving commitJob
  very little to do.
  </description>
</property>
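The v2 rename flow described above can be sketched as a small local-filesystem simulation. This is an illustrative sketch only, not Hadoop's actual FileOutputCommitter; the helper names and the single-task directory layout are assumptions, but the renames follow the description:

```python
import os
import shutil
import tempfile

def commit_task_v2(joboutput, app_attempt_id, task_attempt_id):
    # v2 commitTask: rename each task file straight into $joboutput/
    task_dir = os.path.join(joboutput, "_temporary", app_attempt_id,
                            "_temporary", task_attempt_id)
    for name in os.listdir(task_dir):
        os.rename(os.path.join(task_dir, name), os.path.join(joboutput, name))

def commit_job_v2(joboutput):
    # v2 commitJob: delete $joboutput/_temporary and write $joboutput/_SUCCESS
    shutil.rmtree(os.path.join(joboutput, "_temporary"))
    open(os.path.join(joboutput, "_SUCCESS"), "w").close()

joboutput = tempfile.mkdtemp()
task_dir = os.path.join(joboutput, "_temporary", "1", "_temporary", "attempt_0")
os.makedirs(task_dir)
with open(os.path.join(task_dir, "part-00000"), "w") as f:
    f.write("output")

commit_task_v2(joboutput, "1", "attempt_0")
commit_job_v2(joboutput)
print(sorted(os.listdir(joboutput)))  # → ['_SUCCESS', 'part-00000']
```

Because each task's files land in the final directory at commitTask time, commitJob no longer has to move anything, which is exactly the performance difference the description attributes to v2.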
<property>
  <name>mapreduce.fileoutputcommitter.task.cleanup.enabled</name>
  <value>false</value>
  <description>Whether tasks should delete their task temporary directories. This is purely an
    optimization for filesystems without O(1) recursive delete, as commitJob will recursively delete
    the entire job temporary directory. HDFS has O(1) recursive delete, so this parameter is left
    false by default. Users of object stores, for example, may want to set this to true.
    Note: this is only used if mapreduce.fileoutputcommitter.algorithm.version=2</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms</name>
  <value>1000</value>
  <description>The interval in ms at which the MR AppMaster should send
    heartbeats to the ResourceManager</description>
</property>
<property>
  <name>yarn.app.mapreduce.client-am.ipc.max-retries</name>
  <value>3</value>
  <description>The number of client retries to the AM - before reconnecting
    to the RM to fetch Application Status.</description>
</property>
<property>
  <name>yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts</name>
  <value>3</value>
  <description>The number of client retries on socket timeouts to the AM - before
    reconnecting to the RM to fetch Application Status.</description>
</property>
<property>
  <name>yarn.app.mapreduce.client.max-retries</name>
  <value>3</value>
  <description>The number of client retries to the RM/HS before
    throwing exception. This is a layer above the ipc.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1536</value>
  <description>The amount of memory the MR AppMaster needs.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.cpu-vcores</name>
  <value>1</value>
  <description>
      The number of virtual CPU cores the MR AppMaster needs.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.am.hard-kill-timeout-ms</name>
  <value>10000</value>
  <description>
     Number of milliseconds to wait before the job client kills the application.
  </description>
</property>
<property>
  <name>yarn.app.mapreduce.client.job.max-retries</name>
  <value>3</value>
  <description>The number of retries the client will make for getJob and
    dependent calls.
    This is needed for non-HDFS DFS where additional, high level
    retries are required to avoid spurious failures during the getJob call.
    30 is a good value for WASB</description>
</property>
<property>
  <name>yarn.app.mapreduce.client.job.retry-interval</name>
  <value>2000</value>
  <description>The delay between getJob retries in ms for retries configured
  with yarn.app.mapreduce.client.job.max-retries.</description>
</property>
<property>
  <description>CLASSPATH for MR applications. A comma-separated list
  of CLASSPATH entries. If mapreduce.application.framework is set then this
  must specify the appropriate classpath for that archive, and the name of
  the archive must be present in the classpath.
  If mapreduce.app-submission.cross-platform is false, platform-specific
  environment variable expansion syntax would be used to construct the default
  CLASSPATH entries.
  For Linux:
  $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,
  $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*.
  For Windows:
  %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*,
  %HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*.
  If mapreduce.app-submission.cross-platform is true, platform-agnostic default
  CLASSPATH for MR applications would be used:
  {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/*,
  {{HADOOP_MAPRED_HOME}}/share/hadoop/mapreduce/lib/*
  Parameter expansion marker will be replaced by NodeManager on container
  launch based on the underlying OS accordingly.
  </description>
   <name>mapreduce.application.classpath</name>
   <value></value>
</property>
<property>
  <description>If enabled, user can submit an application cross-platform
  i.e. submit an application from a Windows client to a Linux/Unix server or
  vice versa.
  </description>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>false</value>
</property>
<property>
  <description>Path to the MapReduce framework archive. If set, the framework
    archive will automatically be distributed along with the job, and this
    path would normally reside in a public location in an HDFS filesystem. As
    with distributed cache files, this can be a URL with a fragment specifying
    the alias to use for the archive name. For example,
    hdfs:///mapred/framework/hadoop-mapreduce-2.1.1.tar.gz#mrframework would
    alias the localized archive as "mrframework".
    Note that mapreduce.application.classpath must include the appropriate
    classpath for the specified framework. The base name of the archive, or
    alias of the archive if an alias is used, must appear in the specified
    classpath.
  </description>
   <name>mapreduce.application.framework.path</name>
   <value></value>
</property>
<property>
   <name>mapreduce.job.classloader</name>
   <value>false</value>
  <description>Whether to use a separate (isolated) classloader for
    user classes in the task JVM.</description>
</property>
<property>
   <name>mapreduce.job.classloader.system.classes</name>
   <value></value>
  <description>Used to override the default definition of the system classes for
    the job classloader. The system classes are a comma-separated list of
    patterns that indicate whether to load a class from the system classpath,
    instead from the user-supplied JARs, when mapreduce.job.classloader is
    enabled.
    A positive pattern is defined as:
        1. A single class name 'C' that matches 'C' and transitively all nested
            classes 'C$*' defined in C;
        2. A package name ending with a '.' (e.g., "com.example.") that matches
            all classes from that package.
    A negative pattern is defined by a '-' in front of a positive pattern
    (e.g., "-com.example.").
    A class is considered a system class if and only if it matches one of the
    positive patterns and none of the negative ones. More formally:
    A class is a member of the inclusion set I if it matches one of the positive
    patterns. A class is a member of the exclusion set E if it matches one of
    the negative patterns. The set of system classes S = I \ E.
  </description>
</property>
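The S = I \ E rule above can be sketched as a small matcher. This is a simplified illustration of the pattern semantics described in the property, not Hadoop's actual ApplicationClassLoader implementation, and the pattern list used below is hypothetical:

```python
def is_system_class(cls, patterns):
    # A class is a system class iff it matches at least one positive
    # pattern (set I) and no negative pattern (set E): S = I \ E.
    included = excluded = False
    for p in patterns:
        negative = p.startswith("-")
        pat = p[1:] if negative else p
        if pat.endswith("."):
            # Package pattern: matches every class in that package.
            hit = cls.startswith(pat)
        else:
            # Class pattern: matches the class and its nested classes C$*.
            hit = cls == pat or cls.startswith(pat + "$")
        if hit:
            if negative:
                excluded = True
            else:
                included = True
    return included and not excluded

# Hypothetical pattern list for illustration.
patterns = ["java.", "org.apache.hadoop.", "-org.apache.hadoop.examples."]
print(is_system_class("java.lang.String", patterns))                      # → True
print(is_system_class("org.apache.hadoop.examples.WordCount", patterns))  # → False
```

The second call shows the exclusion set at work: the class matches a positive pattern but also the negative one, so it is loaded from the user-supplied JARs rather than the system classpath.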
<property>
   <name>mapreduce.jvm.system-properties-to-log</name>
   <value>os.name,os.version,java.home,java.runtime.version,java.vendor,java.version,java.vm.name,java.class.path,java.io.tmpdir,user.dir,user.name</value>
   <description>Comma-delimited list of system properties to log on mapreduce JVM start</description>
</property>
<!-- jobhistory properties -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>0.0.0.0:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>0.0.0.0:19888</value>
  <description>MapReduce JobHistory Server Web UI host:port</description>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.https.address</name>
  <value>0.0.0.0:19890</value>
  <description>
    The https address the MapReduce JobHistory Server WebApp is on.
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <description>
    Location of the kerberos keytab file for the MapReduce
    JobHistory Server.
  </description>
  <value>/etc/security/keytab/jhs.service.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <description>
    Kerberos principal name for the MapReduce JobHistory Server.
  </description>
  <value>jhs/_HOST@REALM.TLD</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
  <description></description>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-user-done-dir.permissions</name>
  <value>770</value>
  <description>The permissions of the user directories in
  ${mapreduce.jobhistory.intermediate-done-dir}. The user and the group
  permission must be 7, this is enforced.
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.always-scan-user-dir</name>
  <value>false</value>
  <description>Some Cloud FileSystems do not currently update the
  modification time of directories. To support these filesystems, this
  configuration value should be set to 'true'.
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
  <description></description>
</property>
<property>
  <name>mapreduce.jobhistory.cleaner.enable</name>
  <value>true</value>
  <description></description>
</property>
<property>
  <name>mapreduce.jobhistory.cleaner.interval-ms</name>
  <value>86400000</value>
  <description>How often the job history cleaner checks for files to delete,
  in milliseconds. Defaults to 86400000 (one day). Files are only deleted if
  they are older than mapreduce.jobhistory.max-age-ms.
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.max-age-ms</name>
  <value>604800000</value>
  <description>Job history files older than this many milliseconds will
  be deleted when the history cleaner runs. Defaults to 604800000 (1 week).
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.client.thread-count</name>
  <value>10</value>
  <description>The number of threads to handle client API requests</description>
</property>
<property>
  <name>mapreduce.jobhistory.datestring.cache.size</name>
  <value>200000</value>
  <description>Size of the date string cache. Affects the number of directories
  which will be scanned to find a job.</description>
</property>
<property>
  <name>mapreduce.jobhistory.joblist.cache.size</name>
  <value>20000</value>
  <description>Size of the job list cache</description>
</property>
<property>
  <name>mapreduce.jobhistory.loadedjobs.cache.size</name>
  <value>5</value>
  <description>Size of the loaded job cache.  This property is ignored if
  the property mapreduce.jobhistory.loadedtasks.cache.size is set to a
  positive value.
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.loadedtasks.cache.size</name>
  <value></value>
  <description>Change the job history cache limit to be set in terms
  of total task count.  If the total number of tasks loaded exceeds
  this value, then the job cache will be shrunk down until it is
  under this limit (minimum 1 job in cache).  If this value is empty
  or nonpositive then the cache reverts to using the property
  mapreduce.jobhistory.loadedjobs.cache.size as a job cache size.
  Two recommendations for the mapreduce.jobhistory.loadedtasks.cache.size
  property:
  1) For every 100k of cache size, set the heap size of the Job History
     Server to 1.2GB. For example,
     mapreduce.jobhistory.loadedtasks.cache.size=500000, heap size=6GB.
  2) Make sure that the cache size is larger than the number of tasks
     required for the largest job run on the cluster. It might be a good
     idea to set the value slightly higher (say, 20%) in order to allow
     for job size growth.
  </description>
</property>
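Recommendation 1 above is a linear rule of thumb, which can be turned into a one-line sizing helper. Strictly linear scaling is an assumption taken from the description's single example, not a guarantee:

```python
def jhs_heap_gb(loadedtasks_cache_size):
    # ~1.2 GB of Job History Server heap per 100k cached tasks,
    # per recommendation 1 above (linear scaling assumed).
    return loadedtasks_cache_size / 100_000 * 1.2

print(jhs_heap_gb(500_000))  # → 6.0, matching the description's example
```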
<property>
  <name>mapreduce.jobhistory.move.interval-ms</name>
  <value>180000</value>
  <description>Scan for history files to move from intermediate done dir to done
  dir at this frequency.
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.move.thread-count</name>
  <value>3</value>
  <description>The number of threads used to move files.</description>
</property>
<property>
  <name>mapreduce.jobhistory.store.class</name>
  <value></value>
  <description>The HistoryStorage class to use to cache history data.</description>
</property>
<property>
  <name>mapreduce.jobhistory.minicluster.fixed.ports</name>
  <value>false</value>
  <description>Whether to use fixed ports with the minicluster</description>
</property>
<property>
  <name>mapreduce.jobhistory.admin.address</name>
  <value>0.0.0.0:10033</value>
  <description>The address of the History server admin interface.</description>
</property>
<property>
  <name>mapreduce.jobhistory.admin.acl</name>
  <value>*</value>
  <description>ACL of who can be admin of the History server.</description>
</property>
<property>
  <name>mapreduce.jobhistory.recovery.enable</name>
  <value>false</value>
  <description>Enable the history server to store server state and recover
  server state upon startup.  If enabled then
  mapreduce.jobhistory.recovery.store.class must be specified.</description>
</property>
<property>
  <name>mapreduce.jobhistory.recovery.store.class</name>
  <value>org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService</value>
  <description>The HistoryServerStateStoreService class to store history server
  state for recovery.</description>
</property>
<property>
  <name>mapreduce.jobhistory.recovery.store.fs.uri</name>
  <value>${hadoop.tmp.dir}/mapred/history/recoverystore</value>
  <!--value>hdfs://localhost:9000/mapred/history/recoverystore</value-->
  <description>The URI where history server state will be stored if
  HistoryServerFileSystemStateStoreService is configured as the recovery
  storage class.</description>
</property>
<property>
  <name>mapreduce.jobhistory.recovery.store.leveldb.path</name>
  <value>${hadoop.tmp.dir}/mapred/history/recoverystore</value>
  <description>The URI where history server state will be stored if
  HistoryServerLeveldbSystemStateStoreService is configured as the recovery
  storage class.</description>
</property>
<property>
  <name>mapreduce.jobhistory.http.policy</name>
  <value>HTTP_ONLY</value>
  <description>
    This configures the HTTP endpoint for JobHistoryServer web UI.
    The following values are supported:
    - HTTP_ONLY : Service is provided only on http
    - HTTPS_ONLY : Service is provided only on https
  </description>
</property>
<property>
  <name>mapreduce.jobhistory.jobname.limit</name>
  <value>50</value>
  <description>
     Number of characters allowed for job name in Job History Server web page.
  </description>
</property>
<property>
  <description>
  File format the AM will use when generating the .jhist file.  Valid
  values are "json" for text output and "binary" for faster parsing.
  </description>
  <name>mapreduce.jobhistory.jhist.format</name>
  <value>binary</value>
</property>
<property>
  <name>mapreduce.job.heap.memory-mb.ratio</name>
  <value>0.8</value>
  <description>The ratio of heap-size to container-size. If no -Xmx is
    specified, it is calculated as
    (mapreduce.{map|reduce}.memory.mb * mapreduce.job.heap.memory-mb.ratio).
    If -Xmx is specified but not mapreduce.{map|reduce}.memory.mb, it is
    calculated as (heapSize / mapreduce.job.heap.memory-mb.ratio).
  </description>
</property>
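The two formulas above can be checked with a quick sketch. Truncation to a whole number of MB is an assumption of this sketch; Hadoop's exact rounding may differ:

```python
def default_xmx_mb(container_mb, ratio=0.8):
    # No -Xmx given: heap = container size * ratio
    # (truncated to whole MB; truncation is an assumption here).
    return int(container_mb * ratio)

def default_container_mb(xmx_mb, ratio=0.8):
    # -Xmx given but no memory.mb: container = heap / ratio.
    return int(xmx_mb / ratio)

print(default_xmx_mb(2048))  # → 1638 (a 2 GB container gets ~1.6 GB of heap)
```

With the default ratio of 0.8, a map or reduce container of 2048 MB yields a default -Xmx of roughly 1638 MB, leaving headroom for non-heap JVM memory.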
<property>
  <name>yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size</name>
  <value>10</value>
  <description>The initial size of thread pool to launch containers in the
    app master.
  </description>
</property>
<property>
  <name>mapreduce.task.exit.timeout</name>
  <value>60000</value>
  <description>The number of milliseconds before a task will be
  terminated if it stays in finishing state for too long.
  After a task attempt completes from TaskUmbilicalProtocol's point of view,
  it will be transitioned to finishing state. That will give a chance for the
  task to exit by itself.
  </description>
</property>
<property>
  <name>mapreduce.task.exit.timeout.check-interval-ms</name>
  <value>20000</value>
  <description>The interval in milliseconds between which the MR framework
  checks if task attempts stay in finishing state for too long.
  </description>
</property>
<property>
  <name>mapreduce.job.encrypted-intermediate-data</name>
  <value>false</value>
  <description>Encrypt intermediate MapReduce spill files or not
  default is false</description>
</property>
<property>
  <name>mapreduce.job.encrypted-intermediate-data-key-size-bits</name>
  <value>128</value>
  <description>Mapreduce encrypt data key size default is 128</description>
</property>
<property>
  <name>mapreduce.job.encrypted-intermediate-data.buffer.kb</name>
  <value>128</value>
  <description>Buffer size for intermediate encrypt data in kb
  default is 128</description>
</property>
<property>
  <name>mapreduce.task.local-fs.write-limit.bytes</name>
  <value>-1</value>
  <description>Limit on the bytes written to the local file system by each task.
  This limit only applies to writes that go through the Hadoop filesystem APIs
  within the task process (i.e.: writes that will update the local filesystem's
  BYTES_WRITTEN counter). It does not cover other writes such as logging,
  sideband writes from subprocesses (e.g.: streaming jobs), etc.
  Negative values disable the limit.
  default is -1</description>
</property>
<property>
  <description>
    Enable the CSRF filter for the job history web app
  </description>
  <name>mapreduce.jobhistory.webapp.rest-csrf.enabled</name>
  <value>false</value>
</property>
<property>
  <description>
    Optional parameter that indicates the custom header name to use for CSRF
    protection.
  </description>
  <name>mapreduce.jobhistory.webapp.rest-csrf.custom-header</name>
  <value>X-XSRF-Header</value>
</property>
<property>
  <description>
    Optional parameter that indicates the list of HTTP methods that do not
    require CSRF protection
  </description>
  <name>mapreduce.jobhistory.webapp.rest-csrf.methods-to-ignore</name>
  <value>GET,OPTIONS,HEAD</value>
</property>
<property>
  <name>mapreduce.job.cache.limit.max-resources</name>
  <value>0</value>
  <description>The maximum number of resources a map reduce job is allowed to
    submit for localization via files, libjars, archives, and jobjar command
    line arguments and through the distributed cache. If set to 0 the limit is
    ignored.
  </description>
</property>
<property>
  <name>mapreduce.job.cache.limit.max-resources-mb</name>
  <value>0</value>
  <description>The maximum size (in MB) a map reduce job is allowed to submit
    for localization via files, libjars, archives, and jobjar command line
    arguments and through the distributed cache. If set to 0 the limit is
    ignored.
  </description>
</property>
<property>
  <name>mapreduce.job.cache.limit.max-single-resource-mb</name>
  <value>0</value>
  <description>The maximum size (in MB) of a single resource a map reduce job
    is allowed to submit for localization via files, libjars, archives, and
    jobjar command line arguments and through the distributed cache. If set to
    0 the limit is ignored.
  </description>
</property>
<property>
  <description>
    Value of the xframe-options
  </description>
  <name>mapreduce.jobhistory.webapp.xfs-filter.xframe-options</name>
  <value>SAMEORIGIN</value>
</property>
<property>
  <description>
    The maximum number of tasks that a job can have so that the Job History
    Server will fully parse its associated job history file and load it into
    memory. A value of -1 (default) will allow all jobs to be loaded.
  </description>
  <name>mapreduce.jobhistory.loadedjob.tasks.max</name>
  <value>-1</value>
</property>
<property>
  <description>
    The list of job configuration properties whose value will be redacted.
  </description>
  <name>mapreduce.job.redacted-properties</name>
  <value></value>
</property>
<property>
  <description>
    This configuration is a regex expression. The list of configurations that
    match the regex expression will be sent to RM. RM will use these
    configurations for renewing tokens.
    This configuration is added for below scenario: User needs to run distcp
    jobs across two clusters, but the RM does not have necessary hdfs
    configurations to connect to the remote hdfs cluster. Hence, user relies on
    this config to send the configurations to RM and RM uses these
    configurations to renew tokens.
    For example the following regex expression indicates the minimum required
    configs for RM to connect to a remote hdfs cluster:
    dfs.nameservices|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$|^dfs.client.failover.proxy.provider.*$|dfs.namenode.kerberos.principal
  </description>
  <name>mapreduce.job.send-token-conf</name>
  <value></value>
</property>
<property>
  <description>
    The name of an output committer factory for MRv2 FileOutputFormat to use
    for committing work. If set, overrides any per-filesystem committer
    defined for the destination filesystem.
  </description>
  <name>mapreduce.outputcommitter.factory.class</name>
  <value></value>
</property>
<property>
  <name>mapreduce.outputcommitter.factory.scheme.s3a</name>
  <value>org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory</value>
  <description>
    The committer factory to use when writing data to S3A filesystems.
    If mapreduce.outputcommitter.factory.class is set, it will
    override this property.
  </description>
</property>
</configuration>
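All of the files above share the same simple layout: a `<configuration>` root holding `<property>` elements with `<name>`, `<value>`, and optional `<description>` children. As a quick sanity check of that layout, the name/value pairs can be read with a few lines of Python (a minimal sketch; the `SAMPLE` fragment and the `parse_hadoop_conf` helper are illustrative, not part of Hadoop):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample mirroring two entries from mapred-default.xml above
# (the values shown are the shipped defaults).
SAMPLE = """<configuration>
  <property>
    <name>mapreduce.job.cache.limit.max-resources</name>
    <value>0</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.rest-csrf.methods-to-ignore</name>
    <value>GET,OPTIONS,HEAD</value>
  </property>
</configuration>"""

def parse_hadoop_conf(xml_text):
    """Return a dict of property name -> value from a *-default.xml/*-site.xml body."""
    root = ET.fromstring(xml_text)
    conf = {}
    for prop in root.findall("property"):
        name = prop.findtext("name")
        value = prop.findtext("value", default="")
        if name:
            conf[name] = value
    return conf

conf = parse_hadoop_conf(SAMPLE)
print(conf["mapreduce.job.cache.limit.max-resources"])  # prints "0"
```

Remember that these defaults are never edited in place: overrides go into mapred-site.xml (or yarn-site.xml below) with the same `<property>` structure.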
4.yarn-default.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!-- Do not modify this file directly.  Instead, copy entries that you -->
<!-- wish to modify from this file into yarn-site.xml and change them -->
<!-- there.  If yarn-site.xml does not already exist, create it.      -->
<configuration>
  <!-- IPC Configuration -->
  <property>
    <description>Factory to create client IPC classes.</description>
    <name>yarn.ipc.client.factory.class</name>
  </property>
  <property>
    <description>Factory to create server IPC classes.</description>
    <name>yarn.ipc.server.factory.class</name>
  </property>
  <property>
    <description>Factory to create serializable records.</description>
    <name>yarn.ipc.record.factory.class</name>
  </property>
  <property>
    <description>RPC class implementation</description>
    <name>yarn.ipc.rpc.class</name>
    <value>org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC</value>
  </property>
  <!-- Resource Manager Configuration -->
  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <description>
      The actual address the server will bind to. If this optional address is
      set, the RPC and webapp servers will bind to this address and the port specified in
      yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively. This
      is most useful for making RM listen to all interfaces by setting to 0.0.0.0.
    </description>
    <name>yarn.resourcemanager.bind-host</name>
    <value></value>
  </property>
  <property>
    <description>
      If set to true, then ALL container updates will be automatically sent to
      the NM in the next heartbeat</description>
    <name>yarn.resourcemanager.auto-update.containers</name>
    <value>false</value>
  </property>
  <property>
    <description>The number of threads used to handle applications manager requests.</description>
    <name>yarn.resourcemanager.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>Number of threads used to launch/cleanup AM.</description>
    <name>yarn.resourcemanager.amlauncher.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>Retry times to connect with NM.</description>
    <name>yarn.resourcemanager.nodemanager-connect-retries</name>
    <value>10</value>
  </property>
  <property>
    <description>Timeout in milliseconds when YARN dispatcher tries to drain the
      events. Typically, this happens when service is stopping. e.g. RM drains
      the ATS events dispatcher when stopping.
    </description>
    <name>yarn.dispatcher.drain-events.timeout</name>
    <value>300000</value>
  </property>
  <property>
    <description>The expiry interval for application master reporting.</description>
    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>The Kerberos principal for the resource manager.</description>
    <name>yarn.resourcemanager.principal</name>
  </property>
  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
  <property>
    <description>Number of threads to handle scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>
      Specify which handler will be used to process PlacementConstraints.
      Acceptable values are: `placement-processor`, `scheduler` and `disabled`.
      For a detailed explanation of these values, please refer to documentation.
    </description>
    <name>yarn.resourcemanager.placement-constraints.handler</name>
    <value>disabled</value>
  </property>
  <property>
    <description>Number of times to retry placing of rejected SchedulingRequests</description>
    <name>yarn.resourcemanager.placement-constraints.retry-attempts</name>
    <value>3</value>
  </property>
  <property>
    <description>Constraint Placement Algorithm to be used.</description>
    <name>yarn.resourcemanager.placement-constraints.algorithm.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm.DefaultPlacementAlgorithm</value>
  </property>
  <property>
    <description>Placement Algorithm Requests Iterator to be used.</description>
    <name>yarn.resourcemanager.placement-constraints.algorithm.iterator</name>
    <value>SERIAL</value>
  </property>
  <property>
    <description>Threadpool size for the Algorithm used for placement constraint processing.</description>
    <name>yarn.resourcemanager.placement-constraints.algorithm.pool-size</name>
    <value>1</value>
  </property>
  <property>
    <description>Threadpool size for the Scheduler invocation phase of placement constraint processing.</description>
    <name>yarn.resourcemanager.placement-constraints.scheduler.pool-size</name>
    <value>1</value>
  </property>
  <property>
    <description>
      Comma separated class names of ApplicationMasterServiceProcessor
      implementations. The processors will be applied in the order
      they are specified.
    </description>
    <name>yarn.resourcemanager.application-master-service.processors</name>
    <value></value>
  </property>
  <property>
      <description>
        This configures the HTTP endpoint for YARN Daemons. The following
        values are supported:
        - HTTP_ONLY : Service is provided only on http
        - HTTPS_ONLY : Service is provided only on https
      </description>
      <name>yarn.http.policy</name>
      <value>HTTP_ONLY</value>
  </property>
" y( Z3 \, ]" I# o' f1 y  <property>6 U! X; {+ ~; z
    <description>5 g. X- f! @5 C% M4 |
      The http address of the RM web application.
6 L' @1 |# X8 h" c      If only a host is provided as the value,& q# |/ m5 ^9 e8 p2 ?
      the webapp will be served on a random port.
5 Z$ F. @* w  g( v" M+ T+ [    </description>
) ?$ ]7 l7 T& t# W    <name>yarn.resourcemanager.webapp.address</name>: `+ }. z& R! R
    <value>${yarn.resourcemanager.hostname}:8088</value>
& O( p+ l2 b% D9 E) t( K4 d  </property>2 M9 E5 T/ R" V% C9 g1 \" R
  <property>/ ~1 r+ }* E1 x1 v
    <description>
; t$ ~3 q+ m7 N" x% U" v      The https address of the RM web application.
1 L5 ?% r2 @/ m4 G      If only a host is provided as the value,6 p6 _9 o/ C5 W/ c+ A- G
      the webapp will be served on a random port.
6 [- F+ y, p5 @, B" N    </description>3 |* n( v& M( ~" c
    <name>yarn.resourcemanager.webapp.https.address</name># [2 A$ x8 ]/ I4 N
    <value>${yarn.resourcemanager.hostname}:8090</value># O' _6 x" _: X
  </property>& _6 z4 k- E' p9 G' ?
  <property>( e! a4 [  P" o. A! R
    <description>7 h/ \. ~, V9 L: Z+ V. p
    The Kerberos keytab file to be used for spnego filter for the RM web
0 Q, v( L# v2 M# T    interface.7 u9 ]/ F. T8 I. H
    </description>
! Y# e4 ~# Z, C    <name>yarn.resourcemanager.webapp.spnego-keytab-file</name>
. l' [$ y8 ]4 z7 |0 c# M8 F    <value></value>7 I" R5 Y$ e7 B# ^* c) |' z; G
  </property>2 p. d5 N4 v$ e0 w+ S3 s
  <property>8 O- b% L, X+ k- N+ I
    <description>
6 _$ U- |6 A3 X    The Kerberos principal to be used for spnego filter for the RM web- s7 j9 K( g2 }- [. C' y" P5 S
    interface.$ G& |- [7 ?8 q
    </description>" A( a: g  x4 A* c" ]
    <name>yarn.resourcemanager.webapp.spnego-principal</name>6 v& l- p$ v" L/ W! `
    <value></value>6 T, C2 h; ^9 ^; J7 C
  </property>
' @) V" t7 R- C; Y. T  <property>7 q6 Y: g0 @8 \: \! o
    <description>' ]" w+ m. i# N# K" X" l
    Add button to kill application in the RM Application view.
- [6 w. @/ v' H, F8 s    </description>
; J& M! C/ R6 c$ O) s    <name>yarn.resourcemanager.webapp.ui-actions.enabled</name>
2 y5 @$ W; N) \, g. H    <value>true</value>1 M6 b) P+ E2 x' v' a
  </property>, l, r9 W* x+ r! F
  <property>  o: J' I  X5 Z6 L
    <description>To enable RM web ui2 application.</description>
& u8 q  u' B1 r0 D/ {# w& A( Z" q    <name>yarn.webapp.ui2.enable</name>* P. y4 O/ n" y% p8 U
    <value>false</value>/ p3 u7 i4 @; O. ~
  </property>0 H/ o' d1 B6 W: s/ x/ e
  <property>
& P( S% k8 d6 Z6 \9 M& P    <description>
8 _6 c9 ?+ H( [5 ~2 Y      Explicitly provide WAR file path for ui2 if needed./ X; {" K# R' M5 [/ H
    </description>
  d0 A# J+ s4 Q2 ]3 @# z# W0 P    <name>yarn.webapp.ui2.war-file-path</name>
$ F' T% D, K9 ]4 U+ g    <value></value>2 j% O  f! F7 ?2 N1 M5 Z. s" e
  </property>
  S4 E5 M8 T. E  <property>
8 S- @, t/ t* b6 v) {' Y    <description>- o+ y1 ?2 V" I$ V9 }$ Z! q
      Enable services rest api on ResourceManager.: R& ~) m5 B  ^/ e7 Y& T6 @( q
    </description>9 A( k* m2 k# x) b2 w
    <name>yarn.webapp.api-service.enable</name>5 @. {+ u. O- Z' H4 D, A3 P
    <value>false</value>
7 ]2 T0 M$ N1 n" {3 f9 X  </property>; Q. W( I8 ^: D% @7 q, j3 k
  <property>) t( y0 l' z0 G9 _4 ^' d
    <name>yarn.resourcemanager.resource-tracker.address</name>- l% p$ U& N, Q4 E. ~! _  u0 X! k
    <value>${yarn.resourcemanager.hostname}:8031</value>
3 ~" t4 P2 d( b9 M( q- U( B9 X2 @3 H( X: b  </property>
  ^- _2 d4 R1 q) _$ d  <property>, L0 W4 H. \; ~3 Q4 Y* s
    <description>Are acls enabled.</description>
* A" |/ O; N$ E# R& o  ?/ \    <name>yarn.acl.enable</name>+ b6 j1 v. r  ~# T9 k$ p
    <value>false</value>
- d, p8 g; O: t' a3 E( X  </property>
  <property>
    <description>Are reservation acls enabled.</description>
    <name>yarn.acl.reservation-enable</name>
    <value>false</value>
  </property>
  <property>
    <description>ACL of who can be admin of the YARN cluster.</description>
    <name>yarn.admin.acl</name>
    <value>*</value>
  </property>
  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
  <property>
    <description>Number of threads used to handle RM admin interface.</description>
    <name>yarn.resourcemanager.admin.client.thread-count</name>
    <value>1</value>
  </property>
  <property>
    <description>Maximum time to wait to establish connection to
    ResourceManager.</description>
    <name>yarn.resourcemanager.connect.max-wait.ms</name>
    <value>900000</value>
  </property>
  <property>
    <description>How often to try connecting to the
    ResourceManager.</description>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>30000</value>
  </property>
  <property>
    <description>The maximum number of application attempts. It's a global
    setting for all application masters. Each application master can specify
    its individual maximum number of application attempts via the API, but the
    individual number cannot be more than the global upper bound. If it is,
    the resourcemanager will override it. The default number is set to 2, to
    allow at least one retry for AM.</description>
    <name>yarn.resourcemanager.am.max-attempts</name>
    <value>2</value>
  </property>
  <property>
    <description>How often to check that containers are still alive.</description>
    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>The keytab for the resource manager.</description>
    <name>yarn.resourcemanager.keytab</name>
    <value>/etc/krb5.keytab</value>
  </property>
  <property>
    <description>Flag to enable override of the default kerberos authentication
    filter with the RM authentication filter to allow authentication using
    delegation tokens(fallback to kerberos if the tokens are missing). Only
    applicable when the http authentication type is kerberos.</description>
    <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Flag to enable cross-origin (CORS) support in the RM. This flag
    requires the CORS filter initializer to be added to the filter initializers
    list in core-site.xml.</description>
    <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>How long to wait until a node manager is considered dead.</description>
    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>Path to file with nodes to include.</description>
    <name>yarn.resourcemanager.nodes.include-path</name>
    <value></value>
  </property>
  <property>
    <description>Path to file with nodes to exclude.</description>
    <name>yarn.resourcemanager.nodes.exclude-path</name>
    <value></value>
  </property>
  <property>
    <description>The expiry interval for node IP caching. -1 disables the caching</description>
    <name>yarn.resourcemanager.node-ip-cache.expiry-interval-secs</name>
    <value>-1</value>
  </property>
  <property>
    <description>Number of threads to handle resource tracker calls.</description>
    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
  <property>
    <description>The minimum allocation for every container request at the RM
    in MBs. Memory requests lower than this will be set to the value of this
    property. Additionally, a node manager that is configured to have less memory
    than this value will be shut down by the resource manager.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <description>The maximum allocation for every container request at the RM
    in MBs. Memory requests higher than this will throw an
    InvalidResourceRequestException.</description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
  <property>
    <description>The minimum allocation for every container request at the RM
    in terms of virtual CPU cores. Requests lower than this will be set to the
    value of this property. Additionally, a node manager that is configured to
    have fewer virtual cores than this value will be shut down by the resource
    manager.</description>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <description>The maximum allocation for every container request at the RM
    in terms of virtual CPU cores. Requests higher than this will throw an
    InvalidResourceRequestException.</description>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>4</value>
  </property>
  <property>
    <description>
    Used by node labels.  If set to true, the port should be included in the
    node name.  Only usable if your scheduler supports node labels.
    </description>
    <name>yarn.scheduler.include-port-in-node-name</name>
    <value>false</value>
  </property>
  <property>
    <description>Enable RM to recover state after starting. If true, then
      yarn.resourcemanager.store.class must be specified.</description>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>Should RM fail fast if it encounters any errors. By default, it
      points to ${yarn.fail-fast}. Errors include:
      1) exceptions when state-store write/read operations fails.
    </description>
    <name>yarn.resourcemanager.fail-fast</name>
    <value>${yarn.fail-fast}</value>
  </property>
  <property>
    <description>Should YARN fail fast if it encounters any errors.
      This is a global config for all other components including RM, NM etc.
      If no value is set for a component-specific config (e.g. yarn.resourcemanager.fail-fast),
      this value will be the default.
    </description>
    <name>yarn.fail-fast</name>
    <value>false</value>
  </property>
  <property>
    <description>Enable RM work preserving recovery. This configuration is private
    to YARN for experimenting the feature.
    </description>
    <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Set the amount of time RM waits before allocating new
    containers on work-preserving-recovery. Such wait period gives RM a chance
    to settle down resyncing with NMs in the cluster on recovery, before assigning
    new containers to applications.
    </description>
    <name>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</name>
    <value>10000</value>
  </property>
  <property>
    <description>The class to use as the persistent store.
      If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
      is used, the store is implicitly fenced; meaning a single ResourceManager
      is able to use the store at any point in time. More details on this
      implicit fencing, along with setting up appropriate ACLs is discussed
      under yarn.resourcemanager.zk-state-store.root-node.acl.
    </description>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
  </property>
' `  f  i5 r9 o8 t& w  <property>* m( Y9 p( _- j
    <description>When automatic failover is enabled, number of zookeeper$ i( w* h. Q. R
      operation retry times in ActiveStandbyElector</description>* `7 Z4 R+ a+ \9 x, t0 @
    <name>yarn.resourcemanager.ha.failover-controller.active-standby-elector.zk.retries</name>% h/ w/ s4 a) O( O. k. U
    <!--<value>3</value>-->% L1 i% m- n9 F7 m7 B+ n" f0 k! q
  </property>- s- U5 b) H3 `3 p9 Z  @$ @: d; x' g
  <property>
2 {  C" K: S( e% M, i9 O: K    <description>The maximum number of completed applications RM state5 P$ k' A4 v3 p9 S
    store keeps, less than or equals to ${yarn.resourcemanager.max-completed-applications}.
9 d7 U6 g( W/ d' Z7 t. |- v4 a% k    By default, it equals to ${yarn.resourcemanager.max-completed-applications}.
9 X+ F4 k- |# T, D# p* N    This ensures that the applications kept in the state store are consistent with
2 q! e. H/ o  s! Z" Z( ^- ~9 s( O    the applications remembered in RM memory.' Q, @6 R0 V% r' Y
    Any values larger than ${yarn.resourcemanager.max-completed-applications} will
. L; m2 L& v3 s% p: \5 E% d4 [    be reset to ${yarn.resourcemanager.max-completed-applications}.
7 Z# T4 m9 _4 Q/ D% V, {    Note that this value impacts the RM recovery performance. Typically,
% R% |4 C. L4 x, R9 B% Z    a smaller value indicates better performance on RM recovery.
( _! H% M# H9 p3 ?$ e    </description>& D1 j4 J2 Y4 n2 @
    <name>yarn.resourcemanager.state-store.max-completed-applications</name>' O# R0 m; P6 [! ]1 Y* W* F
    <value>${yarn.resourcemanager.max-completed-applications}</value>" ]4 M+ w! Q0 [  ~. G4 T
  </property>' B5 z, q* [6 z+ y8 q2 G$ K, h
  <property>7 c) [% z8 J- }9 l0 r
    <description>Full path of the ZooKeeper znode where RM state will be4 i4 ~& `' r; T: }% v* o
    stored. This must be supplied when using
, H7 `) U! y: A2 b9 G9 b    org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
/ ^" f6 M/ m) o2 o    as the value for yarn.resourcemanager.store.class</description>& W% p; y0 g2 Z' I
    <name>yarn.resourcemanager.zk-state-store.parent-path</name>3 w8 w! r, J: c
    <value>/rmstore</value>. k7 m# S  I3 O! z; t  k
  </property>  x5 a! H. f# j; o$ Q
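  <!-- Example (sketch, not part of the defaults file; hypothetical ZK quorum
       hosts): to use the ZooKeeper-backed store instead of the FileSystem
       store, which is what enables the implicit fencing described above,
       a yarn-site.xml would typically override:
         yarn.resourcemanager.store.class =
           org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
         yarn.resourcemanager.zk-address = zk1:2181,zk2:2181,zk3:2181 -->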
  <property>3 N" L; ^' [; Q3 A% J
    <description>& G- v$ W( V' f# X
      ACLs to be used for the root znode when using ZKRMStateStore in an HA
" D9 p$ G( y! K; ?2 \7 |      scenario for fencing.& P, I2 w* }& W  J( o4 u
      ZKRMStateStore supports implicit fencing to allow a single8 P( x4 |' Z1 l$ g; N: O& X
      ResourceManager write-access to the store. For fencing, the( S/ ]: O# E  s/ I, q
      ResourceManagers in the cluster share read-write-admin privileges on the; V. V; N, i+ ^4 E" O
      root node, but the Active ResourceManager claims exclusive create-delete
! ^  P% W- L! w& q! x; r      permissions.9 }, r6 R$ w& b. `! w
      By default, when this property is not set, we use the ACLs from
9 q. K$ T! L( C' m- ]% x      yarn.resourcemanager.zk-acl for shared admin access and
8 n6 d7 t) k6 H: T      rm-address:random-number for username-based exclusive create-delete2 R7 ]  i% s4 U- a& d- f
      access., `- z% E3 ]$ H6 x6 V2 {/ \7 T& W5 g' v
      This property allows users to set ACLs of their choice instead of using
, J" T7 n8 e9 [' c" _: N; v      the default mechanism. For fencing to work, the ACLs should be
: I  [  D& I# A# [5 G* _      carefully set differently on each ResourceManger such that all the1 o9 U8 n5 l, L
      ResourceManagers have shared admin access and the Active ResourceManger  m6 e# `3 ~  Y# {9 I" N  j+ Y" C
      takes over (exclusively) the create-delete access.; [# m$ K5 \2 |3 y, I
    </description>
2 O+ _' ~$ L: r    <name>yarn.resourcemanager.zk-state-store.root-node.acl</name>: {$ C- X  i* l
  </property>
9 c3 p+ w4 i0 |2 H  <property>' }( r( F8 N: `, k5 ~
    <description>URI pointing to the location of the FileSystem path where3 P! r% O: }; g+ M
    RM state will be stored. This must be supplied when using
0 E' d. y! B- J" O: n3 X9 s! C    org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore1 x2 w1 O0 K7 d$ a
    as the value for yarn.resourcemanager.store.class</description>& L. G$ u( r# R' u6 [* n. K8 {1 y
    <name>yarn.resourcemanager.fs.state-store.uri</name>
, J+ ~( ~& `+ i    <value>${hadoop.tmp.dir}/yarn/system/rmstore</value>
) D# O1 k( P4 U4 G% R    <!--value>hdfs://localhost:9000/rmstore</value-->
" R" M4 q; k+ @' N  </property>
  <property>
    <description>the number of retries to recover from IOException in
    FileSystemRMStateStore.
    </description>
    <name>yarn.resourcemanager.fs.state-store.num-retries</name>
    <value>0</value>
  </property>
  <property>
    <description>Retry interval in milliseconds in FileSystemRMStateStore.
    </description>
    <name>yarn.resourcemanager.fs.state-store.retry-interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>Local path where the RM state will be stored when using
    org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore
    as the value for yarn.resourcemanager.store.class</description>
    <name>yarn.resourcemanager.leveldb-state-store.path</name>
    <value>${hadoop.tmp.dir}/yarn/system/rmstore</value>
  </property>
  <property>
    <description>The time in seconds between full compactions of the leveldb
    database. Setting the interval to zero disables the full compaction
    cycles.</description>
    <name>yarn.resourcemanager.leveldb-state-store.compaction-interval-secs</name>
    <value>3600</value>
  </property>
  <property>
    <description>Enable RM high-availability. When enabled,
      (1) The RM starts in the Standby mode by default, and transitions to
      the Active mode when prompted to.
      (2) The nodes in the RM ensemble are listed in
      yarn.resourcemanager.ha.rm-ids
      (3) The id of each RM either comes from yarn.resourcemanager.ha.id
      if yarn.resourcemanager.ha.id is explicitly specified or can be
      figured out by matching yarn.resourcemanager.address.{id} with local address
      (4) The actual physical addresses come from the configs of the pattern
      - {rpc-config}.{id}</description>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>false</value>
  </property>
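  <!-- Example (sketch, not part of the defaults file; hypothetical host
       names): a minimal two-RM HA setup in yarn-site.xml would typically
       override the HA defaults above, e.g.:
         yarn.resourcemanager.ha.enabled = true
         yarn.resourcemanager.ha.rm-ids = rm1,rm2
         yarn.resourcemanager.hostname.rm1 = rm1.example.com
         yarn.resourcemanager.hostname.rm2 = rm2.example.com
         yarn.resourcemanager.zk-address = zk1:2181,zk2:2181,zk3:2181
         yarn.resourcemanager.cluster-id = cluster1 -->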
  <property>
    <description>Enable automatic failover.
      By default, it is enabled only when HA is enabled</description>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Enable embedded automatic failover.
      By default, it is enabled only when HA is enabled.
      The embedded elector relies on the RM state store to handle fencing,
      and is primarily intended to be used in conjunction with ZKRMStateStore.
    </description>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <description>The base znode path to use for storing leader information,
      when using ZooKeeper based leader election.</description>
    <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
    <value>/yarn-leader-election</value>
  </property>
  <property>
    <description>Index at which last section of application id (with each section
      separated by _ in application id) will be split so that application znode
      stored in zookeeper RM state store will be stored as two different znodes
      (parent-child). Split is done from the end.
      For instance, with no split, appid znode will be of the form
      application_1352994193343_0001. If the value of this config is 1, the
      appid znode will be broken into two parts application_1352994193343_000
      and 1 respectively with former being the parent node.
      application_1352994193343_0002 will then be stored as 2 under the parent
      node application_1352994193343_000. This config can take values from 0 to 4.
      0 means there will be no split. If configuration value is outside this
      range, it will be treated as config value of 0 (i.e. no split). A value
      larger than 0 (up to 4) should be configured if you are storing a large number
      of apps in ZK based RM state store and state store operations are failing due to
      LenError in Zookeeper.</description>
    <name>yarn.resourcemanager.zk-appid-node.split-index</name>
    <value>0</value>
  </property>
  <property>
    <description>Index at which the RM Delegation Token ids will be split so
      that the delegation token znodes stored in the zookeeper RM state store
      will be stored as two different znodes (parent-child). The split is done
      from the end. For instance, with no split, a delegation token znode will
      be of the form RMDelegationToken_123456789. If the value of this config is
      1, the delegation token znode will be broken into two parts:
      RMDelegationToken_12345678 and 9 respectively with former being the parent
      node. This config can take values from 0 to 4. 0 means there will be no
      split. If the value is outside this range, it will be treated as 0 (i.e.
      no split). A value larger than 0 (up to 4) should be configured if you are
      running a large number of applications, with long-lived delegation tokens
      and state store operations (e.g. failover) are failing due to LenError in
      Zookeeper.</description>
    <name>yarn.resourcemanager.zk-delegation-token-node.split-index</name>
    <value>0</value>
  </property>
  <property>
    <description>Specifies the maximum size of the data that can be stored
      in a znode. Value should be same or less than jute.maxbuffer configured
      in zookeeper. Default value configured is 1MB.</description>
    <name>yarn.resourcemanager.zk-max-znode-size.bytes</name>
    <value>1048576</value>
  </property>
  <property>
    <description>Name of the cluster. In a HA setting,
      this is used to ensure the RM participates in leader
      election for this cluster and ensures it does not affect
      other clusters</description>
    <name>yarn.resourcemanager.cluster-id</name>
    <!--value>yarn-cluster</value-->
  </property>
  <property>
    <description>The list of RM nodes in the cluster when HA is
      enabled. See description of yarn.resourcemanager.ha
      .enabled for full details on how this is used.</description>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <!--value>rm1,rm2</value-->
  </property>
  <property>
    <description>The id (string) of the current RM. When HA is enabled, this
      is an optional config. The id of current RM can be set by explicitly
      specifying yarn.resourcemanager.ha.id or figured out by matching
      yarn.resourcemanager.address.{id} with local address
      See description of yarn.resourcemanager.ha.enabled
      for full details on how this is used.</description>
    <name>yarn.resourcemanager.ha.id</name>
    <!--value>rm1</value-->
  </property>
  <property>
    <description>When HA is enabled, the class to be used by Clients, AMs and
      NMs to failover to the Active RM. It should extend
      org.apache.hadoop.yarn.client.RMFailoverProxyProvider</description>
    <name>yarn.client.failover-proxy-provider</name>
    <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
  </property>
  <property>
    <description>When HA is enabled, the max number of times
      FailoverProxyProvider should attempt failover. When set,
      this overrides the yarn.resourcemanager.connect.max-wait.ms. When
      not set, this is inferred from
      yarn.resourcemanager.connect.max-wait.ms.</description>
    <name>yarn.client.failover-max-attempts</name>
    <!--value>15</value-->
  </property>
  <property>
    <description>When HA is enabled, the sleep base (in milliseconds) to be
      used for calculating the exponential delay between failovers. When set,
      this overrides the yarn.resourcemanager.connect.* settings. When
      not set, yarn.resourcemanager.connect.retry-interval.ms is used instead.
    </description>
    <name>yarn.client.failover-sleep-base-ms</name>
    <!--value>500</value-->
  </property>
  <property>
    <description>When HA is enabled, the maximum sleep time (in milliseconds)
      between failovers. When set, this overrides the
      yarn.resourcemanager.connect.* settings. When not set,
      yarn.resourcemanager.connect.retry-interval.ms is used instead.</description>
    <name>yarn.client.failover-sleep-max-ms</name>
    <!--value>15000</value-->
  </property>
  <property>
    <description>When HA is enabled, the number of retries per
      attempt to connect to a ResourceManager. In other words,
      it is the ipc.client.connect.max.retries to be used during
      failover attempts</description>
    <name>yarn.client.failover-retries</name>
    <value>0</value>
  </property>
  <property>
    <description>When HA is enabled, the number of retries per
      attempt to connect to a ResourceManager on socket timeouts. In other
      words, it is the ipc.client.connect.max.retries.on.timeouts to be used
      during failover attempts</description>
    <name>yarn.client.failover-retries-on-socket-timeouts</name>
    <value>0</value>
  </property>
  <property>
    <description>The maximum number of completed applications RM keeps. </description>
    <name>yarn.resourcemanager.max-completed-applications</name>
    <value>10000</value>
  </property>
  <property>
    <description>Interval at which the delayed token removal thread runs</description>
    <name>yarn.resourcemanager.delayed.delegation-token.removal-interval-ms</name>
    <value>30000</value>
  </property>
  <property>
    <description>Maximum size in bytes for configurations that can be provided
      by application to RM for delegation token renewal.
      By experiment, it's roughly 128 bytes per key-value pair.
      The default value 12800 allows roughly 100 configs, may be less.
    </description>
    <name>yarn.resourcemanager.delegation-token.max-conf-size-bytes</name>
    <value>12800</value>
  </property>
  <property>
    <description>If true, ResourceManager will have proxy-user privileges.
    Use case: In a secure cluster, YARN requires the user hdfs delegation-tokens to
    do localization and log-aggregation on behalf of the user. If this is set to true,
    ResourceManager is able to request new hdfs delegation tokens on behalf of
    the user. This is needed by long-running-service, because the hdfs tokens
    will eventually expire and YARN requires new valid tokens to do localization
    and log-aggregation. Note that to enable this use case, the corresponding
    HDFS NameNode has to configure ResourceManager as the proxy-user so that
    ResourceManager can itself ask for new tokens on behalf of the user when
    tokens are past their max-life-time.</description>
    <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>Interval for the roll over for the master key used to generate
        application tokens
    </description>
    <name>yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs</name>
    <value>86400</value>
  </property>
  <property>
    <description>Interval for the roll over for the master key used to generate
        container tokens. It is expected to be much greater than
        yarn.nm.liveness-monitor.expiry-interval-ms and
        yarn.resourcemanager.rm.container-allocation.expiry-interval-ms. Otherwise the
        behavior is undefined.
    </description>
    <name>yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs</name>
    <value>86400</value>
  </property>
  <property>
1 H. Q8 H* @% o3 n. a    <description>The heart-beat interval in milliseconds for every NodeManager in the cluster.</description>
( d& z0 f8 f% u( b& V    <name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name>& R; x2 l  H3 W4 e. Y, ~
    <value>1000</value>! K( Y7 Q+ X7 t
  </property>1 G7 q2 {4 {7 ?
  <property>
6 ~4 R6 t; {0 @  L9 B    <description>The minimum allowed version of a connecting nodemanager.  The valid values are1 _; O+ b8 O; {
      NONE (no version checking), EqualToRM (the nodemanager's version is equal to
, J& e  b5 k% ?) @      or greater than the RM version), or a Version String.</description>( y) o4 C" E$ L6 }$ u2 m* m
    <name>yarn.resourcemanager.nodemanager.minimum.version</name>: T8 i% b. X4 B$ ~' ]" l4 J
    <value>NONE</value>( ^/ m4 c/ v. b# K4 k/ E
  </property>" E; V7 G: J2 e: J
  <property>
  l( b! ]" E) Z. H5 p    <description>Enable a set of periodic monitors (specified in
: r' Z+ I- M! t  C& }- w        yarn.resourcemanager.scheduler.monitor.policies) that affect the
) h, g" z" y, k- ^: j6 v& q: G        scheduler.</description>
" `2 ?, }7 k0 z! Y* O7 s1 P  \+ p3 w    <name>yarn.resourcemanager.scheduler.monitor.enable</name>
1 G, P) V3 W3 V* O# a2 ?1 n    <value>false</value>1 L+ t$ r# L/ R2 O- C- d1 Y* I
  </property>5 C, p$ y" ~, H+ k2 Y9 v8 L
  <property>, I) I" n$ K! ?" E4 r6 m
    <description>The list of SchedulingEditPolicy classes that interact with
3 C4 m" g! E; \8 V/ d        the scheduler. A particular module may be incompatible with the0 V1 _8 I+ P4 R- r7 y0 l
        scheduler, other policies, or a configuration of either.</description>7 ?, }4 ^1 X3 \! z0 J
    <name>yarn.resourcemanager.scheduler.monitor.policies</name>5 L+ Y* _. i$ p- J
    <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
8 M( _4 {# ^9 P  </property>) {! e6 R# p0 Y2 o& E/ C! F* O
  <property>
$ _, R! X& d7 \4 q: X* S    <description>The class to use as the configuration provider.6 [7 o7 f8 ^6 ]  ^( f
    If org.apache.hadoop.yarn.LocalConfigurationProvider is used,, O1 p, w  U; }- `2 K0 C
    the local configuration will be loaded.
3 r" [; g, v) Z; p" v6 e    If org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider is used,
* ]) ?! y  D$ }% q# v* H3 X    the configuration which will be loaded should be uploaded to remote File system first.
$ {- e1 o6 U7 ]" n# k9 F" O2 K4 y  D    </description>
4 F, U) R. q% b# l+ y+ p- ]" @    <name>yarn.resourcemanager.configuration.provider-class</name>
$ E/ X. T+ u1 a+ Y0 O! m$ a3 q    <value>org.apache.hadoop.yarn.LocalConfigurationProvider</value>
( I4 q- O, `# g2 D    <!-- <value>org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider</value> -->
" K. u( D' T8 A8 `& |# `% L4 W  </property>
2 `+ Y& A6 y* p0 _+ C' P. l0 E  <property>
: R* }" b# b5 X2 x7 d    <description>9 }0 {. U! H# E5 u- @
    The value specifies the file system (e.g. HDFS) path where ResourceManager, Y6 R' a' X( q% a; w/ e: _/ m
    loads configuration if yarn.resourcemanager.configuration.provider-class
: E7 p4 y$ E3 n/ [, r4 s    is set to org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider.8 X: K' Z' m1 s2 D  H* |7 c& [( N/ `
    </description>
6 t% E: h- s6 p# f2 s$ l0 Q    <name>yarn.resourcemanager.configuration.file-system-based-store</name>/ v) Q" G/ y- D+ B, Q3 \5 b
    <value>/yarn/conf</value>% h4 t" X3 t8 V- t3 q7 Y; l
  </property>  q6 G* [$ z) [3 d4 |1 W5 J
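  <!-- Example (sketch, not part of the defaults file; hypothetical HDFS path):
       to load RM configuration from the remote file system instead of the
       local classpath, one would typically set:
         yarn.resourcemanager.configuration.provider-class =
           org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider
         yarn.resourcemanager.configuration.file-system-based-store = /yarn/conf
       and upload the configuration files to that directory first. -->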
  <property>4 e" R- y' |. T
    <description>The setting that controls whether yarn system metrics is
    published to the Timeline server (version one) or not, by RM.
    This configuration is now deprecated in favor of
    yarn.system-metrics-publisher.enabled.</description>
    <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>The setting that controls whether yarn system metrics is
    published on the Timeline service or not by RM and NM.</description>
    <name>yarn.system-metrics-publisher.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>The setting that controls whether yarn container events are
    published to the timeline service or not by RM. This configuration setting
    is for ATS V2.</description>
    <name>yarn.rm.system-metrics-publisher.emit-container-events</name>
    <value>false</value>
  </property>
  <property>
    <description>Number of worker threads that send the yarn system metrics
    data.</description>
    <name>yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size</name>
    <value>10</value>
  </property>
  <property>
    <description>Number of diagnostics/failure messages that can be saved in RM
    for log aggregation. It also defines the number of diagnostics/failure
    messages that can be shown in the log aggregation web ui.</description>
    <name>yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory</name>
    <value>10</value>
  </property>
  <!-- Node Manager Configs -->
  <property>
    <description>
    RM DelegationTokenRenewer thread count
    </description>
    <name>yarn.resourcemanager.delegation-token-renewer.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>
    RM secret key update interval in ms
    </description>
    <name>yarn.resourcemanager.delegation.key.update-interval</name>
    <value>86400000</value>
  </property>
  <property>
    <description>
    RM delegation token maximum lifetime in ms
    </description>
    <name>yarn.resourcemanager.delegation.token.max-lifetime</name>
    <value>604800000</value>
  </property>
  <property>
    <description>
    RM delegation token update interval in ms
    </description>
    <name>yarn.resourcemanager.delegation.token.renew-interval</name>
    <value>86400000</value>
  </property>
  <property>
    <description>
    Thread pool size for RMApplicationHistoryWriter.
    </description>
    <name>yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size</name>
    <value>10</value>
  </property>
  <property>
    <description>
    Comma-separated list of values (in minutes) for schedule queue related
    metrics.
    </description>
    <name>yarn.resourcemanager.metrics.runtime.buckets</name>
    <value>60,300,1440</value>
  </property>
  <property>
    <description>
    Interval for the roll over for the master key used to generate
    NodeManager tokens.  It is expected to be set to a value much larger
    than yarn.nm.liveness-monitor.expiry-interval-ms.
    </description>
    <name>yarn.resourcemanager.nm-tokens.master-key-rolling-interval-secs</name>
    <value>86400</value>
  </property>
  <property>
    <description>
    Flag to enable the ResourceManager reservation system.
    </description>
    <name>yarn.resourcemanager.reservation-system.enable</name>
    <value>false</value>
  </property>
  <property>
    <description>
    The Java class to use as the ResourceManager reservation system.
    By default, it is set to
    org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityReservationSystem
    when using CapacityScheduler and is set to
    org.apache.hadoop.yarn.server.resourcemanager.reservation.FairReservationSystem
    when using FairScheduler.
    </description>
    <name>yarn.resourcemanager.reservation-system.class</name>
    <value></value>
  </property>
  <property>
    <description>
    The plan follower policy class name to use for the ResourceManager
    reservation system.
    By default,
    org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacitySchedulerPlanFollower
    is used when using CapacityScheduler, and
    org.apache.hadoop.yarn.server.resourcemanager.reservation.FairSchedulerPlanFollower
    when using FairScheduler.
    </description>
    <name>yarn.resourcemanager.reservation-system.plan.follower</name>
    <value></value>
  </property>
  <property>
    <description>
    Step size of the reservation system in ms
    </description>
    <name>yarn.resourcemanager.reservation-system.planfollower.time-step</name>
    <value>1000</value>
  </property>
  <property>
    <description>
    The expiry interval for a container
    </description>
    <name>yarn.resourcemanager.rm.container-allocation.expiry-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>
    Flag to enable/disable resource profiles
    </description>
    <name>yarn.resourcemanager.resource-profiles.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
    If resource profiles is enabled, source file for the profiles
    </description>
    <name>yarn.resourcemanager.resource-profiles.source-file</name>
    <value>resource-profiles.json</value>
  </property>
  <!-- Node Manager Configuration -->
  <property>
    <description>The hostname of the NM.</description>
    <name>yarn.nodemanager.hostname</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <description>The address of the container manager in the NM.</description>
    <name>yarn.nodemanager.address</name>
    <value>${yarn.nodemanager.hostname}:0</value>
  </property>
  <property>
    <description>
      The actual address the server will bind to. If this optional address is
      set, the RPC and webapp servers will bind to this address and the port specified in
      yarn.nodemanager.address and yarn.nodemanager.webapp.address, respectively. This is
      most useful for making NM listen to all interfaces by setting to 0.0.0.0.
    </description>
    <name>yarn.nodemanager.bind-host</name>
    <value></value>
  </property>
  <property>
    <description>Environment variables that should be forwarded from the NodeManager's environment to the container's.</description>
    <name>yarn.nodemanager.admin-env</name>
    <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value>
  </property>
  <property>
    <description>Environment variables that containers may override rather than use NodeManager's default.</description>
    <name>yarn.nodemanager.env-whitelist</name>
- @9 o4 R+ ?* \: K    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ</value>
4 Z6 T+ `$ _5 b4 h1 G, _  </property>
% h) O1 M" i5 d8 s/ D9 I- \  <property>
0 W. T, q) J: y, D2 n; x+ ?    <description>who will execute(launch) the containers.</description>) _( F5 {' d; u" t7 D! @/ b
    <name>yarn.nodemanager.container-executor.class</name>
* J) ^. V, U# W4 w    <value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value>5 y# k( C: t# A8 N" x# S4 b
  </property>3 S8 }+ a3 k" ^* e
  <property>
    <description>Comma separated List of container state transition listeners.</description>
    <name>yarn.nodemanager.container-state-transition-listener.classes</name>
    <value></value>
  </property>
  <property>
    <description>Number of threads container manager uses.</description>
    <name>yarn.nodemanager.container-manager.thread-count</name>
    <value>20</value>
  </property>
  <property>
    <description>Number of threads collector service uses.</description>
    <name>yarn.nodemanager.collector-service.thread-count</name>
    <value>5</value>
  </property>
  <property>
    <description>Number of threads used in cleanup.</description>
    <name>yarn.nodemanager.delete.thread-count</name>
    <value>4</value>
  </property>
  <property>
    <description>Max number of OPPORTUNISTIC containers to queue at the
      nodemanager.</description>
    <name>yarn.nodemanager.opportunistic-containers-max-queue-length</name>
    <value>0</value>
  </property>
  <property>
    <description>
      Number of seconds after an application finishes before the nodemanager's
      DeletionService will delete the application's localized file directory
      and log directory.
      To diagnose YARN application problems, set this property's value large
      enough (for example, to 600 = 10 minutes) to permit examination of these
      directories. After changing the property's value, you must restart the
      nodemanager in order for it to have an effect.
      The roots of YARN applications' work directories is configurable with
      the yarn.nodemanager.local-dirs property (see below), and the roots
      of the YARN applications' log directories is configurable with the
      yarn.nodemanager.log-dirs property (see also below).
    </description>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>0</value>
  </property>
  <property>
    <description>Keytab for NM.</description>
    <name>yarn.nodemanager.keytab</name>
    <value>/etc/krb5.keytab</value>
  </property>
  <property>
    <description>List of directories to store localized files in. An
      application's localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid}, will
      be subdirectories of this.
    </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>${hadoop.tmp.dir}/nm-local-dir</value>
  </property>
  <property>
    <description>It limits the maximum number of files which will be localized
      in a single local directory. If the limit is reached then sub-directories
      will be created and new files will be localized in them. If it is set to
      a value less than or equal to 36 [which are sub-directories (0-9 and then
      a-z)] then NodeManager will fail to start. For example; [for public
      cache] if this is configured with a value of 40 ( 4 files +
      36 sub-directories) and the local-dir is "/tmp/local-dir1" then it will
      allow 4 files to be created directly inside "/tmp/local-dir1/filecache".
      For files that are localized further it will create a sub-directory "0"
      inside "/tmp/local-dir1/filecache" and will localize files inside it
      until it becomes full. If a file is removed from a sub-directory that
      is marked full, then that sub-directory will be used back again to
      localize files.
    </description>
    <name>yarn.nodemanager.local-cache.max-files-per-directory</name>
    <value>8192</value>
  </property>
  <property>
    <description>Address where the localizer IPC is.</description>
    <name>yarn.nodemanager.localizer.address</name>
    <value>${yarn.nodemanager.hostname}:8040</value>
  </property>
  <property>
    <description>Address where the collector service IPC is.</description>
    <name>yarn.nodemanager.collector-service.address</name>
    <value>${yarn.nodemanager.hostname}:8048</value>
  </property>
  <property>
    <description>Interval in between cache cleanups.</description>
    <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>Target size of localizer cache in MB, per nodemanager. It is
      a target retention size that only includes resources with PUBLIC and
      PRIVATE visibility and excludes resources with APPLICATION visibility
    </description>
    <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
    <value>10240</value>
  </property>
  <property>
    <description>Number of threads to handle localization requests.</description>
    <name>yarn.nodemanager.localizer.client.thread-count</name>
    <value>5</value>
  </property>
  <property>
    <description>Number of threads to use for localization fetching.</description>
    <name>yarn.nodemanager.localizer.fetch.thread-count</name>
    <value>4</value>
  </property>
  <property>
    <description>
    </description>
    <name>yarn.nodemanager.container-localizer.java.opts</name>
    <value>-Xmx256m</value>
  </property>
  <property>
    <description>
      The log level for container localizer while it is an independent process.
    </description>
    <name>yarn.nodemanager.container-localizer.log.level</name>
    <value>INFO</value>
  </property>
  <property>
    <description>
      Where to store container logs. An application's localized log directory
      will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
      Individual containers' log directories will be below this, in directories
      named container_{$contid}. Each container directory will contain the files
      stderr, stdin, and syslog generated by that container.
    </description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>${yarn.log.dir}/userlogs</value>
  </property>
  <property>
    <description>
      The permissions settings used for the creation of container
      directories when using DefaultContainerExecutor. This follows
      standard user/group/all permissions format.
    </description>
    <name>yarn.nodemanager.default-container-executor.log-dirs.permissions</name>
    <value>710</value>
  </property>
  <property>
    <description>Whether to enable log aggregation. Log aggregation collects
      each container's logs and moves these logs onto a file-system, e.g.
      HDFS, after the application completes. Users can configure the
      "yarn.nodemanager.remote-app-log-dir" and
      "yarn.nodemanager.remote-app-log-dir-suffix" properties to determine
      where these logs are moved to. Users can access the logs via the
      Application Timeline Server.
    </description>
    <name>yarn.log-aggregation-enable</name>
    <value>false</value>
  </property>
  <property>
    <description>How long to keep aggregation logs before deleting them. -1 disables.
    Be careful: set this too small and you will spam the name node.</description>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <description>How long to wait between aggregated log retention checks.
    If set to 0 or a negative value then the value is computed as one-tenth
    of the aggregated log retention time. Be careful: set this too small and
    you will spam the name node.</description>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <description>Specify which log file controllers we will support. The first
    file controller we add will be used to write the aggregated logs.
    This comma separated configuration will work with the configuration:
    yarn.log-aggregation.file-controller.%s.class which defines the supported
    file controller's class. By default, the TFile controller would be used.
    The user could override this configuration by adding more file controllers.
    To support backward compatibility, make sure that we always
    add TFile file controller.</description>
    <name>yarn.log-aggregation.file-formats</name>
    <value>TFile</value>
  </property>
  <property>
    <description>Class that supports TFile read and write operations.</description>
    <name>yarn.log-aggregation.file-controller.TFile.class</name>
    <value>org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController</value>
  </property>
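  <!--
    Illustrative note (not part of the shipped defaults): yarn-default.xml
    should not be edited directly; properties are overridden in yarn-site.xml.
    For example, to turn on log aggregation and keep aggregated logs for
    seven days, an override could look like the following. The values here
    are assumptions for illustration, not recommendations:

    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>604800</value>
    </property>
  -->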
  <property>
    <description>
    How long for ResourceManager to wait for NodeManager to report its
    log aggregation status. If waiting time of which the log aggregation
    status is reported from NodeManager exceeds the configured value, RM
    will report log aggregation status for this NodeManager as TIME_OUT.
    This configuration will be used in NodeManager as well to decide
    whether and when to delete the cached log aggregation status.
    </description>
    <name>yarn.log-aggregation-status.time-out.ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>Time in seconds to retain user logs. Only applicable if
    log aggregation is disabled
    </description>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <description>The remote log dir will be created at
      {yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}
    </description>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
  <property>
    <description>Generate additional logs about container launches.
    Currently, this creates a copy of the launch script and lists the
    directory contents of the container work dir. When listing directory
    contents, we follow symlinks to a max-depth of 5 (including symlinks
    which point outside the container work dir), which may lead to
    slowness in launching containers.
    </description>
    <name>yarn.nodemanager.log-container-debug-info.enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers. If set to -1 and
    yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
    automatically calculated (in case of Windows and Linux).
    In other cases, the default is 8192MB.
    </description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>-1</value>
  </property>
  <property>
    <description>Amount of physical memory, in MB, that is reserved
    for non-YARN processes. This configuration is only used if
    yarn.nodemanager.resource.detect-hardware-capabilities is set
    to true and yarn.nodemanager.resource.memory-mb is -1. If set
    to -1, this amount is calculated as
    20% of (system memory - 2*HADOOP_HEAPSIZE)
    </description>
    <name>yarn.nodemanager.resource.system-reserved-memory-mb</name>
    <value>-1</value>
  </property>
  <property>
    <description>Whether YARN CGroups memory tracking is enabled.</description>
    <name>yarn.nodemanager.resource.memory.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>Whether YARN CGroups strict memory enforcement is enabled.
    </description>
    <name>yarn.nodemanager.resource.memory.enforced</name>
    <value>true</value>
  </property>
  <property>
    <description>If memory limit is enforced, this is the percentage of the
      soft limit compared to the memory assigned to the container. If there
      is memory pressure, container memory usage will be pushed back to its
      soft limit by swapping out memory.
    </description>
    <name>yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage</name>
    <value>90.0</value>
  </property>
  <property>
    <description>Container swappiness is the likelihood a page will be swapped
      out compared to being kept in memory. Value is between 0-100.
    </description>
    <name>yarn.nodemanager.resource.memory.cgroups.swappiness</name>
    <value>0</value>
  </property>
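  <!--
    Illustrative note (not part of the shipped defaults): rather than relying
    on -1/auto-detection, a node's container resources can be pinned in
    yarn-site.xml. On a hypothetical 16 GB, 8-core worker, reserving some
    memory for the OS and daemons might look like this; the figures are
    assumptions for illustration only:

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>12288</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>8</value>
    </property>
  -->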
  <property>
    <description>Whether physical memory limits will be enforced for
    containers.</description>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Whether virtual memory limits will be enforced for
    containers.</description>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Ratio between virtual memory to physical memory when
    setting memory limits for containers. Container allocations are
    expressed in terms of physical memory, and virtual memory usage
    is allowed to exceed this allocation by this ratio.
    </description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
  <property>
    <description>Number of vcores that can be allocated
    for containers. This is used by the RM scheduler when allocating
    resources for containers. This is not used to limit the number of
    CPUs used by YARN containers. If it is set to -1 and
    yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
    automatically determined from the hardware in case of Windows and Linux.
    In other cases, number of vcores is 8 by default.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>-1</value>
  </property>
  <property>
    <description>Flag to determine if logical processors (such as
    hyperthreads) should be counted as cores. Only applicable on Linux
    when yarn.nodemanager.resource.cpu-vcores is set to -1 and
    yarn.nodemanager.resource.detect-hardware-capabilities is true.
    </description>
    <name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
    <value>false</value>
  </property>
  <property>
    <description>Multiplier to determine how to convert physical cores to
    vcores. This value is used if yarn.nodemanager.resource.cpu-vcores
    is set to -1 (which implies auto-calculate vcores) and
    yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The
    number of vcores will be calculated as
    number of CPUs * multiplier.
    </description>
    <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
    <value>1.0</value>
  </property>
  <property>
    <description>
    Thread pool size for LogAggregationService in Node Manager.
    </description>
    <name>yarn.nodemanager.logaggregation.threadpool-size-max</name>
    <value>100</value>
  </property>
  <property>
    <description>Percentage of CPU that can be allocated
    for containers. This setting allows users to limit the amount of
    CPU that YARN containers use. Currently functional only
    on Linux using cgroups. The default is to use 100% of CPU.
    </description>
    <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
    <value>100</value>
  </property>
  <property>
    <description>Enable auto-detection of node capabilities such as
    memory and CPU.
    </description>
    <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    <value>false</value>
  </property>
  <property>
    <description>NM Webapp address.</description>
    <name>yarn.nodemanager.webapp.address</name>
    <value>${yarn.nodemanager.hostname}:8042</value>
  </property>
  <property>
    <description>
    The https address of the NM web application.
    </description>
    <name>yarn.nodemanager.webapp.https.address</name>
    <value>0.0.0.0:8044</value>
  </property>
  <property>
    <description>
    The Kerberos keytab file to be used for spnego filter for the NM web
    interface.
    </description>
    <name>yarn.nodemanager.webapp.spnego-keytab-file</name>
    <value></value>
  </property>
  <property>
    <description>
    The Kerberos principal to be used for spnego filter for the NM web
    interface.
    </description>
    <name>yarn.nodemanager.webapp.spnego-principal</name>
    <value></value>
  </property>
  <property>
    <description>How often to monitor the node and the containers.
    If 0 or negative, monitoring is disabled.</description>
    <name>yarn.nodemanager.resource-monitor.interval-ms</name>
    <value>3000</value>
  </property>
  <property>
    <description>Class that calculates current resource utilization.</description>
    <name>yarn.nodemanager.resource-calculator.class</name>
  </property>
  <property>
    <description>Enable container monitor</description>
    <name>yarn.nodemanager.container-monitor.enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>How often to monitor containers. If not set, the value for
    yarn.nodemanager.resource-monitor.interval-ms will be used.
    If 0 or negative, container monitoring is disabled.</description>
    <name>yarn.nodemanager.container-monitor.interval-ms</name>
  </property>
  <property>
    <description>Class that calculates containers current resource utilization.
    If not set, the value for yarn.nodemanager.resource-calculator.class will
    be used.</description>
    <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
  </property>
  <property>
    <description>Frequency of running node health script.</description>
    <name>yarn.nodemanager.health-checker.interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>Script time out period.</description>
    <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
    <value>1200000</value>
  </property>
  <property>
    <description>The health check script to run.</description>
    <name>yarn.nodemanager.health-checker.script.path</name>
    <value></value>
  </property>
  <property>
    <description>The arguments to pass to the health check script.</description>
    <name>yarn.nodemanager.health-checker.script.opts</name>
    <value></value>
  </property>
  <property>
    <description>Frequency of running disk health checker code.</description>
    <name>yarn.nodemanager.disk-health-checker.interval-ms</name>
    <value>120000</value>
  </property>
0 O5 D3 ?- d2 M/ @+ h  <property>8 }! z6 J- }  W- y
    <description>The minimum fraction of number of disks to be healthy for the
, y& m% q6 ]9 l  _( X- ~, A    nodemanager to launch new containers. This correspond to both& {# ^( R4 W9 Z) Y- X  d6 A
    yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. i.e. If there2 ]) e) d7 K# q( ?7 H
    are less number of healthy local-dirs (or log-dirs) available, then4 T7 N) L. B* v8 c) R
    new containers will not be launched on this node.</description>1 d7 P2 r, ^1 Z- P$ R
    <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
+ q9 _7 `: c* ~/ p% r    <value>0.25</value>. I0 L3 x7 Q6 v9 x& Z
  </property># E  s( z8 Y0 y4 W1 |) _# z8 ]7 S5 D3 L
  <property>
    <description>The maximum percentage of disk space utilization allowed after
    which a disk is marked as bad. Values can range from 0.0 to 100.0.
    If the value is greater than or equal to 100, the nodemanager will check
    for full disk. This applies to yarn.nodemanager.local-dirs and
    yarn.nodemanager.log-dirs.</description>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>90.0</value>
  </property>
  <property>
    <description>The low threshold percentage of disk space used when a bad disk is
    marked as good. Values can range from 0.0 to 100.0. This applies to
    yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.
    Note that if its value is more than yarn.nodemanager.disk-health-checker.
    max-disk-utilization-per-disk-percentage or not set, it will be set to the same value as
    yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage.</description>
    <name>yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage</name>
    <value></value>
  </property>
  <property>
    <description>The minimum space that must be available on a disk for
    it to be used. This applies to yarn.nodemanager.local-dirs and
    yarn.nodemanager.log-dirs.</description>
    <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
    <value>0</value>
  </property>
  <property>
    <description>The path to the Linux container executor.</description>
    <name>yarn.nodemanager.linux-container-executor.path</name>
  </property>
  <property>
    <description>The class which should help the LCE handle resources.</description>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value>
    <!-- <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value> -->
  </property>
  <property>
    <description>The cgroups hierarchy under which to place YARN processes (cannot contain commas).
    If yarn.nodemanager.linux-container-executor.cgroups.mount is false
    (that is, if cgroups have been pre-configured) and the YARN user has write
    access to the parent directory, then the directory will be created.
    If the directory already exists, the administrator has to give YARN
    write permissions to it recursively.
    This property only applies when the LCE resources handler is set to
    CgroupsLCEResourcesHandler.</description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>
  <property>
    <description>Whether the LCE should attempt to mount cgroups if not found.
    This property only applies when the LCE resources handler is set to
    CgroupsLCEResourcesHandler.
    </description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
    <value>false</value>
  </property>
  <property>
    <description>This property sets the path from which YARN will read the
    CGroups configuration. YARN has built-in functionality to discover the
    system CGroup mount paths, so use this property only if YARN's automatic
    mount path discovery does not work.
    The path specified by this property must exist before the NodeManager is
    launched.
    If yarn.nodemanager.linux-container-executor.cgroups.mount is set to true,
    YARN will first try to mount the CGroups at the specified path before
    reading them.
    If yarn.nodemanager.linux-container-executor.cgroups.mount is set to
    false, YARN will read the CGroups at the specified path.
    If this property is empty, YARN tries to detect the CGroups location.
    Please refer to NodeManagerCgroups.html in the documentation for further
    details.
    This property only applies when the LCE resources handler is set to
    CgroupsLCEResourcesHandler.
    </description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  </property>
  <property>
    <description>Delay in ms between attempts to remove linux cgroup</description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.delete-delay-ms</name>
    <value>20</value>
  </property>
  <property>
    <description>This determines which of the two modes that LCE should use on
      a non-secure cluster. If this value is set to true, then all containers
      will be launched as the user specified in
      yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user. If
      this value is set to false, then containers will run as the user who
      submitted the application.</description>
    <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name>
    <value>true</value>
  </property>
  <property>
    <description>The UNIX user that containers will run as when
      Linux-container-executor is used in nonsecure mode (a use case for this
      is using cgroups) if the
      yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users is
      set to true.</description>
    <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
    <value>nobody</value>
  </property>
  <property>
    <description>The allowed pattern for UNIX user names enforced by
    Linux-container-executor when used in nonsecure mode (use case for this
    is using cgroups). The default value is taken from /usr/sbin/adduser</description>
    <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern</name>
    <value>^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$</value>
  </property>
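The default user-pattern above can be exercised directly with Python's `re` module. The sample user names below are our own; note the pattern permits an optional trailing "$" (as used by machine accounts) and forbids a leading hyphen:

```python
import re

# The default value of
# yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern,
# checked against some illustrative user names.
USER_PATTERN = r"^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$"

print(bool(re.fullmatch(USER_PATTERN, "nobody")))    # True
print(bool(re.fullmatch(USER_PATTERN, "svc-etl$")))  # True
print(bool(re.fullmatch(USER_PATTERN, "-leading")))  # False
```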
  <property>
    <description>This flag determines whether apps should run with strict resource limits
    or be allowed to consume spare resources if they need them. For example, turning the
    flag on will restrict apps to use only their share of CPU, even if the node has spare
    CPU cycles. The default value is false i.e. use available resources. Please note that
    turning this flag on may reduce job throughput on the cluster. This setting does
    not apply to other subsystems like memory.</description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
    <value>false</value>
  </property>
  <property>
    <description>Comma separated list of runtimes that are allowed when using
    LinuxContainerExecutor. The allowed values are default, docker, and
    javasandbox.</description>
    <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
    <value>default</value>
  </property>
  <property>
    <description>This configuration setting determines the capabilities
      assigned to docker containers when they are launched. While these may not
      be case-sensitive from a docker perspective, it is best to keep these
      uppercase. To run without any capabilities, set this value to
      "none" or "NONE"</description>
    <name>yarn.nodemanager.runtime.linux.docker.capabilities</name>
    <value>CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE</value>
  </property>
  <property>* W1 k2 F1 P- p8 A. E$ r, \
    <description>This configuration setting determines if7 t" O$ n( z0 v9 |1 l
      privileged docker containers are allowed on this cluster.6 \# |: G" B& g1 X7 k# S
      Use with extreme care.</description>, y2 b5 ?( n7 A# @
    <name>yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed</name>( I( }0 j; }* [2 i6 {: `: Y
    <value>false</value>- h9 ~4 D5 N& I( L
  </property>
$ K. q3 @- }6 g: y3 ~  v) }  <property>
8 y+ Q1 t9 D+ y    <description>This configuration setting determines who is allowed to run
* D6 m( X$ a8 o9 H      privileged docker containers on this cluster. Use with extreme care.
" \7 z. w# E) A7 d4 T4 f  {8 T. H    </description>; w4 B7 O6 A" B% P5 _0 S% j* w" [
    <name>yarn.nodemanager.runtime.linux.docker.privileged-containers.acl</name>
0 l0 }9 F- y1 V    <value></value>
7 _7 \( Z$ L, c6 I7 x5 _) w2 F  }  </property>
" r4 V8 s) ?$ k5 _( N) h0 [  <property>
' K" q8 ?( Q6 B0 F6 H) S' t    <description>The set of networks allowed when launching containers using the
/ }* p+ t1 c+ v2 l$ f% k; W" }      DockerContainerRuntime.</description>, p. A# L! w  {  ^: P0 s
    <name>yarn.nodemanager.runtime.linux.docker.allowed-container-networks</name>
7 O/ s7 |4 P, M' p! A    <value>host,none,bridge</value>
& D; Q' j) V9 F9 J% @' ]  </property>
  <property>
    <description>The network used when launching containers using the
      DockerContainerRuntime when no network is specified in the request.
      This network must be one of the (configurable) set of allowed container
      networks.</description>
    <name>yarn.nodemanager.runtime.linux.docker.default-container-network</name>
    <value>host</value>
  </property>
  <property>
    <description>This configuration setting determines whether the host's PID
      namespace is allowed for docker containers on this cluster.
      Use with care.</description>
    <name>yarn.nodemanager.runtime.linux.docker.host-pid-namespace.allowed</name>
    <value>false</value>
  </property>
  <property>
    <description>Property to enable docker user remapping</description>
    <name>yarn.nodemanager.runtime.linux.docker.enable-userremapping.allowed</name>
    <value>true</value>
  </property>
  <property>
    <description>lower limit for acceptable uids of user remapped user</description>
    <name>yarn.nodemanager.runtime.linux.docker.userremapping-uid-threshold</name>
    <value>1</value>
  </property>
  <property>
    <description>lower limit for acceptable gids of user remapped user</description>
    <name>yarn.nodemanager.runtime.linux.docker.userremapping-gid-threshold</name>
    <value>1</value>
  </property>
  <property>
    <description>Whether or not users are allowed to request that Docker
      containers honor the debug deletion delay. This is useful for
      troubleshooting Docker container related launch failures.</description>
    <name>yarn.nodemanager.runtime.linux.docker.delayed-removal.allowed</name>
    <value>false</value>
  </property>
  <property>
    <description>The default list of read-only mounts to be bind-mounted
      into all Docker containers that use DockerContainerRuntime.</description>
    <name>yarn.nodemanager.runtime.linux.docker.default-ro-mounts</name>
    <value></value>
  </property>
  <property>
    <description>The default list of read-write mounts to be bind-mounted
      into all Docker containers that use DockerContainerRuntime.</description>
    <name>yarn.nodemanager.runtime.linux.docker.default-rw-mounts</name>
    <value></value>
  </property>
  <property>
    <description>The mode in which the Java Container Sandbox should run detailed by
      the JavaSandboxLinuxContainerRuntime.</description>
    <name>yarn.nodemanager.runtime.linux.sandbox-mode</name>
    <value>disabled</value>
  </property>
  <property>
    <description>Permissions for application local directories.</description>
    <name>yarn.nodemanager.runtime.linux.sandbox-mode.local-dirs.permissions</name>
    <value>read</value>
  </property>
  <property>
    <description>Location for non-default java policy file.</description>
    <name>yarn.nodemanager.runtime.linux.sandbox-mode.policy</name>
    <value></value>
  </property>
  <property>
    <description>The group which will run by default without the java security
      manager.</description>
    <name>yarn.nodemanager.runtime.linux.sandbox-mode.whitelist-group</name>
    <value></value>
  </property>
  <property>
    <description>This flag determines whether memory limit will be set for the Windows Job
    Object of the containers launched by the default container executor.</description>
    <name>yarn.nodemanager.windows-container.memory-limit.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>This flag determines whether CPU limit will be set for the Windows Job
    Object of the containers launched by the default container executor.</description>
    <name>yarn.nodemanager.windows-container.cpu-limit.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
    Interval of time the linux container executor should try cleaning up
    cgroups entry when cleaning up a container.
    </description>
    <name>yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>
    The UNIX group that the linux-container-executor should run as.
    </description>
    <name>yarn.nodemanager.linux-container-executor.group</name>
    <value></value>
  </property>
  <property>
    <description>T-file compression types used to compress aggregated logs.</description>
    <name>yarn.nodemanager.log-aggregation.compression-type</name>
    <value>none</value>
  </property>
  <property>
    <description>The kerberos principal for the node manager.</description>
    <name>yarn.nodemanager.principal</name>
    <value></value>
  </property>
  <property>
    <description>A comma separated list of services where service name should only
      contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value></value>
    <!--<value>mapreduce_shuffle</value>-->
  </property>
  <property>
    <description>No. of ms to wait between sending a SIGTERM and SIGKILL to a container</description>
    <name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
    <value>250</value>
  </property>
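The sleep-delay-before-sigkill property describes a common graceful-then-forceful termination pattern: SIGTERM first, then SIGKILL if the process outlives the grace period. A sketch of that pattern in Python; `stop_container` and `grace_ms` are our own illustrative names, not YARN code:

```python
import signal
import subprocess

def stop_container(proc, grace_ms=250):
    """Send SIGTERM; if the process is still alive after grace_ms, SIGKILL it."""
    proc.send_signal(signal.SIGTERM)
    try:
        proc.wait(timeout=grace_ms / 1000.0)
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL
        proc.wait()
    return proc.returncode

# Demo: a plain `sleep` exits on SIGTERM well within the grace period.
p = subprocess.Popen(["sleep", "60"])
rc = stop_container(p)
```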
  <property>
    <description>Max time to wait for a process to come up when trying to cleanup a container</description>
    <name>yarn.nodemanager.process-kill-wait.ms</name>
    <value>5000</value>
  </property>
  <property>- d6 P  B) ^  q/ n6 R
    <description>The minimum allowed version of a resourcemanager that a nodemanager will connect to.  
9 L; J0 L: J0 N* v3 Z      The valid values are NONE (no version checking), EqualToNM (the resourcemanager's version is
% v. v, N2 E0 K      equal to or greater than the NM version), or a Version String.</description>
- B8 f& R9 y7 v$ p1 f1 T4 I    <name>yarn.nodemanager.resourcemanager.minimum.version</name>
/ o- w9 H8 v9 }* c6 _+ |3 }    <value>NONE</value># ^7 C: m/ d" n7 c  h& C' E' O6 ?
  </property>& a! N' t! e% y+ X
  <property>
    <description>Maximum size of a container's diagnostics to keep for the relaunching
      container case.</description>
    <name>yarn.nodemanager.container-diagnostics-maximum-size</name>
    <value>10000</value>
  </property>
  <property>
    <description>Minimum container restart interval in milliseconds.</description>
    <name>yarn.nodemanager.container-retry-minimum-interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>Max number of threads in NMClientAsync to process container
    management events</description>
    <name>yarn.client.nodemanager-client-async.thread-pool-max-size</name>
    <value>500</value>
  </property>
  <property>
    <description>Max time to wait to establish a connection to NM</description>
    <name>yarn.client.nodemanager-connect.max-wait-ms</name>
    <value>180000</value>
  </property>
  <property>
    <description>Time interval between each attempt to connect to NM</description>
    <name>yarn.client.nodemanager-connect.retry-interval-ms</name>
    <value>10000</value>
  </property>
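The max-wait/retry-interval pair above implies a bounded retry loop: keep retrying every retry-interval until max-wait has elapsed. A hypothetical sketch of that policy; `connect_with_retry` and the injected `clock`/`sleep` parameters are our own, and `connect` stands in for the real connection call:

```python
import time

# Illustrative retry policy: retry every retry_interval_ms until
# max_wait_ms has elapsed, then propagate the failure.
def connect_with_retry(connect, max_wait_ms=180000, retry_interval_ms=10000,
                       clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + max_wait_ms / 1000.0
    while True:
        try:
            return connect()
        except ConnectionError:
            if clock() >= deadline:
                raise
            sleep(retry_interval_ms / 1000.0)

# Example: a flaky connect that succeeds on the third attempt.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("NM not reachable yet")
    return "connected"

result = connect_with_retry(flaky_connect, max_wait_ms=1000,
                            retry_interval_ms=1, sleep=lambda s: None)
```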
  <property>
    <description>
      Max time to wait for NM to connect to RM.
      When not set, proxy will fall back to use value of
      yarn.resourcemanager.connect.max-wait.ms.
    </description>
    <name>yarn.nodemanager.resourcemanager.connect.max-wait.ms</name>
    <value></value>
  </property>
  <property>
    <description>
      Time interval between each NM attempt to connect to RM.
      When not set, proxy will fall back to use value of
      yarn.resourcemanager.connect.retry-interval.ms.
    </description>
    <name>yarn.nodemanager.resourcemanager.connect.retry-interval.ms</name>
    <value></value>
  </property>
  <property>
    <description>
      Maximum number of proxy connections to cache for node managers. If set
      to a value greater than zero then the cache is enabled and the NMClient
      and MRAppMaster will cache the specified number of node manager proxies.
      There will be at max one proxy per node manager. Ex. configuring it to a
      value of 5 will make sure that client will at max have 5 proxies cached
      with 5 different node managers. These connections for these proxies will
      be timed out if idle for more than the system wide idle timeout period.
      Note that this could cause issues on large clusters as many connections
      could linger simultaneously and lead to a large number of connection
      threads. The token used for authentication will be used only at
      connection creation time. If a new token is received then the earlier
      connection should be closed in order to use the new token. This and
      (yarn.client.nodemanager-client-async.thread-pool-max-size) are related
      and should be in sync (no need for them to be equal).
      If the value of this property is zero then the connection cache is
      disabled and connections will use a zero idle timeout to prevent too
      many connection threads on large clusters.
    </description>
    <name>yarn.client.max-cached-nodemanagers-proxies</name>
    <value>0</value>
  </property>
  <property>
    <description>Enable the node manager to recover after starting</description>
    <name>yarn.nodemanager.recovery.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>The local filesystem directory in which the node manager will
    store state when recovery is enabled.</description>
    <name>yarn.nodemanager.recovery.dir</name>
    <value>${hadoop.tmp.dir}/yarn-nm-recovery</value>
  </property>
  <property>
    <description>The time in seconds between full compactions of the NM state
    database. Setting the interval to zero disables the full compaction
    cycles.</description>
    <name>yarn.nodemanager.recovery.compaction-interval-secs</name>
    <value>3600</value>
  </property>
  <property>
    <description>Whether the nodemanager is running under supervision. A
      nodemanager that supports recovery and is running under supervision
      will not try to cleanup containers as it exits with the assumption
      it will immediately be restarted and recover containers.</description>
    <name>yarn.nodemanager.recovery.supervised</name>
    <value>false</value>
  </property>
  <!--Docker configuration-->
  <property>
    <description>
    Adjustment to the container OS scheduling priority. In Linux, passed
    directly to the nice command. If unspecified then containers are launched
    without any explicit OS priority.
    </description>
    <name>yarn.nodemanager.container-executor.os.sched.priority.adjustment</name>
  </property>
  <property>
    <description>
    Flag to enable container metrics
    </description>
    <name>yarn.nodemanager.container-metrics.enable</name>
    <value>true</value>
  </property>
  <property>
    <description>
    Container metrics flush period in ms. Set to -1 for flush on completion.
    </description>
    <name>yarn.nodemanager.container-metrics.period-ms</name>
    <value>-1</value>
  </property>
  <property>
    <description>
    The delay time ms to unregister container metrics after completion.
    </description>
    <name>yarn.nodemanager.container-metrics.unregister-delay-ms</name>
    <value>10000</value>
  </property>
  <property>
    <description>
    Class used to calculate current container resource utilization.
    </description>
    <name>yarn.nodemanager.container-monitor.process-tree.class</name>
    <value></value>
  </property>
  <property>
    <description>
    Flag to enable NodeManager disk health checker
    </description>
    <name>yarn.nodemanager.disk-health-checker.enable</name>
    <value>true</value>
  </property>
  <property>
    <description>
    Number of threads to use in NM log cleanup. Used when log aggregation
    is disabled.
    </description>
    <name>yarn.nodemanager.log.deletion-threads-count</name>
    <value>4</value>
  </property>
  <property>
    <description>
    The Windows group that the windows-container-executor should run as.
    </description>
    <name>yarn.nodemanager.windows-secure-container-executor.group</name>
    <value></value>
  </property>
  <!-- Map Reduce Configuration -->
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- WebAppProxy Configuration -->
  <property>
    <description>The kerberos principal for the proxy, if the proxy is not
    running as part of the RM.</description>
    <name>yarn.web-proxy.principal</name>
    <value/>
  </property>
  <property>
    <description>Keytab for WebAppProxy, if the proxy is not running as part of
    the RM.</description>
    <name>yarn.web-proxy.keytab</name>
  </property>
  <property>
    <description>The address for the web proxy as HOST:PORT, if this is not
    given then the proxy will run as part of the RM</description>
    <name>yarn.web-proxy.address</name>
    <value/>
  </property>
  <!-- Applications' Configuration -->
  <property>
    <description>
      CLASSPATH for YARN applications. A comma-separated list
      of CLASSPATH entries. When this value is empty, the following default
      CLASSPATH for YARN applications would be used.
      For Linux:
      $HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/share/hadoop/common/*,
      $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
      $HADOOP_YARN_HOME/share/hadoop/yarn/*,
      $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
      For Windows:
      %HADOOP_CONF_DIR%,
      %HADOOP_COMMON_HOME%/share/hadoop/common/*,
      %HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,
      %HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,
      %HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,
      %HADOOP_YARN_HOME%/share/hadoop/yarn/*,
      %HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*
    </description>
    <name>yarn.application.classpath</name>
    <value></value>
  </property>
  <!-- Timeline Service Configuration -->
  <property>
    <description>Indicate what is the current version of the running
    timeline service. For example, if "yarn.timeline-service.version" is 1.5,
    and "yarn.timeline-service.enabled" is true, it means the cluster will and
    should bring up the timeline service v.1.5 (and nothing else).
    On the client side, if the client uses the same version of timeline service,
    it should succeed. If the client chooses to use a smaller version in spite of this,
    then depending on how robust the compatibility story is between versions,
    the results may vary.
    </description>
    <name>yarn.timeline-service.version</name>
    <value>1.0f</value>
  </property>
  <property>
    <description>
    In the server side it indicates whether timeline service is enabled or not.
    And in the client side, users can enable it to indicate whether client wants
    to use timeline service. If it's enabled in the client side along with
    security, then yarn client tries to fetch the delegation tokens for the
    timeline server.
    </description>
    <name>yarn.timeline-service.enabled</name>
    <value>false</value>
  </property>
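  <!-- Example (not part of yarn-default.xml): a minimal yarn-site.xml
       override sketch that turns the timeline service on and points
       clients at a dedicated host; the hostname value "ats.example.com"
       is a placeholder, not a real default.
  <property>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.timeline-service.hostname</name>
    <value>ats.example.com</value>
  </property>
  -->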
  <property>
    <description>The hostname of the timeline service web application.</description>
    <name>yarn.timeline-service.hostname</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <description>This is default address for the timeline server to start the
    RPC server.</description>
    <name>yarn.timeline-service.address</name>
    <value>${yarn.timeline-service.hostname}:10200</value>
  </property>
  <property>
    <description>The http address of the timeline service web application.</description>
    <name>yarn.timeline-service.webapp.address</name>
    <value>${yarn.timeline-service.hostname}:8188</value>
  </property>
  <property>
    <description>The https address of the timeline service web application.</description>
    <name>yarn.timeline-service.webapp.https.address</name>
    <value>${yarn.timeline-service.hostname}:8190</value>
  </property>
  <property>
    <description>
      The actual address the server will bind to. If this optional address is
      set, the RPC and webapp servers will bind to this address and the port specified in
      yarn.timeline-service.address and yarn.timeline-service.webapp.address, respectively.
      This is most useful for making the service listen to all interfaces by setting to
      0.0.0.0.
    </description>
    <name>yarn.timeline-service.bind-host</name>
    <value></value>
  </property>
  <property>
    <description>
      Defines the max number of applications could be fetched using REST API or
      application history protocol and shown in timeline server web ui.
    </description>
    <name>yarn.timeline-service.generic-application-history.max-applications</name>
    <value>10000</value>
  </property>
  <property>
    <description>Store class name for timeline store.</description>
    <name>yarn.timeline-service.store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
  </property>
  <property>
    <description>Enable age off of timeline store data.</description>
    <name>yarn.timeline-service.ttl-enable</name>
    <value>true</value>
  </property>
  <property>
    <description>Time to live for timeline store data in milliseconds.</description>
    <name>yarn.timeline-service.ttl-ms</name>
    <value>604800000</value>
  </property>
  <property>
    <description>Store file name for leveldb timeline store.</description>
    <name>yarn.timeline-service.leveldb-timeline-store.path</name>
    <value>${hadoop.tmp.dir}/yarn/timeline</value>
  </property>
  <property>
    <description>Length of time to wait between deletion cycles of leveldb timeline store in milliseconds.</description>
    <name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name>
    <value>300000</value>
  </property>
  <property>
    <description>Size of read cache for uncompressed blocks for leveldb timeline store in bytes.</description>
    <name>yarn.timeline-service.leveldb-timeline-store.read-cache-size</name>
    <value>104857600</value>
  </property>
  <property>
    <description>Size of cache for recently read entity start times for leveldb timeline store in number of entities.</description>
    <name>yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size</name>
    <value>10000</value>
  </property>
  <property>
    <description>Size of cache for recently written entity start times for leveldb timeline store in number of entities.</description>
    <name>yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size</name>
    <value>10000</value>
  </property>
  <property>
    <description>Handler thread count to serve the client RPC requests.</description>
    <name>yarn.timeline-service.handler-thread-count</name>
    <value>10</value>
  </property>
  <property>
    <name>yarn.timeline-service.http-authentication.type</name>
    <value>simple</value>
    <description>
      Defines authentication used for the timeline server HTTP endpoint.
      Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
    </description>
  </property>
  <property>
    <name>yarn.timeline-service.http-authentication.simple.anonymous.allowed</name>
    <value>true</value>
    <description>
      Indicates if anonymous requests are allowed by the timeline server when using
      'simple' authentication.
    </description>
  </property>
  <property>
    <description>The Kerberos principal for the timeline server.</description>
    <name>yarn.timeline-service.principal</name>
    <value></value>
  </property>
  <property>
    <description>The Kerberos keytab for the timeline server.</description>
    <name>yarn.timeline-service.keytab</name>
    <value>/etc/krb5.keytab</value>
  </property>
  <property>
    <description>Comma separated list of UIs that will be hosted</description>
    <name>yarn.timeline-service.ui-names</name>
    <value></value>
  </property>
  <property>
    <description>
    Default maximum number of retries for timeline service client
    and value -1 means no limit.
    </description>
    <name>yarn.timeline-service.client.max-retries</name>
    <value>30</value>
  </property>
  <property>
    <description>Client policy for whether timeline operations are non-fatal.
    Should the failure to obtain a delegation token be considered an application
    failure (option = false), or should the client attempt to continue to
    publish information without it (option=true)</description>
    <name>yarn.timeline-service.client.best-effort</name>
    <value>false</value>
  </property>
  <property>
    <description>
    Default retry time interval for timeline service client.
    </description>
    <name>yarn.timeline-service.client.retry-interval-ms</name>
    <value>1000</value>
  </property>
  <property>
    <description>
    The time period for which timeline v2 client will wait for draining
    leftover entities after stop.
    </description>
    <name>yarn.timeline-service.client.drain-entities.timeout.ms</name>
    <value>2000</value>
  </property>
  <property>
    <description>Enable timeline server to recover state after starting. If
    true, then yarn.timeline-service.state-store-class must be specified.
    </description>
    <name>yarn.timeline-service.recovery.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>Store class name for timeline state store.</description>
    <name>yarn.timeline-service.state-store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore</value>
  </property>
  <property>
    <description>Store file name for leveldb state store.</description>
    <name>yarn.timeline-service.leveldb-state-store.path</name>
    <value>${hadoop.tmp.dir}/yarn/timeline</value>
  </property>
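  <!-- Example (not part of yarn-default.xml): a sketch of enabling timeline
       server state recovery in yarn-site.xml, assuming the default leveldb
       state store; the path shown is a placeholder, not a real default.
  <property>
    <name>yarn.timeline-service.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.timeline-service.leveldb-state-store.path</name>
    <value>/var/lib/hadoop-yarn/timeline</value>
  </property>
  -->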
  <!-- Timeline Service v1.5 Configuration -->
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.cache-store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.MemoryTimelineStore</value>
    <description>Caching storage timeline server v1.5 is using.</description>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.active-dir</name>
    <value>/tmp/entity-file-history/active</value>
    <description>HDFS path to store active application's timeline data</description>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.done-dir</name>
    <value>/tmp/entity-file-history/done/</value>
    <description>HDFS path to store done application's timeline data</description>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes</name>
    <value></value>
    <description>
      Plugins that can translate a timeline entity read request into
      a list of timeline entity group ids, separated by commas.
    </description>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.group-id-plugin-classpath</name>
    <value></value>
    <description>
      Classpath for all plugins defined in
      yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes.
    </description>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.summary-store</name>
    <description>Summary storage for ATS v1.5</description>
    <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.scan-interval-seconds</name>
    <description>
      Scan interval for ATS v1.5 entity group file system storage reader. This
      value controls how frequent the reader will scan the HDFS active directory
      for application status.
    </description>
    <value>60</value>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds</name>
    <description>
      Scan interval for ATS v1.5 entity group file system storage cleaner. This
      value controls how frequent the reader will scan the HDFS done directory
      for stale application data.
    </description>
    <value>3600</value>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.retain-seconds</name>
    <description>
      How long the ATS v1.5 entity group file system storage will keep an
      application's data in the done directory.
    </description>
    <value>604800</value>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.leveldb-cache-read-cache-size</name>
    <description>
      Read cache size for the leveldb cache storage in ATS v1.5 plugin storage.
    </description>
    <value>10485760</value>
  </property>
  <property>
    <name>yarn.timeline-service.entity-group-fs-store.app-cache-size</name>
    <description>
      Size of the reader cache for ATS v1.5 reader. This value controls how many
      entity groups the ATS v1.5 server should cache. If the number of active
      read entity groups is greater than the number of caches items, some reads
      may return empty data. This value must be greater than 0.
    </description>
    <value>10</value>
  </property>
  <property>
    <name>yarn.timeline-service.client.fd-flush-interval-secs</name>
    <description>
      Flush interval for ATS v1.5 writer. This value controls how frequent
      the writer will flush the HDFS FSStream for the entity/domain.
    </description>
    <value>10</value>
  </property>
  <property>
    <name>yarn.timeline-service.client.fd-clean-interval-secs</name>
    <description>
      Scan interval for ATS v1.5 writer. This value controls how frequent
      the writer will scan the HDFS FSStream for the entity/domain.
      If the FSStream is stale for a long time, this FSStream will be closed.
    </description>
    <value>60</value>
  </property>
  <property>
    <name>yarn.timeline-service.client.fd-retain-secs</name>
    <description>
      How long the ATS v1.5 writer will keep an FSStream open.
      If this FSStream does not write anything for this configured time,
      it will be closed.
    </description>
    <value>300</value>
  </property>
  <!-- Timeline Service v2 Configuration -->
  <property>
    <name>yarn.timeline-service.writer.class</name>
    <description>
      Storage implementation ATS v2 will use for the TimelineWriter service.
    </description>
    <value>org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl</value>
  </property>
  <property>
    <name>yarn.timeline-service.reader.class</name>
    <description>
      Storage implementation ATS v2 will use for the TimelineReader service.
    </description>
    <value>org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl</value>
  </property>
  <property>
    <name>yarn.timeline-service.client.internal-timers-ttl-secs</name>
    <description>
      How long the internal Timer Tasks can be alive in writer. If there is no
      write operation for this configured time, the internal timer tasks will
      be closed.
    </description>
    <value>420</value>
  </property>
  <property>
    <description>The setting that controls how often the timeline collector
    flushes the timeline writer.</description>
    <name>yarn.timeline-service.writer.flush-interval-seconds</name>
    <value>60</value>
  </property>
  <property>
    <description>Time period till which the application collector will be alive
    in NM, after the application master container finishes.</description>
    <name>yarn.timeline-service.app-collector.linger-period.ms</name>
    <value>60000</value>
  </property>
  <property>
    <description>Time line V2 client tries to merge these many number of
    async entities (if available) and then call the REST ATS V2 API to submit.
    </description>
    <name>yarn.timeline-service.timeline-client.number-of-async-entities-to-merge</name>
    <value>10</value>
  </property>
  <property>
    <description>
    The setting that controls how long the final value
    of a metric of a completed app is retained before merging into
    the flow sum. Up to this time after an application is completed
    out-of-order values that arrive can be recognized and discarded at the
    cost of increased storage.
    </description>
    <name>yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds
    </name>
    <value>259200000</value>
  </property>
  <property>
    <description>
    The default hdfs location for flowrun coprocessor jar.
    </description>
    <name>yarn.timeline-service.hbase.coprocessor.jar.hdfs.location
    </name>
    <value>/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar</value>
  </property>
  <property>
    <description>
    The value of this parameter sets the prefix for all tables that are part of
    timeline service in the hbase storage schema. It can be set to "dev."
    or "staging." if it is to be used for development or staging instances.
    This way the data in production tables stays in a separate set of tables
    prefixed by "prod.".
    </description>
    <name>yarn.timeline-service.hbase-schema.prefix</name>
    <value>prod.</value>
  </property>
  <property>
    <description>Optional URL to an hbase-site.xml configuration file to be
    used to connect to the timeline-service hbase cluster. If empty or not
    specified, then the HBase configuration will be loaded from the classpath.
    When specified the values in the specified configuration file will override
    those from the ones that are present on the classpath.
    </description>
    <name>yarn.timeline-service.hbase.configuration.file
    </name>
    <value></value>
  </property>
  <!--  Shared Cache Configuration -->
  <property>
    <description>Whether the shared cache is enabled</description>
    <name>yarn.sharedcache.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>The root directory for the shared cache</description>
    <name>yarn.sharedcache.root-dir</name>
    <value>/sharedcache</value>
  </property>
  <property>
    <description>The level of nested directories before getting to the checksum
    directories. It must be non-negative.</description>
    <name>yarn.sharedcache.nested-level</name>
    <value>3</value>
  </property>
  <property>
    <description>The implementation to be used for the SCM store</description>
    <name>yarn.sharedcache.store.class</name>
    <value>org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore</value>
  </property>
  <property>
    <description>The implementation to be used for the SCM app-checker</description>
    <name>yarn.sharedcache.app-checker.class</name>
    <value>org.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppChecker</value>
  </property>
  <property>
    <description>A resource in the in-memory store is considered stale
    if the time since the last reference exceeds the staleness period.
    This value is specified in minutes.</description>
    <name>yarn.sharedcache.store.in-memory.staleness-period-mins</name>
    <value>10080</value>
  </property>
  <property>
    <description>Initial delay before the in-memory store runs its first check
    to remove dead initial applications. Specified in minutes.</description>
    <name>yarn.sharedcache.store.in-memory.initial-delay-mins</name>
    <value>10</value>
  </property>
  <property>
    <description>The frequency at which the in-memory store checks to remove
    dead initial applications. Specified in minutes.</description>
    <name>yarn.sharedcache.store.in-memory.check-period-mins</name>
    <value>720</value>
  </property>
  <property>
    <description>The address of the admin interface in the SCM (shared cache manager)</description>
    <name>yarn.sharedcache.admin.address</name>
    <value>0.0.0.0:8047</value>
  </property>
  <property>
    <description>The number of threads used to handle SCM admin interface (1 by default)</description>
    <name>yarn.sharedcache.admin.thread-count</name>
    <value>1</value>
  </property>
  <property>
    <description>The address of the web application in the SCM (shared cache manager)</description>
    <name>yarn.sharedcache.webapp.address</name>
    <value>0.0.0.0:8788</value>
  </property>
  <property>
    <description>The frequency at which a cleaner task runs.
    Specified in minutes.</description>
    <name>yarn.sharedcache.cleaner.period-mins</name>
    <value>1440</value>
  </property>
  <property>
    <description>Initial delay before the first cleaner task is scheduled.
    Specified in minutes.</description>
    <name>yarn.sharedcache.cleaner.initial-delay-mins</name>
    <value>10</value>
  </property>
  <property>
    <description>The time to sleep between processing each shared cache
    resource. Specified in milliseconds.</description>
    <name>yarn.sharedcache.cleaner.resource-sleep-ms</name>
    <value>0</value>
  </property>
  <property>
    <description>The address of the node manager interface in the SCM
    (shared cache manager)</description>
    <name>yarn.sharedcache.uploader.server.address</name>
    <value>0.0.0.0:8046</value>
  </property>
  <property>
    <description>The number of threads used to handle shared cache manager
    requests from the node manager (50 by default)</description>
    <name>yarn.sharedcache.uploader.server.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>The address of the client interface in the SCM
    (shared cache manager)</description>
    <name>yarn.sharedcache.client-server.address</name>
    <value>0.0.0.0:8045</value>
  </property>
  <property>
    <description>The number of threads used to handle shared cache manager
    requests from clients (50 by default)</description>
    <name>yarn.sharedcache.client-server.thread-count</name>
    <value>50</value>
  </property>
  <property>
    <description>The algorithm used to compute checksums of files (SHA-256 by
    default)</description>
    <name>yarn.sharedcache.checksum.algo.impl</name>
    <value>org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl</value>
  </property>
  <property>
    <description>The replication factor for the node manager uploader for the
    shared cache (10 by default)</description>
    <name>yarn.sharedcache.nm.uploader.replication.factor</name>
    <value>10</value>
  </property>
  <property>
    <description>The number of threads used to upload files from a node manager
    instance (20 by default)</description>
    <name>yarn.sharedcache.nm.uploader.thread-count</name>
    <value>20</value>
  </property>
  <property>
    <description>
    ACL protocol for use in the Timeline server.
    </description>
    <name>security.applicationhistory.protocol.acl</name>
    <value></value>
  </property>
  <!-- Minicluster Configuration (for testing only!) -->
  <property>
    <description>
    Set to true for MiniYARNCluster unit tests
    </description>
    <name>yarn.is.minicluster</name>
    <value>false</value>
  </property>
  <property>
    <description>
    Set for MiniYARNCluster unit tests to control resource monitoring
    </description>
    <name>yarn.minicluster.control-resource-monitoring</name>
    <value>false</value>
  </property>
  <property>
    <description>
    Set to false in order to allow MiniYARNCluster to run tests without
    port conflicts.
    </description>
    <name>yarn.minicluster.fixed.ports</name>
    <value>false</value>
  </property>
  <property>
    <description>
    Set to false in order to allow the NodeManager in MiniYARNCluster to
    use RPC to talk to the RM.
    </description>
    <name>yarn.minicluster.use-rpc</name>
    <value>false</value>
  </property>
  <property>
    <description>
    As yarn.nodemanager.resource.memory-mb property but for the NodeManager
    in a MiniYARNCluster.
    </description>
    <name>yarn.minicluster.yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>
  <!-- Node Labels Configuration -->
  <property>
    <description>
    Enable node labels feature
    </description>
    <name>yarn.node-labels.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
    URI for NodeLabelManager.  The default value is
    /tmp/hadoop-yarn-${user}/node-labels/ in the local filesystem.
    </description>
    <name>yarn.node-labels.fs-store.root-dir</name>
    <value></value>
  </property>
  <property>
    <description>
    Set configuration type for node labels. Administrators can specify
    "centralized", "delegated-centralized" or "distributed".
    </description>
    <name>yarn.node-labels.configuration-type</name>
    <value>centralized</value>
  </property>
  <!-- Distributed Node Labels Configuration -->
  <property>
    <description>
    When "yarn.node-labels.configuration-type" is configured with "distributed"
    in RM, Administrators can configure in NM the provider for the
    node labels by configuring this parameter. Administrators can
    configure "config", "script" or the class name of the provider. Configured
    class needs to extend
    org.apache.hadoop.yarn.server.nodemanager.nodelabels.NodeLabelsProvider.
    If "config" is configured, then "ConfigurationNodeLabelsProvider" and if
    "script" is configured, then "ScriptNodeLabelsProvider" will be used.
    </description>
    <name>yarn.nodemanager.node-labels.provider</name>
  </property>
  <property>
    <description>
    When "yarn.nodemanager.node-labels.provider" is configured with "config",
    "Script" or the configured class extends AbstractNodeLabelsProvider, then
    periodically node labels are retrieved from the node labels provider. This
    configuration is to define the interval period.
    If -1 is configured then node labels are retrieved from provider only
    during initialization. Defaults to 10 mins.
    </description>
    <name>yarn.nodemanager.node-labels.provider.fetch-interval-ms</name>
    <value>600000</value>
  </property>
  <property>
    <description>
    Interval at which NM syncs its node labels with RM. NM will send its loaded
    labels every x intervals configured, along with heartbeat to RM.
    </description>
    <name>yarn.nodemanager.node-labels.resync-interval-ms</name>
    <value>120000</value>
  </property>
  <property>
    <description>
    When "yarn.nodemanager.node-labels.provider" is configured with "config"
    then ConfigurationNodeLabelsProvider fetches the partition label from this
    parameter.
    </description>
    <name>yarn.nodemanager.node-labels.provider.configured-node-partition</name>
  </property>
  <property>
    <description>
    When "yarn.nodemanager.node-labels.provider" is configured with "Script"
    then this configuration provides the timeout period after which it will
    interrupt the script which queries the Node labels. Defaults to 20 mins.
    </description>
    <name>yarn.nodemanager.node-labels.provider.fetch-timeout-ms</name>
    <value>1200000</value>
  </property>
  <!-- Delegated-centralized Node Labels Configuration -->
  <property>
    <description>
    When node labels "yarn.node-labels.configuration-type" is
    of type "delegated-centralized", administrators should configure
    the class for fetching node labels by ResourceManager. Configured
    class needs to extend
    org.apache.hadoop.yarn.server.resourcemanager.nodelabels.
    RMNodeLabelsMappingProvider.
    </description>
    <name>yarn.resourcemanager.node-labels.provider</name>
    <value></value>
  </property>
  <property>
    <description>
    When "yarn.node-labels.configuration-type" is configured with
    "delegated-centralized", then periodically node labels are retrieved
    from the node labels provider. This configuration is to define the
    interval. If -1 is configured then node labels are retrieved from
    provider only once for each node after it registers. Defaults to 30 mins.
    </description>
    <name>yarn.resourcemanager.node-labels.provider.fetch-interval-ms</name>
    <value>1800000</value>
  </property>
  <property>
    <description>
    Timeout in seconds for YARN node graceful decommission.
    This is the maximal time to wait for running containers and applications to complete
    before transition a DECOMMISSIONING node into DECOMMISSIONED.
    </description>
    <name>yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs</name>
    <value>3600</value>
  </property>
  <property>
    <description>
    Timeout in seconds of DecommissioningNodesWatcher internal polling.
    </description>
    <name>yarn.resourcemanager.decommissioning-nodes-watcher.poll-interval-secs</name>
    <value>20</value>
  </property>
  <property>
    <description>The Node Label script to run. Script output Line starting with
     "NODE_PARTITION:" will be considered as Node Label Partition. In case of
     multiple lines have this pattern, then last one will be considered
    </description>
    <name>yarn.nodemanager.node-labels.provider.script.path</name>
  </property>
  <property>
    <description>The arguments to pass to the Node label script.</description>
    <name>yarn.nodemanager.node-labels.provider.script.opts</name>
  </property>
  <!-- Federation Configuration -->
  <property>
    <description>
      Flag to indicate whether the RM is participating in Federation or not.
    </description>
    <name>yarn.federation.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
      Machine list file to be loaded by the FederationSubCluster Resolver
    </description>
    <name>yarn.federation.machine-list</name>
  </property>
  <property>
    <description>
      Class name for SubClusterResolver
    </description>
    <name>yarn.federation.subcluster-resolver.class</name>
    <value>org.apache.hadoop.yarn.server.federation.resolver.DefaultSubClusterResolverImpl</value>
  </property>
  <property>
    <description>
      Store class name for federation state store
    </description>
    <name>yarn.federation.state-store.class</name>
    <value>org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore</value>
  </property>
  <property>
    <description>
      The time in seconds after which the federation state store local cache
      will be refreshed periodically
    </description>
    <name>yarn.federation.cache-ttl.secs</name>
    <value>300</value>
  </property>
  <property>
    <description>The registry base directory for federation.</description>
    <name>yarn.federation.registry.base-dir</name>
    <value>yarnfederation/</value>
  </property>
  <!-- Other Configuration -->
  <property>
    <description>The registry implementation to use.</description>
    <name>yarn.registry.class</name>
    <value>org.apache.hadoop.registry.client.impl.FSRegistryOperationsService</value>
  </property>
  <property>
    <description>The interval that the yarn client library uses to poll the
    completion status of the asynchronous API of application client protocol.
    </description>
    <name>yarn.client.application-client-protocol.poll-interval-ms</name>
    <value>200</value>
  </property>
  <property>
    <description>
    The duration (in ms) the YARN client waits for an expected state change
    to occur.  -1 means unlimited wait time.
    </description>
    <name>yarn.client.application-client-protocol.poll-timeout-ms</name>
    <value>-1</value>
  </property>
  <property>
    <description>RSS usage of a process computed via
    /proc/pid/stat is not very accurate as it includes shared pages of a
    process. /proc/pid/smaps provides useful information like
    Private_Dirty, Private_Clean, Shared_Dirty, Shared_Clean which can be used
    for computing more accurate RSS. When this flag is enabled, RSS is computed
    as Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty. It excludes
    read-only shared mappings in RSS computation.
    </description>
    <name>yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
    URL for log aggregation server
    </description>
    <name>yarn.log.server.url</name>
    <value></value>
  </property>
  <property>
    <description>
    URL for log aggregation server web service
    </description>
    <name>yarn.log.server.web-service.url</name>
    <value></value>
  </property>
  <property>
    <description>
    RM Application Tracking URL
    </description>
    <name>yarn.tracking.url.generator</name>
    <value></value>
  </property>
  <property>
    <description>
    Class to be used for YarnAuthorizationProvider
    </description>
    <name>yarn.authorization-provider</name>
    <value></value>
  </property>
  <property>
    <description>Defines how often NMs wake up to upload log files.
    The default value is -1. By default, the logs will be uploaded when
    the application is finished. By setting this configure, logs can be uploaded
    periodically when the application is running. The minimum rolling-interval-seconds
    can be set is 3600.
    </description>
    <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <description>Define how many aggregated log files per application per NM
    we can have in remote file system. By default, the total number of
    aggregated log files per application per NM is 30.
    </description>
    <name>yarn.nodemanager.log-aggregation.num-log-files-per-app</name>
    <value>30</value>
  </property>
  <property>
    <description>
    Enable/disable intermediate-data encryption at YARN level. For now,
    this only is used by the FileSystemRMStateStore to setup right
    file-system security attributes.
    </description>
    <name>yarn.intermediate-data-encryption.enable</name>
    <value>false</value>
  </property>
  <property>
    <description>Flag to enable cross-origin (CORS) support in the NM. This flag
    requires the CORS filter initializer to be added to the filter initializers
    list in core-site.xml.</description>
    <name>yarn.nodemanager.webapp.cross-origin.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
    Defines maximum application priority in a cluster.
    If an application is submitted with a priority higher than this value, it will be
    reset to this maximum value.
    </description>
    <name>yarn.cluster.max-application-priority</name>
    <value>0</value>
  </property>
  <property>
    <description>
    The default log aggregation policy class. Applications can
    override it via LogAggregationContext. This configuration can provide
    some cluster-side default behavior so that if the application doesn't
    specify any policy via LogAggregationContext administrators of the cluster
    can adjust the policy globally.
    </description>
    <name>yarn.nodemanager.log-aggregation.policy.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AllContainerLogAggregationPolicy</value>
  </property>
  <property>
    <description>
    The default parameters for the log aggregation policy. Applications can
    override it via LogAggregationContext. This configuration can provide
    some cluster-side default behavior so that if the application doesn't
    specify any policy via LogAggregationContext administrators of the cluster
    can adjust the policy globally.
    </description>
    <name>yarn.nodemanager.log-aggregation.policy.parameters</name>
    <value></value>
  </property>
  <property>
    <description>
    Enable/Disable AMRMProxyService in the node manager. This service is used to
    intercept calls from the application masters to the resource manager.
    </description>
    <name>yarn.nodemanager.amrmproxy.enabled</name>
    <value>false</value>
  </property>
  <property>
    <description>
    The address of the AMRMProxyService listener.
    </description>
    <name>yarn.nodemanager.amrmproxy.address</name>
    <value>0.0.0.0:8049</value>
  </property>
  <property>
    <description>
    The number of threads used to handle requests by the AMRMProxyService.
    </description>
    <name>yarn.nodemanager.amrmproxy.client.thread-count</name>
    <value>25</value>
  </property>
  <property>
    <description>
    The comma separated list of class names that implement the
    RequestInterceptor interface. This is used by the AMRMProxyService to create
    the request processing pipeline for applications.
    </description>
    <name>yarn.nodemanager.amrmproxy.interceptor-class.pipeline</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor</value>
  </property>
. d1 y  p8 E2 `1 p! O/ b. ^# {  <property>8 u  Z) e/ r/ H! @: p9 I
    <description>
, w$ |( v, d1 c# A: K# _# M+ _    Whether AMRMProxy HA is enabled.7 G4 c. M! h3 G( G9 a8 V
    </description>
# P, W: x: h5 X0 l! J    <name>yarn.nodemanager.amrmproxy.ha.enable</name>9 Y( j  @: n# C8 d
    <value>false</value>1 b5 T: _5 ?2 P7 Y7 a& t! ^. b' Q' h0 l
  </property>
# U- ]' T& i- v0 l9 @% F& ]' v' K  <property>
- u, ^; z( }; B" d    <description>
+ O9 F% N0 c" T* H9 Q    Setting that controls whether distributed scheduling is enabled.% K) Q+ d, ^! C$ V( i
    </description>
6 w8 M  f) ^" n5 T    <name>yarn.nodemanager.distributed-scheduling.enabled</name>5 `1 @* y, i5 |6 L
    <value>false</value>
. m- r/ m5 m2 d& \: F4 ~( F' I  </property>
" `' A, j% w7 V! V- k  <property>
, i7 ]$ Y' ~# L. Q, B    <description>, C% r- S9 C& A: P* m# \& V
      Setting that controls whether opportunistic container allocation
1 b6 Y$ V* x8 g2 P- e) h      is enabled.) b9 A( ]7 B7 B
    </description>/ r; |5 O9 H; R! t
    <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>; t, G" n" [2 |6 l) c
    <value>false</value>
5 F$ G1 \% E$ j& h$ C- Y" O5 ]4 n  </property>
7 V* _( j$ R% j2 u  <property>
, J0 b$ {) _  t( F- C    <description>
+ ^8 b* S( {; T% C    Number of nodes to be used by the Opportunistic Container Allocator for6 `3 \8 i* G8 a  j, F
    dispatching containers during container allocation.: |% c" w( C' u8 ?
    </description>
. Y1 C1 E0 e2 a/ Y+ ~9 C/ W    <name>yarn.resourcemanager.opportunistic-container-allocation.nodes-used</name>
! _5 |  P+ M& F7 I$ s+ `    <value>10</value>6 Z$ w# F7 A$ \8 ?0 m! @0 [% e
  </property>
. v, G. [2 Y+ l  <property>
. H+ q+ H: L, _& ~$ {, K    <description>
! X0 F& @7 w8 `& }- N    Frequency for computing least loaded NMs.& u& _9 ^' ~6 [& p/ j! Y
    </description>
4 M5 f% ^4 U6 r7 B6 z- e    <name>yarn.resourcemanager.nm-container-queuing.sorting-nodes-interval-ms</name>
! t0 ~8 f- Q; k+ X! X" L    <value>1000</value>
- Y. b  k1 j: p4 t# U; H9 u  </property>8 ]; j3 l4 k) X& @( T+ {0 O/ O, l
  <property>2 d8 {  U0 }6 X+ a" O" E
    <description>
* w; b* I2 |; r3 V+ V. {) U/ J, W. A    Comparator for determining node load for Distributed Scheduling.: ?# ]5 j7 v+ W$ n! w
    </description>
- S; a; D! a; Q0 g5 [& j& z: G    <name>yarn.resourcemanager.nm-container-queuing.load-comparator</name>
5 ~" w+ b" a+ V7 p9 k. h    <value>QUEUE_LENGTH</value>  Z, q% X. `" s8 X3 j6 I: J
  </property>0 s: g- W1 ]; W! M8 Y" J
  <property>
! T9 v( G0 z3 I' ^* e& ~    <description>
. c" d4 S: f. I  w9 m, c    Value of standard deviation used for calculation of queue limit thresholds.
  |, T, E! S3 v3 B    </description>
+ J8 e4 {. \4 r1 |% _    <name>yarn.resourcemanager.nm-container-queuing.queue-limit-stdev</name>
/ P. |/ H6 k% H1 t# N    <value>1.0f</value>
6 f9 ]5 N- p$ ?5 i( F# ~  T+ ?  </property>
  `' }& z; z' e  <property>
# Y+ H. l  A& z. \8 H8 Y- B    <description>; v; }& @7 V" l4 \0 u. o7 \
    Min length of container queue at NodeManager.# w. t) u2 `4 x3 R
    </description>  w, {' F5 r6 Y
    <name>yarn.resourcemanager.nm-container-queuing.min-queue-length</name>
& K( x* M" e$ y" T! H  p1 z    <value>5</value>
6 x- I8 Y) n' s0 d, O' u* _0 N  </property>3 i  e: X8 D, p# F
  <property>
  Z$ x+ q; p# a5 t* Y3 b    <description>* t  U' y+ u! B3 N! _, k" C  F
    Max length of container queue at NodeManager.
: _) q0 w4 v1 J& o    </description>
. O3 \" B2 e2 ~- p: o    <name>yarn.resourcemanager.nm-container-queuing.max-queue-length</name>+ Z) G3 Y( V  }, O0 c+ \
    <value>15</value>! l1 j: |% i( a( X
  </property>
1 S  t7 q* T6 M# [& T9 g: `  <property>6 e0 n! `5 ^0 B# A; _
    <description>, m$ }0 ~  z4 ^8 Z$ a& f
    Min queue wait time for a container at a NodeManager.
7 z0 k% E+ Z# A  u    </description>7 B5 r9 o& }% E9 G' O1 f5 O
    <name>yarn.resourcemanager.nm-container-queuing.min-queue-wait-time-ms</name>
# P' r% R! C: s" U# o1 o% H    <value>10</value>8 e' y1 |' g- E% k- _
  </property>+ K, d& t& C" f1 v% i# U
  <property>3 I% j" I+ C& F: I  Z
    <description>; M+ x+ K- P& M3 c6 p+ c
    Max queue wait time for a container queue at a NodeManager.* e) H9 s' g4 N# D% W4 z. {
    </description>$ e# H7 p& x2 W( J
    <name>yarn.resourcemanager.nm-container-queuing.max-queue-wait-time-ms</name># s! s4 |0 I8 y: Q& e' }" d# ~
    <value>100</value>
4 i% \0 M' _' V- a  </property>
1 X: P' o9 ]; D, L  <property>
# w# Q0 y* x! I% ~6 L    <description>
& z8 ?3 ]' o& a" M: `8 O* B    Use container pause as the preemption policy over kill in the container7 K/ x8 d7 @3 P; i+ y
    queue at a NodeManager.
; V. P0 s% ]% e2 r7 K2 F    </description>
5 ~- v! \* F8 q, O    <name>yarn.nodemanager.opportunistic-containers-use-pause-for-preemption</name>
; `  E3 n% ?* u) V8 ~! \    <value>false</value>( t8 z" w8 A4 ?, f6 g
  </property>
/ u; y  S6 {1 {, {: K& g6 W: C+ `  <property>
1 W- F2 y' l9 q! w" Q. q    <description>" b5 _1 {9 k3 j! [
    Error filename pattern, to identify the file in the container's
. v6 l9 {6 T8 L' {2 @5 J! d    Log directory which contain the container's error log. As error file
$ e3 C: _5 P9 t6 a- H    redirection is done by client/AM and yarn will not be aware of the error
# n. K, `% r- i5 z    file name. YARN uses this pattern to identify the error file and tail, h) S4 f; I0 f: ]+ q
    the error log as diagnostics when the container execution returns non zero
9 G7 u7 e" r' b6 [: \; R    value. Filename patterns are case sensitive and should match the8 U4 Y: ^8 j: J" N
    specifications of FileSystem.globStatus(Path) api. If multiple filenames5 B) {: S5 q7 g
    matches the pattern, first file matching the pattern will be picked.
" i, [' |! b  c) j  I+ u. o; [, ~    </description>4 G! |2 }" ^& n4 K( m' k
    <name>yarn.nodemanager.container.stderr.pattern</name>( X) ^1 \3 p7 m  }; P
    <value>{*stderr*,*STDERR*}</value>0 i) M" F5 K' `
  </property>
3 C5 \1 `& e" K8 k  <property>$ _/ E1 |+ a. n4 [( V- Q4 ]
    <description>
' _4 ]/ t6 x% Y. i    Size of the container error file which needs to be tailed, in bytes.
; a- L, G- Y  i3 T  k    </description>5 I# [, ^6 I! u# |: E
    <name>yarn.nodemanager.container.stderr.tail.bytes</name>
    <value>4096</value>
  </property>

  <property>
    <description>
      Choose different implementation of node label's storage
    </description>
    <name>yarn.node-labels.fs-store.impl.class</name>
    <value>org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore</value>
  </property>

  <property>
    <description>
      Enable the CSRF filter for the RM web app
    </description>
    <name>yarn.resourcemanager.webapp.rest-csrf.enabled</name>
    <value>false</value>
  </property>

  <property>
    <description>
      Optional parameter that indicates the custom header name to use for CSRF
      protection.
    </description>
    <name>yarn.resourcemanager.webapp.rest-csrf.custom-header</name>
    <value>X-XSRF-Header</value>
  </property>

  <property>
    <description>
      Optional parameter that indicates the list of HTTP methods that do not
      require CSRF protection
    </description>
    <name>yarn.resourcemanager.webapp.rest-csrf.methods-to-ignore</name>
    <value>GET,OPTIONS,HEAD</value>
  </property>

  <property>
    <description>
      Enable the CSRF filter for the NM web app
    </description>
    <name>yarn.nodemanager.webapp.rest-csrf.enabled</name>
    <value>false</value>
  </property>

  <property>
    <description>
      Optional parameter that indicates the custom header name to use for CSRF
      protection.
    </description>
    <name>yarn.nodemanager.webapp.rest-csrf.custom-header</name>
    <value>X-XSRF-Header</value>
  </property>

  <property>
    <description>
      Optional parameter that indicates the list of HTTP methods that do not
      require CSRF protection
    </description>
    <name>yarn.nodemanager.webapp.rest-csrf.methods-to-ignore</name>
    <value>GET,OPTIONS,HEAD</value>
  </property>

  <property>
    <description>
      The name of disk validator.
    </description>
    <name>yarn.nodemanager.disk-validator</name>
    <value>basic</value>
  </property>

  <property>
    <description>
      Enable the CSRF filter for the timeline service web app
    </description>
    <name>yarn.timeline-service.webapp.rest-csrf.enabled</name>
    <value>false</value>
  </property>

  <property>
    <description>
      Optional parameter that indicates the custom header name to use for CSRF
      protection.
    </description>
    <name>yarn.timeline-service.webapp.rest-csrf.custom-header</name>
    <value>X-XSRF-Header</value>
  </property>

  <property>
    <description>
      Optional parameter that indicates the list of HTTP methods that do not
      require CSRF protection
    </description>
    <name>yarn.timeline-service.webapp.rest-csrf.methods-to-ignore</name>
    <value>GET,OPTIONS,HEAD</value>
  </property>

  <property>
    <description>
      Enable the XFS filter for YARN
    </description>
    <name>yarn.webapp.xfs-filter.enabled</name>
    <value>true</value>
  </property>

  <property>
    <description>
      Property specifying the xframe options value.
    </description>
    <name>yarn.resourcemanager.webapp.xfs-filter.xframe-options</name>
    <value>SAMEORIGIN</value>
  </property>

  <property>
    <description>
      Property specifying the xframe options value.
    </description>
    <name>yarn.nodemanager.webapp.xfs-filter.xframe-options</name>
    <value>SAMEORIGIN</value>
  </property>

  <property>
    <description>
      Property specifying the xframe options value.
    </description>
    <name>yarn.timeline-service.webapp.xfs-filter.xframe-options</name>
    <value>SAMEORIGIN</value>
  </property>

  <property>
    <description>
      The least amount of time (in ms) an inactive (decommissioned or shutdown)
      node can stay in the nodes list of the resourcemanager after being
      declared untracked. A node is marked untracked if and only if it is
      absent from both the include and exclude nodemanager lists on the RM.
      All inactive nodes are checked twice per timeout interval or every 10
      minutes, whichever is lesser, and marked appropriately. The same is done
      when the refreshNodes command (graceful or otherwise) is invoked.
    </description>
    <name>yarn.resourcemanager.node-removal-untracked.timeout-ms</name>
    <value>60000</value>
  </property>

  <property>
    <description>
      The RMAppLifetimeMonitor Service uses this value as its monitor interval.
    </description>
    <name>yarn.resourcemanager.application-timeouts.monitor.interval-ms</name>
    <value>3000</value>
  </property>

  <property>
    <description>
      Defines the limit of the diagnostics message of an application
      attempt, in kilo characters (character count * 1024).
      When using ZooKeeper to store application state behavior, it's
      important to limit the size of the diagnostic messages to
      prevent YARN from overwhelming ZooKeeper. In cases where
      yarn.resourcemanager.state-store.max-completed-applications is set to
      a large number, it may be desirable to reduce the value of this property
      to limit the total data stored.
    </description>
    <name>yarn.app.attempt.diagnostics.limit.kc</name>
    <value>64</value>
  </property>

  <property>
    <description>
      Flag to enable cross-origin (CORS) support for timeline service v1.x or
      Timeline Reader in timeline service v2. For timeline service v2, also add
      org.apache.hadoop.security.HttpCrossOriginFilterInitializer to the
      configuration hadoop.http.filter.initializers in core-site.xml.
    </description>
    <name>yarn.timeline-service.http-cross-origin.enabled</name>
    <value>false</value>
  </property>

  <property>
    <description>
      The comma separated list of class names that implement the
      RequestInterceptor interface. This is used by the RouterClientRMService
      to create the request processing pipeline for users.
    </description>
    <name>yarn.router.clientrm.interceptor-class.pipeline</name>
    <value>org.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor</value>
  </property>

  <property>
    <description>
      Size of LRU cache for Router ClientRM Service and RMAdmin Service.
    </description>
    <name>yarn.router.pipeline.cache-max-size</name>
    <value>25</value>
  </property>

  <property>
    <description>
      The comma separated list of class names that implement the
      RequestInterceptor interface. This is used by the RouterRMAdminService
      to create the request processing pipeline for users.
    </description>
    <name>yarn.router.rmadmin.interceptor-class.pipeline</name>
    <value>org.apache.hadoop.yarn.server.router.rmadmin.DefaultRMAdminRequestInterceptor</value>
  </property>

  <property>
    <description>
      The actual address the server will bind to. If this optional address is
      set, the RPC and webapp servers will bind to this address and the port
      specified in yarn.router.address and yarn.router.webapp.address,
      respectively. This is most useful for making the Router listen to all
      interfaces by setting it to 0.0.0.0.
    </description>
    <name>yarn.router.bind-host</name>
    <value></value>
  </property>

  <property>
    <description>
      Comma-separated list of PlacementRules to determine how applications
      submitted by certain users get mapped to certain queues. Default is
      user-group, which corresponds to UserGroupMappingPlacementRule.
    </description>
    <name>yarn.scheduler.queue-placement-rules</name>
    <value>user-group</value>
  </property>

  <property>
    <description>
      The comma separated list of class names that implement the
      RequestInterceptor interface. This is used by the RouterWebServices
      to create the request processing pipeline for users.
    </description>
    <name>yarn.router.webapp.interceptor-class.pipeline</name>
    <value>org.apache.hadoop.yarn.server.router.webapp.DefaultRequestInterceptorREST</value>
  </property>

  <property>
    <description>
      The http address of the Router web application.
      If only a host is provided as the value,
      the webapp will be served on a random port.
    </description>
    <name>yarn.router.webapp.address</name>
    <value>0.0.0.0:8089</value>
  </property>

  <property>
    <description>
      The https address of the Router web application.
      If only a host is provided as the value,
      the webapp will be served on a random port.
    </description>
    <name>yarn.router.webapp.https.address</name>
    <value>0.0.0.0:8091</value>
  </property>

  <property>
    <description>
      TimelineClient 1.5 configuration that controls whether to store an
      active application's timeline data within the user directory, i.e.
      ${yarn.timeline-service.entity-group-fs-store.active-dir}/${user.name}
    </description>
    <name>yarn.timeline-service.entity-group-fs-store.with-user-dir</name>
    <value>false</value>
  </property>

  <!-- resource types configuration -->
  <property>
    <name>yarn.resource-types</name>
    <value></value>
    <description>
      The resource types to be used for scheduling. Use resource-types.xml
      to specify details about the individual resource types.
    </description>
  </property>

  <property>
    <name>yarn.webapp.filter-entity-list-by-user</name>
    <value>false</value>
    <description>
      Flag to enable display of applications per user as an admin
      configuration.
    </description>
  </property>

  <property>
    <description>
      The type of configuration store to use for scheduler configurations.
      Default is "file", which uses file based capacity-scheduler.xml to
      retrieve and change scheduler configuration. To enable API based
      scheduler configuration, use either "memory" (in memory storage, no
      persistence across restarts), "leveldb" (leveldb based storage), or
      "zk" (zookeeper based storage). API based configuration is only useful
      when using a scheduler which supports mutable configuration. Currently
      only capacity scheduler supports this.
    </description>
    <name>yarn.scheduler.configuration.store.class</name>
    <value>file</value>
  </property>
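  <!-- Example (a hedged sketch, not part of the defaults): to switch the
       capacity scheduler to the API-based, LevelDB-backed configuration
       store, a yarn-site.xml override could look like the following; the
       path simply repeats this file's documented default for illustration.

       <property>
         <name>yarn.scheduler.configuration.store.class</name>
         <value>leveldb</value>
       </property>
       <property>
         <name>yarn.scheduler.configuration.leveldb-store.path</name>
         <value>${hadoop.tmp.dir}/yarn/system/confstore</value>
       </property>
  -->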

  <property>
    <description>
      The class to use for configuration mutation ACL policy if using a mutable
      configuration provider. Controls whether a mutation request is allowed.
      The DefaultConfigurationMutationACLPolicy checks if the requestor is a
      YARN admin.
    </description>
    <name>yarn.scheduler.configuration.mutation.acl-policy.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.DefaultConfigurationMutationACLPolicy</value>
  </property>

  <property>
    <description>
      The storage path for the LevelDB implementation of the configuration
      store, when yarn.scheduler.configuration.store.class is configured to be
      "leveldb".
    </description>
    <name>yarn.scheduler.configuration.leveldb-store.path</name>
    <value>${hadoop.tmp.dir}/yarn/system/confstore</value>
  </property>

  <property>
    <description>
      The compaction interval for the LevelDB configuration store in seconds,
      when yarn.scheduler.configuration.store.class is configured to be
      "leveldb". Default is one day.
    </description>
    <name>yarn.scheduler.configuration.leveldb-store.compaction-interval-secs</name>
    <value>86400</value>
  </property>

  <property>
    <description>
      The max number of configuration change log entries kept in the config
      store, when yarn.scheduler.configuration.store.class is configured to be
      "leveldb" or "zk". Default is 1000 for either.
    </description>
    <name>yarn.scheduler.configuration.store.max-logs</name>
    <value>1000</value>
  </property>

  <property>
    <description>
      ZK root node path for the configuration store when using the
      zookeeper-based configuration store.
    </description>
    <name>yarn.scheduler.configuration.zk-store.parent-path</name>
    <value>/confstore</value>
  </property>

  <property>
    <description>
      Provides an option for the client to load supported resource types from
      the RM instead of depending on a local resource-types.xml file.
    </description>
    <name>yarn.client.load.resource-types.from-server</name>
    <value>false</value>
  </property>

  <property>
    <description>
      When yarn.nodemanager.resource.gpu.allowed-gpu-devices=auto is specified,
      the YARN NodeManager needs to run a GPU discovery binary (currently only
      nvidia-smi is supported) to get GPU-related information.
      When the value is empty (default), the YARN NodeManager will try to
      locate the discovery executable itself.
      An example of the config value is: /usr/local/bin/nvidia-smi
    </description>
    <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
    <value></value>
  </property>

  <property>
    <description>
      Enable additional discovery/isolation of resources on the NodeManager,
      split by comma. By default, this is empty.
      Acceptable values: { "yarn-io/gpu", "yarn-io/fpga" }.
    </description>
    <name>yarn.nodemanager.resource-plugins</name>
    <value></value>
  </property>

  <property>
    <description>
      Specify GPU devices which can be managed by the YARN NodeManager, split
      by comma. The number of GPU devices will be reported to the RM to make
      scheduling decisions. Set to auto (default) to let YARN automatically
      discover GPU resources from the system.
      Manually specify GPU devices if auto-detection of GPU devices failed or
      the admin only wants a subset of the GPU devices managed by YARN. A GPU
      device is identified by its minor device number and index. A common
      approach to get the minor device numbers of GPUs is to run
      "nvidia-smi -q" and search for the "Minor Number" output.
      When manually specifying minor numbers, the admin needs to include the
      indices of the GPUs as well, in the format
      index:minor_number[,index:minor_number...]. An example of manual
      specification is "0:0,1:1,2:2,3:4", which allows the YARN NodeManager to
      manage GPU devices with indices 0/1/2/3 and minor numbers 0/1/2/4.
    </description>
    <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
    <value>auto</value>
  </property>
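  <!-- Example (a hedged sketch, not part of the defaults): to let YARN manage
       only the GPUs with indices 0 and 1 (the minor numbers shown are
       illustrative values), a yarn-site.xml override could look like:

       <property>
         <name>yarn.nodemanager.resource-plugins</name>
         <value>yarn-io/gpu</value>
       </property>
       <property>
         <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
         <value>0:0,1:1</value>
       </property>
  -->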

  <property>
    <description>
      Specify the docker command plugin for GPU. By default uses Nvidia
      docker V1.
    </description>
    <name>yarn.nodemanager.resource-plugins.gpu.docker-plugin</name>
    <value>nvidia-docker-v1</value>
  </property>

  <property>
    <description>
      Specify the endpoint of nvidia-docker-plugin.
      Please see the documentation at
      https://github.com/NVIDIA/nvidia-docker/wiki for more details.
    </description>
    <name>yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidia-docker-v1.endpoint</name>
    <value>http://localhost:3476/v1.0/docker/cli</value>
  </property>

  <property>
    <description>
      Specify one vendor plugin to handle FPGA device discovery/IP
      download/configuration. Only IntelFpgaOpenclPlugin is supported by
      default. We only allow one NM to be configured with one vendor FPGA
      plugin for now, since the end user can put the same vendor's cards in
      one host, and this also simplifies the design.
    </description>
    <name>yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin</value>
  </property>

  <property>
    <description>
      When yarn.nodemanager.resource.fpga.allowed-fpga-devices=auto is
      specified, the YARN NodeManager needs to run an FPGA discovery binary
      (currently only IntelFpgaOpenclPlugin is supported) to get FPGA
      information. When the value is empty (default), the YARN NodeManager
      will try to locate the discovery executable from the vendor plugin's
      preference.
    </description>
    <name>yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables</name>
    <value></value>
  </property>

  <property>
    <description>
      Specify FPGA devices which can be managed by the YARN NodeManager, split
      by comma. The number of FPGA devices will be reported to the RM to make
      scheduling decisions. Set to auto (default) to let YARN automatically
      discover FPGA resources from the system.
      Manually specify FPGA devices if the admin only wants a subset of the
      FPGA devices managed by YARN. At present, since we can only configure
      one major number in c-e.cfg (container-executor.cfg), an FPGA device is
      identified by its minor device number. A common approach to get the
      minor device number of an FPGA is to run "aocl diagnose" and check the
      uevent with the device name.
    </description>
    <name>yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices</name>
    <value>0,1</value>
  </property>

  <property>
    <description>The http address of the timeline reader web application.</description>
    <name>yarn.timeline-service.reader.webapp.address</name>
    <value>${yarn.timeline-service.webapp.address}</value>
  </property>

  <property>
    <description>The https address of the timeline reader web application.</description>
    <name>yarn.timeline-service.reader.webapp.https.address</name>
    <value>${yarn.timeline-service.webapp.https.address}</value>
  </property>

  <property>
    <description>
      The actual address the timeline reader will bind to. If this optional
      address is set, the reader server will bind to this address and the port
      specified in yarn.timeline-service.reader.webapp.address.
      This is most useful for making the service listen to all interfaces by
      setting it to 0.0.0.0.
    </description>
    <name>yarn.timeline-service.reader.bind-host</name>
    <value></value>
  </property>

  <property>
    <description>
      Whether to enable NUMA awareness for containers in the Node Manager.
    </description>
    <name>yarn.nodemanager.numa-awareness.enabled</name>
    <value>false</value>
  </property>

  <property>
    <description>
      Whether to read the NUMA topology from the system or from the
      configurations. If the value is true, the NM reads the NUMA topology
      from the system using the command 'numactl --hardware'. If the value is
      false, the NM reads the topology from the configurations
      'yarn.nodemanager.numa-awareness.node-ids' (for node ids),
      'yarn.nodemanager.numa-awareness.&lt;NODE_ID&gt;.memory' (for each node's memory),
      'yarn.nodemanager.numa-awareness.&lt;NODE_ID&gt;.cpus' (for each node's cpus).
    </description>
    <name>yarn.nodemanager.numa-awareness.read-topology</name>
    <value>false</value>
  </property>

  <property>
    <description>
      NUMA node ids in the form of a comma separated list. Memory and number
      of CPUs will be read using the properties
      'yarn.nodemanager.numa-awareness.&lt;NODE_ID&gt;.memory' and
      'yarn.nodemanager.numa-awareness.&lt;NODE_ID&gt;.cpus' for each id
      specified in this value. This property value is read only when
      'yarn.nodemanager.numa-awareness.read-topology=false'.
      For example, if yarn.nodemanager.numa-awareness.node-ids=0,1
      then memory and cpus need to be specified for node ids '0' and '1' like
      below:
      yarn.nodemanager.numa-awareness.0.memory=73717
      yarn.nodemanager.numa-awareness.0.cpus=4
      yarn.nodemanager.numa-awareness.1.memory=73727
      yarn.nodemanager.numa-awareness.1.cpus=4
    </description>
    <name>yarn.nodemanager.numa-awareness.node-ids</name>
    <value></value>
  </property>
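  <!-- Example (a hedged sketch; the memory and cpu figures are copied from
       the description above, not measured): a manual two-node NUMA topology
       in yarn-site.xml could look like:

       <property>
         <name>yarn.nodemanager.numa-awareness.read-topology</name>
         <value>false</value>
       </property>
       <property>
         <name>yarn.nodemanager.numa-awareness.node-ids</name>
         <value>0,1</value>
       </property>
       <property>
         <name>yarn.nodemanager.numa-awareness.0.memory</name>
         <value>73717</value>
       </property>
       <property>
         <name>yarn.nodemanager.numa-awareness.0.cpus</name>
         <value>4</value>
       </property>
       <property>
         <name>yarn.nodemanager.numa-awareness.1.memory</name>
         <value>73727</value>
       </property>
       <property>
         <name>yarn.nodemanager.numa-awareness.1.cpus</name>
         <value>4</value>
       </property>
  -->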

  <property>
    <description>
      The numactl command path which controls NUMA policy for processes or
      shared memory.
    </description>
    <name>yarn.nodemanager.numa-awareness.numactl.cmd</name>
    <value>/usr/bin/numactl</value>
  </property>
</configuration>
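A file this large is easy to mistype when copying overrides into yarn-site.xml. As a minimal sketch (the sample properties are taken from the listing above; nothing here is a Hadoop API), a short script can parse any Hadoop-style configuration file and list its name/value pairs:

```python
# Minimal sketch: extract <name>/<value> pairs from a Hadoop-style
# configuration file such as yarn-site.xml or yarn-default.xml.
import xml.etree.ElementTree as ET


def read_hadoop_conf(xml_text):
    """Parse Hadoop configuration XML text into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    conf = {}
    for prop in root.findall("property"):
        # findtext returns "" for an empty element and the default when absent.
        name = prop.findtext("name", default="").strip()
        value = prop.findtext("value", default="") or ""
        if name:
            conf[name] = value.strip()
    return conf


if __name__ == "__main__":
    sample = """<configuration>
      <property>
        <name>yarn.webapp.xfs-filter.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.router.webapp.address</name>
        <value>0.0.0.0:8089</value>
      </property>
    </configuration>"""
    for name, value in read_hadoop_conf(sample).items():
        print(f"{name} = {value}")
```

Pointing the same function at a real yarn-site.xml (read the file's text first) makes it easy to diff local overrides against the defaults documented in this file.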