yida＆yueda · 2022-02-13 07:51:37 · 656 reads
As a distributed file system, HDFS integrates a POSIX-compatible permission management system. Every file operation from a client is verified in two steps: user authentication and data-access authorization. The client's request first obtains a "credential" (similar to an identity certificate) through the local user-authentication mechanism; based on this credential the system determines the legitimate user name, and then checks whether the user is authorized for the data being accessed. If any part of this process fails, the client's operation request is rejected.
HDFS file permissions resemble the UGO model of Linux/Unix systems and can be summarized as follows: each file and directory is associated with an owner and a group. A file or directory carries separate permissions for its **owner (USER)**, for **other members of its group (GROUP)**, and for **all other users (OTHER)**.
In HDFS, reading a file requires the r permission, and writing or appending to it requires the w permission. There is no concept of an executable file, so x has no meaning for files.
For a directory, the r permission is required to list its contents, the w permission to create or delete files or subdirectories in it, and the x permission to access its children.
In Linux, umask sets the permission mask. The mask consists of three octal digits; the default permission of a newly created file or directory is the full access permission with the mask bits removed (for common masks such as 022 this is the same as simple subtraction).
Like Linux/Unix, HDFS also provides a umask, used to set the default permission bits of newly created files and directories in HDFS. The umask value is specified by the property fs.permissions.umask-mode and defaults to 022.
With umask 022, the default permission of a newly created directory is 777 - 022 = 755, i.e. drwxr-xr-x (for plain files the base is 666, giving 644).
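This arithmetic can be checked on any local POSIX shell, since HDFS applies the same masking rule (a local sketch; on HDFS the mask comes from fs.permissions.umask-mode rather than the shell's umask):

```shell
# Local sketch of the umask rule that HDFS also applies.
umask 022                  # clear the write bit for group and other
mkdir demo_dir             # directories start from 777 -> 755 (rwxr-xr-x)
touch demo_file            # plain files start from 666 -> 644 (rw-r--r--)
ls -ld demo_dir demo_file
```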
```shell
hadoop fs -chmod 750 /user/yida/foo            # change the permission bits of a file or directory
hadoop fs -chown :portal /user/yida/foo        # change the owner and/or group of a file or directory
hadoop fs -chgrp itcast_group1 /user/yida/foo  # change the group
```
Note that the user running these commands must be a superuser, or the owner of the file who is also a member of the target group.
Since Hadoop 3.0, permissions can also be modified interactively on the HDFS web UI.
**Sticky bit**: when the sticky bit is set on a directory, only the owner of a file in that directory (or root) can delete or move that file. Without the sticky bit, any user with write and execute permission on the directory can delete or move the files in it. In practice, the sticky bit is typically applied to the /tmp directory to prevent ordinary users from deleting or moving other users' files.
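The sticky bit is set with the same octal notation as chmod, by adding a leading 1 (on a cluster this would be something like `hadoop fs -chmod 1777 /tmp`). The local sketch below shows the bit itself:

```shell
mkdir shared_tmp
chmod 1777 shared_tmp      # leading 1 = sticky bit; 777 = rwx for everyone
ls -ld shared_tmp          # mode shows as drwxrwxrwt: the trailing 't' is the sticky bit
```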
User authentication happens outside HDFS: HDFS itself does not verify the legitimacy of a user's identity, but obtains the identity through the relevant interfaces and uses it for subsequent permission management. Whether a user is legitimate therefore depends entirely on the authentication system used by the cluster. The community currently supports two kinds of identity authentication: simple authentication (Simple) and Kerberos. The mode is specified by the hadoop.security.authentication property and defaults to simple.
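For reference, the property lives in core-site.xml; a minimal fragment switching a cluster to Kerberos would look like this (the default, as noted, is simple):

```xml
<!-- core-site.xml -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>  <!-- default: simple -->
</property>
```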
Simple authentication relies on the login user name of the Linux/Unix system where the client runs: as long as the user can log in, authentication succeeds. When the client talks to the NameNode, the user's login account (obtained via something like the whoami command) is passed to the NameNode as a valid user name. Logging in to the same client with different accounts therefore produces different user names, so in a multi-tenant environment this kind of authentication causes permission confusion; worse, a malicious user can forge someone else's user name and illegally obtain the corresponding permissions, a serious threat to data security. Production environments generally do not use it. The philosophy of simple authentication is: prevent good people from making mistakes, not prevent bad people from doing bad things.
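A minimal illustration of how weak this is: the client simply reports the local login name, and the standard HADOOP_USER_NAME environment variable overrides it outright (the hadoop command is commented out here since it assumes a configured client, and is shown for illustration only):

```shell
whoami    # the name the HDFS client would report under simple auth
# Under simple auth, any user can impersonate another just by setting an env var:
# HADOOP_USER_NAME=hdfs hadoop fs -ls /user/hdfs
```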
Copyright: yida＆yueda. Please include the original link when reprinting: https://en.javamana.com/2022/02/202202130751360654.html