
HIVE-28804: The user does not have the permission for the table hdfs,…

Open · zxl-333 opened this issue 6 months ago • 2 comments

What changes were proposed in this pull request?

Fix the case where a user who has no permission on a table's HDFS directory can still delete the table's metadata.

Why are the changes needed?

When I create a table as the hdfs user and write data into it, and then drop the table as the hive user, the engine reports that the drop succeeded. However, the metastore log shows that deleting the HDFS directory failed due to insufficient permissions, even though the metadata had already been deleted. This leaves the table's data behind on HDFS as junk data:

```
2025-03-04 16:44:27,617 | WARN | org.apache.hadoop.hive.metastore.utils.FileUtils | Failed to move to trash: hdfs://myns/warehouse/tablespace/managed/hive/test_drop; Force to delete it.
2025-03-04 16:44:27,621 | ERROR | org.apache.hadoop.hive.metastore.utils.MetaStoreUtils | Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=hive, access=ALL, inode="/warehouse/tablespace/managed/hive/test_drop":hdfs:hadoop:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSubAccess(FSPermissionChecker.java:455)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:356)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:370)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1943)
    at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:105)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3300)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1153)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:725)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:614)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:582)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:566)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1116)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1060)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:983)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1890)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2997)
org.apache.hadoop.security.AccessControlException: Permission denied: user=hive, access=ALL, inode="/warehouse/tablespace/managed/hive/test_drop":hdfs:hadoop:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSubAccess(FSPermissionChecker.java:455)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:356)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:370)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:240)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1943)
    at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:105)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3300)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1153)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:725)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:614)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:582)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:566)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1116)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1060)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:983)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1890)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2997)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_352]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_352]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_352]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_352]
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) ~[hadoop-common-3.3.3.jar:?]
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) ~[hadoop-common-3.3.3.jar:?]
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1664) ~[hadoop-hdfs-client-3.3.3.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:992) ~[hadoop-hdfs-client-3.3.3.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:989) ~[hadoop-hdfs-client-3.3.3.jar:?]
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.3.3.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:999) ~[hadoop-hdfs-client-3.3.3.jar:?]
    at org.apache.hadoop.hive.metastore.utils.FileUtils.moveToTrash(FileUtils.java:97) ~[hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:41) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:363) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:351) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:2586) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2559) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2708) [hive-exec-3.1.2.jar:3.1.2]
    at sun.reflect.GeneratedMethodAccessor238.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) [hive-exec-3.1.2.jar:3.1.2]
    at com.sun.proxy.$Proxy27.drop_table_with_environment_context(Unknown Source) [?:?]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:15068) [hive-exec-3.1.2.jar:3.1.2]
```
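
As the trace shows, HMSHandler.drop_table_core removes the metadata and only afterwards reaches deleteTableData, so a storage permission failure at that point cannot roll the drop back. Below is a minimal, hypothetical sketch of the kind of pre-flight guard that would prevent this; it is not the actual patch, and `DropTableGuard` / `checkCanDeleteTableDir` are invented names, though `FileSystem.access` is a real Hadoop API. (Hive's `hive.metastore.authorization.storage.checks` setting is also aimed at this class of problem.)

```java
// Hypothetical pre-flight check (not the actual patch): fail DROP TABLE
// before touching the metadata if the caller cannot delete the data dir.
// Note: a recursive HDFS delete checks ALL on the subtree (see
// checkSubAccess in the trace above); WRITE on the table directory is
// used here as a simplified proxy for that check.
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.security.AccessControlException;

public final class DropTableGuard {
  private DropTableGuard() {}

  /** Throws MetaException if the current user may not delete tablePath. */
  public static void checkCanDeleteTableDir(FileSystem fs, Path tablePath)
      throws MetaException {
    try {
      if (fs.exists(tablePath)) {
        fs.access(tablePath, FsAction.WRITE);
      }
    } catch (AccessControlException e) {
      throw new MetaException("Insufficient permissions to delete table data at "
          + tablePath + ": " + e.getMessage());
    } catch (IOException e) {
      throw new MetaException("Could not verify permissions on " + tablePath
          + ": " + e.getMessage());
    }
  }
}
```

If drop_table_core called a guard like this before deleting the metadata, the AccessControlException above would presumably surface to the client as a failed DROP TABLE instead of a metadata-only delete that strands the data.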

Does this PR introduce any user-facing change?

No

How was this patch tested?

Tested with the existing unit tests.
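
A targeted regression test would additionally drop a table as a user without HDFS permissions and assert that both the data directory and the metadata survive. As a standalone illustration only of the permission signal such a test relies on (assumptions: local paths and the simplified WRITE check from the sketch above; not part of the patch), `FileSystem.access` raises AccessControlException on a directory the caller cannot modify:

```java
// Demonstrates the permission signal a pre-drop guard would key off:
// FileSystem.access() throws AccessControlException when the requested
// action is not permitted on the path.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.AccessControlException;

public class AccessCheckDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path dir = new Path("/tmp/test_drop_guard");
    fs.mkdirs(dir);
    fs.setPermission(dir, new FsPermission((short) 0555)); // r-xr-xr-x, no write
    try {
      fs.access(dir, FsAction.WRITE);
      System.out.println("write access granted; data delete would proceed");
    } catch (AccessControlException e) {
      // This is where a guard would abort DROP TABLE, keeping the metadata.
      System.out.println("write access denied: " + e.getMessage());
    } finally {
      fs.setPermission(dir, new FsPermission((short) 0755));
      fs.delete(dir, true);
    }
  }
}
```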

zxl-333 · Jun 17 '25 07:06

@aihuaxu Could you please help take a look at the CI failure? The continuous-integration/jenkins/pr-head check failed and seems to require maintainer approval to proceed. I also noticed that there’s no detailed error shown when clicking into the check. Thanks!

Abyss-lord · Jun 17 '25 09:06

@zxl-333 You can rebase your code to fix the CI issue.

zhangbutao · Jun 18 '25 09:06

@zhangbutao Could you please take another look? The "continuous-integration/jenkins/pr-head" page does not display any detailed error information. Thank you.

zxl-333 · Jun 20 '25 06:06

> @zhangbutao Could you please take another look? The "continuous-integration/jenkins/pr-head" page does not display any detailed error information. Thank you.

@zxl-333 You need to rebase; the Java compiler version has changed.

deniskuzZ · Jul 05 '25 18:07