ClickHouse: Permission denied when reading from the HDFS engine

Created on 20 Nov 2019  ·  4 comments  ·  Source: ClickHouse/ClickHouse

  1. Create the table:

     create table hdfs_to_ck (
       id String,
       remarks String
     ) engine = HDFS('hdfs://namenode/user/hive/warehouse/xxxxxx.db/table_hdfs/000000_0', CSV);

  2. Read the table:

     select * from hdfs_to_ck;

  3. Exception:
    [2019-11-20 11:41:07] Code: 76, e.displayText() = DB::Exception: Unable to open HDFS file: /user/hive/warehouse/xxxxxx.db/table_hdfs/000000_0 error: HdfsIOException: InputStreamImpl: cannot open file: /user/hive/warehouse/xxxxxx.db/table_hdfs/000000_0
    Caused by: Permission denied: user=clickhouse, access=EXECUTE, inode="/user/hive/warehouse/xxxxxx.db":hduser3511:hduser3511:drwxr-x---
        at org.apache.hadoop.hdfs.server.namenode.PAAuthorizationProvider.checkFsPermission(PAAuthorizationProvider.java:345)
        at org.apache.hadoop.hdfs.server.namenode.PAAuthorizationProvider.checkTraverse(PAAuthorizationProvider.java:237)
        at org.apache.hadoop.hdfs.server.namenode.PAAuthorizationProvider.checkPermission(PAAuthorizationProvider.java:192)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6621)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6603)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6528)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:558)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2066)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2062)  <1 internal call>
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1691)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2060)
    (version 19.9.3.31 (official build))

I would like to ask: when ClickHouse reads from HDFS, can it specify the HDFS user, such as hduser3511?

comp-foreign-db question question-answered

All 4 comments

I had the same problem

I have the same question, and I need help.

Maybe that can help: #5946

I.e.:

) engine = HDFS('hdfs://hduser3511@namenode/user/hive/warehouse/xxxxxx.db/table_hdfs/000000_0', CSV);
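Putting the suggestion together with the original DDL, the full statement would look roughly like this. This is a sketch based on the snippet above, not a tested configuration: it assumes the HDFS cluster accepts simple (non-Kerberos) authentication, so that the user name embedded in the URI (here hduser3511) is the identity the NameNode checks permissions against instead of the clickhouse OS user:

    -- Hypothetical full version of the table from the issue, with the
    -- HDFS user prepended to the authority part of the URI (user@host).
    create table hdfs_to_ck (
      id String,
      remarks String
    ) engine = HDFS('hdfs://hduser3511@namenode/user/hive/warehouse/xxxxxx.db/table_hdfs/000000_0', CSV);

After recreating the table this way, the same `select * from hdfs_to_ck;` should be evaluated as hduser3511, which owns the directory shown in the stack trace (drwxr-x---) and therefore has the EXECUTE permission the clickhouse user was missing.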

It works, thanks.
