
Connector fails on Unsupported verifier flavorAUTH_SYS

Open danielhaviv opened this issue 9 years ago • 4 comments

Hi, we're trying to use the connector to connect to a normal Linux NFS share but receive the following exception:

    [root@ip-172-31-11-139 ~]# hadoop fs -ls /
    16/01/14 11:40:53 ERROR rpc.RpcClientHandler: RPC: Got an exception
    java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
            at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
            at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
            at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
            at org.apache.hadoop.fs.nfs.rpc.RpcClientHandler.messageReceived(RpcClientHandler.java:62)
            at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
            at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
            at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
            at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
            at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
            at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
            at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
            at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
            at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
            at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
            at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
            at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
            at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            at java.lang.Thread.run(Thread.java:745)
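Not a fix, but for anyone debugging this: the stack trace points at the flavor check in org.apache.hadoop.oncrpc.security.Verifier. Below is a rough Python re-creation of what such a check looks like; the XDR layout (4-byte flavor, 4-byte length, opaque body) follows RFC 5531, while the exact set of accepted flavors is my reading of the stack trace, not the actual Hadoop source.

```python
import struct

# RPC auth flavor numbers from RFC 5531
AUTH_NONE, AUTH_SYS, RPCSEC_GSS = 0, 1, 6

def read_flavor_and_verifier(xdr: bytes):
    """Parse the (flavor, opaque body) pair of an RPC reply verifier.

    Hypothetical re-creation of Hadoop's Verifier.readFlavorAndVerifier:
    if reply verifiers are only expected to carry AUTH_NONE or RPCSEC_GSS,
    then an AUTH_SYS verifier from a Linux server raises, matching the
    "Unsupported verifier flavor" exception above.
    """
    flavor, length = struct.unpack(">II", xdr[:8])
    if flavor not in (AUTH_NONE, RPCSEC_GSS):
        raise ValueError("Unsupported verifier flavor %d" % flavor)
    return flavor, xdr[8:8 + length]

# An AUTH_SYS (flavor=1) verifier with an empty body, as XDR bytes:
reply_verifier = struct.pack(">II", AUTH_SYS, 0)
try:
    read_flavor_and_verifier(reply_verifier)
except ValueError as e:
    print(e)  # Unsupported verifier flavor 1
```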

danielhaviv avatar Jan 17 '16 12:01 danielhaviv

Hi. We posted a newer version of this code: https://github.com/NetApp/NetApp-Hadoop-NFS-Connector/releases/tag/v1.0.6. It includes a patch for Hadoop, since Hadoop itself has a bug. Try that instead and see if it helps.

gokulsoundar avatar Jan 17 '16 16:01 gokulsoundar

Hi, I got the same error as @danielhaviv

After replacing the nfs-hadoop.jar with your 3.0.0 version, the original error is gone, but there is still an error:

    16/05/17 09:34:23 ERROR rpc.RpcClient: RPC: xid=107000001 RpcReply request denied: xid:107000001,messageType:RPC_REPLYverifier_flavor:AUTH_NONErejectState:AUTH_ERROR
    16/05/17 09:34:23 ERROR mount.MountClient: Mount MNT operation failed with RpcException RPC: xid=107000001 RpcReply request denied: xid:107000001,messageType:RPC_REPLYverifier_flavor:AUTH_NONErejectState:AUTH_ERROR
    16/05/17 09:34:23 DEBUG shell.Command: java.io.IOException
        at org.apache.hadoop.fs.nfs.mount.MountClient.mnt(MountClient.java:101)
        at org.apache.hadoop.fs.nfs.NFSv3FileSystemStore.<init>(NFSv3FileSystemStore.java:111)
        at org.apache.hadoop.fs.nfs.topology.SimpleTopologyRouter.getStore(SimpleTopologyRouter.java:83)
        at org.apache.hadoop.fs.nfs.NFSv3FileSystem.getFileStatus(NFSv3FileSystem.java:854)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
        at org.apache.hadoop.fs.Globber.doGlob(Globber.java:285)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:151)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
        at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:305)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:362)

I'm using Cloudera CDH 5.5.2.

Do you have any ideas how to fix this problem? Mounting the NFS share via the usual Linux commands works like a charm.

Best regards, Benjamin

edit: I added the nfs-mapping.json for further reference:

    {
      "spaces": [
        {
          "name": "netapp",
          "uri": "nfs://10.231.0.11:2049/",
          "options": {
            "nfsExportPath": "/vs01_02",
            "nfsReadSizeBits": 20,
            "nfsWriteSizeBits": 20,
            "nfsSplitSizeBits": 27,
            "nfsAuthScheme": "AUTH_SYS",
            "nfsUsername": "root",
            "nfsGroupname": "root",
            "nfsUid": 0,
            "nfsGid": 0,
            "nfsPort": 2049,
            "nfsMountPort": -1,
            "nfsRpcbindPort": 111
          },
          "endpoints": [
            {
              "host": "nfs://10.231.0.11:2049/",
              "path": "/"
            }
          ]
        }
      ]
    }
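For what it's worth, a small script can sanity-check a mapping file like this before retrying. The field names come straight from the posted nfs-mapping.json; the specific checks (AUTH_SYS scheme, uid/gid of 0 matching server-side root, nfs:// endpoint URIs) are my own assumptions about likely causes of the AUTH_ERROR rejection, not documented connector requirements.

```python
import json

# Trimmed copy of the nfs-mapping.json posted above.
MAPPING = """
{
  "spaces": [
    {
      "name": "netapp",
      "uri": "nfs://10.231.0.11:2049/",
      "options": {
        "nfsExportPath": "/vs01_02",
        "nfsAuthScheme": "AUTH_SYS",
        "nfsUid": 0,
        "nfsGid": 0
      },
      "endpoints": [
        {"host": "nfs://10.231.0.11:2049/", "path": "/"}
      ]
    }
  ]
}
"""

def check_space(space):
    """Return a list of likely misconfigurations (assumed checks, see above)."""
    problems = []
    opts = space["options"]
    if opts.get("nfsAuthScheme") != "AUTH_SYS":
        problems.append("nfsAuthScheme is not AUTH_SYS")
    if opts.get("nfsUid") != 0 or opts.get("nfsGid") != 0:
        problems.append("nfsUid/nfsGid do not match a server-side root of 0")
    for ep in space["endpoints"]:
        if not ep["host"].startswith("nfs://"):
            problems.append("endpoint host %r is not an nfs:// URI" % ep["host"])
    return problems

for space in json.loads(MAPPING)["spaces"]:
    print(space["name"], check_space(space) or "looks consistent")
```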

benruland avatar May 17 '16 13:05 benruland

Hey, check "unix-user" and "unix-group" on the controller side:

  1. The root user's UserID and GroupID on your SVM should be 0, not 1.

  2. Also create separate users.json and group.json files.

You can refer to TR-4382 for creating these files. I guess this might help you get rid of the problem mentioned above. tr-4382.pdf

AnkitaD avatar Jun 05 '16 12:06 AnkitaD

With 2.7.1 there is an open issue: https://issues.apache.org/jira/browse/HADOOP-12345. Please try with 2.8.0 or 3.0.0-alpha1.
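As background on why AUTH_SYS credentials can get rejected at all: XDR (RFC 4506) requires variable-length data to be zero-padded to a 4-byte boundary, and an unpadded field in the credential misaligns everything after it, which a server can only report as a generic AUTH_ERROR like the one in this thread. A minimal, illustrative sketch of the padding rule (not the actual Hadoop code):

```python
import struct

def xdr_string(s: bytes) -> bytes:
    """Encode a variable-length XDR string: 4-byte big-endian length,
    the data, then zero padding to a 4-byte boundary (RFC 4506)."""
    pad = (4 - len(s) % 4) % 4
    return struct.pack(">I", len(s)) + s + b"\x00" * pad

# An AUTH_SYS credential body carries the client machinename as an XDR
# string (RFC 5531); if it were emitted without padding, every field
# after it would be misaligned on the wire.
encoded = xdr_string(b"ip-172-31-11-139")
print(len(encoded) % 4)  # 0 -> properly aligned
```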

potnuruamar avatar Mar 04 '17 17:03 potnuruamar