
gohbase stuck retrying RPCs after region server temporarily down

Open · rodicus opened this issue 3 years ago · 2 comments

We are experiencing an issue where Put RPCs (and possibly any other types of RPCs) for a specific region may end up continuously failing with a ServerError. This seems to be triggered by HBase moving regions after one of the region servers goes down temporarily. The current workaround is to restart the gohbase client process, which indicates that the issue is with gohbase and not HBase itself.

Our guess is that a region client connection is closed, but it remains associated with one or more region info instances in the gohbase cache. Most likely the region client is not properly cleaned up after a disconnection and ends up being reused for RPC operations.
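To make that guess concrete, here is a toy model of the cycle we think we are hitting. None of these types or function names come from gohbase; they only mimic what the stacks further down suggest.

```go
// Toy model of the suspected failure mode (all names made up, NOT gohbase
// code): a cached region keeps pointing at a region client whose connection
// is already closed, so every retry picks the same dead client and fails.
package main

import (
	"errors"
	"fmt"
)

var errClientClosed = errors.New("client closed")

type regionClient struct{ closed bool }

// queueRPC fails immediately once the underlying connection is gone, which is
// what the stacks below show at region/client.go:206.
func (c *regionClient) queueRPC() error {
	if c.closed {
		return errClientClosed
	}
	return nil
}

type regionInfo struct{ client *regionClient }

func main() {
	// The cache still associates the region with the dead client and the
	// association is never repaired, so retries always select the same client.
	cache := map[string]*regionInfo{"region-A": {client: &regionClient{closed: true}}}

	for attempt := 1; attempt <= 3; attempt++ { // gohbase would keep retrying until the RPC context expires
		reg := cache["region-A"]
		if err := reg.client.queueRPC(); err != nil {
			fmt.Printf("attempt %d: %v; marking client down and retrying\n", attempt, err)
			continue
		}
		return
	}
}
```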

We have seen the issue on a production HBase cluster that was periodically hitting sync failures that caused region servers to restart multiple times a day. We have tried to reproduce the issue in a lab by bringing region servers down temporarily, but the cluster and the client process always recover.

The gohbase versions on which we have seen these failed RPCs include 6acaa09d18ed and 348991136365. They are relatively old, but there do not appear to be any newer changes in the repo that could have resolved the issue in question.

Below is a breakdown of the goroutines of a client running version 348991136365 while RPCs are failing. In this setup there are 3 HBase region servers and 8000 workers issuing Put RPCs:

3 goroutines in the "send" loop (one per server?)
/go/src/arista/vendor/github.com/tsuna/gohbase/region/new.go:69 +0x c.Dial go c.processRPCs()
/go/src/arista/vendor/github.com/tsuna/gohbase/region/client.go:364 +0x c.processRPCs select // wait for the next batch

3 goroutines in the "receive" loop (one per server?)
/go/src/arista/vendor/github.com/tsuna/gohbase/region/new.go:71 +0x c.Dial go c.receiveRPCs()
/go/src/arista/vendor/github.com/tsuna/gohbase/region/client.go:444 +0x c.receiveRPCs c.receive()
/go/src/arista/vendor/github.com/tsuna/gohbase/region/client.go:464 +0x c.receive c.readFully(sz[:])
/go/src/arista/vendor/github.com/tsuna/gohbase/region/client.go:580 +0x c.readFully io.ReadFull
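Receive-loop goroutines parked in io.ReadFull like this are usually just idle, waiting for the length prefix of the next response frame. For comparison, a simplified, self-contained version of that read pattern (not the actual gohbase code):

```go
// Simplified length-prefixed read, roughly what region/client.go:464-580 is
// doing when a goroutine sits in io.ReadFull: block until the 4-byte size of
// the next response arrives, then read exactly that many bytes.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

func readResponse(r io.Reader) ([]byte, error) {
	var sz [4]byte
	if _, err := io.ReadFull(r, sz[:]); err != nil { // idle receive goroutines block here
		return nil, err
	}
	buf := make([]byte, binary.BigEndian.Uint32(sz[:]))
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}

func main() {
	// Fake a single framed response: a 5-byte payload preceded by its length.
	frame := append([]byte{0, 0, 0, 5}, []byte("hello")...)
	payload, err := readResponse(bytes.NewReader(frame))
	fmt.Println(string(payload), err)
}
```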

3 "Put" goroutines queueing an RPC
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:237 +0x c.Put c.mutate(p)
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:264 +0x c.mutate c.SendRPC(m)
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:77 +0x  c.SendRPC c.sendRPCToRegion(rpc, reg) <- ALL ON SAME REGION interface
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:141 +0x c.sendRPCToRegion sendBlocking(client, rpc)
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:111 +0x sendBlocking rc.QueueRPC(rpc)
/go/src/arista/vendor/github.com/tsuna/gohbase/region/client.go:206 +0x c.QueueRPC <-c.done: returnResult(rpc, nil, ErrClientClosed)
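The <-c.done branch is the interesting part: once the client's done channel is closed, every queued RPC appears to be answered immediately with ErrClientClosed instead of blocking, so the callers spin with no delay. A simplified, self-contained paraphrase of that select (not the actual gohbase implementation):

```go
// Paraphrase of the pattern suggested by region/client.go:206: if the client
// has been shut down (done closed), fail the RPC right away; otherwise hand it
// to the send loop. This is why failing Puts cycle without any backoff.
package main

import (
	"errors"
	"fmt"
)

var ErrClientClosed = errors.New("gohbase: the client is closed")

type result struct{ err error }

type regionClient struct {
	done chan struct{}      // closed when the connection to the region server is torn down
	rpcs chan chan result   // pending RPCs handed to the send loop
}

func (c *regionClient) queueRPC() error {
	res := make(chan result, 1)
	select {
	case <-c.done:
		return ErrClientClosed // fails instantly once the client is closed
	case c.rpcs <- res:
		return (<-res).err
	}
}

func main() {
	c := &regionClient{done: make(chan struct{}), rpcs: make(chan chan result)}
	close(c.done) // simulate the region server connection going away
	fmt.Println(c.queueRPC())
}
```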

1 "Put" goroutines starting to mark region client as "down"
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:237 +0x c.Put c.mutate(p)
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:264 +0x c.mutate c.SendRPC(m)
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:77 +0x  c.SendRPC c.sendRPCToRegion(rpc, reg) <- ALL ON SAME REGION interface
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:167 +0x c.sendRPCToRegion ServerError: c.clientDown(client)
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:177 +0x c.clientDown(client hrpc.RegionClient)

7995 "Put" goroutines waiting (contending) to mark a server client as "down"
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:237 +0x c.Put c.mutate(p)
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:264 +0x c.mutate c.SendRPC(m)
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:77 +0x  c.SendRPC c.sendRPCToRegion(rpc, reg) <- ALL ON SAME REGION interface
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:167 +0x c.sendRPCToRegion ServerError: c.clientDown(client)
/go/src/arista/vendor/github.com/tsuna/gohbase/caches.go:84 +0x rcc.clientDown rcc.m.Lock()

1 "Put" goroutines finishing marking a server client as "down"
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:237 +0x c.Put c.mutate(p)
/go/src/arista/vendor/github.com/tsuna/gohbase/client.go:264 +0x c.mutate c.SendRPC(m)
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:77 +0x  c.SendRPC c.sendRPCToRegion(rpc, reg) <- ALL ON SAME REGION interface
/go/src/arista/vendor/github.com/tsuna/gohbase/rpc.go:167 +0x c.sendRPCToRegion ServerError: c.clientDown(client)
/go/src/arista/vendor/github.com/tsuna/gohbase/caches.go:87 +0x rcc.clientDown rcc.m.Unlock()
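With 8000 workers all hitting ServerError on the same region client at roughly the same time, they end up serializing on the cache mutex taken in clientDown (caches.go:84/87 above). A minimal, self-contained illustration of that contention pattern (again, not the actual gohbase code):

```go
// Many goroutines all trying to mark the same client down queue up on a single
// mutex; each one gets the lock briefly and then retries its RPC. This matches
// the 7995 goroutines parked in rcc.m.Lock() in the stacks above.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type clientRegionCache struct {
	m     sync.Mutex
	calls atomic.Int64
}

// clientDown stands in for the cache cleanup that every failing RPC triggers.
func (rcc *clientRegionCache) clientDown() {
	rcc.m.Lock()
	defer rcc.m.Unlock()
	rcc.calls.Add(1) // in gohbase this would drop the client's regions from the cache
}

func main() {
	var rcc clientRegionCache
	var wg sync.WaitGroup
	for i := 0; i < 8000; i++ { // one per Put worker in the breakdown above
		wg.Add(1)
		go func() {
			defer wg.Done()
			rcc.clientDown()
		}()
	}
	wg.Wait()
	fmt.Println("clientDown calls:", rcc.calls.Load())
}
```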

Some observations from the stacks:

  • The Put calls have a context timeout of 10 minutes (issued roughly as in the sketch after this list)
  • No goroutine is stuck for longer than 2 minutes, indicating that there is no deadlock, but rather that gohbase is retrying in a loop
  • All 8000 calls to sendRPCToRegion are referencing the same region info instance
  • All calls to c.clients.clientDown are referencing the same region client instance
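For reference, each worker issues its Put roughly like this (a minimal sketch using the public gohbase API; the ZooKeeper quorum, table, row key and column family are placeholders):

```go
// Minimal sketch of how a worker issues a Put with a 10-minute context.
// Placeholders: "zk-quorum:2181", "mytable", the row key and the column family.
package main

import (
	"context"
	"log"
	"time"

	"github.com/tsuna/gohbase"
	"github.com/tsuna/gohbase/hrpc"
)

func main() {
	client := gohbase.NewClient("zk-quorum:2181")
	defer client.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	values := map[string]map[string][]byte{"cf": {"qual": []byte("value")}}
	put, err := hrpc.NewPutStr(ctx, "mytable", "row-key", values)
	if err != nil {
		log.Fatal(err)
	}

	// When the region's client is stuck in the bad state described above, this
	// call keeps retrying internally until the 10-minute context expires.
	if _, err := client.Put(put); err != nil {
		log.Printf("Put failed: %v", err)
	}
}
```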

goroutine stacks attached: gohbase-goroutines.txt

rodicus · May 03 '22

Thanks for the detailed bug report. I will have a look today.

dethi · May 03 '22

Additional info (may or may not be relevant):

  • HBase 2.4.1
  • RS crashing because of:
2022-03-28 01:41:38,880 ERROR [regionserver/XXXXXX:16201.logRoller] regionserver.HRegionServer: ***** ABORTING region server XXXXXXXX,16201,1648415126068: IOE in log roller

dethi · May 04 '22