
Image startup fails with Elasticsearch error: [FORBIDDEN/12/index read-only / allow delete (api)]

Open · alexkli opened this issue 3 years ago · 1 comment

Running docker run --rm -p 9000:9000 ahmedmusallam/sonarqube-aem:latest fails during startup:

2020.11.03 19:18:10 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	at org.elasticsearch.cluster.block.ClusterBlocks.indicesBlockedException(ClusterBlocks.java:204)
	at org.elasticsearch.action.admin.indices.settings.put.TransportUpdateSettingsAction.checkBlock(TransportUpdateSettingsAction.java:73)
	at org.elasticsearch.action.admin.indices.settings.put.TransportUpdateSettingsAction.checkBlock(TransportUpdateSettingsAction.java:41)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:173)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:164)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:141)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:59)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139)
	at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:83)
	at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:73)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66)
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1289)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.common.util.concurrent.EsExecutors$1.execute(EsExecutors.java:140)
	at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1247)
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1111)
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:914)
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:53)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
	at java.lang.Thread.run(Thread.java:748)
2020.11.03 19:18:10 INFO  web[][o.s.p.StopWatcher] Stopping process
[quality.sh]:: Waiting for Sonar to be ready.. ::
2020.11.03 19:18:11 INFO  app[][o.s.a.SchedulerImpl] Process [web] is stopped
2020.11.03 19:18:11 INFO  app[][o.s.a.SchedulerImpl] Process [es] is stopped
2020.11.03 19:18:11 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped
2020.11.03 19:18:11 WARN  app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143

— alexkli, Nov 3 '20

Based on this article, it seems to happen when there is not enough disk space left for the Elasticsearch index (the threshold is 95% of the disk). That was the case here: the disk backing my Docker environment had used ~59 GB out of 64 GB.
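
For background: when Elasticsearch crosses that disk watermark it sets the index.blocks.read_only_allow_delete block on its indices, and on Elasticsearch versions before 7.4 the block is not released automatically once space is freed. A minimal sketch of clearing it by hand after freeing space, assuming the embedded Elasticsearch is reachable on its default port 9200 inside the container and that curl is available in the image (the container name is hypothetical, check docker ps for the real one):

# Clear the read-only block on all indices after freeing disk space.
docker exec -it sonarqube-aem curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'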

To free some space, I ran this, and after that it worked!

docker system prune
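
To see how much of that space Docker itself is holding, before or after pruning, its built-in usage report helps (output layout varies by Docker version):

docker system df

Note that docker system prune only removes stopped containers, unused networks, dangling images, and the dangling build cache; adding -a and --volumes reclaims more, but also deletes all unused images and volumes.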

Might be useful to add a note about this to the instructions in the README in case someone else runs into it.

— alexkli, Nov 3 '20