
Dockerized ElasticMQ server + web UI over Alpine Linux for local development

28 alpine-sqs issues, sorted by recently updated

I think the title says it all. Using `docker stats` when my queues are empty, CPU usage reports around 3%. If I use the following command once, the CPU jumps to...

Feat: upgrade to Alpine OpenJDK + latest release of ElasticMQ and sqs-insight

The following code:

```python
import json

import boto3

sqs_resource = boto3.resource('sqs', region_name='us-west-1')
sqs_queue = sqs_resource.get_queue_by_name(QueueName='test-queue')
res = sqs_queue.send_messages(Entries=[
    {'Id': '1', 'MessageBody': json.dumps({'body': 'the body'})},
    {'Id': '2', 'MessageBody': json.dumps({'body': 'the body'})},
    {'Id': '3', ...
```
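For reference, a minimal runnable sketch of a batched send against a local ElasticMQ endpoint. The `endpoint_url`, queue name, and dummy credentials here are assumptions for local use, not from the issue:

```python
import json


def build_entries(bodies):
    """Build a send_messages Entries list; each Id must be unique within the batch."""
    return [
        {'Id': str(i), 'MessageBody': json.dumps({'body': body})}
        for i, body in enumerate(bodies, start=1)
    ]


if __name__ == '__main__':
    import boto3

    # Dummy credentials: ElasticMQ does not validate them, but boto3 requires some.
    sqs = boto3.resource(
        'sqs',
        endpoint_url='http://localhost:9324',  # assumed local ElasticMQ port
        region_name='us-west-1',
        aws_access_key_id='x',
        aws_secret_access_key='x',
    )
    queue = sqs.get_queue_by_name(QueueName='default')
    res = queue.send_messages(Entries=build_entries(['the body'] * 3))
    # Partial failures come back in the 'Failed' list rather than raising.
    print(res.get('Failed', []))
```

Pointing `endpoint_url` at the container avoids touching real AWS; everything else is standard boto3.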

bug
help wanted

I am trying to launch roribio16/alpine-sqs:latest but it keeps crashing due to a segmentation fault SIGSEGV. **Expected behavior** It should run the alpine-sqs queue **Actual behavior** Crashing on launch **Information**...

I have created new queues via a custom `elasticmq.conf`. I copied and pasted the default and dlq queues and tweaked the names. That seems to start up fine. However,...
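For anyone hitting the same issue, a sketch of a custom `elasticmq.conf` that keeps the stock `default`/`dlq` queues and adds a renamed copy. Queue names and settings are illustrative, based on ElasticMQ's documented HOCON format, not taken from this issue:

```
include classpath("application.conf")

queues {
  default {
    defaultVisibilityTimeout = 10 seconds
    delay = 5 seconds
    receiveMessageWait = 0 seconds
  }
  dlq { }
  my-new-queue {
    defaultVisibilityTimeout = 10 seconds
    deadLettersQueue {
      name = "dlq"
      maxReceiveCount = 3
    }
  }
}
```

Mount the file over the image's config path and restart the container so ElasticMQ picks up the new queue definitions.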

Hello, my problem with this Docker image is about deleting messages from the AWS CLI. When I send a message to a queue using this command: `aws --endpoint-url http://localhost:9324 sqs send-message --queue-url...
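For the record, `delete-message` takes the `ReceiptHandle` returned by `receive-message`, not the `MessageId` returned by `send-message`. A sketch of the full round trip, assuming the container's default endpoint and `default` queue (guarded so it is a no-op when the AWS CLI is not installed):

```shell
ENDPOINT="http://localhost:9324"
QUEUE_URL="$ENDPOINT/queue/default"

if command -v aws >/dev/null 2>&1; then
  # 1. Send a message.
  aws --endpoint-url "$ENDPOINT" sqs send-message \
      --queue-url "$QUEUE_URL" --message-body "hello"

  # 2. Receive it; the ReceiptHandle identifies this delivery of the message.
  RECEIPT=$(aws --endpoint-url "$ENDPOINT" sqs receive-message \
      --queue-url "$QUEUE_URL" \
      --query 'Essages[0].ReceiptHandle' --output text 2>/dev/null \
      || true)
  RECEIPT=$(aws --endpoint-url "$ENDPOINT" sqs receive-message \
      --queue-url "$QUEUE_URL" \
      --query 'Messages[0].ReceiptHandle' --output text)

  # 3. Delete using the receipt handle, not the message id.
  aws --endpoint-url "$ENDPOINT" sqs delete-message \
      --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
fi
```

A receipt handle changes each time the same message is received, so it must come from the most recent `receive-message` call.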

```
~$ docker buildx build --platform linux/arm64/v8 --build-arg ARCH=arm64v8 -t alpine-sqs .
[+] Building 121.1s (16/16) FINISHED
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 1.32kB...
```

I'm trying to use this to do local development since our org locks down our AWS account (even our dev). If it isn't deployed through our ops terraform and scripts...

> All values need to be set as empty strings, and then it is possible to send messages with the AWS CLI. Perhaps this needs adding to the README if we claim thorough...

I tried sending a message with the same `message-deduplication-id` multiple times, and all of them got processed by my queue consumer. They all had the same `message-group-id` too.
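Worth noting: in SQS, `message-deduplication-id` only applies to FIFO queues, and older ElasticMQ builds (such as the one bundled in this image) may not enforce deduplication at all. A hedged `elasticmq.conf` sketch of a FIFO queue definition for newer ElasticMQ releases (the queue name and settings are illustrative):

```
queues {
  "test-queue.fifo" {
    fifo = true
    # With this disabled, an explicit MessageDeduplicationId is required
    # on every send; duplicates within the dedup window are dropped.
    contentBasedDeduplication = false
  }
}
```

On a standard (non-FIFO) queue, every send is delivered regardless of the deduplication id, which matches the behavior described above.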