huly-selfhost

Huly resets itself every night

MuratDoganer opened this issue 1 year ago · 17 comments

I have come across an issue where my Huly installation (Ubuntu 23.04) resets itself every night.

Whatever workspaces I create, users I add, or projects I upload are all deleted by the following morning.

The logs don't show any activity that would cause this, and the only error I get is "User not found" when I try to log in.

I've reinstalled 3 times to make sure it wasn't a mistake along the way, but it keeps happening!

Has anyone else come across this issue?

MuratDoganer avatar Jun 24 '24 09:06 MuratDoganer

Hey, I'm running into the same issue, have you found a solution?

ershisan99 avatar Jul 06 '24 12:07 ershisan99

None at all :(

MuratDoganer avatar Jul 10 '24 15:07 MuratDoganer

> None at all :(

I actually found a solution!

I'm willing to share it with you for a small contribution of $300. Just kidding :)

What I did was change all image versions in the resulting docker-compose to the latest ones on Docker Hub, then redeploy the project with it, and that solved the issue. Hope it helps!

ershisan99 avatar Jul 10 '24 15:07 ershisan99

> None at all :(
>
> I actually found a solution!
>
> I'm willing to share it with you for a small contribution of $300. Just kidding :)
>
> What I did was change all package versions in resulting docker-compose to the latest one on dockerhub, then redeploy the project with it and that solved the issue. Hope it helps!

Unfortunately, it hasn't helped in my case

masleshov avatar Jul 14 '24 05:07 masleshov

I faced the same problem. Any solutions?

mezza-qu avatar Jul 23 '24 06:07 mezza-qu

Any solutions?

mezza-qu avatar Jul 29 '24 12:07 mezza-qu

Nevermind. Solved.

mezza-qu avatar Jul 30 '24 15:07 mezza-qu

How?

> On July 30, 2024, at 22:17, mezza-qu wrote: Nevermind. Solved.

masleshov avatar Jul 31 '24 07:07 masleshov

> How?

This is the well-known exposed-MongoDB attack: a bot scans for publicly reachable MongoDB ports, connects, deletes your data, and leaves a ransom note. Remove or comment out the `ports` entries for mongodb and for minio in compose.yml.

This will make the ports inaccessible to third-party connections.
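For reference, a minimal sketch of what that change looks like (service names and images are assumptions; match them to your generated compose file):

```yaml
# compose.yml — hypothetical excerpt. Commenting out "ports" stops Docker
# from publishing the services on the host; other containers on the same
# compose network still reach them by service name (mongodb:27017, minio:9000).
services:
  mongodb:
    image: mongo:7
    # ports:              # removed: do not publish MongoDB to the host/internet
    #   - 27017:27017
  minio:
    image: minio/minio
    # ports:              # removed: internal-only access is enough for Huly
    #   - 9000:9000
```

Note that Huly's own containers are unaffected, since they talk to MongoDB and MinIO over the internal Docker network, not through the published host ports.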

mezza-qu avatar Jul 31 '24 07:07 mezza-qu

Any other idea why this could be happening? We host MongoDB in K8s so it can't be accessed from the outside except via ingresses. The only ingresses are for the different app components (account, collab, front, rekoni, transactor). Are these app components vulnerable?

UmanShahzad avatar Nov 06 '24 09:11 UmanShahzad

@aonnikov @lexiv0re @0xtejas can you please help with this?

UmanShahzad avatar Nov 06 '24 09:11 UmanShahzad

What are the contents of the database when a reset happens? Is it completely blank, or does it still contain some data? You can check the account.account collection to see the list of accounts.

lexiv0re avatar Nov 06 '24 09:11 lexiv0re

@lexiv0re it actually does look like it got 'hacked':

```
test> show dbs
READ__ME_TO_RECOVER_YOUR_DATA   40.00 KiB
admin                           40.00 KiB
config                         108.00 KiB
```

Note the `READ__ME_TO_RECOVER_YOUR_DATA` database.

It turns out this is because it was accessible publicly due to the hostPort that's in the K8s example. Why is that added there?

UmanShahzad avatar Nov 07 '24 02:11 UmanShahzad

Hello @UmanShahzad

The Deployment manifest specifies a hostPort of 27017. This configuration binds the container's port 27017 to port 27017 on the host node. However, this alone does not expose it to the internet directly; it just makes the MongoDB instance reachable on port 27017 of the node within your internal network (e.g., the VPC).

Also, I don't think we need to bind MongoDB (or any service) to the host to make it reachable on the internal network; the k8s pods can reach each other through k8s Services. Therefore we can consider removing the hostPort usage.

To double-check, I even ran Nmap against my pods to see the open ports. MongoDB is not among them.
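To illustrate the suggested change, the hostPort removal plus a Service-based alternative might look like this (manifest excerpts and names are assumptions, not the exact files in the repo):

```yaml
# Deployment excerpt: drop hostPort so MongoDB binds only inside the pod.
containers:
  - name: mongodb
    image: mongo:7
    ports:
      - containerPort: 27017
        # hostPort: 27017   # removed: this bound the node's own port 27017
---
# ClusterIP Service: reachable from other pods in the cluster by DNS name
# ("mongodb:27017"), never from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
```

With this pattern the Huly components resolve MongoDB through the Service, so the hostPort serves no purpose.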

0xtejas avatar Nov 10 '24 06:11 0xtejas

Yes, there's no purpose to the hostPort for these services, even the external-facing ones, because ingresses are available.

But the hostPort does expose it to the internet in setups where machines aren't in a VPC: the machine's public IP is used, so the service ends up bound to `<publicIp>:<hostPort>`. Without general inbound restrictions on ports, this exposes it.

It's good practice to have those port restrictions in place, though our setup doesn't have or need them (we don't expose anything except LB/NodePort ports), so the hostPort is what exposed this issue.

I'll upstream a chart when I get some time, which should make deployment really easy and bug-free.
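For clusters whose nodes do carry public IPs, a NetworkPolicy can additionally restrict who may talk to MongoDB at the pod level. A minimal sketch, with assumed pod labels (adjust `app` and the component label to whatever your manifests actually use):

```yaml
# Allow MongoDB ingress only from pods labeled as Huly app components.
# All other pod-to-pod traffic to MongoDB is denied once this applies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongodb-allow-app-only
spec:
  podSelector:
    matchLabels:
      app: mongodb               # assumed label on the MongoDB pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              huly/component: app   # assumed label on the Huly app pods
      ports:
        - protocol: TCP
          port: 27017
```

This only takes effect if the cluster's CNI plugin enforces NetworkPolicy, so it complements rather than replaces removing the hostPort.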

UmanShahzad avatar Nov 10 '24 09:11 UmanShahzad

Hi. I ran an Nmap port scan to check the worker nodes on DigitalOcean. I can confirm that they aren't exposed to the internet and are only accessible internally at the VPC level. In any case, we don't have to worry much going forward, as the team is moving to PostgreSQL instead of MongoDB.

0xtejas avatar Nov 11 '24 16:11 0xtejas

> Yes there's no purpose to the hostPort for these services, even the external services because ingresses are available.
>
> But the hostPort does expose it to the internet for setups where machines aren't in a VPC, where the machine's public IP is available and the service thus binds to `<publicIp>:<hostPort>`. Without general inbound restrictions on ports, this would expose it.
>
> Good practice to have those port restrictions in place, though our setup doesn't have or need it (as we don't expose anything except LB/NodePort ports) so it exposed this issue.
>
> I'll upstream a chart when I get some time, which should make deployment really easy and bug-free.

This is off-topic, but in terms of security practice I really advise keeping your Kubernetes cluster on a closed network behind gateways. I've been working with Kubernetes clusters for a few years now, and having them exposed publicly is always prone to security breaches.

In my organization we put the Kubernetes clusters behind HAProxy gateways: small VMs that accept only ports 80/443 and nothing else. (FYI, we use only open-source software for our environment, so this is possible even if you self-host.)
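As an illustration of that pattern, an HAProxy gateway config can be very small. This is a sketch with placeholder addresses (the `10.0.0.10` ingress node IP and backend names are assumptions), doing plain TCP passthrough so TLS terminates at the cluster's ingress:

```
# /etc/haproxy/haproxy.cfg — minimal gateway: only 80/443 are bound.
frontend http_in
    bind *:80
    mode tcp
    default_backend k8s_ingress_http

frontend https_in
    bind *:443
    mode tcp
    default_backend k8s_ingress_https

backend k8s_ingress_http
    mode tcp
    server ingress1 10.0.0.10:80 check    # private IP of the ingress node

backend k8s_ingress_https
    mode tcp
    server ingress1 10.0.0.10:443 check
```

Everything else (the API server, node ports, databases) stays reachable only on the private network behind the gateway.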

warstrolo avatar Apr 23 '25 13:04 warstrolo