
Query regarding synchronization of data inside volumes in stateful sets

Open rohan-97 opened this issue 1 year ago • 4 comments

Describe the issue

Hello,

Being new to Kubernetes, I have a basic question about volume synchronization in StatefulSets.

I am working on a stateful application and trying to scale it up to multiple replicas. I came across StatefulSets and am considering whether I can use one to implement my application.

The application requires the pods to be replicated with the storage volumes of all replicas kept in sync, and I was wondering whether a StatefulSet can do this.

I went through the StatefulSet documentation but didn't find any section stating that Kubernetes synchronizes persistent volumes among the replicas of a StatefulSet.

I need to confirm: if I use a StatefulSet to implement my stateful application, will Kubernetes synchronize the persistent volumes of all the pods, or do I need to implement some mechanism myself to synchronize data among the pods (e.g. a distributed file system)?
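For reference, a StatefulSet's `volumeClaimTemplates` field provisions one independent PersistentVolumeClaim per replica; Kubernetes does not replicate data between them. A minimal sketch (the names, image, and storage size below are illustrative assumptions, not from this thread):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app                 # illustrative name
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest       # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
  volumeClaimTemplates:
    # Each replica gets its OWN claim from this template
    # (data-my-app-0, data-my-app-1, data-my-app-2).
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Because each pod mounts its own claim, any cross-replica synchronization has to come from the application itself (e.g. database replication) or from shared storage such as a `ReadWriteMany` volume backed by NFS or CephFS, not from the StatefulSet controller.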

Thanks for the help in advance.

rohan-97 avatar Jul 29 '24 12:07 rohan-97

/sig scalability

rohan-97 avatar Jul 29 '24 13:07 rohan-97

/sig storage

rohan-97 avatar Aug 06 '24 06:08 rohan-97

Your best bet would be to reach out on the sig-storage mailing list or Slack channel. This repo is sort of meta, for self-management of the k8s community, and isn't meant to route questions ^^;;

mrbobbytables avatar Aug 07 '24 12:08 mrbobbytables

Hi @mrbobbytables ,

Thanks for the response, I'll add my query over there. :)

rohan-97 avatar Aug 08 '24 10:08 rohan-97

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 06 '24 10:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Dec 06 '24 11:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jan 05 '25 12:01 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jan 05 '25 12:01 k8s-ci-robot