Creating Actor Reminder Fails with 500 error if already exists
In what area(s)?
/area runtime
/area operator
/area placement
/area docs
/area test-and-release
/area injector
/area scheduler
/area sentry
What version of Dapr?
1.15+
Expected Behavior
Before v1.15, if you created an actor reminder with a name that already existed, it would overwrite the existing one. Now it fails with a 500 error.
Actual Behavior
Creating an actor reminder with an existing name fails with a 500 error. This is a regression for our project, since the behaviour has changed.
Steps to Reproduce the Problem
- Register an actor
- Create a reminder
- Try to create the same reminder again
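The steps above can be sketched against the documented Dapr HTTP actor reminder endpoint (POST /v1.0/actors/&lt;type&gt;/&lt;id&gt;/reminders/&lt;name&gt;). The sidecar address, actor type/ID, and reminder name below are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// reminderURL builds the documented Dapr HTTP endpoint for reminder
// registration: POST /v1.0/actors/<type>/<id>/reminders/<name>.
func reminderURL(baseURL, actorType, actorID, name string) string {
	return fmt.Sprintf("%s/v1.0/actors/%s/%s/reminders/%s", baseURL, actorType, actorID, name)
}

// registerReminder posts a reminder registration to the sidecar and
// returns the HTTP status code.
func registerReminder(url string) (int, error) {
	body := []byte(`{"dueTime":"10m","period":"1h"}`)
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// "abc"/"123"/"reminder1" and the sidecar address are assumptions.
	url := reminderURL("http://localhost:3500", "abc", "123", "reminder1")
	first, err1 := registerReminder(url)  // expected: 204 No Content
	second, err2 := registerReminder(url) // pre-1.15: 204 (overwrite); reported regression: 500
	fmt.Println(first, err1, second, err2)
}
```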
While a regression, I would propose that this should be the preferred behavior over just assuming that an overwrite is acceptable, much along the lines of the reasoning for the same with Jobs (https://github.com/dapr/dapr/issues/8402).
Perhaps there's instead an opportunity for the SDKs to handle this differently while retaining this as the norm on the runtime?
@WhitWaldo so you are saying it should not be idempotent? And that an obscure 500 error over a 409 is preferred?
@danielgerlag I'd argue in favor of immutability over idempotency. If the resource is created for an ID, I don't think you should be able to overwrite it without explicitly indicating that you want to do so, whether it's a job, a workflow, a reminder - anything. I think the API should favor using the least calls possible to do anything and should not lean towards enabling destructive results without clear intent by the user.
We've taken this approach with the 1.16 release for jobs (I suspect that's why we're experiencing this here now), and I think it should be consistently applied throughout.
I didn't say anything about favoring an unclear status code. I would certainly prefer a 409 Conflict so that the SDK has better information about why setting it failed. I'm just providing my two cents for how I'd prefer the API to be designed and documented in this situation.
I don't think you understand idempotency.
https://en.m.wikipedia.org/wiki/Idempotence
It is not destructive, and it is common practice in API design because of retry semantics.
I do understand idempotency - and I agree it can be a valuable concept in API design, but respectfully, I think you're misapplying it in this context.
Actor reminders in Dapr have never been idempotent in practice. While some reminder configurations could behave idempotently (e.g., fixed schedules like "run at 3 PM daily"), most do not - especially those based on relative time intervals. For example, registering a reminder with a 10-minute interval will set its start time to "now" (specified by the local time on the machine running the scheduler). Repeating that same call five minutes later would, in v1.15, overwrite the original reminder with a new start time, which is a destructive change, especially if that subsequent call runs on a different machine. This behavior on the runtime is not idempotent, even if your payload to the API is identical.
Again, I agree that a 500 isn't ideal and a 409 Conflict is more informative as to why there's an issue, but the underlying issue remains: the reminder API has never supported true idempotency, and assuming it does may lead to inaccuracies in your implementation.
My preference for immutability over idempotency stems from the practical complexity of trying to support both - especially in APIs like this one where the payload may appear identical but still result in different runtime behavior. Designing the API to sometimes accept a request because it wouldn't change the underlying data, and other times reject it because it would, introduces ambiguity and an inconsistent experience to developers not familiar with how the system works under the hood. It would force the runtime to make nuanced decisions about whether a reminder is "effectively the same", which becomes more convoluted when dealing with relative time intervals and omitted start times.
That ship has sailed - the scheduler already supports a wide range of valid configurations, many of which are inherently non-idempotent. Trying to retroactively enforce idempotency across all permutations would not only be brittle, but also confusing for users. Instead, opting for immutability with an explicit overwrite flag is a cleaner and more intentional approach. It makes the API behavior predictable: if the resource exists, you get a conflict unless you've clearly indicated you want to replace it. No guessing, no silent overwrites, and no runtime heuristics trying to infer user intent.
To clarify, I'm not advocating for obscure error codes, and I'm certainly not dismissing idempotency as a principle. But I am simply pointing out that in this specific case, immutability provides a safer and more predictable model - especially given how reminders are typically used.
If we are intentionally breaking the established behaviour, as has already been done, then this change should be behind a major version release, to indicate to existing users that it is not safe to upgrade without doing full regression testing.
I don't disagree about the versioning, but the change has been made, so now the choice is to revert it or not. I propose not and that the API should be modified accordingly, consistent with other Dapr APIs.
But now we're simply back to my original response, where I advocate for this being the preferred behavior on the runtime, handled more gracefully by the SDKs so that users such as yourself (unless you're directly engaging with the APIs) presumably aren't impacted.
I think this API can be both immutable and idempotent. It is not a trade-off; they are not mutually exclusive. The semantics of the API and its design just need careful thought so as not to be ambiguous. Making it idempotent is in keeping with the spirit of resilient distributed systems, which is what Dapr claims to support.
I don't believe it can be made idempotent for the reasons already stated. Such characteristics certainly are not necessarily exclusive, but neither the Actor Reminders API nor the newer underlying Scheduler API was designed with this in mind, and accommodating it in the runtime would certainly be a significant breaking change at this point.
This might very well be something worth submitting an independent feature request for across all Dapr APIs, but I would submit that's potentially an enormous effort at this point to revisit across all affected APIs.
I disagree. If a start time was provided as a field then it could be idempotent. The API just needs thoughtful design to remove ambiguity.
Hi both. I have verified with the following integration test that overwriting an actor reminder, with both the HTTP and gRPC clients, does not cause an error/non-200 response. There must be something else going on causing the 500.
@danielgerlag please can you share the error output, SDK & version you are using.
package reminders

import (
	"context"
	"io"
	"net/http"
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
	"github.com/dapr/dapr/tests/integration/framework"
	"github.com/dapr/dapr/tests/integration/framework/client"
	"github.com/dapr/dapr/tests/integration/framework/process/daprd/actors"
	"github.com/dapr/dapr/tests/integration/suite"
)

func init() {
	suite.Register(new(overwrite))
}

type overwrite struct {
	actors *actors.Actors
}

func (o *overwrite) Setup(t *testing.T) []framework.Option {
	o.actors = actors.New(t,
		actors.WithActorTypes("abc"),
	)
	return []framework.Option{
		framework.WithProcesses(o.actors),
	}
}

func (o *overwrite) Run(t *testing.T, ctx context.Context) {
	o.actors.WaitUntilRunning(t, ctx)

	gclient := o.actors.GRPCClient(t, ctx)
	_, err := gclient.RegisterActorReminder(ctx, &rtv1.RegisterActorReminderRequest{
		ActorType: "abc",
		ActorId:   "123",
		Name:      "reminder1",
		DueTime:   "24h",
	})
	require.NoError(t, err)
	_, err = gclient.RegisterActorReminder(ctx, &rtv1.RegisterActorReminderRequest{
		ActorType: "abc",
		ActorId:   "123",
		Name:      "reminder1",
		DueTime:   "48h",
	})
	require.NoError(t, err)

	hclient := client.HTTP(t)
	url := o.actors.Daprd().ActorReminderURL("abc", "123", "helloworld")
	body := `{"data":"reminderdata","dueTime":"1s","period":"R5/PT1S"}`
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, strings.NewReader(body))
	require.NoError(t, err)
	resp, err := hclient.Do(req)
	require.NoError(t, err)
	if !assert.Equal(t, http.StatusNoContent, resp.StatusCode) {
		b, err := io.ReadAll(resp.Body)
		require.NoError(t, err)
		t.Logf("response: %s", string(b))
	}
	require.NoError(t, resp.Body.Close())

	req, err = http.NewRequestWithContext(ctx, http.MethodPost, url, strings.NewReader(body))
	require.NoError(t, err)
	resp, err = hclient.Do(req)
	require.NoError(t, err)
	if !assert.Equal(t, http.StatusNoContent, resp.StatusCode) {
		b, err := io.ReadAll(resp.Body)
		require.NoError(t, err)
		t.Logf("response: %s", string(b))
	}
	assert.Equal(t, http.StatusNoContent, resp.StatusCode)
	require.NoError(t, resp.Body.Close())
}
Images
docker.io/daprio/daprd@sha256:656985c29345af48104a3ed9fe6ee8aaf48688c6abf2f1647e084ca6d5de256b
docker.io/daprio/placement@sha256:bbef3f541c279d9adceaa7d99eaa3cbc956b3117726334e769ccf0a7a05d0150
Rust Dapr SDK: 0.15.1
App error:
GrpcError(GrpcError { _status: Status { code: Internal, message: "error invoke actor method: error from actor service: (500) Error registering reminder", metadata: MetadataMap { headers: {"content-type": "application/grpc", "grpc-trace-bin": "AAD9uuo9HFmlWyFpeqaVhmG8AccPtI09CuqMAgE"} }, source: None } })
App logs: Error registering reminder: GrpcError(GrpcError { _status: Status { code: Internal, message: "error creating actor reminder: rpc error: code = AlreadyExists desc = job already exists: 'actorreminder||drasi-system||default.View||freezerx||ping'", metadata: MetadataMap { headers: {"content-type": "application/grpc", "grpc-trace-bin": "AAA8UpoEFn/ZIhqGqmiQMeaHAeFdYLtX2kgUAgE"} }, source: None } }) [2025-10-05T16:41:03Z ERROR view_svc::view_actor] Error registering reminder: Method error: GrpcError(GrpcError { _status: Status { code: Internal, message: "error creating actor reminder: rpc error: code = AlreadyExists desc = job already exists: 'actorreminder||drasi-system||default.View||freezerx||ping'", metadata: MetadataMap { headers: {"content-type": "application/grpc", "grpc-trace-bin": "AAA8UpoEFn/ZIhqGqmiQMeaHAeFdYLtX2kgUAgE"} }, source: None } })
Daprd logs: time="2025-10-05T16:39:07.277158554Z" level=error msg="Error scheduling reminder job ping due to: rpc error: code = AlreadyExists desc = job already exists: 'actorreminder||drasi-system||default.View||freezerx||ping'" app_id=default-view-svc instance=default-view-svc-5bd54d6db9-gfjtd scope=dapr.runtime.actor.reminders.scheduler type=log ver=1.15.12
@danielgerlag is the Dapr control plane (scheduler) running with version v1.16?
Hello, I am running Dapr 1.16.2 and have the same problem.
But the scheduler sidecar (dapr.runtime.scheduler.cluster) seems to be running 1.15.9 (see the last line of the log below: app_id=xxxapi instance=xxxapi-deployment-544ddbb8f7-fgb5h scope=dapr.runtime.scheduler.cluster type=log ver=1.15.9-msft.1)
Every service seems to be running 1.16.2 on Dapr dashboard
2025-11-06T16:29:52.023Z time="2025-11-06T16:29:52.023668899Z" level=error msg="failed to invoke scheduled actor reminder named: ReminderPerteCommunication due to: error from actor service: (500) Dapr.DaprApiException: error creating actor reminder: rpc error: code = AlreadyExists desc = job already exists: 'actorreminder||xxx-dev||EqActor||XX000503||ReminderPerteCommunication'
at Dapr.Actors.DaprHttpInteractor.SendAsyncHandleUnsuccessfulResponse(Func`1 requestFunc, String relativeUri, CancellationToken cancellationToken)
at Dapr.Actors.DaprHttpInteractor.SendAsync(Func`1 requestFunc, String relativeUri, CancellationToken cancellationToken)
at Dapr.Actors.Runtime.DefaultActorTimerManager.RegisterReminderAsync(ActorReminder reminder)
at Dapr.Actors.Runtime.Actor.RegisterReminderAsync(ActorReminderOptions options)
at Dapr.Actors.Runtime.Actor.RegisterReminderAsync(String reminderName, Byte[] state, TimeSpan dueTime, TimeSpan period)
at xxx.GA.Azure.EtatEquipementService.Actors.EqActor.OnActivateAsync() in /src/Services/Azure/xxx.GA.Azure.EtatEquipementService/Actors/EqActor.cs:line 74
at Dapr.Actors.Runtime.Actor.OnActivateInternalAsync()
at Dapr.Actors.Runtime.ActorManager.ActivateActorAsync(ActorId actorId)
at Dapr.Actors.Runtime.ActorManager.ActivateActorAsync(ActorId actorId)
at Dapr.Actors.Runtime.ActorManager.DispatchInternalAsync[T](ActorId actorId, ActorMethodContext actorMethodContext, Func`3 actorFunc, CancellationToken cancellationToken)
at Dapr.Actors.Runtime.ActorManager.FireReminderAsync(ActorId actorId, String reminderName, Stream requestBodyStream, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Builder.ActorsEndpointRouteBuilderExtensions.<>c__DisplayClass4_0.<b__0>d.MoveNext()
--- End of stack trace from previous location ---
at xxx.GA.Azure.Services.ApplicationBuilderExtensions.<>c__DisplayClass1_0.<b__0>d.MoveNext() in /src/Librairies/Azure/xxx.GA.Azure.Services/ApplicationBuilderExtensions.cs:line 46
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)
at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)
at Microsoft.AspNetCore.Localization.RequestLocalizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddlewareImpl.Invoke(HttpContext context)
HEADERS
=======
Host: 127.0.0.1:80
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
Content-Type: application/json
Content-Length: 36
" app_id=xxxapi instance=xxxapi-deployment-544ddbb8f7-fgb5h scope=dapr.runtime.scheduler.cluster type=log ver=1.15.9-msft.1
I dug a little more and found out that the sidecar images were not updated; they were still running 1.15.9. Previous updates had successfully updated the sidecar images, but not this time with 1.16. I think this is the real source of the problem.
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | findstr "1.15"
a-deployment-7bcf67ccf-f66vf: xxx.azurecr.io/a:1.7.5.0-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
e-deployment-745b8f459-vqlzz: xxx.azurecr.io/e:1.7.5.0-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
f-deployment-5dbfff9876-6k27w: xxx.azurecr.io/f:1.7.4.2-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
f-deployment-5dbfff9876-xqdc5: xxx.azurecr.io/f:1.7.4.2-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
fl-deployment-865b6f-8hszr: xxx.azurecr.io/fl:1.7.5.0-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
i-deployment-844999c6ff-w6tlr: xxx.azurecr.io/i:1.7.5.0-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
in-deployment-86cb8b4575-ff2hx: xxx.azurecr.io/in:1.7.5.0-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
in-deployment-86cb8b4575-tsvxb: xxx.azurecr.io/in:1.7.5.0-linux, mcr.microsoft.com/daprio/daprd:1.15.9-msft.1,
@javier-aliaga the possible root problem is that an update from 1.15 to 1.16 does not update the sidecar images of the deployments.
Which version of Dapr are you using? If 1.16, can you check with kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' whether there are still sidecar images that were not updated?