solid_queue
Mongoid data duplicated
```ruby
record(param)&.update!(params(param)) || RecordClass.create!(params(param))

def record(param)
  RecordClass.find_by(
    field1: 'a',
    field2: 'b',
    field3: 'c',
    field4: [param]
  )
end
```
I have a background job to create or update a record, as shown in the example above. The code is roughly like that, but a new record is always created even if matching data already exists in the database.
NOTE: for the first few jobs it updates, but then new records suddenly start getting created.
However, once I change the job to perform_now, the data is updated as expected.
I use Mongoid and Solid Queue.
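For reference, a compound unique index on the lookup fields would at least turn the silent duplicates into write errors and make the problem visible. A minimal Mongoid sketch, reusing the placeholder model and field names from the snippet above and assuming those four fields are meant to uniquely identify a record:

```ruby
class RecordClass
  include Mongoid::Document

  field :field1, type: String
  field :field2, type: String
  field :field3, type: String
  field :field4, type: Array

  # Hypothetical guard: reject a second document with the same lookup values.
  # Build it with `rake db:mongoid:create_indexes` after deploying.
  index({ field1: 1, field2: 1, field3: 1, field4: 1 }, unique: true)
end
```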
Hey @KinWang-2013, this doesn't seem to be an issue in Solid Queue, as far as I can see 🤔 Does this happen when using a different Active Job backend?
I did not try other Active Job backends. We had another project using Sidekiq with Mongoid, and it worked there.
I came across #276, which seemed related, as the task was working when I used perform_now but not with perform_later.
Thank you for the response.
So, to be clear, Solid Queue doesn't do anything special about the code you run in your job. I assume that there's some kind of race condition here that perhaps it's noticeable with Solid Queue because it's slower than Sidekiq (assuming the other project where you're using Sidekiq is similar to this, which might not be at all).
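For what it's worth, the find-then-create in the snippet above is a classic check-then-act race: two workers can both get nil back from find_by and both call create!. One way to collapse it into a single atomic write on the MongoDB side is an upsert. A minimal sketch, reusing the placeholder names and the params(param) helper from the original snippet, and assuming Mongoid's find_one_and_update passes the upsert option through to the driver (note this bypasses Mongoid validations and callbacks):

```ruby
class UpsertRecordJob < ApplicationJob
  # Hypothetical job; `params(param)` stands for the attributes helper
  # from the original snippet.
  def perform(param)
    RecordClass
      .where(field1: 'a', field2: 'b', field3: 'c', field4: [param])
      .find_one_and_update(
        { "$set" => params(param) },
        { upsert: true, return_document: :after }
      )
  end
end
```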
I will try testing on the staging server and leave an update here. The weird thing is that when I put a pry breakpoint in the server, this does not happen; it only happens when the job runs in the background.
Hello @rosa, this seems to be fixed, although I did not change anything; it resolved itself over the following days.
Also, I ran into another issue today. I use the SOLID_QUEUE_IN_PUMA env var to run Solid Queue with Puma, and the interesting thing on my staging server is that the code changes I deploy apply to the Puma process, but the Solid Queue processes keep running the old code. I feel the issue above was also because of this.
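For context, SOLID_QUEUE_IN_PUMA only takes effect through Puma's Solid Queue plugin: the Solid Queue supervisor (and the workers it forks) run inside the Puma process, so they only pick up new code once that Puma process is restarted on the freshly deployed release. A minimal config/puma.rb sketch of how that env var is typically wired up (the thread and port values are placeholders):

```ruby
# config/puma.rb
threads 3, 3
port ENV.fetch("PORT", 3000)

# Start the Solid Queue supervisor inside Puma when SOLID_QUEUE_IN_PUMA is set.
# The supervisor forks the workers/dispatchers from this process, so a stale
# Puma process means stale job code as well.
plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]
```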
Oh, interesting... how are you handling deploys and signaling a restart to the puma process?
I am deploying with GitHub Actions using Docker, and signaling restarts with supervisord.
Hey @KinWang-2013, could you share more details about the signals you're sending with supervisord?
```ini
[supervisord]
nodaemon=true
user=root

[program:puma]
command=bundle exec puma -b tcp://0.0.0.0 -p 3000
directory=/app
autostart=true
autorestart=false
stopasgroup=true
stopsignal=TERM
```