
MySql2::Error losing data, raises with error_number(0) and sql_state(00000) on write?

Open · coldnebo opened this issue 4 years ago · 0 comments

Hi, I'm running into a weird issue on Rails 5 under docker-compose, and I'm not sure what it means or how to debug it.

I get random Mysql2::Error raises on ActiveRecord saves (writes), but the error_number is 0 and the sql_state is '00000'. The failed writes lose data, which seems bad.

I've specified reconnect: true in database.yml, with no effect.
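
(For what it's worth, here's a quick sanity check I can run in rails console to confirm the option is actually picked up from database.yml; just a sketch, assuming the default connection and the Rails 5 connection_config API:)

cfg = ActiveRecord::Base.connection_config
puts cfg[:reconnect].inspect                    # expecting true here
puts cfg[:pool].inspect                         # nil means the pool default of 5
puts ActiveRecord::Base.connection_pool.size    # effective pool size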

Could the multithreaded code be the problem?
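
(The test spawns plain Threads that hit ActiveRecord directly; a variant I haven't tried yet would check a connection out of the pool explicitly for each thread, roughly like this sketch, reusing the posts and proc from the test case below:)

threads = posts.map do |p|
  Thread.new do
    # explicitly reserve a connection from the pool for this thread's block
    # instead of relying on the implicit per-thread checkout
    ActiveRecord::Base.connection_pool.with_connection do
      proc.call   # same read/update body as in the test case below
    end
  end
end
threads.each(&:join)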

Any thoughts?

setup

docker-compose.yml:

version: '3.1'
services:

  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./tmp/db:/var/lib/mysql

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

  web:
    build: 
      context: .
      dockerfile: rails-dockerfile
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db

Using this as a test case:

require 'test_helper'

class PerformanceStoriesTest < ActiveSupport::TestCase
  test "stress the database with reads and writes" do
    posts = 10.times.map{|n| 
      Post.create(title: "post number #{n}", 
                  body:"sample post") 
    }

    posts.each{|post|
      comments = 10.times.map{|n|
        post.comments.create(post_id: post.id, body: "comment #{n}")
      }
    }

    proc = Proc.new do
      Thread.current[:ok_writes] = 0
      Thread.current[:failed_writes] = 0
      100.times.map do |n|
        post = posts.sample
        #puts "reading post #{post.id}"
        # read a post
        Post.where(id: post.id).to_a 
        # read its comments
        comments = post.comments.to_a
        # update one of the comments
        comment = comments.sample
        orig_comment_body = comment.body
        #puts "reading/updating comment #{comment.id} from post #{comment.post_id}"
        comment.body = comment.body + "+"
        begin 
          comment.save
          Thread.current[:ok_writes] += 1
          print '-'
        rescue Mysql2::Error => e 
          Thread.current[:failed_writes] += 1
          print "\ndata loss: '#{comment.body}', kept '#{orig_comment_body}'; (#{e.error_number},#{e.sql_state})"
        end
      end
    end

    threads = posts.map{|p| Thread.new(&proc)}
    metric = threads.reduce({ok_writes: 0, failed_writes: 0}) {|acc, thread| 
      thread.join
      acc[:ok_writes] += thread[:ok_writes].nil? ? 0 : thread[:ok_writes]
      acc[:failed_writes] += thread[:failed_writes].nil? ? 0 : thread[:failed_writes]
      acc
    }
    puts 
    puts metric.inspect
    puts "data loss: %% %0.2f" % [100.0 * metric[:failed_writes].to_f/metric[:ok_writes].to_f]

  end
end
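
(Since error_number and sql_state come back empty, the only other clue is probably the exception message; a small variant of the rescue block that also dumps e.class and e.message would look like this, sketch only:)

rescue Mysql2::Error => e
  Thread.current[:failed_writes] += 1
  # error_number/sql_state are empty, so print the class and message too,
  # in case the message carries the real cause
  print "\ndata loss: '#{comment.body}', kept '#{orig_comment_body}'; #{e.class}: #{e.message}"
end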

I get these types of errors during my perf test:

$ docker-compose run web rake test:integration
Starting dbresearch_db_1_98ef47d7a106 ... done
2020-06-15 21:37:28 WARN Selenium [DEPRECATION] Selenium::WebDriver::Chrome#driver_path= is deprecated. Use Selenium::WebDriver::Chrome::Service#driver_path= instead.
Run options: --seed 12016

# Running:

----------------------------------------------
data loss: 'comment 7+', kept 'comment 7'; (0,00000)--------------
data loss: 'comment 2++', kept 'comment 2'; (0,00000)------------------
data loss: 'comment 5++', kept 'comment 5+'; (0,00000)-----------------
data loss: 'comment 2+++', kept 'comment 2++'; (0,00000)--------------------------------------------------------------------------
data loss: 'comment 6+++', kept 'comment 6++'; (0,00000)-------------------------------------------------------
data loss: 'comment 3++', kept 'comment 3+'; (0,00000)----------------------
data loss: 'comment 3++', kept 'comment 3+'; (0,00000)------------------------------------------------------------------------
data loss: 'comment 0+++', kept 'comment 0++'; (0,00000)--
data loss: 'comment 2++++', kept 'comment 2+++'; (0,00000)---------------------------------------------
data loss: 'comment 6+++', kept 'comment 6++'; (0,00000)------------------------------------
data loss: 'comment 7++++++', kept 'comment 7+++++'; (0,00000)----------
data loss: 'comment 9+++++', kept 'comment 9++++'; (0,00000)---------------------------------------------------------------------------------------------------------------------------------------------------------
data loss: 'comment 9++++', kept 'comment 9+++'; (0,00000)-------------------------------------------------------------------------------
data loss: 'comment 8+++++++', kept 'comment 8++++++'; (0,00000)-----------------------------------------------------------------------------------
data loss: 'comment 7++++++++', kept 'comment 7+++++++'; (0,00000)---------------
data loss: 'comment 1+++++++', kept 'comment 1++++++'; (0,00000)-------------------------------------------
data loss: 'comment 7+++++++++', kept 'comment 7++++++++'; (0,00000)---
data loss: 'comment 9+++++++++', kept 'comment 9++++++++'; (0,00000)
data loss: 'comment 2++++++++', kept 'comment 2+++++++'; (0,00000)
data loss: 'comment 3+++++++', kept 'comment 3++++++'; (0,00000)-----------------------------------------------------------------
data loss: 'comment 7+++++++++', kept 'comment 7++++++++'; (0,00000)------------
data loss: 'comment 2+++++++++++', kept 'comment 2+++++++++'; (0,00000)--------------------------------------------------------------------------------------------------------------
data loss: 'comment 7++++++++++++', kept 'comment 7+++++++++++'; (0,00000)---
{:ok_writes=>977, :failed_writes=>23}
data loss: % 2.35
.

Finished in 2.919079s, 0.3426 runs/s, 0.0000 assertions/s.
1 runs, 0 assertions, 0 failures, 0 errors, 0 skips

coldnebo · Jun 15 '20 21:06