
API: inserting a row into a table fails with `duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"`

Sylvain303 opened this issue

Steps to reproduce

Probably after migration #999

I was using the API to insert some rows into a table when this started to happen.

I don't know how to reproduce it. It may be related to the migration occ tables:legacy:transfer:rows --all

I deleted the tables (I don't know how to recreate them identically, maybe from the previous version) and started recreating the table structure manually. This seems to fix the problem.

Some errors:

occ tables:legacy:transfer:rows --all
Look for tables
Found 8 table(s)

-- Start transfer for table 1 (Tutoriel ) [1/8]
---- Found 4 columns
---- Found 5 rows
⚠️  Could not transfer data. Continue with next table. The logs will have more information about the error: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"
DETAIL:  Key (id)=(1) already exists.

-- Start transfer for table 2 (Clients) [2/8]
---- Found 13 columns
---- Found 1003 rows
⚠️  Could not transfer data. Continue with next table. The logs will have more information about the error: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"
DETAIL:  Key (id)=(9) already exists.

-- Start transfer for table 3 (Membres) [3/8]
---- Found 5 columns
---- Found 1 rows
⚠️  Could not transfer data. Continue with next table. The logs will have more information about the error: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"
DETAIL:  Key (id)=(1011) already exists.

-- Start transfer for table 4 (Nouveaux Clients) [4/8]
---- Found 4 columns
---- Found 8 rows
⚠️  Could not transfer data. Continue with next table. The logs will have more information about the error: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"
DETAIL:  Key (id)=(1046) already exists.

-- Start transfer for table 5 (Tutorial) [5/8]
---- Found 4 columns
---- Found 5 rows
⚠️  Could not transfer data. Continue with next table. The logs will have more information about the error: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"
DETAIL:  Key (id)=(2013) already exists.

-- Start transfer for table 6 (Rapport création Clients) [6/8]
---- Found 7 columns
---- Found 1 rows
⚠️  Could not transfer data. Continue with next table. The logs will have more information about the error: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint "oc_tables_row_sleeves_pkey"
DETAIL:  Key (id)=(2020) already exists.

-- Start transfer for table 7 (Création 2) [7/8]
---- Found 4 columns
---- Found 0 rows
---- ✅  All rows transferred.

-- Start transfer for table 8 (bug) [8/8]
---- Found 1 columns
---- Found 0 rows
---- ✅  All rows transferred.

Expected behavior

No error on the API call; the record should be inserted. No data corruption in other tables.

Actual behavior

API call

import requests
import json
from dotenv import load_dotenv
import os

load_dotenv()  # take environment variables from .env.
username = "admin"
password = os.getenv('ADMIN_PASSWORD')
nextcloud_url = os.getenv('NEXTCLOUD_URL')

# Row payload: the keys ("23", "24", "25") are the column ids of the target table.
data = {
    "data":
#    {31: 'Julien', 32: 'Boudichon', 33: '[email protected]', 34: 'julien.boudichon', 35: '# Création en cours...', 36: 0, 37: 1046}
        {
            "23": "Ivan",
            "24": "Dutest",
            "25": "[email protected]"
        }
}
tableId = 4
url_path = f"/index.php/apps/tables/api/1/tables/{tableId}/rows"
response = requests.post(
    f"{nextcloud_url}{url_path}",
    headers={
        "OCS-APIRequest": "true",
        "accept": "application/json",
    },
    auth=(username, password),
    json=data,
)

print(json.dumps(response.json(), indent=2))

The error:

python t.py 
{
  "message": "OCA\\Tables\\Service\\RowService - create: An exception occurred while executing a query: SQLSTATE[23505]: Unique violation: 7 ERROR:  duplicate key value violates unique constraint \"oc_tables_row_sleeves_pkey\"\nDETAIL:  Key (id)=(15) already exists."
}

Tables app version

0.7.0

Browser

Firefox 125.0

Client operating system

xubuntu 22.04

Operating system

lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 12 (bookworm)
Release:        12
Codename:       bookworm

occ status
  - installed: true
  - version: 28.0.4.1
  - versionstring: 28.0.4
  - edition: 
  - maintenance: false
  - needsDbUpgrade: false
  - productname: Nextcloud
  - extendedSupport: false

Web server

None

PHP engine version

PHP 8.2

Database

PostgreSQL

Additional info

Hope that helps...

Sylvain303 avatar Apr 29 '24 09:04 Sylvain303

Do you see a related exception in the nextcloud.log?

blizzz avatar Apr 30 '24 19:04 blizzz

Here is one line from the log with the given error. It may not be the exact same table (id 4), which may not exist anymore on my NC instance. 🤷‍♀️

nextcloud.log

Sylvain303 avatar May 02 '24 06:05 Sylvain303

Can you check your DB schema for the oc_tables_row_sleeves table? The id is actually auto-incremented by the database.

blizzz avatar May 02 '24 08:05 blizzz

Here it is.

nextcloud_database=> \d oc_tables_row_sleeves
                                           Table "public.oc_tables_row_sleeves"
    Column    |              Type              | Collation | Nullable |                      Default                      
--------------+--------------------------------+-----------+----------+---------------------------------------------------
 id           | integer                        |           | not null | nextval('oc_tables_row_sleeves_id_seq'::regclass)
 table_id     | integer                        |           | not null | 
 created_by   | character varying(64)          |           | not null | 
 created_at   | timestamp(0) without time zone |           | not null | 
 last_edit_by | character varying(64)          |           | not null | 
 last_edit_at | timestamp(0) without time zone |           | not null | 
Indexes:
    "oc_tables_row_sleeves_pkey" PRIMARY KEY, btree (id)
    "idx_dd46db80bf396750" btree (id)
nextcloud_database=> select * from oc_tables_row_sleeves;
  id  | table_id | created_by |     created_at      | last_edit_by |    last_edit_at     
------+----------+------------+---------------------+--------------+---------------------
   17 |        8 | admin      | 2024-04-29 09:17:49 | admin        | 2024-04-29 09:17:49
   18 |        9 | admin      | 2024-04-29 09:19:04 | admin        | 2024-04-29 09:19:04
   30 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   31 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   32 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   33 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   34 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   21 |        9 | admin      | 2024-04-29 09:22:32 | admin        | 2024-04-29 09:22:32
 2013 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2014 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2015 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2016 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2017 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
(13 rows)

My thought is that during the upgrade process from 0.6.6 to 0.7.0, as mentioned in another issue, the data migration process changed something. There were more than 1000 rows in some tables. I suppose that the row id counter was somehow reset at some point.

As my data was only from experimenting with Tables, when I saw the errors I removed the tables I had created before and started over with new tables. This seems to remove the errors.

Sylvain303 avatar May 03 '24 04:05 Sylvain303

Same for me on 0.7.1.

Jednadvacet avatar May 03 '24 05:05 Jednadvacet

OK, could you also check what this gives you?

select * from oc_tables_row_sleeves_id_seq;

I understand both of you, @Jednadvacet and @Sylvain303, use postgres? (Same here, but I am not facing the same issue.)

blizzz avatar May 03 '24 07:05 blizzz

Yes, I am using postgres.

 last_value | log_cnt | is_called
------------+---------+-----------
          9 |      32 | t

Jednadvacet avatar May 03 '24 08:05 Jednadvacet

@Jednadvacet as immediate cure, you can change the sequence:

ALTER SEQUENCE oc_tables_row_sleeves_id_seq RESTART WITH 99;

Replace 99 with the highest ID you have plus 1.
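
If you prefer not to look up the max id by hand, a roughly equivalent one-liner (a sketch, assuming PostgreSQL and that oc_tables_row_sleeves_id_seq is the sequence behind the id column) is:

-- Sketch: set the sequence from the current max id so the next nextval() returns an unused value.
SELECT setval('oc_tables_row_sleeves_id_seq',
              COALESCE((SELECT MAX(id) FROM oc_tables_row_sleeves), 1));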

It's a good question why the counter was reset though, that's odd.

blizzz avatar May 03 '24 09:05 blizzz

It's a good question why the counter was reset though, that's odd.

Okay, so when migrating the data to the new structure, we also set the id explicitly when inserting into oc_tables_row_sleeves. But that does not lead to an update of the PostgreSQL sequence; it stays at 0.
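
A minimal illustration of that behaviour (a sketch with a made-up demo table, not the actual migration code):

-- A serial column gets its default from an implicitly created sequence (demo_id_seq).
CREATE TABLE demo (id serial PRIMARY KEY, val text);
-- Migration-style insert: the id is set explicitly, so the sequence is never consumed.
INSERT INTO demo (id, val) VALUES (1, 'migrated row');
-- A normal insert then asks the sequence for its next value, gets 1, and collides:
-- ERROR: duplicate key value violates unique constraint "demo_pkey"
INSERT INTO demo (val) VALUES ('new row');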

blizzz avatar May 03 '24 09:05 blizzz

@Jednadvacet as immediate cure, you can change the sequence:

ALTER SEQUENCE oc_tables_row_sleeves_id_seq RESTART WITH 99;

Replace 99 with the highest ID you have plus 1.

It's a good question why the counter was reset though, that's odd.

Thank you very much, that looks like a quick fix. Hope I got the right max ID.

Still struggling with https://github.com/nextcloud/tables/issues/1036, though. I hoped it might be linked with this issue but it's not.

Jednadvacet avatar May 03 '24 11:05 Jednadvacet

Proposed a fix as a repair step in #1049. Once it is applied, running occ maintenance:repair should fix it.

blizzz avatar May 03 '24 13:05 blizzz

Hello,

I suppose you've found the problem now. Great that the issue helped with the diagnosis. Yes, I'm using AIO, which comes with PostgreSQL.

nextcloud_database=> select * from oc_tables_row_sleeves_id_seq;
 last_value | log_cnt | is_called 
------------+---------+-----------
         34 |      28 | t
(1 row)

max +1

select max(id) + 1 from oc_tables_rows;
 ?column? 
----------
     2033
(1 row)

apply the fix

nextcloud_database=> ALTER SEQUENCE oc_tables_row_sleeves_id_seq RESTART WITH 2033;
ALTER SEQUENCE

nextcloud_database=> select * from oc_tables_row_sleeves_id_seq;
 last_value | log_cnt | is_called 
------------+---------+-----------
       2033 |       0 | f
(1 row)

More details on the max row id per table:

nextcloud_database=> select max(id) as max_id, table_id from oc_tables_rows group by table_id;
 max_id | table_id 
--------+----------
   2020 |        6
      5 |        1
   1011 |        3
   2017 |        5
   2032 |        4
   2018 |        2
(6 rows)

But as I mentioned, I deleted some tables at first to fix the issue:

nextcloud_database=> select id, title from oc_tables_tables;
 id |  title   
----+----------
  5 | Tutorial
  8 | bug
  9 | New
 10 | Tutorial
(4 rows)

and some rows are left behind for tables that no longer exist:

nextcloud_database=> select count(*) as nb_rows, table_id from oc_tables_rows group by table_id;
 nb_rows | table_id 
---------+----------
       1 |        6
       5 |        1
       1 |        3
       5 |        5
       8 |        4
    1003 |        2
(6 rows)

Here: https://github.com/nextcloud/tables/pull/1049/files#diff-32c114fc3305c14c121415784e62230dd9209c47d99043b2efb43b3d1627abeeR45-R46

Shouldn't it be oc_tables_rows instead of tables_row_sleeves?

nextcloud_database=> select * from oc_tables_row_sleeves;
  id  | table_id | created_by |     created_at      | last_edit_by |    last_edit_at     
------+----------+------------+---------------------+--------------+---------------------
   17 |        8 | admin      | 2024-04-29 09:17:49 | admin        | 2024-04-29 09:17:49
   18 |        9 | admin      | 2024-04-29 09:19:04 | admin        | 2024-04-29 09:19:04
   30 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   31 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   32 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   33 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   34 |       10 | sylvain    | 2024-04-30 15:36:36 | sylvain      | 2024-04-30 15:36:36
   21 |        9 | admin      | 2024-04-29 09:22:32 | admin        | 2024-04-29 09:22:32
 2013 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2014 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2015 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2016 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
 2017 |        5 | clement    | 2024-04-25 06:41:36 | clement      | 2024-04-25 06:41:36
(13 rows)

which would have returned 2018 in my use case, not 2033:

nextcloud_database=> select max(id) from oc_tables_row_sleeves;
 max  
------
 2017
(1 row)

Sylvain303 avatar May 05 '24 14:05 Sylvain303