superset db upgrade not working
Bug description
Getting this error when running the `superset db upgrade` command:

```
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "extra": syntax error
[SQL: ALTER TABLE report_schedule RENAME extra TO extra_json]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
```
Screenshots/recordings
```
Loaded your LOCAL configuration at [/home/ec2-user/superset/superset_config.py]
logging was configured successfully
2024-10-17 16:55:04,332:INFO:superset.utils.logging_configurator:logging was configured successfully
2024-10-17 16:55:04,339:INFO:root:Configured event logger of type <class 'superset.utils.log.DBEventLogger'>
/home/ec2-user/superset/superset/superset/superset-env/lib/python3.11/site-packages/flask_limiter/extension.py:333: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
  warnings.warn(
WARNI [alembic.env] SQLite Database support for metadata databases will be removed in a future version of Superset.
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 409c7b420ab0 -> ffa79af61a56, rename report_schedule.extra to extra_json
Traceback (most recent call last):
  File "/home/ec2-user/superset/superset/superset/superset-env/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
    self.dialect.do_execute(
  File "/home/ec2-user/superset/superset/superset/superset-env/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlite3.OperationalError: near "extra": syntax error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ec2-user/superset/superset/superset/superset-env/bin/superset", line 33, in
```
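One possibly relevant detail (my own observation, not part of the original report): SQLite only learned to rename columns in version 3.25.0 (2018), and on older system SQLite libraries the migration's `RENAME` statement fails with exactly this kind of syntax error. A quick sketch to check which SQLite library your Python is linked against:

```python
import sqlite3

def supports_rename_column(version_string):
    # SQLite gained "ALTER TABLE ... RENAME COLUMN" in 3.25.0 (2018-09-15);
    # before that, any column-rename statement is a syntax error.
    version = tuple(int(part) for part in version_string.split("."))
    return version >= (3, 25, 0)

# sqlite3.sqlite_version is the version of the underlying C library,
# not of the Python module.
print(sqlite3.sqlite_version, supports_rename_column(sqlite3.sqlite_version))
```

If this prints `False`, the metadata database (or the SQLite library) needs upgrading regardless of anything else.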
Superset version
master / latest-dev
Python version
3.11
Node version
16
Browser
Chrome
Additional context
No response
Checklist
- [X] I have searched Superset docs and Slack and didn't find a solution to my problem.
- [X] I have searched the GitHub issue tracker and didn't find a similar bug report.
- [X] I have checked Superset's logs for errors and if I found a relevant Python stacktrace, I included it here as text in the "additional context" section.
There are similar issues that have been reported and resolved. One such issue is "Failed run with 'superset db upgrade'", which was closed. The suggested solutions for this issue are:

- Replace SQLite with PostgreSQL 15 as the database backend. Update the `SQLALCHEMY_DATABASE_URI` to use PostgreSQL: `SQLALCHEMY_DATABASE_URI = 'postgresql://postgres:postgres@localhost/postgres'`. This solution was confirmed to work fine.
- Alternatively, manually modify the migration script. Comment out the methods in the file `/usr/local/lib/python3.8/site-packages/superset/migrations/versions/2022-07-11_11-26_ffa79af61a56_rename_report_schedule_extra_to_extra_.py`, then manually create the `report_schedule` table and change the column from `extra` to `extra_json`.

These solutions address the limitations of SQLite when executing ALTER TABLE commands, which is the root cause of the error you're encountering.
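As a concrete illustration of the first suggestion, a minimal `superset_config.py` entry pointing the metadata database at PostgreSQL could look like the sketch below; the user, password, host, port, and database name are all placeholders for your own setup:

```python
# superset_config.py -- point Superset's metadata database at PostgreSQL
# instead of SQLite. Every value in the URI below is a placeholder.
SQLALCHEMY_DATABASE_URI = (
    "postgresql://superset_user:superset_pass@localhost:5432/superset_meta"
)
```

After changing the URI, the new PostgreSQL database starts empty, so `superset db upgrade` and `superset init` have to be run against it.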
To continue talking to Dosu, mention @dosu.
@dosu I tried the PostgreSQL method. It is still not working.
@dosu can you point this to someone who has expertise on this?
If the PostgreSQL method didn't work, ensure that you've correctly configured the SQLALCHEMY_DATABASE_URI in your superset_config.py file with the correct PostgreSQL credentials and details. Also, make sure that the PostgreSQL server is running and accessible from your Superset instance.
If the configuration is correct and the issue persists, you might need to manually adjust the migration script. Locate the migration script at:

`/usr/local/lib/python3.8/site-packages/superset/migrations/versions/2022-07-11_11-26_ffa79af61a56_rename_report_schedule_extra_to_extra_.py`
Comment out the methods related to renaming the column, and manually create the report_schedule table with the extra_json column instead of extra.
These steps should help resolve the migration issue. If you continue to face problems, double-check the PostgreSQL setup and ensure that all dependencies are correctly installed.
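For illustration only: "commenting out the methods" would leave the migration file as a no-op, roughly like the sketch below. The revision identifiers are the ones visible in the log earlier in this thread; the rest of the file's contents should be kept as-is.

```python
# 2022-07-11_11-26_ffa79af61a56_rename_report_schedule_extra_to_extra_.py
# (sketch of the file reduced to a no-op)
revision = "ffa79af61a56"
down_revision = "409c7b420ab0"

def upgrade():
    # The original body renamed report_schedule.extra to extra_json;
    # skipped here because the column is being handled manually instead.
    pass

def downgrade():
    # The original body renamed extra_json back to extra; also skipped.
    pass
```

Note that Alembic will still record `ffa79af61a56` as applied when this runs, which is the point: the bookkeeping advances while the rename is done by hand.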
This is a tough nut to crack, could really use your help on this, @rusackas.
To continue talking to Dosu, mention @dosu.
@rusackas can you please help me with this? I am unable to run this command, and I need to productionize the data ASAP. I would be grateful for your help.
@dosu how do I manually create the report_schedule table with the extra_json column instead of extra?
To manually create the report_schedule table with the extra_json column instead of extra, you can use the following SQL command. This assumes you are using PostgreSQL:
```sql
CREATE TABLE report_schedule (
    id SERIAL PRIMARY KEY,
    type VARCHAR(50) NOT NULL,
    name VARCHAR(150) NOT NULL UNIQUE,
    description TEXT,
    context_markdown TEXT,
    active BOOLEAN DEFAULT TRUE,
    crontab VARCHAR(50) NOT NULL,
    sql TEXT,
    chart_id INTEGER,
    dashboard_id INTEGER,
    database_id INTEGER,
    last_eval_dttm TIMESTAMP,
    last_state VARCHAR(50),
    last_value FLOAT,
    last_value_row_json TEXT,
    validator_type VARCHAR(100),
    validator_config_json TEXT DEFAULT '{}',
    log_retention INTEGER DEFAULT 90,
    grace_period INTEGER DEFAULT 14400,
    created_on TIMESTAMP,
    changed_on TIMESTAMP,
    created_by_fk INTEGER,
    changed_by_fk INTEGER,
    extra_json JSONB, -- Use JSONB for storing JSON data
    FOREIGN KEY (chart_id) REFERENCES slices(id),
    FOREIGN KEY (dashboard_id) REFERENCES dashboards(id),
    FOREIGN KEY (database_id) REFERENCES dbs(id),
    FOREIGN KEY (created_by_fk) REFERENCES ab_user(id),
    FOREIGN KEY (changed_by_fk) REFERENCES ab_user(id)
);
```
This command creates the report_schedule table with a JSONB column named extra_json. Make sure to adjust the foreign key references (slices, dashboards, dbs, ab_user) to match your existing schema.
If you continue to face issues, a maintainer has already been looped in to assist further.
To continue talking to Dosu, mention @dosu.
@rusackas can you please help me on this
@dosu can you link me to other expert on this
@dosu Now I am getting this error:

```
(superset-env1) [root@ip-192-168-3-20 superset]# superset db upgrade
Loaded your LOCAL configuration at [/home/ec2-user/superset_project/superset_config.py]
logging was configured successfully
2024-10-21 12:34:39,107:INFO:superset.utils.logging_configurator:logging was configured successfully
2024-10-21 12:34:39,119:INFO:root:Configured event logger of type <class 'superset.utils.log.DBEventLogger'>
/home/ec2-user/superset_project/superset-env1/lib/python3.11/site-packages/flask_limiter/extension.py:333: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
  warnings.warn(
WARNI [alembic.env] SQLite Database support for metadata databases will be removed in a future version of Superset.
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
ERROR [flask_migrate] Error: Can't locate revision identified by '74688be0db29'
```
The error "Can't locate revision identified by '74688be0db29'" typically occurs when there is a mismatch between the migration scripts and the current state of the database. Here are some steps you can take to resolve this issue:

1. **Check migration history**: Ensure that your local migration scripts are up-to-date with the repository. Sometimes, missing or outdated migration scripts can cause this error.
2. **Verify the `alembic_version` table**: Check the `alembic_version` table in your database to see if it contains the correct revision identifiers. This table should match the latest migration script applied.
3. **Reset migrations**: If the above steps don't resolve the issue, you might need to reset your migrations. This involves:
   - Backing up your database.
   - Dropping the `alembic_version` table.
   - Re-running the migrations from scratch using `superset db upgrade`.
4. **Manual intervention**: If the error persists, you may need to manually adjust the migration scripts or the database schema to align them. This can involve creating or modifying tables and columns to match the expected state.
5. **Persistent storage for Flask-Limiter**: Additionally, address the warning about using in-memory storage for Flask-Limiter by configuring a persistent storage backend like Redis or MongoDB. This is crucial for production environments to ensure reliable rate limit tracking.

These steps should help you resolve the migration error and ensure your Superset instance is correctly configured.
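To make the `alembic_version` inspection and reset concrete, here is a self-contained sketch of what those operations amount to, run against a throwaway in-memory SQLite database rather than the real metadata database (back up first if you do this for real):

```python
import sqlite3

# Throwaway in-memory DB that mimics Alembic's bookkeeping table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('74688be0db29')")

# Inspect: see which revision Alembic thinks is applied.
(stored_revision,) = conn.execute(
    "SELECT version_num FROM alembic_version"
).fetchone()
print(stored_revision)  # 74688be0db29

# Reset: dropping the table makes the next `superset db upgrade`
# replay the migration history from scratch.
conn.execute("DROP TABLE alembic_version")
```

Against the real metadata database you would run the same `SELECT` and `DROP TABLE` statements through your database client instead of an in-memory connection.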
To continue talking to Dosu, mention @dosu.
@rusackas can you please help me on this
@dosu where do I find the alembic_version table?
The alembic_version table is typically found in the same database that Superset is using for its metadata. You can locate it by connecting to your database (PostgreSQL, SQLite, etc.) using a database client or command-line tool and running a query to list all tables. Once connected, you can execute:
```sql
SELECT * FROM alembic_version;
```
This will show you the current revision identifier stored in the database. Ensure that this matches the latest migration script in your Superset installation. If there are discrepancies, you might need to adjust the database or migration scripts accordingly.
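One way to check for such discrepancies: the revision id is embedded in each migration filename under `superset/migrations/versions`, so it can be extracted and compared against the stored value. A sketch with a hypothetical helper, assuming filenames shaped like the one mentioned earlier in this thread (date prefix, then the revision id; older Superset migrations may not have the date prefix):

```python
import re

# Hypothetical helper: pull the revision id out of a migration filename,
# e.g. "2022-07-11_11-26_ffa79af61a56_rename_report_schedule_extra_to_extra_.py".
def revision_from_filename(filename):
    match = re.match(r"\d{4}-\d{2}-\d{2}_\d{2}-\d{2}_([0-9a-f]+)_", filename)
    return match.group(1) if match else None

print(revision_from_filename(
    "2022-07-11_11-26_ffa79af61a56_rename_report_schedule_extra_to_extra_.py"
))  # ffa79af61a56
```

If the revision stored in `alembic_version` doesn't appear in any filename in that directory, the installed migration scripts and the database are out of sync.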
To continue talking to Dosu, mention @dosu.
@dosu what's the path for the .db file?
The path to the .db file depends on your Superset configuration. If you're using SQLite as the metadata database, check your superset_config.py for the SQLALCHEMY_DATABASE_URI setting, which points at the SQLite file. If it isn't set explicitly, Superset falls back to its default of superset.db inside the Superset home directory (~/.superset, or whatever SUPERSET_HOME points to). If you're using a different database like PostgreSQL, there won't be a .db file, as the data is stored in the PostgreSQL server.
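As an illustration of reading that setting: for SQLite, the file path is whatever follows the `sqlite:///` prefix in the URI (three slashes for a relative path, four for an absolute one). A small sketch with a placeholder URI:

```python
# Placeholder URI; the real one comes from SQLALCHEMY_DATABASE_URI in
# superset_config.py. Four slashes after "sqlite:" mean an absolute path.
uri = "sqlite:////home/ec2-user/superset/superset.db"

prefix = "sqlite:///"
if uri.startswith(prefix):
    db_path = uri[len(prefix):]
    print(db_path)  # /home/ec2-user/superset/superset.db
```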
To continue talking to Dosu, mention @dosu.