dbt-databricks

Error with subsequent `seed`-ing when `persist_docs` is enabled.

Open jeremyyeo opened this issue 1 year ago • 7 comments

Describe the bug

Running `dbt seed` a second time fails with a `The specified schema does not match the existing schema` error. On the second run, dbt drops the existing table and re-issues `create table` at the same external location, but the Delta log at that location still carries the column comments persisted by the first run, so the schema specified in the `create table` statement (which has no comment metadata) no longer matches the existing one.

Also reported in dbt-spark, where the same behavior occurs and is presumably the same underlying issue:

https://github.com/dbt-labs/dbt-spark/issues/112#issuecomment-1467277631

Steps To Reproduce

```yaml
# dbt_project.yml
...
seeds:
  my_dbt_project:
    +file_format: delta
    +location_root: /mnt/root/seeds
    +persist_docs:
      relation: true
      columns: true
```

```yaml
# seeds/schema.yml
version: 2
seeds:
  - name: person
    description: Persons
    columns:
      - name: id
        description: Id
      - name: name
        description: Name
```

```csv
# seeds/person.csv
id,name
1,alice
```

First things first, make sure DBFS is clean: `/mnt/root/seeds` must NOT contain a `person` folder, and the `person` table must not already exist:

```python
# In a notebook: clear the external location and drop any existing table
dbutils.fs.rm("/mnt/root/seeds/person", recurse=True)
spark.sql("drop table if exists dbt_jyeo.person")
```

Seed twice:

```shell
$ dbt seed && dbt seed
...
22:16:40.553952 [info ] [MainThread]: Finished running 1 seed in 0 hours 0 minutes and 15.25 seconds (15.25s).
22:16:40.555053 [debug] [MainThread]: Command end result
22:16:40.568006 [info ] [MainThread]: 
22:16:40.568743 [info ] [MainThread]: Completed with 1 error and 0 warnings:
22:16:40.569295 [info ] [MainThread]: 
22:16:40.569867 [error] [MainThread]: Runtime Error in seed person (seeds/person.csv)
22:16:40.570388 [error] [MainThread]:   The specified schema does not match the existing schema at dbfs:/mnt/root/seeds/person.
22:16:40.571047 [error] [MainThread]:   
22:16:40.571746 [error] [MainThread]:   == Specified ==
22:16:40.572349 [error] [MainThread]:   root
22:16:40.572908 [error] [MainThread]:    |-- id: long (nullable = true)
22:16:40.573643 [error] [MainThread]:    |-- name: string (nullable = true)
22:16:40.574582 [error] [MainThread]:   
22:16:40.575359 [error] [MainThread]:   
22:16:40.576049 [error] [MainThread]:   == Existing ==
22:16:40.576647 [error] [MainThread]:   root
22:16:40.577238 [error] [MainThread]:    |-- id: long (nullable = true)
22:16:40.577764 [error] [MainThread]:    |-- name: string (nullable = true)
22:16:40.578345 [error] [MainThread]:   
22:16:40.578902 [error] [MainThread]:   
22:16:40.579498 [error] [MainThread]:   == Differences ==
22:16:40.580106 [error] [MainThread]:   - Specified metadata for field id is different from existing schema:
22:16:40.580592 [error] [MainThread]:     Specified: {}
22:16:40.581117 [error] [MainThread]:     Existing:  {"comment":"Id"}
22:16:40.581649 [error] [MainThread]:   - Specified metadata for field name is different from existing schema:
22:16:40.582197 [error] [MainThread]:     Specified: {}
22:16:40.582656 [error] [MainThread]:     Existing:  {"comment":"Name"}
22:16:40.583122 [error] [MainThread]:   
22:16:40.583559 [error] [MainThread]:   If your intention is to keep the existing schema, you can omit the
22:16:40.583989 [error] [MainThread]:   schema from the create table command. Otherwise please ensure that
22:16:40.584378 [error] [MainThread]:   the schema matches.
22:16:40.593301 [info ] [MainThread]: 
22:16:40.593827 [info ] [MainThread]: Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1
```
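
After the failed second run you can confirm that the drop left the Delta transaction log behind at the external location; a hypothetical check in a notebook (the `_delta_log` folder, with the persisted comments in its metadata, survives the `drop table`):

```python
# The external location still holds the Delta transaction log,
# even though the table was dropped from the metastore
display(dbutils.fs.ls("/mnt/root/seeds/person/_delta_log"))
```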

Expected behavior

Subsequent `dbt seed` runs shouldn't error.

Screenshots and log output

```text
============================== 2023-03-14 22:15:21.083878 | d96d19f0-4b23-4bd3-acad-83d8686768a2 ==============================
22:15:21.083878 [info ] [MainThread]: Running with dbt=1.4.4
22:15:21.088022 [debug] [MainThread]: running dbt with arguments {'debug': True, 'write_json': True, 'use_colors': True, 'printer_width': 80, 'version_check': True, 'partial_parse': True, 'static_parser': True, 'profiles_dir': '/Users/jeremy/.dbt', 'send_anonymous_usage_stats': True, 'quiet': False, 'no_print': False, 'cache_selected_only': False, 'show': False, 'which': 'seed', 'rpc_method': 'seed', 'indirect_selection': 'eager'}
22:15:21.088679 [debug] [MainThread]: Tracking: tracking
22:15:21.111377 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'start', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1163390c0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x116347f40>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x116347c40>]}
22:15:21.134724 [debug] [MainThread]: checksum: 170d819e8a7f11e09c497566dd7f61e1355cb9fb514921503937b951cb4a2250, vars: {}, profile: None, target: None, version: 1.4.4
22:15:21.147978 [info ] [MainThread]: Unable to do partial parsing because a project dependency has been added
22:15:21.148558 [info ] [MainThread]: Unable to do partial parsing because a project config has changed
22:15:21.149082 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'partial_parser', 'label': 'd96d19f0-4b23-4bd3-acad-83d8686768a2', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x116345720>]}
22:15:22.145704 [debug] [MainThread]: 1699: static parser successfully parsed alpha/a.sql
22:15:22.163553 [debug] [MainThread]: 1699: static parser successfully parsed bravo/b.sql
22:15:22.247614 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'load_project', 'label': 'd96d19f0-4b23-4bd3-acad-83d8686768a2', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1164b1660>]}
22:15:22.259724 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'resource_counts', 'label': 'd96d19f0-4b23-4bd3-acad-83d8686768a2', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1163397b0>]}
22:15:22.260752 [info ] [MainThread]: Found 2 models, 0 tests, 0 snapshots, 0 analyses, 372 macros, 0 operations, 1 seed file, 0 sources, 0 exposures, 0 metrics
22:15:22.261390 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'runnable_timing', 'label': 'd96d19f0-4b23-4bd3-acad-83d8686768a2', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x11632dea0>]}
22:15:22.263607 [info ] [MainThread]: 
22:15:22.266403 [debug] [MainThread]: Acquiring new databricks connection 'master'
22:15:22.268171 [debug] [ThreadPool]: Acquiring new databricks connection 'list_schemas'
22:15:22.281878 [debug] [ThreadPool]: Using databricks connection "list_schemas"
22:15:22.283339 [debug] [ThreadPool]: On list_schemas: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "connection_name": "list_schemas"} */

    show databases
  
22:15:22.284357 [debug] [ThreadPool]: Opening a new connection, currently in state init
22:15:26.794307 [debug] [ThreadPool]: SQL status: OK in 4.51 seconds
22:15:26.824159 [debug] [ThreadPool]: On list_schemas: Close
22:15:28.112413 [debug] [ThreadPool]: Acquiring new databricks connection 'list_None_dbt_jyeo'
22:15:28.136212 [debug] [ThreadPool]: Spark adapter: NotImplemented: add_begin_query
22:15:28.137189 [debug] [ThreadPool]: Using databricks connection "list_None_dbt_jyeo"
22:15:28.137990 [debug] [ThreadPool]: On list_None_dbt_jyeo: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "connection_name": "list_None_dbt_jyeo"} */
show tables in `dbt_jyeo`
  
22:15:28.138735 [debug] [ThreadPool]: Opening a new connection, currently in state closed
22:15:30.571981 [debug] [ThreadPool]: SQL status: OK in 2.43 seconds
22:15:30.587648 [debug] [ThreadPool]: Using databricks connection "list_None_dbt_jyeo"
22:15:30.588607 [debug] [ThreadPool]: On list_None_dbt_jyeo: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "connection_name": "list_None_dbt_jyeo"} */
show views in `dbt_jyeo`
  
22:15:32.206354 [debug] [ThreadPool]: SQL status: OK in 1.62 seconds
22:15:32.214334 [debug] [ThreadPool]: On list_None_dbt_jyeo: ROLLBACK
22:15:32.215450 [debug] [ThreadPool]: Databricks adapter: NotImplemented: rollback
22:15:32.216450 [debug] [ThreadPool]: On list_None_dbt_jyeo: Close
22:15:33.574780 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'runnable_timing', 'label': 'd96d19f0-4b23-4bd3-acad-83d8686768a2', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1164b18a0>]}
22:15:33.576389 [debug] [MainThread]: Spark adapter: NotImplemented: add_begin_query
22:15:33.577377 [debug] [MainThread]: Spark adapter: NotImplemented: commit
22:15:33.579114 [info ] [MainThread]: Concurrency: 1 threads (target='dev')
22:15:33.580116 [info ] [MainThread]: 
22:15:33.589161 [debug] [Thread-1 (]: Began running node seed.my_dbt_project.person
22:15:33.590272 [info ] [Thread-1 (]: 1 of 1 START seed file dbt_jyeo.person ......................................... [RUN]
22:15:33.591621 [debug] [Thread-1 (]: Acquiring new databricks connection 'seed.my_dbt_project.person'
22:15:33.592436 [debug] [Thread-1 (]: Began compiling node seed.my_dbt_project.person
22:15:33.593272 [debug] [Thread-1 (]: Timing info for seed.my_dbt_project.person (compile): 2023-03-14 22:15:33.593107 => 2023-03-14 22:15:33.593117
22:15:33.594168 [debug] [Thread-1 (]: Began executing node seed.my_dbt_project.person
22:15:33.669759 [debug] [Thread-1 (]: Spark adapter: NotImplemented: add_begin_query
22:15:33.670437 [debug] [Thread-1 (]: Using databricks connection "seed.my_dbt_project.person"
22:15:33.670978 [debug] [Thread-1 (]: On seed.my_dbt_project.person: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "node_id": "seed.my_dbt_project.person"} */

    create table `dbt_jyeo`.`person` (`id` bigint,`name` string)
    
    using delta
    
    
    
    location '/mnt/root/seeds/person'
    comment 'Persons'
      
    
  
22:15:33.671542 [debug] [Thread-1 (]: Opening a new connection, currently in state closed
22:15:54.039239 [debug] [Thread-1 (]: SQL status: OK in 20.37 seconds
22:15:55.034704 [debug] [Thread-1 (]: Using databricks connection "seed.my_dbt_project.person"
22:15:55.035341 [debug] [Thread-1 (]: On seed.my_dbt_project.person: 
          insert overwrite `dbt_jyeo`.`person` values
          (cast(%s as bigint),cast(%s as string))
      ...
22:16:12.091020 [debug] [Thread-1 (]: SQL status: OK in 17.05 seconds
22:16:13.076318 [debug] [Thread-1 (]: Writing runtime SQL for node "seed.my_dbt_project.person"
22:16:13.176913 [debug] [Thread-1 (]: Using databricks connection "seed.my_dbt_project.person"
22:16:13.177576 [debug] [Thread-1 (]: On seed.my_dbt_project.person: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "node_id": "seed.my_dbt_project.person"} */

    
        alter table `dbt_jyeo`.`person` change column
            id
            comment 'Id';
      
  
22:16:15.725264 [debug] [Thread-1 (]: SQL status: OK in 2.55 seconds
22:16:15.730827 [debug] [Thread-1 (]: Using databricks connection "seed.my_dbt_project.person"
22:16:15.732122 [debug] [Thread-1 (]: On seed.my_dbt_project.person: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "node_id": "seed.my_dbt_project.person"} */

    
        alter table `dbt_jyeo`.`person` change column
            name
            comment 'Name';
      
  
22:16:18.109270 [debug] [Thread-1 (]: SQL status: OK in 2.38 seconds
22:16:18.120952 [debug] [Thread-1 (]: Spark adapter: NotImplemented: commit
22:16:18.122720 [debug] [Thread-1 (]: Timing info for seed.my_dbt_project.person (execute): 2023-03-14 22:15:33.594867 => 2023-03-14 22:16:18.122596
22:16:18.123667 [debug] [Thread-1 (]: On seed.my_dbt_project.person: ROLLBACK
22:16:18.124565 [debug] [Thread-1 (]: Databricks adapter: NotImplemented: rollback
22:16:18.125343 [debug] [Thread-1 (]: On seed.my_dbt_project.person: Close
22:16:19.134643 [debug] [Thread-1 (]: Sending event: {'category': 'dbt', 'action': 'run_model', 'label': 'd96d19f0-4b23-4bd3-acad-83d8686768a2', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1163094b0>]}
22:16:19.136303 [info ] [Thread-1 (]: 1 of 1 OK loaded seed file dbt_jyeo.person ..................................... [INSERT 1 in 45.54s]
22:16:19.141613 [debug] [Thread-1 (]: Finished running node seed.my_dbt_project.person
22:16:19.147150 [debug] [MainThread]: Acquiring new databricks connection 'master'
22:16:19.148149 [debug] [MainThread]: On master: ROLLBACK
22:16:19.149063 [debug] [MainThread]: Opening a new connection, currently in state init
22:16:20.011921 [debug] [MainThread]: Databricks adapter: NotImplemented: rollback
22:16:20.013487 [debug] [MainThread]: Spark adapter: NotImplemented: add_begin_query
22:16:20.014659 [debug] [MainThread]: Spark adapter: NotImplemented: commit
22:16:20.015788 [debug] [MainThread]: On master: ROLLBACK
22:16:20.016895 [debug] [MainThread]: Databricks adapter: NotImplemented: rollback
22:16:20.017767 [debug] [MainThread]: On master: Close
22:16:20.946855 [debug] [MainThread]: Connection 'master' was properly closed.
22:16:20.948078 [debug] [MainThread]: Connection 'seed.my_dbt_project.person' was properly closed.
22:16:20.951702 [info ] [MainThread]: 
22:16:20.952734 [info ] [MainThread]: Finished running 1 seed in 0 hours 0 minutes and 58.69 seconds (58.69s).
22:16:20.953892 [debug] [MainThread]: Command end result
22:16:20.966607 [info ] [MainThread]: 
22:16:20.967382 [info ] [MainThread]: Completed successfully
22:16:20.968085 [info ] [MainThread]: 
22:16:20.968747 [info ] [MainThread]: Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
22:16:20.969533 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'end', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x105064070>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x104f580d0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x116309f30>]}
22:16:20.970381 [debug] [MainThread]: Flushing usage events


============================== 2023-03-14 22:16:25.195758 | 2db186fd-a75e-4d1c-a6a3-32524b448433 ==============================
22:16:25.195758 [info ] [MainThread]: Running with dbt=1.4.4
22:16:25.198533 [debug] [MainThread]: running dbt with arguments {'debug': True, 'write_json': True, 'use_colors': True, 'printer_width': 80, 'version_check': True, 'partial_parse': True, 'static_parser': True, 'profiles_dir': '/Users/jeremy/.dbt', 'send_anonymous_usage_stats': True, 'quiet': False, 'no_print': False, 'cache_selected_only': False, 'show': False, 'which': 'seed', 'rpc_method': 'seed', 'indirect_selection': 'eager'}
22:16:25.199080 [debug] [MainThread]: Tracking: tracking
22:16:25.216548 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'start', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1194391e0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x119443e80>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x119443d60>]}
22:16:25.236467 [debug] [MainThread]: checksum: 170d819e8a7f11e09c497566dd7f61e1355cb9fb514921503937b951cb4a2250, vars: {}, profile: None, target: None, version: 1.4.4
22:16:25.280187 [debug] [MainThread]: Partial parsing enabled: 0 files deleted, 0 files added, 0 files changed.
22:16:25.280810 [debug] [MainThread]: Partial parsing enabled, no changes found, skipping parsing
22:16:25.290093 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'load_project', 'label': '2db186fd-a75e-4d1c-a6a3-32524b448433', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x11961a9b0>]}
22:16:25.299634 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'resource_counts', 'label': '2db186fd-a75e-4d1c-a6a3-32524b448433', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1195de110>]}
22:16:25.300418 [info ] [MainThread]: Found 2 models, 0 tests, 0 snapshots, 0 analyses, 372 macros, 0 operations, 1 seed file, 0 sources, 0 exposures, 0 metrics
22:16:25.301195 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'runnable_timing', 'label': '2db186fd-a75e-4d1c-a6a3-32524b448433', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1070b1d50>]}
22:16:25.303260 [info ] [MainThread]: 
22:16:25.305993 [debug] [MainThread]: Acquiring new databricks connection 'master'
22:16:25.307872 [debug] [ThreadPool]: Acquiring new databricks connection 'list_schemas'
22:16:25.322654 [debug] [ThreadPool]: Using databricks connection "list_schemas"
22:16:25.323876 [debug] [ThreadPool]: On list_schemas: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "connection_name": "list_schemas"} */

    show databases
  
22:16:25.324540 [debug] [ThreadPool]: Opening a new connection, currently in state init
22:16:27.289437 [debug] [ThreadPool]: SQL status: OK in 1.96 seconds
22:16:27.302740 [debug] [ThreadPool]: On list_schemas: Close
22:16:28.245509 [debug] [ThreadPool]: Acquiring new databricks connection 'list_None_dbt_jyeo'
22:16:28.267625 [debug] [ThreadPool]: Spark adapter: NotImplemented: add_begin_query
22:16:28.268438 [debug] [ThreadPool]: Using databricks connection "list_None_dbt_jyeo"
22:16:28.269059 [debug] [ThreadPool]: On list_None_dbt_jyeo: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "connection_name": "list_None_dbt_jyeo"} */
show tables in `dbt_jyeo`
  
22:16:28.269641 [debug] [ThreadPool]: Opening a new connection, currently in state closed
22:16:30.367638 [debug] [ThreadPool]: SQL status: OK in 2.1 seconds
22:16:30.377953 [debug] [ThreadPool]: Using databricks connection "list_None_dbt_jyeo"
22:16:30.378543 [debug] [ThreadPool]: On list_None_dbt_jyeo: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "connection_name": "list_None_dbt_jyeo"} */
show views in `dbt_jyeo`
  
22:16:31.372485 [debug] [ThreadPool]: SQL status: OK in 0.99 seconds
22:16:31.376199 [debug] [ThreadPool]: On list_None_dbt_jyeo: ROLLBACK
22:16:31.376772 [debug] [ThreadPool]: Databricks adapter: NotImplemented: rollback
22:16:31.377206 [debug] [ThreadPool]: On list_None_dbt_jyeo: Close
22:16:32.218813 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'runnable_timing', 'label': '2db186fd-a75e-4d1c-a6a3-32524b448433', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x11943a3e0>]}
22:16:32.220168 [debug] [MainThread]: Spark adapter: NotImplemented: add_begin_query
22:16:32.221155 [debug] [MainThread]: Spark adapter: NotImplemented: commit
22:16:32.223009 [info ] [MainThread]: Concurrency: 1 threads (target='dev')
22:16:32.223913 [info ] [MainThread]: 
22:16:32.229587 [debug] [Thread-1 (]: Began running node seed.my_dbt_project.person
22:16:32.230680 [info ] [Thread-1 (]: 1 of 1 START seed file dbt_jyeo.person ......................................... [RUN]
22:16:32.232214 [debug] [Thread-1 (]: Acquiring new databricks connection 'seed.my_dbt_project.person'
22:16:32.233130 [debug] [Thread-1 (]: Began compiling node seed.my_dbt_project.person
22:16:32.234114 [debug] [Thread-1 (]: Timing info for seed.my_dbt_project.person (compile): 2023-03-14 22:16:32.233930 => 2023-03-14 22:16:32.233951
22:16:32.235086 [debug] [Thread-1 (]: Began executing node seed.my_dbt_project.person
22:16:32.282475 [debug] [Thread-1 (]: Using databricks connection "seed.my_dbt_project.person"
22:16:32.283123 [debug] [Thread-1 (]: On seed.my_dbt_project.person: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "node_id": "seed.my_dbt_project.person"} */
drop table if exists `dbt_jyeo`.`person`
22:16:32.283605 [debug] [Thread-1 (]: Opening a new connection, currently in state closed
22:16:35.282006 [debug] [Thread-1 (]: SQL status: OK in 3.0 seconds
22:16:35.337415 [debug] [Thread-1 (]: Spark adapter: NotImplemented: add_begin_query
22:16:35.338219 [debug] [Thread-1 (]: Using databricks connection "seed.my_dbt_project.person"
22:16:35.339182 [debug] [Thread-1 (]: On seed.my_dbt_project.person: /* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "node_id": "seed.my_dbt_project.person"} */

    create table `dbt_jyeo`.`person` (`id` bigint,`name` string)
    
    using delta
    
    
    
    location '/mnt/root/seeds/person'
    comment 'Persons'
      
    
  
22:16:37.782746 [debug] [Thread-1 (]: Databricks adapter: Error while running:
/* {"app": "dbt", "dbt_version": "1.4.4", "dbt_databricks_version": "1.4.2", "databricks_sql_connector_version": "2.4.0", "profile_name": "databricks", "target_name": "dev", "node_id": "seed.my_dbt_project.person"} */

    create table `dbt_jyeo`.`person` (`id` bigint,`name` string)
    
    using delta
    
    
    
    location '/mnt/root/seeds/person'
    comment 'Persons'
      
    
  
22:16:37.783780 [debug] [Thread-1 (]: Databricks adapter: <class 'databricks.sql.exc.ServerOperationError'>: The specified schema does not match the existing schema at dbfs:/mnt/root/seeds/person.

== Specified ==
root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)


== Existing ==
root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)


== Differences ==
- Specified metadata for field id is different from existing schema:
  Specified: {}
  Existing:  {"comment":"Id"}
- Specified metadata for field name is different from existing schema:
  Specified: {}
  Existing:  {"comment":"Name"}

If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
22:16:37.802193 [debug] [Thread-1 (]: Databricks adapter: diagnostic-info: org.apache.hive.service.cli.HiveSQLException: Error running query: [DELTA_CREATE_TABLE_SCHEME_MISMATCH] com.databricks.sql.transaction.tahoe.DeltaAnalysisException: The specified schema does not match the existing schema at dbfs:/mnt/root/seeds/person.

== Specified ==
root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)


== Existing ==
root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)


== Differences ==
- Specified metadata for field id is different from existing schema:
  Specified: {}
  Existing:  {"comment":"Id"}
- Specified metadata for field name is different from existing schema:
  Specified: {}
  Existing:  {"comment":"Name"}

If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
        at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:48)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:585)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41)
        at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:99)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:484)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:353)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.IdentityClaims$.withClaims(IdentityClaims.scala:48)
        at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties(ThriftLocalProperties.scala:156)
        at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties$(ThriftLocalProperties.scala:51)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:60)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:331)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:316)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:365)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
Caused by: com.databricks.sql.transaction.tahoe.DeltaAnalysisException: The specified schema does not match the existing schema at dbfs:/mnt/root/seeds/person.

== Specified ==
root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)


== Existing ==
root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)


== Differences ==
- Specified metadata for field id is different from existing schema:
  Specified: {}
  Existing:  {"comment":"Id"}
- Specified metadata for field name is different from existing schema:
  Specified: {}
  Existing:  {"comment":"Name"}

If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
        at com.databricks.sql.transaction.tahoe.DeltaErrorsBase.createTableWithDifferentSchemaException(DeltaErrors.scala:1247)
        at com.databricks.sql.transaction.tahoe.DeltaErrorsBase.createTableWithDifferentSchemaException$(DeltaErrors.scala:1242)
        at com.databricks.sql.transaction.tahoe.DeltaErrors$.createTableWithDifferentSchemaException(DeltaErrors.scala:2691)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.verifyTableMetadata(CreateDeltaTableCommand.scala:392)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.createTransactionLogOrVerify$1(CreateDeltaTableCommand.scala:216)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.$anonfun$run$2(CreateDeltaTableCommand.scala:272)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.withOperationTypeTag(DeltaLogging.scala:196)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.withOperationTypeTag$(DeltaLogging.scala:183)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.withOperationTypeTag(CreateDeltaTableCommand.scala:54)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperationInternal$2(DeltaLogging.scala:160)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:265)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:263)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.recordFrameProfile(CreateDeltaTableCommand.scala:54)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.$anonfun$recordDeltaOperationInternal$1(DeltaLogging.scala:159)
        at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:550)
        at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:645)
        at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:666)
        at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:407)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:158)
        at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:405)
        at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:402)
        at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:23)
        at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:450)
        at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:435)
        at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:23)
        at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:640)
        at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:559)
        at com.databricks.spark.util.PublicDBLogging.recordOperationWithResultTags(DatabricksSparkUsageLogger.scala:23)
        at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:550)
        at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:520)
        at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:23)
        at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:63)
        at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:145)
        at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:72)
        at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:59)
        at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:104)
        at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:433)
        at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:412)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.recordOperation(CreateDeltaTableCommand.scala:54)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperationInternal(DeltaLogging.scala:158)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation(DeltaLogging.scala:148)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordDeltaOperation$(DeltaLogging.scala:138)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.recordDeltaOperation(CreateDeltaTableCommand.scala:54)
        at com.databricks.sql.transaction.tahoe.commands.CreateDeltaTableCommand.run(CreateDeltaTableCommand.scala:119)
        at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:246)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:265)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:263)
        at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:85)
        at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:113)
        at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createTable$1(DeltaCatalog.scala:570)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:265)
        at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:263)
        at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:85)
        at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.createTable(DeltaCatalog.scala:556)
        at com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.createTable(UnityCatalogV2Proxy.scala:212)
        at org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:45)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:47)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:54)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:235)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:245)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:424)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:190)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1035)
        at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:144)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:374)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:235)
        at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:220)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:233)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:226)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:519)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:519)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:495)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:226)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
        at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:226)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:180)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:171)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:247)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$analyzeQuery$2(SparkExecuteStatementOperation.scala:478)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1035)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$analyzeQuery$1(SparkExecuteStatementOperation.scala:460)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.getOrCreateDF(SparkExecuteStatementOperation.scala:446)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.analyzeQuery(SparkExecuteStatementOperation.scala:460)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$5(SparkExecuteStatementOperation.scala:519)
        at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:708)
        at org.apache.spark.sql.execution.qrc.CacheEventLogger.recordLatency(QueryResultCache.scala:149)
        at org.apache.spark.sql.execution.qrc.CacheEventLogger.recordLatency$(QueryResultCache.scala:145)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.recordLatency(SparkExecuteStatementOperation.scala:60)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:519)
        ... 21 more

22:16:37.891095 [debug] [Thread-1 (]: Databricks adapter: operation-id: b'\x01\xed\xc2\xb5\xd6\xa3\x17\x94\xa0\xfd \x03\x87\x95\xca\xf8'
22:16:37.891948 [debug] [Thread-1 (]: Timing info for seed.my_dbt_project.person (execute): 2023-03-14 22:16:32.236057 => 2023-03-14 22:16:37.891812
22:16:37.892620 [debug] [Thread-1 (]: On seed.my_dbt_project.person: ROLLBACK
22:16:37.893260 [debug] [Thread-1 (]: Databricks adapter: NotImplemented: rollback
22:16:37.893774 [debug] [Thread-1 (]: On seed.my_dbt_project.person: Close
22:16:38.738208 [debug] [Thread-1 (]: Runtime Error in seed person (seeds/person.csv)
  The specified schema does not match the existing schema at dbfs:/mnt/root/seeds/person.
  
  == Specified ==
  root
   |-- id: long (nullable = true)
   |-- name: string (nullable = true)
  
  
  == Existing ==
  root
   |-- id: long (nullable = true)
   |-- name: string (nullable = true)
  
  
  == Differences ==
  - Specified metadata for field id is different from existing schema:
    Specified: {}
    Existing:  {"comment":"Id"}
  - Specified metadata for field name is different from existing schema:
    Specified: {}
    Existing:  {"comment":"Name"}
  
  If your intention is to keep the existing schema, you can omit the
  schema from the create table command. Otherwise please ensure that
  the schema matches.
22:16:38.739210 [debug] [Thread-1 (]: Sending event: {'category': 'dbt', 'action': 'run_model', 'label': '2db186fd-a75e-4d1c-a6a3-32524b448433', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x119708370>]}
22:16:38.740190 [error] [Thread-1 (]: 1 of 1 ERROR loading seed file dbt_jyeo.person ................................. [ERROR in 6.51s]
22:16:38.743115 [debug] [Thread-1 (]: Finished running node seed.my_dbt_project.person
22:16:38.745839 [debug] [MainThread]: Acquiring new databricks connection 'master'
22:16:38.746571 [debug] [MainThread]: On master: ROLLBACK
22:16:38.747310 [debug] [MainThread]: Opening a new connection, currently in state init
22:16:39.676318 [debug] [MainThread]: Databricks adapter: NotImplemented: rollback
22:16:39.677470 [debug] [MainThread]: Spark adapter: NotImplemented: add_begin_query
22:16:39.678372 [debug] [MainThread]: Spark adapter: NotImplemented: commit
22:16:39.679386 [debug] [MainThread]: On master: ROLLBACK
22:16:39.680108 [debug] [MainThread]: Databricks adapter: NotImplemented: rollback
22:16:39.680772 [debug] [MainThread]: On master: Close
22:16:40.549273 [debug] [MainThread]: Connection 'master' was properly closed.
22:16:40.550161 [debug] [MainThread]: Connection 'seed.my_dbt_project.person' was properly closed.
22:16:40.553125 [info ] [MainThread]: 
22:16:40.553952 [info ] [MainThread]: Finished running 1 seed in 0 hours 0 minutes and 15.25 seconds (15.25s).
22:16:40.555053 [debug] [MainThread]: Command end result
22:16:40.568006 [info ] [MainThread]: 
22:16:40.568743 [info ] [MainThread]: Completed with 1 error and 0 warnings:
22:16:40.569295 [info ] [MainThread]: 
22:16:40.569867 [error] [MainThread]: Runtime Error in seed person (seeds/person.csv)
22:16:40.570388 [error] [MainThread]:   The specified schema does not match the existing schema at dbfs:/mnt/root/seeds/person.
22:16:40.571047 [error] [MainThread]:   
22:16:40.571746 [error] [MainThread]:   == Specified ==
22:16:40.572349 [error] [MainThread]:   root
22:16:40.572908 [error] [MainThread]:    |-- id: long (nullable = true)
22:16:40.573643 [error] [MainThread]:    |-- name: string (nullable = true)
22:16:40.574582 [error] [MainThread]:   
22:16:40.575359 [error] [MainThread]:   
22:16:40.576049 [error] [MainThread]:   == Existing ==
22:16:40.576647 [error] [MainThread]:   root
22:16:40.577238 [error] [MainThread]:    |-- id: long (nullable = true)
22:16:40.577764 [error] [MainThread]:    |-- name: string (nullable = true)
22:16:40.578345 [error] [MainThread]:   
22:16:40.578902 [error] [MainThread]:   
22:16:40.579498 [error] [MainThread]:   == Differences ==
22:16:40.580106 [error] [MainThread]:   - Specified metadata for field id is different from existing schema:
22:16:40.580592 [error] [MainThread]:     Specified: {}
22:16:40.581117 [error] [MainThread]:     Existing:  {"comment":"Id"}
22:16:40.581649 [error] [MainThread]:   - Specified metadata for field name is different from existing schema:
22:16:40.582197 [error] [MainThread]:     Specified: {}
22:16:40.582656 [error] [MainThread]:     Existing:  {"comment":"Name"}
22:16:40.583122 [error] [MainThread]:   
22:16:40.583559 [error] [MainThread]:   If your intention is to keep the existing schema, you can omit the
22:16:40.583989 [error] [MainThread]:   schema from the create table command. Otherwise please ensure that
22:16:40.584378 [error] [MainThread]:   the schema matches.
22:16:40.593301 [info ] [MainThread]: 
22:16:40.593827 [info ] [MainThread]: Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1
22:16:40.594650 [debug] [MainThread]: Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'end', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1195dde10>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x119641f60>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1195cbf70>]}
22:16:40.595269 [debug] [MainThread]: Flushing usage events
```

System information

The output of `dbt --version`:

```text
Core:
  - installed: 1.4.4
  - latest:    1.4.5 - Update available!

  Your version of dbt-core is out of date!
  You can find instructions for upgrading here:
  https://docs.getdbt.com/docs/installation

Plugins:
  - databricks: 1.4.2 - Up to date!
  - bigquery:   1.4.1 - Update available!
  - snowflake:  1.4.1 - Up to date!
  - spark:      1.4.1 - Up to date!

  At least one plugin is out of date or incompatible with dbt-core.
  You can find instructions for upgrading here:
  https://docs.getdbt.com/docs/installation
```

The operating system you're using: macOS

The output of `python --version`:

```text
Python 3.10.10
```

Additional context

Notably, if there are no column-level descriptions, no such error arises.
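
To make the conflict concrete, here is the sequence condensed into plain SQL, a sketch based on the statements in the logs above (not the adapter's verbatim DDL):

```sql
-- First run: create the table at the external location, then persist the
-- docs, which writes comment metadata into the Delta log at that location
create table `dbt_jyeo`.`person` (`id` bigint, `name` string)
using delta location '/mnt/root/seeds/person';
alter table `dbt_jyeo`.`person` change column id comment 'Id';
alter table `dbt_jyeo`.`person` change column name comment 'Name';

-- Second run: dropping an external table only removes it from the
-- metastore; the Delta log at /mnt/root/seeds/person (comments included)
-- survives the drop
drop table if exists `dbt_jyeo`.`person`;

-- Recreating with a comment-free column list now fails with
-- DELTA_CREATE_TABLE_SCHEME_MISMATCH, because the specified schema has
-- no comment metadata while the existing schema on disk does
create table `dbt_jyeo`.`person` (`id` bigint, `name` string)
using delta location '/mnt/root/seeds/person';
```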

jeremyyeo · Mar 14 '23

This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue.

github-actions[bot] · Sep 11 '23

@jeremyyeo does this still repro? If so, please remove the stale label, otherwise close :).

benc-db · Sep 13 '23

This still happens for us and is majorly annoying.

eblanchi · Oct 19 '23

@eblanchi thanks for confirming so that we can prioritize.

benc-db · Oct 19 '23

Thanks Ben. For the time being we are forced to disable `persist_docs` on seeds, as the alternative of deleting the S3 data is just not viable (an issue compounded by Delta Sharing).

eblanchi · Oct 20 '23
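
For reference, the workaround described above amounts to turning doc persistence off for seeds; a minimal sketch against the repro's project config (hypothetical, trading away the comments to keep re-seeding working):

```yaml
# dbt_project.yml
seeds:
  my_dbt_project:
    +file_format: delta
    +location_root: /mnt/root/seeds
    # Workaround: stop persisting docs so the recreate on subsequent
    # `dbt seed` runs no longer conflicts with comment metadata in the
    # Delta log at the external location
    +persist_docs:
      relation: false
      columns: false
```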

For anyone who comes across this issue: I believe this is related to using seeds with a specified `location_root`/external locations. We will investigate ways to work around this limitation, but for now, we recommend using managed tables for your seeds (i.e. not specifying a location root).

benc-db · Nov 28 '23
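
A sketch of that recommendation applied to the repro project (hypothetical config): omitting `+location_root` makes the seed a managed table, so dropping it also removes the underlying Delta log and the recreate on the next `dbt seed` starts from a clean location, with `persist_docs` left enabled:

```yaml
# dbt_project.yml
seeds:
  my_dbt_project:
    +file_format: delta
    # No +location_root: the seed is created as a managed table, so
    # `drop table` clears the Delta log along with the metastore entry
    +persist_docs:
      relation: true
      columns: true
```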