Bulk upsert with SQLAlchemy [duplicate]

Submitted by 旧城冷巷雨未停 on 2019-12-23 08:36:12

Question


I am working on bulk upserting lots of data into PostgreSQL with SQLAlchemy 1.1.0b, and I'm running into duplicate key errors.

from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base

import pg  # PyGreSQL, the driver behind the pygresql dialect

# uname, passw, and url are defined elsewhere.
engine = create_engine("postgresql+pygresql://" + uname + ":" + passw + "@" + url)

# Reflectively load the database.
metadata = MetaData()
metadata.reflect(bind=engine)
Session = sessionmaker(autocommit=True, autoflush=True)
Session.configure(bind=engine)
session = Session()
base = automap_base(metadata=metadata)
base.prepare(engine, reflect=True)

table_name = "arbitrary_table_name"  # this will always be arbitrary
mapped_table = getattr(base.classes, table_name)
# col and col2 exist in the table; each dict maps columns for one row.
chunks = [[{"col": "val", "col2": "val2"}], [{"col": "val", "col2": "val3"}]]

for chunk in chunks:
    session.bulk_insert_mappings(mapped_table, chunk)
    session.commit()

When I run it, I get this:

sqlalchemy.exc.IntegrityError: (pg.IntegrityError) ERROR:  duplicate key value violates unique constraint <constraint>

I can't seem to properly instantiate the mapped_table as a Table() object, either.

I'm working with time series data, so I'm grabbing data in bulk with some overlap in time ranges. I want to do a bulk upsert to ensure data consistency.

What's the best way to do a bulk upsert with a large data set? I know PostgreSQL supports upserts now (INSERT ... ON CONFLICT), but I'm not sure how to do this in SQLAlchemy.
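
For reference, SQLAlchemy 1.1 exposes PostgreSQL's INSERT ... ON CONFLICT through its postgresql dialect. A minimal sketch, assuming col is the column behind the unique constraint (the column and table names here are illustrative, reusing the variables from the question):

from sqlalchemy.dialects.postgresql import insert

table = metadata.tables[table_name]
for chunk in chunks:
    stmt = insert(table).values(chunk)  # one multi-row INSERT per chunk
    stmt = stmt.on_conflict_do_update(
        index_elements=["col"],             # column(s) of the unique constraint
        set_={"col2": stmt.excluded.col2},  # on conflict, take the incoming value
    )
    engine.execute(stmt)  # engine-level execution autocommits DML in 1.x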


Answer 1:


from https://stackoverflow.com/a/26018934/465974

There is an upsert-esque operation in SQLAlchemy: session.merge(). After I found this command, I was able to perform upserts, but it is worth mentioning that this operation is slow for a bulk "upsert".
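
For context, a minimal sketch of that merge() pattern against the question's automapped class, assuming each row dict carries the table's primary key and a regular (non-autocommit) session:

for chunk in chunks:
    for row in chunk:
        # merge() first SELECTs by primary key, then INSERTs or UPDATEs;
        # one round trip per row is why it is slow for large batches.
        session.merge(mapped_table(**row))
session.commit()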

The alternative is to get a list of the primary keys you would like to upsert, and query the database for any matching ids:
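
A sketch of that alternative, under the assumption that the primary key column is named id and every incoming row dict includes it (names are illustrative):

rows = [row for chunk in chunks for row in chunk]
incoming_ids = [row["id"] for row in rows]

# One query to learn which primary keys already exist.
existing_ids = {
    pk for (pk,) in
    session.query(mapped_table.id)
           .filter(mapped_table.id.in_(incoming_ids))
}

# Route each row to a bulk UPDATE or a bulk INSERT.
to_update = [row for row in rows if row["id"] in existing_ids]
to_insert = [row for row in rows if row["id"] not in existing_ids]
session.bulk_update_mappings(mapped_table, to_update)
session.bulk_insert_mappings(mapped_table, to_insert)
session.commit()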



Source: https://stackoverflow.com/questions/38579049/bulk-upsert-with-sqlalchemy
