ignoring unconsumed columns while inserting into postgres database using sqlalchemy

Question:

I want to insert data into a table from a Python dictionary, but the dictionary contains more keys than the table has columns, so the insert fails with sqlalchemy.exc.CompileError: Unconsumed column names: column_name. I want to insert only the keys that correspond to columns in the table and ignore the extra ones.

How can I do that with SQLAlchemy?

The code I am using is

from sqlalchemy import *
from sqlalchemy.dialects import postgresql

db = create_engine('postgresql+psycopg2://localhost:5432/postgres')
meta = MetaData()
meta.bind = db

data = [{
    'a': 1,
    'b': 2,
    'c': 3,
    'd': 4,
    'e': 5,
    'f': 6
}]

ins_stmt = postgresql.insert(table_object).values(data)
db.execute(ins_stmt)

where my table_object contains the columns a, b, c, d, and e.
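
For reference, the error can be reproduced with a table_object along these lines; this is only a hypothetical sketch, where the table name and the Integer column types are assumptions and only the column names matter:

from sqlalchemy import Table, Column, Integer

# Hypothetical definition of table_object; 'my_table' and the Integer
# types are placeholders, and meta is the MetaData object created above.
table_object = Table(
    'my_table', meta,
    Column('a', Integer),
    Column('b', Integer),
    Column('c', Integer),
    Column('d', Integer),
    Column('e', Integer),
)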

P.S. I am using SQLAlchemy 1.4.

Asked By: user185887


Answers:

I’m not aware of any way to get insert() to discard the extra items, so they must be filtered out beforehand. You can do this by comparing each dictionary’s keys against the table’s column names.

column_names = set(table_object.columns.keys())

# Keep only the keys that correspond to actual columns in the table.
fixed = [{k: v for k, v in d.items() if k in column_names} for d in data]

ins_stmt = postgresql.insert(table_object).values(fixed)
with db.connect() as conn:
    with conn.begin():
        conn.execute(ins_stmt)
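
With the sample data from the question, fixed evaluates to [{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}]; the extra key 'f' is dropped before the statement is compiled, so the Unconsumed column names error no longer occurs.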

Answered By: snakecharmerb