Unpacking a SQL SELECT into a pandas DataFrame

Question:

Suppose I have a select roughly like this:

select instrument, price, date from my_prices;

How can I unpack the prices returned into a single DataFrame with one series per instrument, indexed by date?

To be clear: I’m looking for:

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: ...
Data columns (total 2 columns):
inst_1    ...
inst_2    ...
dtypes: float64(2)

I’m NOT looking for:

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: ...
Data columns (total 2 columns):
instrument    ...
price         ...
dtypes: float64(1), object(1)

…which is easy 😉

Asked By: Chris Withers


Answers:

You can pass the rows fetched from a cursor to the DataFrame constructor. For postgres:

import pandas as pd
import psycopg2

conn = psycopg2.connect("dbname='db' user='user' host='host' password='pass'")
cur = conn.cursor()
cur.execute("select instrument, price, date from my_prices")
df = pd.DataFrame(cur.fetchall(), columns=['instrument', 'price', 'date'])

then set the index (note that set_index returns a new DataFrame rather than modifying in place):

df = df.set_index('date', drop=False)

or directly:

df.index = df['date']
Answered By: jdennison

Update: recent versions of pandas provide the functions read_sql_table and read_sql_query.

First create a database engine (a DBAPI connection can also work here):

from sqlalchemy import create_engine
# see sqlalchemy docs for how to write this url for your database type:
engine = create_engine('mysql://scott:tiger@localhost/foo')

See the SQLAlchemy documentation on database URLs.
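Since the question targets PostgreSQL, here is a minimal sketch of the engine setup for that backend (the host, credentials and database name are placeholders, not values from the question):

import pandas as pd
from sqlalchemy import create_engine

# placeholder URL; substitute your own host, port, user, password and database
engine = create_engine('postgresql+psycopg2://user:password@localhost:5432/db')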

pandas.read_sql_table

table_name = 'my_prices'
df = pd.read_sql_table(table_name, engine)

pandas.read_sql_query

df = pd.read_sql_query("SELECT instrument, price, date FROM my_prices;", engine)
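read_sql_query can also set the index and parse dates in one step, which matches the date-indexed DataFrame the question asks for (a sketch; the table and column names come from the question):

df = pd.read_sql_query(
    "SELECT instrument, price, date FROM my_prices;",
    engine,
    index_col='date',      # index the result on the date column
    parse_dates=['date'],  # make it a DatetimeIndex
)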

The old answer referenced read_frame, which has been deprecated (see the version history of this question for that answer).


It often makes sense to read the data first and then transform it to your requirements (such transformations are usually efficient and readable in pandas). In your example, you can pivot the result:

df.reset_index().pivot(index='date', columns='instrument', values='price')

Note: you can leave out the reset_index if you don't specify an index_col when reading the data.
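Putting the pieces together, a sketch of the full round trip the question asks for, assuming the engine defined above and the my_prices table from the question:

# read the long-format rows, then pivot to one price column per instrument
df = pd.read_sql_query(
    "SELECT instrument, price, date FROM my_prices;",
    engine,
    parse_dates=['date'],
)
wide = df.pivot(index='date', columns='instrument', values='price')
# 'wide' now has a DatetimeIndex and one float64 column per instrument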

Answered By: Andy Hayden

This connects to a remote PostgreSQL server with psycopg2 and reads a table with pandas:

# CONNECT TO POSTGRES USING PANDAS
import psycopg2 as pg
import pandas.io.sql as psql

Establish the connection to the PostgreSQL database:

connection = pg.connect("host=192.168.0.1 dbname=db user=postgres")

Read the table into a DataFrame:

dataframe = psql.read_sql("SELECT * FROM DB.Table", connection)
Answered By: Mani Abi Anand
import pandas as pd
import pandas.io.sql as sqlio
import psycopg2

# host, port, dbname, username and pwd are assumed to be defined elsewhere
conn = psycopg2.connect("host='{}' port={} dbname='{}' user={} password={}".format(host, port, dbname, username, pwd))
sql = "select count(*) from table;"
dat = sqlio.read_sql_query(sql, conn)
conn.close()  # setting conn = None only drops the reference; close() releases the connection

Or, equivalently, using the top-level pandas function:

import pandas as pd
import psycopg2

conn = psycopg2.connect("host='{}' port={} dbname='{}' user={} password={}".format(host, port, dbname, username, pwd))
sql = "select count(*) from table;"
dat = pd.read_sql_query(sql, conn)
conn.close()
Answered By: Ram Prajapati
import pandas as pd
import psycopg2

conn = psycopg2.connect(user="",
                        password="",
                        host="",
                        port="",
                        database="")

sql = "select count(*) from table;"
dat = pd.read_sql_query(sql, conn)
Answered By: shyam yadav