How to handle connection pool exhaustion when using Snowflake with Python

Question:

Can someone explain how we can handle the situation when the max connections limit has been reached for a database? Can we make a connection wait until an existing connection is released automatically?

import snowflake.connector as sf
import sqlalchemy.pool as pool

def get_conn():
    # creator function the pool calls whenever it needs a new Snowflake connection
    conn = sf.connect(
        user='username',
        password='password',
        account='snowflake-account-name',
        warehouse='compute_wh',
        database='customer_data'
    )

    return conn

# 5 persistent connections, plus up to 10 temporary "overflow" connections
mypool = pool.QueuePool(get_conn, max_overflow=10, pool_size=5)
a = mypool.connect()
a1 = mypool.connect()
a2 = mypool.connect()
a3 = mypool.connect()
a4 = mypool.connect()
a5 = mypool.connect()
a6 = mypool.connect()
a7 = mypool.connect()
a8 = mypool.connect()
a9 = mypool.connect()
a10 = mypool.connect()
a11 = mypool.connect()
a12 = mypool.connect()
a13 = mypool.connect()
a14 = mypool.connect()
a15 = mypool.connect()

Up to a14 we successfully get connection objects (15 checkouts, matching pool_size=5 plus max_overflow=10), but when we run a15 we get a "pool exhausted" error. How do we handle this case?

If we need to write the logic in such a way that access is still granted even as the number of instances keeps increasing, how can I send a connection back to the pool?

Asked By: Goutham Boine


Answers:

Your pool is configured to hold five open connections to the database and create up to ten ‘overflow’ connections for times of heavy load.

Doing mypool.connect() will check out a connection, so when you exceed the pool size plus the overflow you get an error.
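To address the "wait state" part of the question: QueuePool also accepts a timeout argument, which is the number of seconds connect() will block waiting for a checked-out connection to be returned before raising sqlalchemy.exc.TimeoutError. A minimal sketch, reusing your get_conn (handle_overload is a hypothetical placeholder for whatever fallback suits your application):

from sqlalchemy.exc import TimeoutError as PoolTimeoutError

# timeout= is how long connect() blocks waiting for a connection to be
# returned when all pool_size + max_overflow slots are checked out
mypool = pool.QueuePool(get_conn, pool_size=5, max_overflow=10, timeout=60)

try:
    conn = mypool.connect()  # waits up to 60 seconds for a free connection
except PoolTimeoutError:
    handle_overload()  # hypothetical fallback: queue the work, shed load, etc.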

You should return connections to the pool if you don’t need them to do more work.

This can be done explicitly by calling e.g. a1.close() (by the way, it would be good to give descriptive names – these are Connection objects).
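For example, a minimal sketch (do_some_work stands in for your own queries):

conn = mypool.connect()   # checks a connection out of the pool
do_some_work(conn)        # placeholder for whatever queries you run
conn.close()              # returns the connection to the pool, ready for reuse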

However, the better way is to use a context manager: if an exception is thrown between checking out a connection and returning it with conn.close(), you will end up with a permanently checked-out connection and ultimately hit the same problem.

e.g.

from contextlib import closing

# closing() guarantees conn.close() runs on exit, even after an exception,
# which returns the connection to the pool
with closing(mypool.connect()) as conn:
    do_some_work(conn)

more_code()

When you finish the with block (context manager) and move on to more code, or if you exit it with an exception, the connection will be returned to the pool for future use.
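The with block above is roughly equivalent to this explicit try/finally:

conn = mypool.connect()
try:
    do_some_work(conn)
finally:
    conn.close()  # runs even if do_some_work raises, so the pool gets it back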

Answered By: Paddy Alton

I think this code will work even if the pool's max limit is reached. Correct me if I did something wrong.

import snowflake.connector as sf
import sqlalchemy.pool as pool

class Database:
    connection_pool = None

    @classmethod
    def initialise(cls):
        def get_conn():
            # creator function the pool calls to open a new Snowflake connection
            conn = sf.connect(
                user='username',
                password='password',
                account='accountname',
                warehouse='compute_wh',
                database='customer_data'
            )
            return conn

        cls.connection_pool = pool.QueuePool(get_conn, max_overflow=1, pool_size=1)


class ConnectionFromPool:
    def __init__(self):
        self.connection = None

    def __enter__(self):
        self.connection = Database.connection_pool.connect()
        return self.connection

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self.connection.commit()
        else:
            self.connection.rollback()
        # close() hands the connection back to the pool for reuse;
        # dispose() would instead discard every pooled connection
        # and defeat the point of pooling
        self.connection.close()


class User:
    def __init__(self, cust_key, name, address):
        self.cust_key = cust_key
        self.name = name
        self.address = address

    def save_to_db(self):
        with ConnectionFromPool() as connection:
            with connection.cursor() as cursor:
                cursor.execute(
                    "insert into data(cust_key, name, address) values (%s, %s, %s)",
                    (self.cust_key, self.name, self.address)
                )


Database.initialise()
user_to_db = User(1, 'padma', 'ramnagar')
user_to_db.save_to_db()
user_to_db.save_to_db()
print("success")
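If you want to verify that connections really are being returned to the pool rather than discarded, QueuePool exposes a checkedout() counter; a quick sketch:

Database.initialise()
print(Database.connection_pool.checkedout())  # 0: nothing checked out yet
user_to_db = User(1, 'padma', 'ramnagar')
user_to_db.save_to_db()
print(Database.connection_pool.checkedout())  # 0 again: save_to_db returned it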
Answered By: Goutham Boine