python-db-api

snowflake python connector - Time to make database connection

Submitted by 半城伤御伤魂 on 2021-02-20 04:21:27
Question: Python code is taking around 2-3 seconds to make the Snowflake database connection. Is this expected behaviour, or are there any parameters that will speed up the connection time? Here is the sample code:

    import logging
    import time
    import snowflake.connector

    t1 = time.time()
    print("Start time: " + str(t1))
    try:
        conn = snowflake.connector.connect(
            user=user,
            password=password,
            account=account,
            warehouse=warehouse,
            # database=DATABASE,
            # schema=SCHEMA
        )
        cur = conn.cursor()
    except Exception as e:
        logging.error(e)
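
A hedged sketch of the usual mitigation, assuming the credential variables (user, password, account, warehouse) are defined elsewhere: since every connect() call pays the full authentication handshake, open one connection and reuse it for all queries rather than reconnecting per query. client_session_keep_alive keeps the session from timing out between uses; whether the 2-3 second connect time itself can be reduced depends mostly on network latency to the Snowflake deployment.

    import snowflake.connector

    # Connect once and reuse the connection; the 2-3 s cost is paid a single time.
    conn = snowflake.connector.connect(
        user=user,
        password=password,
        account=account,
        warehouse=warehouse,
        client_session_keep_alive=True,  # keep the session alive between queries
    )

    def run_query(sql):
        # Cursors are cheap; the expensive part is the connection itself.
        cur = conn.cursor()
        try:
            cur.execute(sql)
            return cur.fetchall()
        finally:
            cur.close()

    print(run_query("SELECT CURRENT_VERSION()"))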

Python call sql-server stored procedure with table valued parameter

Submitted by 。_饼干妹妹 on 2021-02-04 05:01:25
Question: I have a Python script that loads, transforms, and calculates data. In SQL Server there is a stored procedure that requires a table-valued parameter, 2 required parameters, and 2 optional parameters. In SQL Server I can call this SP like so:

    USE [InstName]
    GO

    DECLARE @return_value int
    DECLARE @MergeOnColumn core.MatchColumnTable

    INSERT INTO @MergeOnColumn
    SELECT 'foo.ExternalInput', 'bar.ExternalInput'

    EXEC @return_value = [core].[_TableData]
        @Target = N'[dbname].[tablename1]',
        @Source = N'[dbname].
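
A hedged sketch of one way to do this from Python with pyodbc, which (in recent versions, roughly 4.0.25 and later) accepts a table-valued parameter as a list of row tuples. The driver name, server, and the order of the procedure's parameters are assumptions; positional parameters must match the order in which the procedure declares them, and the source table name is a placeholder since the original excerpt is cut off.

    import pyodbc

    # Connection details are placeholders; adjust to the real server and database.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=InstName;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # The table-valued parameter (core.MatchColumnTable) is passed as a list of row tuples.
    merge_on_column = [("foo.ExternalInput", "bar.ExternalInput")]

    sql = "{CALL core._TableData (?, ?, ?)}"
    cursor.execute(sql, (merge_on_column,
                         "[dbname].[tablename1]",      # @Target
                         "[dbname].[source_table]"))   # @Source -- placeholder name
    conn.commit()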

PostgreSQL performance: one general cursor or create one for every query

Submitted by 倖福魔咒の on 2021-01-29 03:23:04
Question: I am building a script to store some data in a database. It is the first time I'm using PostgreSQL, and everything is going well and as planned. I was wondering about the usage of cursors in PostgreSQL: what if I am creating a lot of them when one would be enough? But I don't want to pass the cursor to all my SQL functions. Here's my simplified example:

    dbConn, dbCurs = openDataBase(config)
    doSomething(dbCurs, name, age, listOfJohns)

    def doSomething(dbCurs, name, age, listOfPeople):
        listOfPeople
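
A hedged sketch, assuming psycopg2 and a hypothetical people table: cursors are lightweight client-side objects, so each function can open its own short-lived cursor from the shared connection instead of having the cursor passed around. Only the connection needs to be shared, since it is what carries the transaction.

    import psycopg2

    # Connection parameters are placeholders.
    dbConn = psycopg2.connect(dbname="mydb", user="me", password="secret")

    def doSomething(conn, name, age, listOfPeople):
        # A new cursor per call is cheap; it is closed when the with-block exits.
        with conn.cursor() as dbCurs:
            dbCurs.execute(
                "INSERT INTO people (name, age) VALUES (%s, %s)",
                (name, age),
            )
        conn.commit()

    doSomething(dbConn, "John", 42, [])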

IronPython db-api 2.0

Submitted by 本小妞迷上赌 on 2020-01-02 15:25:28
Question: Does anyone know which, if any, db-api 2.0 drivers work with IronPython? If so, has anyone tried using them with SQLAlchemy, SQLObject, or the Django ORM?

Answer 1: I know this is a very late answer, but I only saw the question today -- so I am answering it today. http://sourceforge.net/projects/adodbapi contains a fully compliant db-api-2 module which works with IronPython. It is restricted to use on Windows, since it uses classic ADO via COM calls, rather than ADO.NET. [I tried a true .NET version
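
A minimal sketch of the adodbapi module mentioned above, run under IronPython on Windows; it follows the normal db-api 2.0 pattern, and the classic ADO connection string (provider, server, database) is an assumption.

    import adodbapi

    # Classic ADO connection string -- placeholder server and database.
    conn = adodbapi.connect(
        "Provider=SQLOLEDB;Data Source=myserver;"
        "Initial Catalog=mydb;Integrated Security=SSPI;"
    )
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchone())
    cur.close()
    conn.close()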

Using Python to quickly insert many columns into SQLite/MySQL

Submitted by 我与影子孤独终老i on 2020-01-02 10:18:16
Question: If Newdata is a list of x-column rows, how would I get the number of unique columns -- the number of members of the first tuple? (Len is not important.) I want to change the number of "?" to match the columns and insert using the statement below:

    csr = con.cursor()
    csr.execute('Truncate table test.data')
    csr.executemany('INSERT INTO test.data VALUES (?,?,?,?)', Newdata)
    con.commit()

Answer 1: By "Newdata is list of x columns", I imagine you mean x tuples, since you then continue to speak of "the first tuple". If Newdata is a list
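
A hedged sketch of the idea in the question, assuming Newdata is a non-empty list of equal-length tuples and that test.data already exists: take the column count from the first tuple and build the matching placeholder string. (Note that SQLite has no TRUNCATE and uses ? placeholders, while MySQL's db-api drivers use %s, so the placeholder character would need to change per backend.)

    num_cols = len(Newdata[0])               # columns = members of the first tuple
    placeholders = ",".join("?" * num_cols)  # e.g. "?,?,?,?" for four columns
    sql = "INSERT INTO test.data VALUES (%s)" % placeholders

    csr = con.cursor()
    csr.execute("DELETE FROM test.data")     # SQLite equivalent of TRUNCATE
    csr.executemany(sql, Newdata)
    con.commit()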

How do I specify Transaction Isolation Level for MS SQL backend in Sql Alchemy Python

Submitted by 走远了吗. on 2019-12-25 07:05:09
Question: How do I set transaction isolation level READ UNCOMMITTED for all queries done through a SQLAlchemy engine object? I set the isolation_level argument as documented here: http://docs.sqlalchemy.org/en/latest/core/engines.html#sqlalchemy.create_engine.params.isolation_level by passing it into create_engine like so:

    my_eng = create_engine(db_conn_string, isolation_level='READ_UNCOMMITTED')

but for my backend (MS SQL Server) I get the following error, perhaps unsurprisingly, as the docs do say it is dialect
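
A hedged sketch assuming a SQLAlchemy version whose MS SQL dialect supports isolation_level (the feature is dialect-specific, which is what the error is pointing at): the documented values are spelled with a space, not an underscore, and the level can also be set per connection with execution_options. db_conn_string is a placeholder.

    from sqlalchemy import create_engine, text

    # Engine-wide isolation level; note the space in "READ UNCOMMITTED".
    my_eng = create_engine(db_conn_string, isolation_level="READ UNCOMMITTED")

    # Or set it per connection instead of for the whole engine:
    with my_eng.connect() as conn:
        conn = conn.execution_options(isolation_level="READ UNCOMMITTED")
        result = conn.execute(text("SELECT 1"))
        print(result.scalar())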

How to prevent PyMySQL from escaping identifier names?

Submitted by 余生长醉 on 2019-12-24 03:42:33
Question: I am using PyMySQL with Python 2.7 and am trying to execute the following statement:

    'INSERT INTO %s (%s, %s) VALUES (%s, %s) ON DUPLICATE KEY UPDATE %s = %s'

with the following parameters:

    ('artikel', 'REC_ID', 'odoo_created', u'48094', '2014-12-23 10:00:00', 'odoo_modified', '2014-12-23 10:00:00')

This always results in:

    {ProgrammingError}(1064, u"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''artikel' ('REC
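
A hedged sketch of the usual workaround: %s placeholders are only for values, and the driver quotes them as string literals, which is exactly what produces the quoted 'artikel' in the error. Identifier names (table and columns) have to be put into the SQL text itself, ideally backtick-quoted and validated against a whitelist; the connection details below are placeholders.

    import pymysql

    conn = pymysql.connect(host="localhost", user="me",
                           password="secret", database="mydb")

    table = "artikel"
    col1, col2 = "REC_ID", "odoo_created"
    upd_col = "odoo_modified"

    # Identifiers go into the SQL text (backtick-quoted); only the values
    # are left as %s placeholders for the driver to escape.
    sql = (
        "INSERT INTO `%s` (`%s`, `%s`) VALUES (%%s, %%s) "
        "ON DUPLICATE KEY UPDATE `%s` = %%s"
    ) % (table, col1, col2, upd_col)

    with conn.cursor() as cur:
        cur.execute(sql, ("48094", "2014-12-23 10:00:00",
                          "2014-12-23 10:00:00"))
    conn.commit()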