postgresql

How to automatically create a table based on a CSV in Postgres using Python

做~自己de王妃 submitted on 2021-02-11 15:37:12
Question: I am a new Python programmer trying to import a sample CSV file into my Postgres database using a Python script. I have a CSV file named abstable1 with 3 headers: absid, name, number. I have many such files in a folder, and for each of them I want to create a table in PostgreSQL with the same name as the CSV file. Here is the code I tried, to create a table for just one file as a test: import psycopg2 import csv import os #filePath = 'c:\\Python27\\Scripts\\abstable1.csv' conn = psycopg2.connect(
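No answer is included in this excerpt, but a common pattern for the problem it describes is sketched below: derive the table name from the file name, create one TEXT column per CSV header, and bulk-load the rows with COPY. The folder name, connection parameters, and the choice of TEXT for every column are assumptions, not part of the original question.

    import csv
    import os
    import psycopg2
    from psycopg2 import sql

    # Placeholder connection settings -- adjust for your environment.
    conn = psycopg2.connect(host="localhost", dbname="testdb",
                            user="postgres", password="secret")

    def load_csv(path):
        # Table name = file name without extension, e.g. "abstable1".
        table = os.path.splitext(os.path.basename(path))[0]
        with open(path, newline="") as f:
            headers = next(csv.reader(f))   # e.g. ["absid", "name", "number"]
            with conn.cursor() as cur:
                # Create the table with one TEXT column per header (types assumed).
                cur.execute(sql.SQL("CREATE TABLE IF NOT EXISTS {} ({})").format(
                    sql.Identifier(table),
                    sql.SQL(", ").join(
                        sql.SQL("{} TEXT").format(sql.Identifier(h.strip()))
                        for h in headers),
                ))
                # Bulk-load the file; HEADER makes COPY skip the first line.
                f.seek(0)
                cur.copy_expert(
                    sql.SQL("COPY {} FROM STDIN WITH (FORMAT csv, HEADER)").format(
                        sql.Identifier(table)).as_string(conn),
                    f)
        conn.commit()

    for name in os.listdir("csv_folder"):   # hypothetical folder name
        if name.endswith(".csv"):
            load_csv(os.path.join("csv_folder", name))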

Extracting the outputs of several math operations from a single select query

谁说我不能喝 submitted on 2021-02-11 15:30:31
Question: I have three tables that I need to merge for analysis: active, students and bills. 'active' contains records on active students and the subjects they have been active in, with columns: id (student id) int, time (when they were active) timestamp, and subject (the subject in which they were active) text:

    id  time                 subject
    1   2020-04-23 06:53:30  Math
    2   2020-05-13 09:51:22  Physics
    2   2020-02-26 17:34:56  History

'students' is the main table containing: id (student id) int, group (the group to which
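The question text is cut off here, but the usual way to get several computed figures from one SELECT over tables like these is to pre-aggregate each detail table in a subquery and join the results to students, which avoids double counting. The bills columns (student_id, amount) and the chosen metrics below are assumptions for illustration only.

    import psycopg2

    conn = psycopg2.connect("dbname=school user=postgres")  # placeholder DSN

    # One SELECT returning several aggregates per student; "group" must be
    # quoted because it is a reserved word in PostgreSQL.
    query = """
    SELECT s.id,
           s."group",
           COALESCE(act.activity_count, 0) AS activity_count,
           COALESCE(bil.total_billed, 0)   AS total_billed
    FROM students s
    LEFT JOIN (
        SELECT id, COUNT(*) AS activity_count
        FROM active
        GROUP BY id
    ) act ON act.id = s.id
    LEFT JOIN (
        SELECT student_id, SUM(amount) AS total_billed
        FROM bills
        GROUP BY student_id
    ) bil ON bil.student_id = s.id;
    """

    with conn.cursor() as cur:
        cur.execute(query)
        for student_id, group, activity_count, total_billed in cur.fetchall():
            print(student_id, group, activity_count, total_billed)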

How to select columns that have names beginning with the same prefix?

余生长醉 submitted on 2021-02-11 15:30:05
Question: Using PostgreSQL 8.1.11, is there a way to select a set of columns whose names begin with the same prefix? Suppose we have columns: PREFIX_col1, PREFIX_col2, ... Is it possible to write a request like: SELECT 'PREFIX_*' FROM mytable; which of course doesn't work. Answer 1: information_schema.COLUMNS contains all the columns in your db, so you can query for a specific pattern in the name like this: select column_name from information_schema.COLUMNS as c where c.TABLE_NAME = 'mytable' and c.COLUMN
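Building on the information_schema approach from the answer, a two-step version in Python is sketched below: first fetch the matching column names, then compose a SELECT from them. The connection string is a placeholder, and psycopg2's sql module is used only as one convenient way to quote identifiers.

    import psycopg2
    from psycopg2 import sql

    conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN

    with conn.cursor() as cur:
        # Step 1: find every column of mytable starting with the prefix
        # (the backslash escapes the underscore, which LIKE otherwise
        # treats as a single-character wildcard).
        cur.execute(
            """
            SELECT column_name
            FROM information_schema.columns
            WHERE table_name = %s
              AND column_name LIKE %s
            """,
            ("mytable", r"PREFIX\_%"),
        )
        columns = [row[0] for row in cur.fetchall()]

        # Step 2: build and run a SELECT limited to those columns.
        query = sql.SQL("SELECT {} FROM {}").format(
            sql.SQL(", ").join(map(sql.Identifier, columns)),
            sql.Identifier("mytable"),
        )
        cur.execute(query)
        rows = cur.fetchall()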

Postgres: select n equally distributed rows by time over millions of records

[亡魂溺海] submitted on 2021-02-11 15:17:46
Question: I have a table with columns id, filter1, filter2, time, value which contains millions of records. I want to fetch n equally distributed rows between two timestamps. If the number of records between the timestamps is less than n, I want to fetch all of them. My current query looks like the one below, assuming n=200: SELECT s.* FROM ( SELECT t.time, t.value, ROW_NUMBER() OVER(ORDER BY t.time) as rnk, COUNT(*) OVER() as total_cnt FROM table_name t WHERE t.filter1='filter_value' and t.filter2='another_value'
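The excerpt stops before the outer filter, but one common way to finish this kind of query is to keep every k-th numbered row, where k = ceil(total_cnt / n), falling back to all rows when total_cnt <= n. A sketch under those assumptions (placeholder timestamps and connection string, n = 200, yielding roughly n rows):

    import psycopg2

    conn = psycopg2.connect("dbname=metrics user=postgres")  # placeholder DSN

    n = 200
    query = """
    SELECT s.time, s.value
    FROM (
        SELECT t.time, t.value,
               ROW_NUMBER() OVER (ORDER BY t.time) AS rnk,
               COUNT(*)     OVER ()                AS total_cnt
        FROM table_name t
        WHERE t.filter1 = 'filter_value'
          AND t.filter2 = 'another_value'
          AND t.time BETWEEN %(start)s AND %(stop)s
    ) s
    -- keep everything when there are fewer than n rows,
    -- otherwise keep every ceil(total_cnt / n)-th row
    -- (%% is the modulo operator, escaped for psycopg2)
    WHERE s.total_cnt <= %(n)s
       OR (s.rnk - 1) %% CEIL(s.total_cnt::numeric / %(n)s)::int = 0
    ORDER BY s.time;
    """

    with conn.cursor() as cur:
        cur.execute(query, {"start": "2021-01-01", "stop": "2021-02-01", "n": n})
        rows = cur.fetchall()   # roughly n rows, evenly spread over the interval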

What is wrong in this case when returning a set of a custom type from a Postgres function

元气小坏坏 submitted on 2021-02-11 15:08:05
Question: My table structure is: CREATE TABLE dev.clbk_logs ( id bigint NOT NULL, clbk_typ character varying(255) COLLATE pg_catalog."default", clbk_json json, cre_dte timestamp without time zone, ld_id bigint, ld_num character varying(255) COLLATE pg_catalog."default", mod_dte timestamp without time zone, CONSTRAINT clbk_logs_pkey PRIMARY KEY (id) ) WITH ( OIDS = FALSE ) And my stored procedure is as follows: CREATE OR REPLACE FUNCTION dev.get_ranged_loads(p_callback_types TEXT[], p_loads TEXT[], p_days_ago
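The function body is truncated here, so the exact error cannot be reproduced, but a frequent cause with set-returning functions is a mismatch between the declared return type and what the query yields, or ambiguity between output column names and table columns. A hedged alternative is RETURNS TABLE with columns matching clbk_logs, as sketched below; the WHERE clause, the returned columns, and the p_days_ago parameter type are assumptions, not the original code.

    import psycopg2

    conn = psycopg2.connect("dbname=dev user=postgres")  # placeholder DSN

    # RETURNS TABLE declares the output shape inline; RETURN QUERY must select
    # columns in exactly that order with matching types. Columns are qualified
    # with the alias "c" so they cannot clash with the OUT parameter names.
    ddl = """
    CREATE OR REPLACE FUNCTION dev.get_ranged_loads(
        p_callback_types TEXT[],
        p_loads          TEXT[],
        p_days_ago       INTEGER
    )
    RETURNS TABLE (id BIGINT, clbk_typ VARCHAR, ld_num VARCHAR, cre_dte TIMESTAMP)
    LANGUAGE plpgsql AS $$
    BEGIN
        RETURN QUERY
        SELECT c.id, c.clbk_typ, c.ld_num, c.cre_dte
        FROM dev.clbk_logs c
        WHERE c.clbk_typ = ANY (p_callback_types)
          AND c.ld_num   = ANY (p_loads)
          AND c.cre_dte >= now() - make_interval(days => p_days_ago);
    END;
    $$;
    """

    with conn.cursor() as cur:
        cur.execute(ddl)
        cur.execute("SELECT * FROM dev.get_ranged_loads(%s, %s, %s)",
                    (["TYPE_A"], ["123"], 7))
        rows = cur.fetchall()
    conn.commit()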

Kafka JDBC sink with delete=true option: do I have to use record_key?

醉酒当歌 submitted on 2021-02-11 15:02:34
Question: I'd like to read from multiple topics produced by Debezium CDC from a source Postgres database, using the key of the Kafka message, which holds the primary keys. The connector then performs ETL operations in the source database. When I set delete.enabled to true I cannot use the Kafka primary keys; it says I have to specify record_key and pk_fields. My idea is to set a regex to read the multiple desired topics, get the table name from the topic name, and use the primary keys held by the Kafka topic that is currently being read. name
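A sketch of a sink-connector configuration along those lines, registered through the Kafka Connect REST API from Python. With delete.enabled=true the JDBC sink requires pk.mode=record_key (pk.fields left empty then takes every field of the record key), and table.name.format plus a RegexRouter transform can map topic names to table names. The topic regexes, URLs, and connection details below are placeholders, and the Debezium unwrap transform and converters are omitted for brevity.

    import json
    import requests

    config = {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "topics.regex": "dbserver1\\.public\\..*",        # read several CDC topics
        "connection.url": "jdbc:postgresql://target-db:5432/appdb",
        "connection.user": "app",
        "connection.password": "secret",
        "insert.mode": "upsert",
        "delete.enabled": "true",
        "pk.mode": "record_key",    # required when delete.enabled=true
        "pk.fields": "",            # empty = use every field of the Kafka record key
        "table.name.format": "${topic}",   # table name derived from the routed topic
        # strip the "dbserver1.public." prefix so ${topic} becomes the bare table name
        "transforms": "route",
        "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
        "transforms.route.regex": "dbserver1\\.public\\.(.*)",
        "transforms.route.replacement": "$1",
    }

    resp = requests.put(
        "http://localhost:8083/connectors/postgres-sink/config",
        headers={"Content-Type": "application/json"},
        data=json.dumps(config),
    )
    resp.raise_for_status()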

Dokku Compilation Error - django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2'

喜欢而已 submitted on 2021-02-11 14:54:17
Question: I've been attempting to set up my Django instance as a database server. I chose DigitalOcean as my platform and had read that Dokku is a useful PaaS that would enable better scalability for the API I'm trying to deploy. I have been at this problem for the last 3-4 days straight and have gone through every potential solution I could find online. Being more of a front-end developer, I'm pretty bad at this back-end installation matter. At first I thought
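The excerpt ends before any configuration is shown, but a usual cause of this particular error on Dokku/Heroku-style builds is simply that the psycopg2 driver is not listed in requirements.txt, so the buildpack never installs it. A minimal settings sketch, assuming the dokku postgres plugin exposes DATABASE_URL and that psycopg2-binary and dj-database-url are added to requirements.txt:

    # settings.py (sketch) -- "psycopg2-binary" and "dj-database-url" must be
    # listed in requirements.txt so Dokku's Python buildpack installs them.
    import dj_database_url

    DATABASES = {
        # Reads the DATABASE_URL environment variable that the dokku postgres
        # plugin sets when the database is linked to the app.
        "default": dj_database_url.config(conn_max_age=600),
    }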

How to fetch rows with max update datetime using GROUP BY and HAVING with SQLAlchemy and Postgresql

杀马特。学长 韩版系。学妹 submitted on 2021-02-11 14:52:04
Question: I'm moving from SQLite to PostgreSQL. This has made one of my queries stop working. It's not clear to me why this query is allowed in SQLite but not in PostgreSQL. The query in question is in the find_recent_by_section_id_list() function below. I've tried rewriting the query in multiple ways, but what confuses me is that this query worked when I was using SQLite. The setup is Flask, SQLAlchemy, Flask-SQLAlchemy and PostgreSQL. class SectionStatusModel(db.Model): __tablename__ =
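The model definition is cut off above, so the sketch below assumes hypothetical columns (id, section_id, updated). PostgreSQL, unlike SQLite, rejects selecting columns that are neither grouped nor aggregated, so one usual fix is to join against a subquery that picks the max update time per section (DISTINCT ON is another option):

    from sqlalchemy import func

    # Hypothetical model fields: id, section_id, updated (datetime).
    def find_recent_by_section_id_list(section_ids):
        # Latest update time per section, as a subquery.
        latest = (
            db.session.query(
                SectionStatusModel.section_id.label("section_id"),
                func.max(SectionStatusModel.updated).label("max_updated"),
            )
            .filter(SectionStatusModel.section_id.in_(section_ids))
            .group_by(SectionStatusModel.section_id)
            .subquery()
        )

        # Join back to fetch the full rows carrying that max timestamp.
        return (
            db.session.query(SectionStatusModel)
            .join(
                latest,
                (SectionStatusModel.section_id == latest.c.section_id)
                & (SectionStatusModel.updated == latest.c.max_updated),
            )
            .all()
        )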