Published at Feb 20, 2024

Building Python Apps With PostgreSQL: A Developer's Guide


Written by Semab Tariq

When we think about using PostgreSQL and Python together to build our apps, the immediate concern is how to interact with PostgreSQL from Python. For over a decade, we've relied on a trustworthy adapter called psycopg2. It has served us well, offering a solid connection between PostgreSQL and Python.

However, fast-paced technological advancements created the need for a successor to psycopg2. The newer version provides enhanced features, greater flexibility, and better performance. Some of these improvements include server-side binding, fetching all results from one or more executed queries, and support for binary communication.

So, in this blog, we'll explore the following:

  • Building a basic CRUD application in Python

  • A comparison of psycopg2 and psycopg3

  • Key improvements in psycopg3

  • How Timescale simplifies database deployment for developers 

  • Developer FAQs

Interesting fact! There is no package named psycopg3: version 3 of the driver is published simply as psycopg.

How to install psycopg

Binary installation

If you are only interested in the psycopg binary package and want to start development straight away, you can install it with:

pip install "psycopg[binary]"

Note: pip version 20.3 or greater is required. If needed, upgrade your pip installation with: pip install --upgrade pip

Building the package

If you prefer to build psycopg from source, you’ll need the following prerequisites:

  • C compiler

  • Python development headers (e.g., the python3-dev package).

  • PostgreSQL client development headers (e.g., the libpq-dev package).

  • The pg_config program available in your PATH.

Use the following command to build and install the psycopg package from source:

pip install "psycopg[c]"

Create a Simple CRUD Application With Python Using psycopg

Before proceeding with the code, ensure that PostgreSQL is installed on your system and configured to accept connections.

Follow this link to install PostgreSQL on your system: https://www.postgresql.org/download/linux/debian/

After your database is ready, create a new file and insert the following code into it.

import psycopg

# Connection details (replace with your own)
DATABASE_URL = "postgresql://postgres@localhost:5432/postgres"

def create_item(name, description):
    with psycopg.connect(DATABASE_URL) as conn:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO items (name, description) VALUES (%s, %s)", (name, description))
            conn.commit()  # Commit the changes to the database

def read_items():
    with psycopg.connect(DATABASE_URL) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM items")
            return cur.fetchall()  # Fetch all results as a list of tuples

def update_item(id, name, description):
    with psycopg.connect(DATABASE_URL) as conn:
        with conn.cursor() as cur:
            cur.execute("UPDATE items SET name = %s, description = %s WHERE id = %s", (name, description, id))
            conn.commit()

def delete_item(id):
    with psycopg.connect(DATABASE_URL) as conn:
        with conn.cursor() as cur:
            cur.execute("DELETE FROM items WHERE id = %s", (id,))
            conn.commit()

# Creating/inserting a record
create_item("New Item", "This is a new item.")
items = read_items()
print(items)

# Updating a record
update_item(1, "Updated Item", "This item has been updated.")

# Deleting a record
delete_item(2)

Run the code via python3 <FILENAME>.py.
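Note that the script assumes an items table already exists. A minimal sketch of a matching schema (the column types and the serial id column are assumptions, inferred from the INSERT, UPDATE, and DELETE calls above):

```python
# Hypothetical DDL matching the CRUD functions above
ITEMS_DDL = """
CREATE TABLE IF NOT EXISTS items (
    id serial PRIMARY KEY,
    name text NOT NULL,
    description text
)
"""

def ensure_items_table(database_url):
    # psycopg is imported lazily so the module can be loaded without it
    import psycopg
    with psycopg.connect(database_url) as conn:
        conn.execute(ITEMS_DDL)
```

Call ensure_items_table(DATABASE_URL) once before running the CRUD functions.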

A psycopg2 and psycopg3 comparison

| Feature | psycopg2 | psycopg3 |
| --- | --- | --- |
| Code | Written in C | Written in Python |
| Code compatibility | Code written for psycopg2 might require adjustments for psycopg3 | Designed to be backward-compatible with most psycopg2 features |
| Python version support | 2.7 and 3.7+ | 3.8+ |
| Performance | Good | Generally faster due to improved memory management and optimizations |
| Security | Prone to SQL injection vulnerabilities if not careful | Improved security with parameter binding and prepared statements |
| Features | Basic database interaction functionalities | Enhances existing features and introduces new advanced capabilities, including asynchronous operations, pipeline mode, and server-side binding |
| Error handling | Basic error handling mechanisms | Improved error handling with dedicated exception classes and context managers |
| Documentation | Comprehensive, but might require more effort to navigate (https://www.psycopg.org/docs/) | Updated and well-organized, with examples (https://www.psycopg.org/psycopg3/docs/) |

For a detailed performance comparison, check out our psycopg2 vs. psycopg3 performance benchmark.

Enhancements inside psycopg

Binary communication

In psycopg, this functionality involves exchanging binary data, such as images or files, between Python and PostgreSQL. When passing a value via a standard %s placeholder, the most suitable format (text or binary) is chosen automatically, but binary is generally the fastest. You can also force a format explicitly with:

  • %b for binary

  • %t for text

By default, data is returned in text format, but you can use binary=True with the execute function when needed.

# Sending binary data to the database (%b forces the binary format):
cur.execute("INSERT INTO images (data) VALUES (%b)", (image_data,))

# Retrieving data (text format by default):
cur.execute("SELECT data FROM images")
binary_data = cur.fetchone()[0]  # bytea columns come back as bytes

# Requesting results in binary format explicitly (set at execute time):
cur.execute("SELECT data FROM images", binary=True)
binary_data = cur.fetchone()[0]

Raw query cursors

A cursor is a tool used to communicate with a PostgreSQL database by executing SQL queries. Cursors are normally created by the connection's cursor() method.

Imagine you're conversing with a librarian through a special window. You speak into a microphone, asking questions about the books (SQL queries). The librarian, on the other side, listens carefully and retrieves the information you seek, presenting it through the window. This window acts as your cursor.

Now, let's delve deeper into this analogy. Just as in a conversation through the window, you have sequential access and control with a cursor. Each question you ask leads to a specific answer before you move on to the next, mirroring the controlled, sequential manner of communication. It's akin to having a structured dialogue where each query corresponds to a particular response, allowing you to process the results of your query one row at a time.

Moreover, a cursor is stateful, meaning it remembers where you are in the dataset. In our analogy, this is akin to the librarian remembering which questions you've asked and answered so far. This memory enables each new question to be informed by previous interactions, creating a cohesive and informed conversation.

Introduced in psycopg 3.2, raw query cursors let developers use PostgreSQL's native placeholders, such as $1 and $2, instead of the standard %s placeholders typically used in parameterized queries.

from psycopg import connect, RawCursor

with connect(dsn) as conn:
    with RawCursor(conn) as cur:
        cur.execute("SELECT $1, $2", [1, "Hello"])
        assert cur.fetchone() == (1, "Hello")

Server-side binding

This feature in psycopg sends the query and parameters to the database separately, just like a chef getting the recipe and ingredients separately. The database itself combines them and executes the query, like the chef following the recipe.

Note: Server-side binding works for normal SELECT and data manipulation statements INSERT, UPDATE, DELETE, but it doesn’t work with SET or with NOTIFY.

PostgreSQL provides an alternative approach to handle SET with set_config() and NOTIFY with pg_notify(). See the example below:

conn.execute("SELECT set_config('TimeZone', %s, false)", ["UTC"])
conn.execute("SELECT pg_notify(%s, %s)", ["chan", "42"])

Note: In both server-side and client-side scenarios, you can only parameterize values (the parts of a statement that could appear as literals). If you need to parameterize other parts of a statement, such as a table name, you'll need to use the psycopg.sql module.

from psycopg import sql

cur.execute(
    sql.SQL("INSERT INTO {} VALUES (%s, %s)")
        .format(sql.Identifier('my_table')),
    [10, 20])

Server-side binding in psycopg provides numerous benefits, such as enhanced performance, increased security against SQL injection, and native handling of binary data.

Multiple statements in the same query

With server-side binding, if you include parameters in your SQL query, you can't send multiple statements separated by semicolons in a single execute() call. This is because server-side binding needs to carefully match parameters to specific statements, which gets tricky with multiple statements in one go.

Use a client-side cursor (psycopg.ClientCursor) to execute such statements, as in the following example:

cur = psycopg.ClientCursor(conn)
cur.execute(
    "INSERT INTO foo VALUES (%s); INSERT INTO foo VALUES (%s)",
    (10, 20))
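If you would rather keep server-side binding, another option is to issue the statements one at a time. A naive splitting sketch for illustration only (it is an assumption, and it does not handle semicolons inside string literals or comments):

```python
def split_statements(sql_text):
    # Naive split on ';' -- real SQL can contain semicolons in literals
    return [part.strip() for part in sql_text.split(";") if part.strip()]

statements = split_statements(
    "INSERT INTO foo VALUES (%s); INSERT INTO foo VALUES (%s)")
print(statements)
# Each statement could then be passed to conn.execute() individually
```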

Fetch all results

In psycopg2, if you run several commands that produce multiple results, it only shows the final result. However, psycopg provides a more efficient method to handle multiple results. After running a query, the first result is immediately available in the cursor and can be fetched using standard methods. To access subsequent results, you can use the cursor.nextset() method.

>>> cur_pg3.execute("SELECT 1; SELECT 2")
>>> cur_pg3.fetchone()
(1,)
>>> cur_pg3.nextset()
True
>>> cur_pg3.fetchone()
(2,)
>>> cur_pg3.nextset()  # returns None: no more results

Asynchronous operations

psycopg lets you speak to the database without interrupting your Python program's main thread. You can ask questions (queries) asynchronously, meaning they get sent off without pausing everything else. While you wait for the answers, your program can keep doing other tasks and won't get stuck waiting for the slow database reply.

This can be incredibly useful for improving the responsiveness and performance of your code, especially when dealing with long-running queries or frequent communication with the database.
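The non-blocking idea can be illustrated with plain asyncio, no database required (fake_query below is a stand-in for a real await cur.execute call):

```python
import asyncio
import time

async def fake_query(name, delay):
    # Stand-in for `await cur.execute(...)`: the coroutine yields control
    # while "waiting on the database", letting other work proceed
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # Both "queries" are in flight at the same time
    results = await asyncio.gather(fake_query("a", 0.2), fake_query("b", 0.2))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)  # the two waits overlap, so total time is ~0.2s, not 0.4s
```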

AsyncConnection and AsyncCursor

psycopg offers asynchronous versions of Connection and Cursor objects, providing non-blocking methods for executing queries, fetching results, and managing transactions.

In psycopg2, asynchronous operations meant manually driving a wait loop using the connection's fileno() and poll() methods together with Python's select() function:

import select
import psycopg2

def wait(conn):
    while True:
        state = conn.poll()
        if state == psycopg2.extensions.POLL_OK:
            break
        elif state == psycopg2.extensions.POLL_WRITE:
            select.select([], [conn.fileno()], [])
        elif state == psycopg2.extensions.POLL_READ:
            select.select([conn.fileno()], [], [])
        else:
            raise psycopg2.OperationalError("poll() returned %s" % state)

In psycopg, the following example establishes an asynchronous connection to a PostgreSQL database, executes a query asynchronously, and manages the cursor accordingly:

async with await psycopg.AsyncConnection.connect() as aconn:
    async with aconn.cursor() as cur:
        await cur.execute(...)
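Putting it together, a sketch of a reusable async helper (the DSN and query are placeholders, and psycopg is imported lazily so the function can be defined without the package installed):

```python
import asyncio

async def fetch_one(dsn, query):
    # Requires the psycopg package (and a reachable server) at call time
    import psycopg
    async with await psycopg.AsyncConnection.connect(dsn) as aconn:
        async with aconn.cursor() as cur:
            await cur.execute(query)
            return await cur.fetchone()

# Usage (needs a running database):
# row = asyncio.run(fetch_one("postgresql://localhost/postgres", "SELECT 1"))
```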

Using COPY TO and COPY FROM

psycopg provides enhanced capabilities over psycopg2 when dealing with COPY TO and COPY FROM operations, leveraging the PostgreSQL COPY protocol, known for its efficiency in data loading and modification using SQL techniques.

The following code copies rows from the specified table to standard output and makes it ready for further processing:

with cur.copy("COPY (SELECT * FROM table_name LIMIT %s) TO STDOUT", (3,)) as copy:
    for data in copy:
        process(data)  # process() is a placeholder for your own handling

With a copy operation, you can import data into the database from various Python structures like a list of tuples or any iterable sequence.

records = [(10, 20, "hello"), (40, None, "world")]

with cursor.copy("COPY sample (col1, col2, col3) FROM STDIN") as copy:
    for record in records:
        copy.write_row(record)

If the data is already formatted in a way suitable for COPY, it can be loaded into the database using copy.write():

with open("data", "r") as f:
    with cursor.copy("COPY data FROM STDIN") as copy:
        while data := f.read(BLOCK_SIZE):
            copy.write(data)

It's like copying and pasting chunks of the file into the database table.

Connection pools

Connection pooling was supported in psycopg2, although it wasn't as robust and advanced as the features offered by its successor, psycopg. 

Install the required package to use connection pooling with psycopg:

pip install psycopg_pool

Here is an example of acquiring a connection from the pool and executing queries with it. When the connection() context exits, the transaction is committed or rolled back and the connection is returned to the pool; when the pool context exits, all pool resources are released.

from psycopg_pool import ConnectionPool

with ConnectionPool(...) as pool:
    with pool.connection() as conn:
        conn.execute("SELECT something FROM somewhere ...")
        with conn.cursor() as cur:
            cur.execute("SELECT something else...")
    # At the end of the `connection()` context, the transaction is committed
    # or rolled back, and the connection is returned to the pool
# At the end of the pool context, all resources used by the pool are released

If your application uses async code, you can use the AsyncConnectionPool to achieve connection pooling:

async with AsyncConnectionPool(...) as pool:
    async with pool.connection() as conn:
        await conn.execute("SELECT something FROM somewhere ...")
        async with conn.cursor() as cur:
            await cur.execute("SELECT something else...")

Pipeline mode

psycopg2 doesn't include pipeline mode; psycopg introduced it in version 3.1. This mode allows sending multiple queries to the PostgreSQL server without waiting for each one to complete before sending the next, which can enhance performance for applications with numerous small, frequent queries.

In psycopg, Pipeline mode is supported through the connection.pipeline() method. 

Here is an example of how you can execute multiple queries in pipeline mode with conn.pipeline():

with conn.pipeline():
    conn.execute("INSERT INTO mytable VALUES (%s)", ["hello"])
    with conn.cursor() as cur:
        cur.execute("INSERT INTO othertable VALUES (%s)", ["world"])
        cur.executemany(
            "INSERT INTO elsewhere VALUES (%s)",
            [("one",), ("two",), ("four",)])

Pipeline and asynchronous modes have some similarities but differ in certain aspects:

| Feature | Pipeline mode | Asynchronous |
| --- | --- | --- |
| Sending queries | Batch | Individual |
| Receiving results | Batch, potentially out of order | Individual, in order |
| Execution | Synchronous | Non-blocking with await |
| Best suited for | Many small, independent queries | Longer-running queries, concurrent requests |

If you want to build an application using Django, check out this video.

Timescale: Deploy Your PostgreSQL Server in Minutes

Setting up databases can be tough and time-consuming for application developers. It takes a lot of effort and time to configure everything correctly. But, with Timescale, things get much easier. With just a few clicks, developers can set up their database servers within minutes. 

This means less hassle and more time for them to concentrate on their application development.

Want to learn more about how to build your application using PostgreSQL/Timescale? Be sure to check the following articles—we’re taking you from data migration to monitoring your database.

  • How to Migrate Your Data to Timescale (3 Ways)

  • Bonus: Psycopg2 vs Psycopg3 Performance Benchmark

  • Data Visualization in PostgreSQL With Apache Superset

  • 5 Ways to Monitor Your PostgreSQL Database

Timescale provides a 30-day free trial, allowing you to create up to three services at no charge during that period. To begin, sign up for Timescale.

Once you've signed up and verified your email, you are all set to create your first service.

Here's what's available: 

  • Time Series and Analytics: PostgreSQL, but faster. Built for lightning-fast ingest and querying of time-based data. Perfect for sensor data, financial data, events, or any other dataset indexed by timestamp or incrementing key. Optimized for performance and cost-efficiency at scale.

Note: Timescale offers support for connection pooling through pgbouncer. During the service launch, you can choose whether to turn on or off connection pooling as required.

After submitting all the details and completing the service creation, your database server should be ready in a couple of minutes. Once it's done, you will see the psql commands to connect to your database. You can also download a file in plain text containing all the details about your service, including your username and password. Ensure that you save this file in a secure location.

Let's see how the code above can be updated to work with our tsdb database hosted on Timescale.

Modify only the DATABASE_URL, keeping the rest of the code unchanged.

DATABASE_URL = "postgres://<USERNAME>:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"

Refer to the <SERVICENAME>-credentials.sql file you downloaded earlier for all the necessary information.

Check out the Timescale documentation to learn more about Timescale cloud services.

Developer FAQs

What kinds of applications can be developed using Python?

Python can be used to develop a wide range of applications:

  • Web applications

  • Data analysis tools

  • Artificial intelligence and machine learning applications

  • Automation scripts

  • Desktop GUI applications

  • Network servers, and more.

I installed psycopg via pip, but I am still facing a Python import error.

Make sure that you are using the correct Python environment. Ensure that you've activated the virtual environment or are working in the correct Python environment where psycopg is installed. Also, ensure that you install the psycopg package and run your application using the same user to avoid potential issues.
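A quick stdlib-only way to see which interpreter and environment you are actually running, and whether psycopg is visible to it:

```python
import sys
import importlib.util

print("Interpreter:", sys.executable)
print("Environment root:", sys.prefix)

# find_spec() returns None when the package isn't importable
spec = importlib.util.find_spec("psycopg")
print("psycopg importable:", spec is not None)
```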

Can I use Docker for my development?

Yes, you can use Docker for your development. Docker provides a containerization platform that allows you to package your application and its dependencies into a container. This ensures consistency across different environments and makes it easier to manage dependencies. Docker containers can be easily deployed and run on various systems, providing a consistent development and deployment environment for your applications.

How do I hide database credentials?

To hide your database credentials in Python, you can use environment variables or a configuration file.
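For example, with environment variables (PGUSER, PGPASSWORD, and friends are the conventional libpq names, which psycopg also reads automatically; the setdefault calls below exist only to make the sketch self-contained):

```python
import os

# For demonstration only: in practice these come from your shell or a
# secrets manager, never from source code
os.environ.setdefault("PGUSER", "postgres")
os.environ.setdefault("PGPASSWORD", "secret")

def dsn_from_env():
    user = os.environ["PGUSER"]
    password = os.environ["PGPASSWORD"]
    host = os.environ.get("PGHOST", "localhost")
    port = os.environ.get("PGPORT", "5432")
    dbname = os.environ.get("PGDATABASE", "postgres")
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"

print(dsn_from_env())
```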

Can I use PostgreSQL for web and mobile app development?

Yes, PostgreSQL is well-suited for both web and mobile app development. It provides features like reliability, scalability, support for complex queries, and a wide range of data types. So, it can easily be used to build web and mobile applications. 

Can I store JSON data inside the PostgreSQL?

Yes, PostgreSQL supports storing and querying JSON data. It has native support for JSON and JSONB (binary JSON) data types, allowing you to store, retrieve, and manipulate JSON documents within the database. Learn more.
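A brief sketch of how that looks from Python (the events table is hypothetical, and psycopg's Jsonb wrapper is imported lazily so the pure-Python part of the snippet runs on its own):

```python
import json

event = {"device": "sensor-1", "temp": 21.5}

# What travels to the server is a JSON representation of the dict:
print(json.dumps(event))

def insert_event(conn, payload):
    # Requires psycopg at call time; "events" is a hypothetical table
    from psycopg.types.json import Jsonb
    conn.execute("INSERT INTO events (doc) VALUES (%s)", [Jsonb(payload)])
```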

What is the difference between Rest API and CRUD?

CRUD is a set of operations (Create, Read, Update, Delete) for data manipulation, while REST API is an architectural style for web services design that can include CRUD functionality.
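As a rough illustration, here is the conventional mapping between CRUD operations and REST/HTTP methods (the /items routes are hypothetical):

```python
# Conventional CRUD-to-HTTP mapping for a hypothetical /items resource
CRUD_TO_HTTP = {
    "create": ("POST",   "/items"),
    "read":   ("GET",    "/items/<id>"),
    "update": ("PUT",    "/items/<id>"),
    "delete": ("DELETE", "/items/<id>"),
}

for op, (method, route) in CRUD_TO_HTTP.items():
    print(f"{op:6} -> {method} {route}")
```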
