Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
Webserver to control DSLR Camera

Hi, as the title says, I am planning to build a webserver that helps users control a DSLR camera (capture, timelapse, change settings, etc.) with Flask. My idea is:

Front-end: HTML, CSS, JS
Back-end: Python, Flask
Library to interact with camera: libgphoto2
Others: Nginx + Cloudflare Tunnel

The workflow will be: the user controls the camera through the web interface -> JS listens for user actions and calls the API via fetch -> the Flask app calls a controller class method (using libgphoto2) -> the result is returned with jsonify -> JS displays it.
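
To make the flow concrete, here is a minimal sketch of the Flask side (assuming the python-gphoto2 bindings; the CameraController wrapper and the /api/capture route are just illustrative names):

```python
# Minimal sketch, assuming the python-gphoto2 bindings; the
# CameraController wrapper and route name are illustrative only.
import gphoto2 as gp
from flask import Flask, jsonify

app = Flask(__name__)

class CameraController:
    def __init__(self):
        self.camera = gp.Camera()
        self.camera.init()  # autodetects the first connected camera

    def capture(self):
        # Trigger a single shot; libgphoto2 returns the on-camera file path
        path = self.camera.capture(gp.GP_CAPTURE_IMAGE)
        return {"folder": path.folder, "name": path.name}

controller = CameraController()

@app.route("/api/capture", methods=["POST"])
def capture():
    try:
        return jsonify(controller.capture())
    except gp.GPhoto2Error as exc:  # raised on camera/USB faults
        return jsonify({"error": str(exc)}), 500
```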

Do you guys think it's fine?

English is not my first language, so sorry for any grammar mistakes.


/r/flask
https://redd.it/1klfpd6
Laptop Suggestion

Help me select a laptop for learning JavaScript and Python.

/r/djangolearning
https://redd.it/1kldymu
[D] Had an AI Engineer interview recently and the startup wanted to fine-tune sub-80B parameter models for their platform, why?

I'm a Full-Stack engineer working mostly on serving and scaling AI models.
For the past two years I've worked with startups on AI products (an AI exec coach), and we usually decided to go the fine-tuning route only when prompt engineering and tooling were insufficient to produce the quality we wanted.

Yesterday I had an interview with a startup that builds a no-code agent platform, which insisted on fine-tuning the models they use.

As someone who hasn't done fine-tuning in the last 3 years, I was wondering what the use case for it would be and, more specifically, why it would make economic sense, considering the costs of collecting and curating data for fine-tuning, building the pipelines for continuous learning, and the training itself, especially when there are competitors who serve a similar solution through prompt engineering and tooling, which are faster to iterate on and cheaper.

Has anyone here arrived at a problem where fine-tuning was a better solution than better prompt engineering? What was the problem, and what drove the decision?

/r/MachineLearning
https://redd.it/1klf53p
Querying 10M rows in 11 seconds: Benchmarking ConnectorX, Asyncpg and Psycopg vs QuestDB

A colleague asked me to review our database's updated query documentation. I ended up benchmarking various Python libraries that connect to QuestDB via the PostgreSQL wire protocol.

Spoiler: ConnectorX is fast, but asyncpg also very much holds its own.

Comparisons of dataframes vs. row iteration aren't exactly apples-to-apples, since dataframes avoid iterating the result set in Python, but they provide a frame of reference, since at times it's easiest to manipulate the data in tabular format.

I'm posting in case anyone finds these benchmarks useful; I suspect they'd hold across different database vendors too. I'd be curious if anyone has further experience on how to optimise throughput over the PG wire protocol.
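
For reference, a minimal asyncpg fetch looks roughly like this (not the benchmark code from the repo; 8812/admin/quest/qdb are QuestDB's PG-wire defaults, and the trades table is made up):

```python
# Not the repo's benchmark code; a minimal asyncpg fetch for reference.
# 8812/admin/quest/qdb are QuestDB's PG-wire defaults; "trades" is made up.
import asyncio
import asyncpg

async def fetch_all():
    conn = await asyncpg.connect(
        host="localhost", port=8812,
        user="admin", password="quest", database="qdb")
    try:
        # fetch() pulls the whole result set into a list of Record objects
        return await conn.fetch("SELECT * FROM trades")
    finally:
        await conn.close()

rows = asyncio.run(fetch_all())
print(len(rows))
```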

Full code and results and summary chart: https://github.com/amunra/qdbc


/r/Python
https://redd.it/1klke8k
React + Node | React + Django

I’ve tried React with Node, but React + Django just feels so clean and comfy.

Django gives me user auth, admin panel, and API tools (thanks DRF!) right out of the box. No need to set up everything from scratch.

It’s like React is the fun frontend friend, and Django is the reliable backend buddy who takes care of all the serious stuff.

/r/django
https://redd.it/1klndck
Typesafety vs Performance Trade-Off - Looking for Middle Ground Solution

Hey all,

I'm a fullstack dev absolutely spoiled by the luxuries of Typescript. I'm currently working on a Python BE codebase without an ORM. The code is structured as follows:

- We have a query.py file for every service where our SQL queries are defined. These are Python functions that take some ad-hoc parameters and format them into equally ad-hoc SQL query strings.
- Every query function returns untyped dictionaries/lists of results.
- It's only at the route layer that we marshal the dictionaries into Pydantic models as needed by the response object (we are using FastAPI).

The reason for this (as I was told) was performance since they don't want to have to do multiple transformations of Pydantic models between different layers of the app. Our services are indeed very data intensive. But most query results are trimmed to at most 100 rows at a time anyway because of pagination. I am very skeptical of this excuse - performance is typically the last thing I'd want to hyper-optimize when working with Python.
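
One middle ground I've been toying with (a sketch, not our actual code): TypedDict annotations on the query functions, which give the type checker the row shape with zero runtime cost. Pydantic v2's model_construct() also builds a model without running validation, if the route layer turns out to be the concern.

```python
# A sketch, not our actual code: TypedDict types the rows for
# mypy/pyright with zero runtime validation or copying.
from typing import TypedDict

class UserRow(TypedDict):
    id: int
    email: str

def run_query(sql: str, params: list) -> list[dict]:
    # Hypothetical stand-in for the existing ad-hoc SQL helpers
    return [{"id": 1, "email": "a@example.com"}]

def get_users(limit: int = 100) -> list[UserRow]:
    rows = run_query("SELECT id, email FROM users LIMIT %s", [limit])
    return rows  # still plain dicts at runtime; typed for the checker
```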

Do you guys know of any middle-ground solution to this? Perhaps some kind of wrapper class that only validates fields being accessed on the object? In your experience, is the overhead

/r/Python
https://redd.it/1klsydh
[D] Reviewer cited a newer arXiv paper as prior work and ours was online earlier. How to handle in rebuttal?

I'm currently going through the rebuttal phase of ICCV, and encountered a situation I’d appreciate some advice on.

One of the reviewers compared our submission to a recent arXiv preprint, saying our approach lacks novelty due to similarities. However, our own preprint (same methodology as our ICCV submission, with only writing changes) was publicly available before the other paper appeared. We did not cite our preprint in the submission (as it was non-peer-reviewed and citation was optional), but now that decision seems to be backfiring.

We developed the method independently, and the timeline clearly shows ours was available first. But since we didn’t cite it, the reviewer likely assumed the other work came first.

Given the double-blind review process, what’s the best way to clarify this in a rebuttal without violating anonymity? We don’t want to say too much and break policy, but we also don’t want to be penalized for something we didn’t copy.

Has anyone dealt with this kind of situation before?

/r/MachineLearning
https://redd.it/1klppvn
I built a full-stack Smart Pet Tag Platform with Django, NFC, and QR code scanning

I recently launched SmartTagPlatform.com — a custom-built Django web app that links NFC and QR-enabled pet tags to an online pet profile.

Tech stack: Django, PostgreSQL, Bootstrap, Docker, hosted on a VPS.

Tags are already in the wild. Scans log location/IP and show a contact page to help reunite pets faster.

It’s just me building this so far, but I’m exploring new features like lost pet alerts, GPS integration (AirTag/Tile), and subscription models.

Would love to hear feedback from other devs or anyone building something similar.

/r/django
https://redd.it/1klfrbn
🛠️ Tired of Pytest Fixture Weirdness? You’re Not Alone.

# I just released a small but mighty tool called **pytest-fixturecheck** – and I’d love to hear your thoughts.

Why this exists:
On a fast-moving Django project with lots of fixtures, we kept running into bizarre test failures. It turns out broken fixtures, caused by changes in model attributes, were breaking tests in a different part of the project. The tests themselves weren't the problem – the fixtures were! 😖

Enter **fixturecheck**:

* Decorate your fixtures, and stop worrying
* Automatically catch when the inputs change in unexpected ways
* Spot unused fixtures and over-injection
* Add type/value checks to make sure your fixtures behave as you expect
* Works in Django, Wagtail, or plain Python projects

It’s flexible, lightweight, and takes minutes to set up. But it’s already saved us hours of painful debugging.
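
Roughly, usage looks like this (a simplified sketch; the decorator order and import path are assumptions, so check the README for the exact API):

```python
# Simplified sketch; decorator order and import path are assumptions,
# see the repo README for the exact API.
import pytest
from pytest_fixturecheck import fixturecheck  # assumed import path

@pytest.fixture
@fixturecheck()  # validates the fixture up front, not mid-test (assumed)
def config():
    return {"retries": 3, "timeout": 30}

def test_retries(config):
    assert config["retries"] == 3
```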

If you’ve run into similar fixture headaches, I’d love to hear:

How you manage fixture sanity in big projects
Whether this tool helps catch the kinds of bugs you’ve seen
Any ideas for making it smarter!

Repo here: https://github.com/topiaruss/pytest-fixturecheck
Happy testing! 🧪

/r/django
https://redd.it/1km1b6x
Too many installed django apps?

Hi all, I want to break down my application into smaller bits. I just have this huge Django monolith that takes forever to build in the container.

What are people doing to break down a Django app? I was thinking of using a few services outside Django and making REST calls to them. Now I'm thinking about the security of that.

I wonder what others do in this scenario.

/r/django
https://redd.it/1klljn8
Wednesday Daily Thread: Beginner questions

# Weekly Thread: Beginner Questions 🐍

Welcome to our Beginner Questions thread! Whether you're new to Python or just looking to clarify some basics, this is the thread for you.

## How it Works:

1. Ask Anything: Feel free to ask any Python-related question. There are no bad questions here!
2. Community Support: Get answers and advice from the community.
3. Resource Sharing: Discover tutorials, articles, and beginner-friendly resources.

## Guidelines:

This thread is specifically for beginner questions. For more advanced queries, check out our [Advanced Questions Thread](#advanced-questions-thread-link).

## Recommended Resources:

If you don't receive a response, consider exploring r/LearnPython or joining the Python Discord Server for quicker assistance.

## Example Questions:

1. What is the difference between a list and a tuple?
2. How do I read a CSV file in Python?
3. What are Python decorators and how do I use them?
4. How do I install a Python package using pip?
5. What is a virtual environment and why should I use one?

Let's help each other learn Python! 🌟

/r/Python
https://redd.it/1km1dwe
[D] Why do people (mostly in media, not in AI/ML research) talk about Meta as if it is behind in the AI industry?

I’ve heard this from a few places, mostly news clips and YouTube channels covering AI developments, but why do people say that Meta is “behind” in the AI industry compared to Google, OpenAI, Microsoft, Amazon, etc.? I’ve always highly revered Meta, Yann LeCun, and FAIR for open-sourcing their contributions, and they do very good research. I read quite a few papers from FAIR researchers. So in what sense do people think they are behind, or is that just ill-informed?

/r/MachineLearning
https://redd.it/1klnby4
FastApi vs Django Ninja vs Django for API only backend

I've been reading posts in this and other Python subs debating these frameworks and why one is better than another. I'm tempted to try the new, cool thing, but I use Django with GraphQL at work and it's been stable so far.

I am planning to build an app that will be a CRUD app that needs an ORM, but it will also use LLMs for chatbots on the frontend. I only want Python for an API layer; I will use Next on the frontend. I don't think I need an admin panel. I will also be querying data from BigQuery, likely more and more as I keep building out the app and adding users and data.

Here is what I keep mulling over:
- Django Ninja - seems like a good solution for my use cases. The problem is that it has one maintainer, who lives in a war-torn country, and a backlog of GitHub issues. I saw that a fork called Django Shinobi was already created from this project, so that makes me more hesitant to use this framework.

- FastAPI - I started with this but then started looking at ORMs I

/r/Python
https://redd.it/1km6goh
Libraries for Flask+htmx?

Hi everyone!
I'm interested in Flask + htmx for hobby projects, and I'd like to know, from those with experience with it, whether you use libraries to simplify this kind of work.
htmx is great, but writing the HTML in all responses can be annoying. FastHTML introduced an API to generate HTML from pure Python for this reason.
Do you use a library like that, or maybe some other useful tools?
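
For context, this is the kind of hand-written fragment endpoint I mean (a toy sketch):

```python
# Toy sketch of the pattern in question: each htmx request gets back an
# HTML fragment that htmx swaps into the page, written by hand here.
from datetime import datetime
from flask import Flask, render_template_string

app = Flask(__name__)

@app.route("/clicked", methods=["POST"])
def clicked():
    # htmx swaps this fragment into the page; no full reload
    return render_template_string("<p>You clicked at {{ t }}</p>",
                                  t=datetime.now())
```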

/r/flask
https://redd.it/1klybvh
Small Propositional Logic Proof Assistant

Hey r/Python!

I just finished working on **Deducto**, a minimalistic assistant for working with propositional logic in Python. If you're into formal logic, discrete math, or building proof tools, this might be interesting to you!

# What My Project Does

Deducto lets you:

* Parse logical expressions involving `AND`, `OR`, `NOT`, `IMPLIES`, `IFF`, and more.
* Apply formal transformation rules like:
  * Commutativity, Associativity, Distribution
  * De Morgan’s Laws, Idempotency, Absorption, etc.
* Justify each step of a transformation to construct equivalence proofs.
* Experiment with rewriting logic expressions step-by-step using a rule engine.
* Extend the system with your own rules or syntax easily.
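
As a flavor of what a rewrite rule does (a toy illustration, not Deducto's actual API):

```python
# Toy illustration of a rewrite rule, not Deducto's actual API:
# De Morgan's law rewrites NOT(a AND b) into NOT(a) OR NOT(b).
from dataclasses import dataclass

@dataclass(frozen=True)
class Not:
    x: object

@dataclass(frozen=True)
class And:
    a: object
    b: object

@dataclass(frozen=True)
class Or:
    a: object
    b: object

def de_morgan(expr):
    if isinstance(expr, Not) and isinstance(expr.x, And):
        return Or(Not(expr.x.a), Not(expr.x.b))
    return expr  # rule doesn't apply; leave the expression unchanged

print(de_morgan(Not(And("A", "B"))))  # Or(a=Not(x='A'), b=Not(x='B'))
```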

# Target Audience

This was built as part of a **Discrete Mathematics** project. It's intended for:

* Students learning formal logic or equivalence transformations
* Educators wanting an interactive tool for classroom demos
* Anyone curious about symbolic logic or proof automation

While it's not as feature-rich as Lean or Coq, it aims to be lightweight and approachable — perfect for educational or exploratory use.

# Comparison

Compared to theorem provers like Lean or proof tools in Coq, Deducto is:

* Much simpler
* Focused purely on propositional logic and equivalence transformations
* Designed to be easy to read, extend, and play with — especially for beginners

If you've ever wanted

/r/Python
https://redd.it/1kmf7pe
CSV Export Truncates Records with Special Characters

I’m using django-import-export to export CSV/XLSX files. However, when the data contains certain special characters, the CSV output truncates some records.

Here’s my custom response class:

```python
from django.http import HttpResponse
from django.conf import settings
from import_export.formats.base_formats import XLSX, CSV

class CSVorXLSXResponse(HttpResponse):
    '''
    Custom response object that accepts datasets and returns it as csv or excel
    '''

    def __init__(self, dataset, export_format, filename, *args, **kwargs):
        if export_format == 'csv':
            data = CSV().export_data(dataset, escape_formulae=settings.IMPORT_EXPORT_ESCAPE_FORMULAE_ON_EXPORT)
            content_type = 'text/csv; charset=utf-8'
        else:
            data = XLSX().export_data(dataset, escape_formulae=settings.IMPORT_EXPORT_ESCAPE_FORMULAE_ON_EXPORT)
```


/r/djangolearning
https://redd.it/1kmfu2r
synchronous vs asynchronous

Can you recommend a YouTube video that explains synchronous vs asynchronous programming in depth?

/r/djangolearning
https://redd.it/1kkxxn0
sqlalchemy-memory: a pure‑Python in‑RAM dialect for SQLAlchemy 2.0

# What My Project Does

sqlalchemy-memory is a fast in‑RAM SQLAlchemy 2.0 dialect designed for prototyping, backtesting engines, simulations, and educational tools.

It runs entirely in Python; no database, no serialization, no connection pooling. Just raw Python objects and fast logic.

* SQLAlchemy Core & ORM support
* No I/O or driver overhead (all in-memory)
* Supports group_by, aggregations, and case() expressions
* Lazy query evaluation (generators, short-circuiting, etc.)
* Indexes are supported; SELECT queries are optimized using available indexes to speed up equality and range-based lookups.
* Commit/rollback simulation
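
A usage sketch (the memory:// URL is an assumption based on the project name; see the docs link below for the actual connection string):

```python
# Usage sketch; the "memory://" URL is an assumption from the project
# name, so check the linked docs for the actual connection string.
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Trade(Base):
    __tablename__ = "trades"
    id: Mapped[int] = mapped_column(primary_key=True)
    symbol: Mapped[str]

engine = create_engine("memory://")  # assumed dialect URL
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Trade(id=1, symbol="AAPL"))
    session.commit()
    print(session.scalars(select(Trade)).all())
```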

# Links

[GitHub Project link](https://github.com/rundef/sqlalchemy-memory)
Documentation link
[Benchmarks vs SQLite in-memory](https://sqlalchemy-memory.readthedocs.io/en/latest/benchmarks.html)
Blogpost: Beyond SQLite: Supercharging SQLAlchemy with a Pure In-Memory Dialect

# Why I Built It

I wanted a backend that:

* Behaved like a real SQLAlchemy engine (ORM and Core)
* Avoided SQLite/driver overhead
* Let me prototype quickly with real queries and relationships

# Target audience

* Backtesting engine builders who want a lightweight, in‑RAM store compatible with their ORM models
* Simulation and modeling developers who need high-performance in-memory logic without spinning up a database
* Anyone tired of duplicating business logic between an ORM and a memory data layer

Note: It's not a full SQL engine: don't use it to unit test DB behavior or verify SQL standard conformance. But for in‑RAM logic with SQLAlchemy-style

/r/Python
https://redd.it/1kmg3db
DBOS - Lightweight Durable Python Workflows

Hi r/Python – I’m Peter and I’ve been working on DBOS, an open-source, lightweight durable workflows library for Python apps. We just released our 1.0 version and I wanted to share it with the community!

GitHub link: https://github.com/dbos-inc/dbos-transact-py

# What My Project Does

DBOS provides lightweight durable workflows and queues that you can add to Python apps in just a few lines of code. It’s comparable to popular open-source workflow and queue libraries like Airflow and Celery, but with a greater focus on reliability and automatically recovering from failures.

Our core goal in building DBOS is to make it lightweight and flexible so you can add it to your existing apps with minimal work. Everything you need to run durable workflows and queues is contained in this Python library. You don’t need to manage a separate workflow server: just install the library, connect it to a Postgres database (to store workflow/queue state) and you’re good to go.
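
Roughly, a durable workflow looks like this (a simplified sketch; see the GitHub repo for complete, up-to-date examples):

```python
# Simplified sketch; see the GitHub repo for complete, up-to-date examples.
from dbos import DBOS

DBOS()  # workflow/queue state is checkpointed in Postgres

@DBOS.step()
def charge_card(amount_cents: int) -> str:
    # A step's completion is durably recorded before the workflow moves on
    return f"charged {amount_cents}"

@DBOS.workflow()
def payment_workflow(amount_cents: int) -> str:
    receipt = charge_card(amount_cents)
    # If the process crashes here, recovery resumes after charge_card
    # instead of re-running it.
    return receipt

DBOS.launch()
print(payment_workflow(1999))
```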

# When Should You Use My Project?

You should consider using DBOS if your application needs to reliably handle failures. For example, you might be building a payments service that must reliably process transactions even if servers crash mid-operation, or a long-running data pipeline that needs to resume from checkpoints rather

/r/Python
https://redd.it/1kml2h9