Python Daily
Daily Python News
Question, Tips and Tricks, Best Practices on Python Programming Language
Find more reddit channels over at @r_channels
Are type hints as valuable / expected in Python as in TypeScript?

Whether you're working by yourself or in a team, to what extent is it commonplace and/or expected to use type hints in functions?
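For readers weighing in, a minimal illustration (function and names invented here) of what hints buy you: a checker such as mypy or pyright can flag a bad call before runtime, and the signature documents intent for teammates.

```python
def mean(values: list[float]) -> float:
    """Return the arithmetic mean; the hints let static checkers catch misuse."""
    if not values:
        raise ValueError("values must be non-empty")
    return sum(values) / len(values)
```

With hints, `mean("abc")` is a static error instead of a confusing runtime one.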

/r/Python
https://redd.it/1m4ofoe
What's bad about using views.py as a ghetto API?

I've increasingly realized that I've started using views.py as a simple way to implement APIs. DRF just requires so much overhead(!) and is a royal pain in the ass imo – Ninja I haven't tried yet, because why bother

Most APIs are simply "action" APIs or a simple retrieve API. Mostly I don't need fancy CRUD operations. But I'm wondering why more people aren't doing it the way I am, so I wanted to ask: is there an inherent issue with misusing views.py for ghetto APIs?

They're just so easy to read and maintain! No nested classes, just a couple lines of code.

Examples:

@csrf_exempt
def push_gateway(request):
    """Quick and dirty API to centrally handle webhooks / push notifications"""
    if request.method != 'POST':
        return JsonResponse({'status': 'error'})
    try:
        user, token = TokenAuthentication().authenticate(request)
    except AuthenticationFailed:


/r/django
https://redd.it/1m4l8mz
Beginner in Django — Need Advice to Improve My Programming Thinking and Learn Django Properly

Hi everyone

I started learning Python about 5 months ago. I work at a small company — at first, they had me do web scraping tasks. After a while, they asked me to start learning Django so I could work on their internal projects.

Right now, I’m facing two main challenges and would really appreciate your advice:

1. I feel like my programming logic and thinking need improvement. Sometimes I can understand code, but I don’t fully understand why it was written that way or how I could come up with the solution myself. Should I start learning data structures, algorithms, and OOP to develop this kind of thinking? If yes, any recommended beginner-friendly resources?
2. I want to learn Django from scratch, step by step, and really understand how it works. Most tutorials I’ve found either move too fast or assume I already understand certain concepts — which I don’t. I’m looking for a structured and beginner-friendly course/book/resource that builds a solid foundation.

I'm very motivated to learn but also feeling a bit lost and overwhelmed right now.
If you have any tips, learning paths, or personal advice, I’d be super grateful 🙏

Thanks in advance to everyone who helps ❤️

/r/django
https://redd.it/1m4q97n
API Driven Waffle Feature Flags

Hey all,

Wondering how this ought to work. Right now we're dealing with a distributed monolith. One repo produces a library and SQLAlchemy migrations for a database ("a"); our Django repo imports the library and the models for that database (along with producing its own models for another database, "b").


The migrations for database a are run independently of both repos as part of a triggerable job. This creates an unpleasant race condition between the Django app and database a.

I was thinking that an API-driven feature flag would be a good solution here, as the flag could be flipped after the migration runs. This would decouple releasing Django app changes from running database a migrations.

We're in AWS on EKS

To that end, I'm kind of struggling to think of a clean implementation (i.e. not a Rube Goldberg machine), but I'm not really a web dev or a Django developer and my knowledge of Redis is pretty non-existent. The best I've gotten so far is...

- Create an ElastiCache Redis serverless instance, as they appear quite cheap and I just wrote the Terraform for this for another app.

- Create an interface devs can use to populate/update the redis instance with feature flags.

- install a new

/r/django
https://redd.it/1m4on3p
KvDeveloper Client – Expo Go for Kivy on Android

[KvDeveloper Client](https://github.com/Novfensec/KvDeveloper-Client)

[Live Demonstration](https://youtu.be/-VTCTNmHB94)

Instantly load your app on mobile via QR code or server URL. Experience blazing-fast Kivy app previews on Android with KvDeveloper Client. It's the Expo Go for Python devs: hot reload without the hassle.

# What My Project Does

KvDeveloper Client is a **mobile companion app that enables instant, hot-reloading previews of your Kivy (Python) apps directly on Android devices—no USB cable or apk builds required**. By simply starting a development server from your Kivy project folder, you can scan a QR code or input the server’s URL on your phone to instantly load your app with real-time, automatic updates as you edit Python or KV files. This workflow mirrors the speed and seamlessness of Expo Go for React Native, but designed specifically for Python and the Kivy framework.

**Key Features:**

* Instantly preview Kivy apps on Android without manual builds or installation steps.
* Real-time updates on file change (Python, KV language).
* Simple connection via QR code or direct server URL.
* Secure local-only sync by default, with opt-in controls.

# Target Audience

This project is ideal for:

* **Kivy developers** seeking faster iteration cycles and more efficient UI/logic debugging on real devices.
* Python enthusiasts interested in mobile development without the overhead of traditional Android build processes.
*

/r/Python
https://redd.it/1m4nul8
Riot-style voice-to-text chat app I built using Flask — would love feedback!

/r/flask
https://redd.it/1m4p2f1
Using asyncio for cooperative concurrency

I am writing a shell in Python, and recently posted a question about concurrency options (https://www.reddit.com/r/Python/comments/1lyw6dy/pythons_concurrency_options_seem_inadequate_for). That discussion was really useful, and convinced me to pursue the use of asyncio.

If my shell has two jobs running, each of which does IO, then async will ensure that both jobs make progress.

But what if I have jobs that are not IO bound? To use an admittedly far-fetched example, suppose one job is solving the 20 queens problem (which can be done as a marcel one-liner), and another one is solving the 21 queens problem. These jobs are CPU-bound. If both jobs are going to make progress, then each one occasionally needs to yield control to the other.

My question is how to do this. The only thing I can figure out from the async documentation is asyncio.sleep(0). But this call is quite expensive, and doing it often (e.g. in a loop of the N queens implementation) would kill performance. An alternative is to rely on signal.alarm() to set a flag that would cause the currently running job to yield (by calling asyncio.sleep(0)). I would think that there should or could be some way to yield that is much lower in cost. (E.g.,
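One common answer (a sketch of the amortization idea, not from the post) is to yield only every N iterations: the per-iteration cost becomes a cheap counter check, and `asyncio.sleep(0)` is paid only occasionally.

```python
import asyncio

async def cpu_bound_job(n_iters: int, yield_every: int = 10_000) -> int:
    """CPU-bound loop that cooperatively yields to the event loop periodically."""
    total = 0
    for i in range(n_iters):
        total += i  # stand-in for real work, e.g. one step of an N-queens search
        if i % yield_every == 0:
            await asyncio.sleep(0)  # let other tasks run
    return total

async def main() -> list[int]:
    # Both CPU-bound jobs make interleaved progress.
    return await asyncio.gather(cpu_bound_job(50_000), cpu_bound_job(60_000))
```

Tuning `yield_every` trades scheduling latency for throughput; the modulo check itself costs almost nothing.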

/r/Python
https://redd.it/1m4qz35
Strange behaviour for Django > 5.0 (long loading times and high Postgres CPU load, only admin)

Hi everyone,

I'm currently experiencing some strange behavior that I can't quite wrap my head around, so I thought I'd ask if anyone here has seen something similar.

What happened:
I recently upgraded one of our larger projects from Django 4.2 (Python 3.11) to Django 5.2 (Python 3.13). The upgrade itself went smoothly with no obvious issues. However, I quickly noticed that our admin pages have become painfully slow. We're seeing a jump from millisecond-level response times to several seconds.

For example, the default /admin page used to load in around 200–300ms before the upgrade, but now it's taking 3–4 seconds.

I initially didn't notice this during development (more on that in a moment), but a colleague brought it to my attention shortly after the deployment to production. Unfortunately, I didn’t have time to investigate right away, but I finally got around to digging into it yesterday.

What I found:
Our PostgreSQL 14 database server spikes to 100% CPU usage when accessing the admin pages. Interestingly, our regular Django frontend and DRF API endpoints seem unaffected — or at least not to the same extent.

I also upgraded psycopg as part of the process, but I haven’t found anything suspicious there yet.

Why I missed

/r/django
https://redd.it/1m4vfkb
UA-Extract - Easy way to keep user-agent parsing updated

Hey folks! I’m excited to share UA-Extract, a Python library that makes user agent parsing and device detection a breeze, with a special focus on keeping regexes fresh for accurate detection of the latest browsers and devices. After my first post got auto-removed, I’ve added the required sections to give you the full scoop. Let’s dive in!

**What My Project Does**

UA-Extract is a fast and reliable Python library for parsing user agent strings to identify browsers, operating systems, and devices (like mobiles, tablets, TVs, or even gaming consoles). It's built on top of the device_detector library and uses a massive, regularly updated user agent database to handle thousands of user agent strings, including obscure ones.

The star feature? **Super easy regex updates**. New devices and browsers come out all the time, and outdated regexes can misidentify them. UA-Extract lets you update regexes with a single line of code or a CLI command, pulling the latest patterns from the [Matomo Device Detector](https://github.com/matomo-org/device-detector) project. This ensures your app stays accurate without manual hassle. Plus, it’s optimized for speed with in-memory caching and supports the regex module for faster parsing.

Here’s a quick example of updating regexes:

from ua_extract import Regexes


/r/Python
https://redd.it/1m5157j
Monday Daily Thread: Project ideas!

# Weekly Thread: Project Ideas 💡

Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.

## How it Works:

1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.

## Guidelines:

* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.

# Example Submissions:

## Project Idea: Chatbot

**Difficulty**: Intermediate

**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar

**Description**: Create a chatbot that can answer FAQs for a website.

**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)

## Project Idea: Weather Dashboard

**Difficulty**: Beginner

**Tech Stack**: HTML, CSS, JavaScript, API

**Description**: Build a dashboard that displays real-time weather information using a weather API.

**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)

## Project Idea: File Organizer

**Difficulty**: Beginner

**Tech Stack**: Python, File I/O

**Description**: Create a script that organizes files in a directory into sub-folders based on file type.

**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)

Let's help each other grow. Happy

/r/Python
https://redd.it/1m543e5
Introducing async_obj: a minimalist way to make any function asynchronous

If you are tired of writing the same messy threading or `asyncio` code just to run a function in the background, here is my minimalist solution.

Github: [https://github.com/gunakkoc/async\_obj](https://github.com/gunakkoc/async_obj)

# What My Project Does

`async_obj` allows running any function asynchronously. It creates a class that pretends to be whatever object/function is passed to it and intercepts function calls to run them in a dedicated thread. It is essentially a two-liner. Therefore, async_obj enables async operations while minimizing code bloat, requiring no changes to the code structure, and consuming nearly no extra resources.

Features:

* Collect the results of the function
* Exceptions are properly raised, and only when the result is collected.
* Can check for completion OR wait/block until completion.
* Auto-complete works in some IDEs
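The underlying pattern (a generic sketch of the technique, not the library's actual API) is a thin wrapper that runs the call in a background thread and defers any exception until the result is collected:

```python
import threading

class AsyncCall:
    """Run a function in a background thread; collect the result (or exception) later."""

    def __init__(self, func, *args, **kwargs):
        self._result = None
        self._exc = None

        def runner():
            try:
                self._result = func(*args, **kwargs)
            except Exception as e:  # stash the exception instead of crashing the thread
                self._exc = e

        self._thread = threading.Thread(target=runner)
        self._thread.start()

    def done(self) -> bool:
        return not self._thread.is_alive()

    def collect(self):
        self._thread.join()  # block until the call finishes
        if self._exc is not None:
            raise self._exc  # re-raise in the caller's thread
        return self._result
```

Usage: `call = AsyncCall(slow_func, arg)` starts immediately; `call.collect()` blocks and returns the value, or raises whatever `slow_func` raised.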

# Target Audience

I am using this to orchestrate several devices in a robotics setup. I believe it can be useful for anyone who deals with blocking functions such as:

* Digital laboratory developers
* Database users
* Web developers
* Data scientist dealing with large data or computationally intense functions
* When quick prototyping of async operations is desired

# Comparison

One can always use the `threading` library. At minimum it will require wrapping the function inside another function to get the returned result. Handling errors

/r/Python
https://redd.it/1m54hyp
Sifaka: Simple AI text improvement using research-backed critique (open source)

## What My Project Does

Sifaka is an open-source Python framework that adds reflection and reliability to large language model (LLM) applications. The core functionality includes:

- 7 research-backed critics that automatically evaluate LLM outputs for quality, accuracy, and reliability
- Iterative improvement engine that uses critic feedback to refine content through multiple rounds
- Validation rules system for enforcing custom quality standards and constraints
- Built-in retry mechanisms with exponential backoff for handling API failures
- Structured logging and metrics for monitoring LLM application performance

The framework integrates seamlessly with popular LLM APIs (OpenAI, Anthropic, etc.) and provides both synchronous and asynchronous interfaces for production workflows.

## Target Audience

Sifaka is (eventually) intended for production LLM applications where reliability and quality are critical. Primary use cases include:

- Production AI systems that need consistent, high-quality outputs
- Content generation pipelines requiring automated quality assurance
- AI-powered workflows in enterprise environments
- Research applications studying LLM reliability and improvement techniques

The framework includes comprehensive error handling, making it suitable for mission-critical applications rather than just experimentation.

## Comparison

While there are several LLM orchestration tools available, Sifaka differentiates itself through:

vs. LangChain/LlamaIndex:

- Focuses specifically on output quality and reliability rather than general orchestration
- Provides research-backed evaluation metrics instead of generic chains
- Lighter weight with minimal dependencies

/r/Python
https://redd.it/1m59s5f
An AI Meme Generator!!

/r/djangolearning
https://redd.it/1m42ddu
Hosting Open Source LLMs for Document Analysis – What's the Most Cost-Effective Way?

Hey fellow Django devs,
Anyone here have experience working with LLMs?

Basically, I'm running my own VPS (basic $5/month setup). I'm building a simple webapp where users upload documents (PDF or JPG), I OCR/extract the text, run some basic analysis (classification/summarization/etc), and return the result.

I'm not worried about the Django/backend stuff – my main question is more around how to approach the LLM side in a cost-effective and scalable way:

I'm trying to stay 100% on free/open-source models (e.g., Hugging Face) – at least during prototyping.
Should I download the LLM locally (e.g., GGUF / GPTQ / Transformers), run it via something like text-generation-webui, llama.cpp, vLLM, or even FastAPI + transformers?
Or is there a way to call free hosted inference endpoints (Hugging Face Inference API, Ollama, [Together.ai](http://Together.ai), etc.) without needing to host models myself?
If I go self-hosted: is it practical to run 7B or even 13B models on a low-spec VPS? Or should I use something like LM Studio, llama-cpp-python, or a quantized GGUF model to keep memory usage low?

I’m fine with hacky setups as long as it’s reasonably stable. My goal isn’t high traffic, just a few dozen users at the start.

What

/r/django
https://redd.it/1m59mzz
What is the best way to deal with floating point numbers when you have model restrictions?

I can equally call my title, "How restrictive should my models be?".


I am currently working on a hobby project using Django as my backend and continually running into problems with floating point errors when I add restrictions in my model. Let's take a single column as an example that keeps track of the weight of a food entry.

food_weight = models.DecimalField(
    max_digits=6,
    decimal_places=2,
    validators=[MinValueValidator(0), MaxValueValidator(5000)]
)

When writing this, it was sensible to me that I did not want my users to give me data more than two decimal points of precision. I also enforce this via the client side UI.

The problem is that client-side enforcement also has floating point errors. So when I use a JavaScript function such as `toFixed(2)` and then give these numbers to my endpoint, a number such as `0.3` will actually fail to serialize, because it will try to serialize `0.300000004` and break the `max_digits=6` criterion.
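One way around this (a sketch, not from the post; the helper name is invented) is to transmit the value as a string and quantize it server-side with `decimal.Decimal`, so binary float error never enters the pipeline:

```python
from decimal import Decimal, ROUND_HALF_UP

def parse_weight(raw: str) -> Decimal:
    """Parse a client-supplied weight string into an exact two-decimal value."""
    # Quantizing snaps stray float digits like 0.300000004 back to 0.30.
    value = Decimal(raw).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    if not (Decimal("0") <= value <= Decimal("5000")):
        raise ValueError("weight out of range")
    return value
```

DRF's `DecimalField` does something similar internally when given strings, which is one argument for sending decimals as strings rather than JSON numbers.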


Whenever I write a backend with restrictions, they seem sensible at

/r/django
https://redd.it/1m58xk6
Prefered way to structure polars expressions in large project?

I love polars. However, once your project hits a certain size, you end up with a few "core" dataframe schemas / columns re-used across the codebase, and intermediary transformations that can sometimes be lengthy.
I'm curious what approaches other people use to organize and split things up.

The first point I would like to address is the following:
given a dataframe where you have a long transformation chain, do you prefer to split things up into a few functions to separate the steps, or centralize everything?
For example, which way would you prefer?
```
# This?
def chained(file: str, cols: list[str]) -> pl.DataFrame:
    return (
        pl.scan_parquet(file)
        .select(*[pl.col(name) for name in cols])
        .with_columns()
        .with_columns()
        .with_columns()
        .group_by()
        .agg()
        .select()
        .with_columns()
        .sort("foo")
        .drop()
    )
```


/r/Python
https://redd.it/1m5jcot
When working in a team do you makemigrations when the DB schema is not updated?

Pretty simple question really.

I'm currently working in a team of 4 django developers on a large and reasonably complex product, we use kubernetes to deploy the same version of the app out to multiple clusters - if that at all makes a difference.

I was wondering that if you were in my position would you run makemigrations for all of the apps when you're just - say - updating choices of a CharField or reordering potential options, changes that wouldn't update the db schema.

I won't say which way I lean to prevent the sway of opinion but I'm interested to know how other teams handle it.

/r/django
https://redd.it/1m5l189
Is it ok to use Pandas in Production code?

Hi, I recently pushed some code where I was using pandas, and got a review saying that I shouldn't use pandas in production. I'd like to check other people's opinions on it.

For context, I used pandas in code where we scrape a page to get data from HTML tables; instead of writing the parser myself, I used pandas, as it does this job seamlessly.
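For reference, the table-scraping use in question is essentially a one-liner with `pandas.read_html` (which needs an HTML parser backend such as lxml installed):

```python
from io import StringIO
import pandas as pd

html = """
<table>
  <tr><th>name</th><th>price</th></tr>
  <tr><td>apple</td><td>1.50</td></tr>
  <tr><td>pear</td><td>2.00</td></tr>
</table>
"""

# read_html returns a list of DataFrames, one per <table> found on the page.
tables = pd.read_html(StringIO(html))
df = tables[0]
```

Reimplementing that parsing by hand (rowspans, nested tables, encoding quirks) is exactly the kind of work pandas already does well.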


Would be great to get different views on it. Thanks.

/r/Python
https://redd.it/1m5lm8e
Tuesday Daily Thread: Advanced questions

# Weekly Wednesday Thread: Advanced Questions 🐍

Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.

## How it Works:

1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.

## Guidelines:

* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.

## Recommended Resources:

* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.

## Example Questions:

1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the

/r/Python
https://redd.it/1m5z8b2