Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
Monday Daily Thread: Project ideas!

# Weekly Thread: Project Ideas 💡

Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.

## How it Works:

1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.

## Guidelines:

* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.

## Example Submissions:

## Project Idea: Chatbot

**Difficulty**: Intermediate

**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar

**Description**: Create a chatbot that can answer FAQs for a website.

**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)

## Project Idea: Weather Dashboard

**Difficulty**: Beginner

**Tech Stack**: HTML, CSS, JavaScript, API

**Description**: Build a dashboard that displays real-time weather information using a weather API.

**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)

## Project Idea: File Organizer

**Difficulty**: Beginner

**Tech Stack**: Python, File I/O

**Description**: Create a script that organizes files in a directory into sub-folders based on file type.

**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)
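
To make the last idea concrete, here is a minimal sketch of one possible File Organizer; the folder-naming scheme (one sub-folder per file extension) and the example path are just one choice, not part of the prompt above.

```
from pathlib import Path
import shutil

def organize(directory: str) -> None:
    # Move each file into a sub-folder named after its extension.
    root = Path(directory)
    for item in list(root.iterdir()):
        if item.is_file():
            target = root / (item.suffix.lstrip(".").lower() or "no_extension")
            target.mkdir(exist_ok=True)
            shutil.move(str(item), str(target / item.name))

# organize("Downloads")  # example call; try it on a scratch directory first
```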

Let's help each other grow. Happy coding!

/r/Python
https://redd.it/1m543e5
Introducing async_obj: a minimalist way to make any function asynchronous

If you are tired of writing the same messy threading or `asyncio` code just to run a function in the background, here is my minimalist solution.

Github: [https://github.com/gunakkoc/async\_obj](https://github.com/gunakkoc/async_obj)

# What My Project Does

`async_obj` allows running any function asynchronously. It creates a class that pretends to be whatever object or function is passed to it and intercepts function calls so they run in a dedicated thread. It is essentially a two-liner. Therefore, async_obj enables async operations while minimizing code bloat, requiring no changes to your code structure, and consuming almost no extra resources.

Features:

* Collect the function's result
* Exceptions are captured and re-raised, but only when the result is collected.
* Check for completion, or wait/block until completion.
* Auto-complete works in some IDEs
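
For readers unfamiliar with the underlying pattern, here is a minimal, generic sketch of running a callable in a dedicated thread and collecting its result or exception later. This is not async_obj's actual API, just the general idea it wraps up.

```
import threading

class BackgroundCall:
    """Generic sketch: run a callable in a dedicated thread, then collect
    its result (or re-raise its exception) later. Not async_obj's API."""

    def __init__(self, func, *args, **kwargs):
        self._result = None
        self._exc = None
        self._thread = threading.Thread(
            target=self._run, args=(func, args, kwargs), daemon=True
        )
        self._thread.start()

    def _run(self, func, args, kwargs):
        try:
            self._result = func(*args, **kwargs)
        except Exception as exc:
            # Store the exception; it is re-raised only on collection.
            self._exc = exc

    def is_done(self) -> bool:
        return not self._thread.is_alive()

    def collect(self, timeout=None):
        self._thread.join(timeout)
        if self._exc is not None:
            raise self._exc
        return self._result

# Usage: start a slow call, keep working, then block for the result.
call = BackgroundCall(sum, range(10_000_000))
print("doing other work while the sum runs...")
print(call.collect())
```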

# Target Audience

I am using this to orchestrate several devices in a robotics setup. I believe it can be useful for anyone who deals with blocking functions such as:

* Digital laboratory developers
* Database users
* Web developers
* Data scientists dealing with large data or computationally intense functions
* Anyone who wants to quickly prototype async operations

# Comparison

One can always use the built-in `threading` library. At a minimum, that requires wrapping the function inside another function just to capture the return value. Handling errors

/r/Python
https://redd.it/1m54hyp
Sifaka: Simple AI text improvement using research-backed critique (open source)

## What My Project Does

Sifaka is an open-source Python framework that adds reflection and reliability to large language model (LLM) applications. The core functionality includes:

- 7 research-backed critics that automatically evaluate LLM outputs for quality, accuracy, and reliability
- Iterative improvement engine that uses critic feedback to refine content through multiple rounds
- Validation rules system for enforcing custom quality standards and constraints
- Built-in retry mechanisms with exponential backoff for handling API failures
- Structured logging and metrics for monitoring LLM application performance

The framework integrates seamlessly with popular LLM APIs (OpenAI, Anthropic, etc.) and provides both synchronous and asynchronous interfaces for production workflows.
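
As a rough illustration of the retry-with-exponential-backoff idea listed among the features above, here is a generic sketch; it is not Sifaka's actual API, and the function and parameter names are made up for illustration.

```
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry a flaky call with exponential backoff plus jitter.
    Generic helper for illustration; Sifaka's built-in mechanism may differ."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```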

## Target Audience

Sifaka is (eventually) intended for production LLM applications where reliability and quality are critical. Primary use cases include:

- Production AI systems that need consistent, high-quality outputs
- Content generation pipelines requiring automated quality assurance
- AI-powered workflows in enterprise environments
- Research applications studying LLM reliability and improvement techniques

The framework includes comprehensive error handling, making it suitable for mission-critical applications rather than just experimentation.

## Comparison

While there are several LLM orchestration tools available, Sifaka differentiates itself through:

vs. LangChain/LlamaIndex:

- Focuses specifically on output quality and reliability rather than general orchestration
- Provides research-backed evaluation metrics instead of generic chains
- Lighter weight with minimal dependencies

/r/Python
https://redd.it/1m59s5f
An AI Meme Generator!!

/r/djangolearning
https://redd.it/1m42ddu
Hosting Open Source LLMs for Document Analysis – What's the Most Cost-Effective Way?

Hey fellow Django devs,
Has anyone here had experience working with LLMs?

Basically, I'm running my own VPS (basic $5/month setup). I'm building a simple webapp where users upload documents (PDF or JPG), I OCR/extract the text, run some basic analysis (classification/summarization/etc), and return the result.

I'm not worried about the Django/backend stuff – my main question is more around how to approach the LLM side in a cost-effective and scalable way:

* I'm trying to stay 100% on free/open-source models (e.g., Hugging Face), at least during prototyping.
* Should I download the LLM locally (e.g., GGUF / GPTQ / Transformers) and run it via something like text-generation-webui, llama.cpp, vLLM, or even FastAPI + transformers?
* Or is there a way to call free hosted inference endpoints (Hugging Face Inference API, Ollama, [Together.ai](http://Together.ai), etc.) without needing to host models myself?
* If I go self-hosted: is it practical to run 7B or even 13B models on a low-spec VPS? Or should I use something like LM Studio, llama-cpp-python, or a quantized GGUF model to keep memory usage low?

I'm fine with hacky setups as long as they're reasonably stable. My goal isn't high traffic, just a few dozen users at the start.
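
For prototyping text analysis on a small box, one low-cost pattern is to run a compact distilled model locally through the transformers pipeline API. A minimal sketch follows; the model name is just one example of a small summarizer, not a recommendation, and even this may be tight on a $5 VPS.

```
# Requires `pip install transformers torch`; the model below is one example
# of a small distilled summarizer, swap in whatever fits your RAM budget.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize(text: str, max_chars: int = 3000) -> str:
    # Truncate long OCR output so it fits the model's context window.
    chunk = text[:max_chars]
    result = summarizer(chunk, max_length=120, min_length=30, do_sample=False)
    return result[0]["summary_text"]
```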

What

/r/django
https://redd.it/1m59mzz
What is the best way to deal with floating point numbers when you have model restrictions?

I could equally have titled this, "How restrictive should my models be?"


I am currently working on a hobby project using Django as my backend and continually running into problems with floating-point errors when I add restrictions to my models. Let's take as an example a single column that keeps track of the weight of a food entry.

    foodweight = models.DecimalField(
        max_digits=6,
        decimal_places=2,
        validators=[MinValueValidator(0), MaxValueValidator(5000)]
    )

When writing this, it seemed sensible that I did not want my users to give me data with more than two decimal places of precision. I also enforce this via the client-side UI.

The problem is that client-side enforcement also has floating-point errors. So when I use a JavaScript function such as `toFixed(2)` and then send these numbers to my endpoint, a number such as `0.3` can actually fail to serialize, because the serializer ends up seeing `0.300000004` and breaks the `max_digits=6` constraint.
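
One common way to sidestep this, sketched below under the assumption that the client sends the value as a string (not necessarily the fix the poster ended up with), is to quantize with Python's decimal module before validation:

```
from decimal import Decimal, ROUND_HALF_UP

def to_weight(raw: str) -> Decimal:
    # Parse the client's value from a string so no binary float is involved,
    # then round to two places to satisfy decimal_places=2.
    return Decimal(raw).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(to_weight("0.300000004"))  # Decimal('0.30')
```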


Whenever I write a backend with restrictions, they seem sensible at

/r/django
https://redd.it/1m58xk6
Prefered way to structure polars expressions in large project?

I love polars. However, once your project hits a certain size, you end up with a few "core" dataframe schemas / columns re-used across the codebase, plus intermediate transformations that can sometimes be lengthy.
I'm curious what approaches other people take to organize and split things up.

The first point I would like to address is the following:
given a dataframe with a long transformation chain, do you prefer to split things into a few functions to separate the steps, or to centralize everything?
For example, which way would you prefer?
```
# This?
def chained(file: str, cols: list[str]) -> pl.DataFrame:
    return (
        pl.scan_parquet(file)
        .select(*[pl.col(name) for name in cols])
        .with_columns()
        .with_columns()
        .with_columns()
        .group_by()
        .agg()
        .select()
        .with_columns()
        .sort("foo")
        .drop()
    )
```
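
The post is cut off before showing the alternative, but presumably it contrasts the single chain with splitting the steps into named functions. A sketch of that second style (column names and steps are invented for illustration; `pl.len()` assumes a recent polars release):

```
import polars as pl

def load(file: str, cols: list[str]) -> pl.LazyFrame:
    return pl.scan_parquet(file).select(cols)

def add_features(lf: pl.LazyFrame) -> pl.LazyFrame:
    # Hypothetical step: derive a column from 'foo'.
    return lf.with_columns(pl.col("foo").abs().alias("foo_abs"))

def summarize(lf: pl.LazyFrame) -> pl.LazyFrame:
    return lf.group_by("foo").agg(pl.len().alias("n"))

def split_style(file: str, cols: list[str]) -> pl.DataFrame:
    # Compose the named steps lazily, then collect once at the end.
    return load(file, cols).pipe(add_features).pipe(summarize).collect()
```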


/r/Python
https://redd.it/1m5jcot
When working in a team do you makemigrations when the DB schema is not updated?

Pretty simple question really.

I'm currently working in a team of 4 django developers on a large and reasonably complex product, we use kubernetes to deploy the same version of the app out to multiple clusters - if that at all makes a difference.

I was wondering: if you were in my position, would you run makemigrations for all of the apps when you're just, say, updating the choices of a CharField or reordering potential options, i.e. changes that wouldn't update the DB schema?

I won't say which way I lean, to avoid swaying opinions, but I'm interested to know how other teams handle it.

/r/django
https://redd.it/1m5l189
Is it ok to use Pandas in Production code?

Hi, I recently pushed some code where I was using pandas and got a review saying that I should not use pandas in production. I would like to check other people's opinions on it.

For context, I used pandas in code where we scrape a page to get data from HTML tables; instead of writing the parser myself, I used pandas, as it does this job seamlessly.
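
For reference, the pandas feature in question is presumably `read_html`, which parses `<table>` elements into DataFrames. A minimal sketch; the URL is a placeholder, and a parser backend such as lxml or BeautifulSoup must also be installed.

```
import pandas as pd

# read_html returns a list of DataFrames, one per <table> found on the page.
tables = pd.read_html("https://example.com/page-with-tables")
first_table = tables[0]
print(first_table.head())
```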


Would be great to get different views on it. Thanks.

/r/Python
https://redd.it/1m5lm8e
Tuesday Daily Thread: Advanced questions

# Weekly Wednesday Thread: Advanced Questions 🐍

Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.

## How it Works:

1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.

## Guidelines:

* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.

## Recommended Resources:

* If you don't receive a response, consider exploring r/LearnPython or joining the [Python Discord Server](https://discord.gg/python) for quicker assistance.

## Example Questions:

1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the

/r/Python
https://redd.it/1m5z8b2
Need help figuring out why it is not working.

Has anybody used django-tailwind-cli on their projects?
For the love of god, I could not figure out what is wrong with my setup. I am unable to load CSS on a template.
Any help would be greatly appreciated.

/r/django
https://redd.it/1m6aun3
Installing djangorestframework

I have a fresh Lightsail instance with the Django stack. I now want to install djangorestframework. How do I install it so Django can use it? Do I install it into a venv or globally using pip?
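
For reference, the standard setup documented by Django REST framework is to install the package (typically into the project's virtual environment rather than the system Python) and add it to INSTALLED_APPS. A minimal sketch of the settings change:

```
# settings.py, after running `pip install djangorestframework`
# inside the project's virtual environment.
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "rest_framework",  # makes Django REST framework available to the project
]
```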

/r/djangolearning
https://redd.it/1m30n13
PEP 798 – Unpacking in Comprehensions

PEP 798 – Unpacking in Comprehensions

https://peps.python.org/pep-0798/

# Abstract

This PEP proposes extending list, set, and dictionary comprehensions, as well as generator expressions, to allow unpacking notation (* and **) at the start of the expression, providing a concise way of combining an arbitrary number of iterables into one list or set or generator, or an arbitrary number of dictionaries into one dictionary, for example:

    [*it for it in its]   # list with the concatenation of iterables in 'its'
    {*it for it in its}   # set with the union of iterables in 'its'
    {**d for d in dicts}  # dict with the combination of dicts in 'dicts'
    (*it for it in its)   # generator of the concatenation of iterables in 'its'
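
For comparison, roughly equivalent spellings in current Python using itertools.chain (these are not part of the PEP text, just the status quo the proposal would shorten):

```
from itertools import chain

its = [[1, 2], [3], [4, 5]]
dicts = [{"a": 1}, {"b": 2}]

list(chain.from_iterable(its))               # [1, 2, 3, 4, 5]
set(chain.from_iterable(its))                # {1, 2, 3, 4, 5}
{k: v for d in dicts for k, v in d.items()}  # {'a': 1, 'b': 2}
chain.from_iterable(its)                     # lazy, like the generator form
```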

/r/Python
https://redd.it/1m607oi
[D] Is it me or is ECAI really bad this year?

I have one accepted paper and another one rejected. The review and meta-review quality was really subpar. It felt like most of the responses we got, on both ends of the spectrum, came from underexperienced reviewers. I am all for letting undergrads read, review, and get experience, but I always review the paper myself first and would never submit theirs as is. This really baffles me because I always thought ECAI was a good conference, but this year I can't help but feel a little bit embarrassed to even go there.

I have not submitted to other conferences yet. So, I wonder if there is a trend.

/r/MachineLearning
https://redd.it/1m69wc3
Wii tanks made in Python

What My Project Does
This is a full remake of the Wii Play: Tanks! minigame using Python and Pygame. It replicates the original 20 levels with accurate AI behavior and mechanics. Beyond that, it introduces 30 custom levels and 10 entirely new enemy tank types, each with unique movement, firing, and strategic behaviors. The game includes ricochet bullets, destructible objects, mines, and increasingly difficult units.

Target Audience
Intended for beginner to intermediate Python developers, game dev enthusiasts, and fans of the original Wii title. It’s a hobby project designed for learning, experimentation, and entertainment.

Comparison
This project focuses on AI variety and level-design depth. It features 19 distinct enemy types and a total of 50 levels. The AI is written from scratch in plain Python, using A* pathfinding and state-machine logic.
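
For anyone curious about the A* mentioned above, here is a generic grid-based sketch, not the project's actual implementation:

```
import heapq

def a_star(grid, start, goal):
    """Generic 4-connected grid A* with a Manhattan heuristic; 'grid' is a
    list of strings where '#' marks a wall. Illustrative only."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                heapq.heappush(
                    frontier,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no path found

# Example: a_star(["....", ".##.", "...."], (0, 0), (2, 3))
```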

GitHub Repo
https://github.com/Frode-Henrol/Tank_game

/r/Python
https://redd.it/1m6lzvk
Anyone else doing production Python at a C++ company? Here's how we won hearts and minds.

I work on a local LLM server tool called Lemonade Server at AMD. Early on we made the choice to implement it in Python because that was the only way for our team to keep up with the breakneck pace of change in the LLM space. However, C++ was certainly the expectation of our colleagues and partner teams.

This blog is about the technical decisions we made to give our Python a native look and feel, which in turn has won people over to the approach.

Rethinking Local AI: Lemonade Server's Python Advantage

I'd love to hear anyone's similar stories! Especially any advice on what else we could be doing to improve native look and feel, reduce install size, etc. would be much appreciated.

This is my first time writing and publishing something like this, so I hope some people find it interesting. I'd love to write more like this in the future if it's useful.

/r/Python
https://redd.it/1m6g0jx
Advice needed on coding project!

Hi! I only recently started coding and I'm running into some issues with my recent project, and was wondering if anyone had any advice! My troubles are mainly with the button that's supposed to cancel the final high-level alert. The button is connected to pin D6, and it works fine when tested on its own, but in the actual code it doesn't stop the buzzer or reset the alert counter like it's supposed to. This means the system just stays stuck in the high-alert state until I manually stop it.

Another challenge is with the RGB LCD screen I'm using: it doesn't support a text cursor, so I can't position text exactly where I want on the screen. That makes it hard to format alert messages, especially longer ones that go over the 2-line limit. I've had to work around this by clearing the display or cycling through lines of text.

The components I'm using include a Grove RGB LCD with a 16x2 screen and backlight, a Grove PIR motion sensor to detect movement, a Grove light sensor to check brightness, a red LED on D4 for visual alerts, a buzzer on D5 for sound alerts, and a
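
On the cancel-button issue, a common cause is that the alert routine blocks for its full duration without ever polling the button, so the press is only noticed after the alert finishes. Below is a sketch of the polling pattern; `read_button`, `buzzer_on`, and `buzzer_off` are hypothetical stand-ins for whatever Grove helpers the project actually uses.

```
import time

def read_button() -> bool:
    # Hypothetical stand-in for the real Grove digital read on pin D6.
    return False

def buzzer_on() -> None:
    pass  # hypothetical stand-in for driving the buzzer on D5

def buzzer_off() -> None:
    pass  # hypothetical stand-in for silencing the buzzer

def high_alert(duration: float = 30.0) -> bool:
    """Sound the buzzer, but poll the cancel button every 50 ms so a press
    is noticed immediately. Returns True if the alert was cancelled."""
    buzzer_on()
    deadline = time.monotonic() + duration
    try:
        while time.monotonic() < deadline:
            if read_button():
                return True  # the caller can reset the alert counter here
            time.sleep(0.05)
        return False
    finally:
        buzzer_off()
```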

/r/Python
https://redd.it/1m6ow2y