Python Daily
Daily Python News
Questions, Tips and Tricks, Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
unfold dashboard

I recently integrated django-unfold into my Django admin, and it works great. However, before I discovered Unfold, I had already built my own custom dashboard.

Now I’m wondering:

Is it possible to add my existing dashboard into the Unfold-powered admin?
Or would it be better to just rebuild/replicate the dashboard using Unfold’s features?

Has anyone here tried merging a custom dashboard with Unfold, or is the recommended approach to stick with Unfold’s way of doing things?

/r/django
https://redd.it/1mt8xit
[R] Bing Search API is Retiring - What’s Your Next Move?

/r/MachineLearning
https://redd.it/1msw9vd
Why does Django's documentation look like its design is stuck in 2010?

Today I decided to start learning backend development in Python, choosing Django as the framework. But honestly, I was absolutely disappointed with the appearance of the documentation.

It feels like the design was never tested from the perspective of a regular user. The dark theme palette is poorly chosen, the text area is unnecessarily small, and to read anything comfortably you constantly need to zoom in. And seriously - who thought it was a good idea to make the font color gray?

The content itself might be fine, but the reading experience is frustrating enough that I couldn't spend more than an hour with it. In the end, the way the documentation looks completely kills the motivation to stay on the site and continue learning Django.

/r/django
https://redd.it/1ms1gu9
Tuitka - A TUI for Nuitka

Hi folks, I wanted to share a project I've been working on in my free time - **Tuitka**

# What My Project Does

Tuitka simplifies the process of compiling Python applications into standalone executables by providing an intuitive TUI instead of wrestling with complex command-line flags.

Additionally, Tuitka does a few things differently from Nuitka. It will use your requirements.txt, pyproject.toml, or PEP 723 metadata and, based on this, leverage `uv` to create a clean environment for your project, so it runs only with the dependencies the project actually needs.

# Target Audience

This is for Python developers who need to distribute their applications to users who don't have Python installed on their systems.

# Installation & Usage

You can install it via `pip install tuitka`

**Interactive TUI mode:**

```
tuitka
```

Since most people in my experience *just* want their executables packaged into onefile or standalone, I've decided to allow you to point directly at the file you want to compile.

**Direct compilation mode:**

```
tuitka my_script.py
```

The direct mode automatically uses sensible defaults:

* `--onefile` (single executable file)
* `--assume-yes-for-downloads` (auto-downloads plugins)
* `--remove-output` (cleans up build artifacts)
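
As a rough illustration (not Tuitka's actual internals), the direct mode's defaults amount to assembling a Nuitka invocation along these lines:

```python
# Sketch of how direct-mode defaults could map onto a Nuitka command
# line. Names here are illustrative, not Tuitka's real code.
DEFAULT_FLAGS = [
    "--onefile",                    # single executable file
    "--assume-yes-for-downloads",   # auto-download plugins
    "--remove-output",              # clean up build artifacts
]

def build_nuitka_command(script: str) -> list[str]:
    """Assemble the argv list that direct mode would hand to Nuitka."""
    return ["python", "-m", "nuitka", *DEFAULT_FLAGS, script]

print(build_nuitka_command("my_script.py"))
```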

# Why PEP 723 is Preferred

When you're working in a development environment, you often accumulate libraries that aren't actually needed by your specific script - things

/r/Python
https://redd.it/1mteev1
UV python image building does not seem to be completely in sync with python releases

Had pipeline errors this weekend because of:

```
1.615 error: No download found for request: cpython-3.13.7-linux-x86_64-gnu
```

local testing:

```
uv python install 3.13.7 -v
DEBUG uv 0.8.11 (f892276ac 2025-08-14)
DEBUG Acquired lock for `C:\Users\mobj\AppData\Roaming\uv\python`
DEBUG Released lock at `C:\Users\mobj\AppData\Roaming\uv\python\.lock`
error: No download found for request: cpython-3.13.7-windows-x86_64-none

uv python install 3.13.6 -v
DEBUG uv 0.8.11 (f892276ac 2025-08-14)
DEBUG Acquired lock for `C:\Users\mobj\AppData\Roaming\uv\python`
DEBUG No installation found for request `3.13.6 (cpython-3.13.6-windows-x86_64-none)`
DEBUG Found download `cpython-3.13.6-windows-x86_64-none` for request `3.13.6 (cpython-3.13.6-windows-x86_64-none)`
DEBUG Using request timeout of 30s
DEBUG Downloading https://github.com/astral-sh/python-build-standalone/releases/download/20250814/cpython-3.13.6%2B20250814-x86_64-pc-windows-msvc-install_only_stripped.tar.gz
DEBUG Extracting cpython-3.13.6-20250814-x86_64-pc-windows-msvc-install_only_stripped.tar.gz to temporary location: C:\Users\mobj\AppData\Roaming\uv\python\.temp\.tmpWQNy1c
Downloading cpython-3.13.6-windows-x86_64-none (download) (20.1MiB)
```

/r/Python
https://redd.it/1mtgpa3
A high-level Cloudflare Queues consumer library for Python

Hey everyone,

I built a high-level Python-based Cloudflare queue consumer package!

Cloudflare has some great products with amazing developer experiences. However, their architecture is primarily built on the V8 runtime, which means their services are optimized for JavaScript.

They do have a beta version of Workers for Python, but it doesn't support some key packages that I need for an application I'm working on. So I decided to build CFQ to provide an easy interface for consuming messages from Cloudflare Queues in Python environments.

# What My Project Does

Lets you easily consume messages from a Cloudflare queue in pure Python environments.

# Comparison

I couldn’t find many alternatives, which is why I created this package. The only other option was to use Cloudflare’s Python SDK, which is more low-level.

# Target Audience


Developers who want to consume messages from a Cloudflare queue but can’t directly bind a Python-based Worker to the queue.


Github: https://github.com/jpjacobpadilla/cfq


Hope some of you also find it useful!

/r/Python
https://redd.it/1mtivrf
Is async really such a pain in Django?

I’m not a very experienced Django dev, but I picked it because it’s mature and batteries-included. Writing APIs with DRF has been a joy.

But as soon as I needed a bit of realtime (SSE/websockets to update clients), I hit a brick wall. Every step feels like fighting the framework. Most guides suggest hacky workarounds, bringing in Celery, or spinning up a separate FastAPI microservice.

The funniest thing I read was someone saying: “I just run two Django instances, one WSGI and one ASGI (with Channels), and route with nginx.” I mean, maybe that works, but why so much hassle for a simple feature? This isn’t some huge high-load system. I don’t want to spin up a zoo of services and micro-instances - I just want to solve it under one roof.

Is it really that bad, or am I just missing something?

/r/django
https://redd.it/1mtnf9f
PyNDS: A Python Wrapper for the Nintendo DS Emulator

Source code: https://github.com/unexploredtest/PyNDS

What My Project Does

PyNDS is a library that wraps a Nintendo DS emulator, NooDS, using nanobind. It is inspired by PyBoy, letting you interact with the emulator through code (although it's a lot slower than PyBoy). It provides methods to advance frames, send both joystick and touch input, create save states, and render the game in a window.

Target Audience
This project is aimed at developers who want to build bots or reinforcement learning agents. However, it is not finished and may contain bugs, not to mention the lack of documentation. If there's enough interest, I might polish it further.

Comparison
As far as I have searched, there is no Python library that provides an interface to a Nintendo DS emulator or a Nintendo DS emulator in Python.

Feedback is greatly appreciated.

/r/Python
https://redd.it/1mtiihx
Stockdex: introduce new release

Hey everyone, introducing a new release of my project, stockdex.

## What My Project Does

In a nutshell, the project provides a way to extract stock and financial data from various sources like Yahoo Finance, Digrin, and Macrotrends and return it in pandas DataFrames, as well as Plotly charts to understand the data better.

## Comparison

It addresses the limitations of other packages like yfinance in terms of data coverage (by pulling data from multiple sources) and plotting.

The new release adds a new data source: Finviz.

The new source (Finviz) has extensive coverage of dividend and revenue data, partitioned by categories like region or product/segment.

Below is a simple output of one of the added methods for NVDA stock:

```
from stockdex import Ticker

finviz = Ticker(ticker="NVDA")
finviz.plot_finviz_revenue_by_products_and_services(
    logarithmic=True
)
```


The output is a Plotly chart (image omitted here).


Check out the readme at stockdex and the installation instructions on PyPI. Please consider opening issues and discussions for bugs and suggestions; stars are also appreciated :)

/r/Python
https://redd.it/1mtsrl0
(𐑒𐑳𐑥𐑐𐑲𐑤) / Cumpyl - Python binary analysis and rewriting framework (Unlicense)

[https://github.com/umpolungfish/cumpyl-framework?tab=readme-ov-file](https://github.com/umpolungfish/cumpyl-framework?tab=readme-ov-file)


What My Project Does

Cumpyl is a comprehensive Python-based binary analysis and rewriting framework that transforms complex binary manipulation into an accessible, automated workflow. It analyzes, modifies, and rewrites executable files (PE, ELF, Mach-O) through:

* Intelligent Analysis: Plugin-driven entropy analysis, string extraction, and section examination
* Guided Obfuscation: Color-coded recommendations for safe binary modification with tier-based safety ratings
* Batch Processing: Multi-threaded processing of entire directories with progress visualization
* Rich Reporting: Professional HTML, JSON, YAML, and XML reports with interactive elements
* Configuration-Driven: YAML-based profiles for malware analysis, forensics, and research workflows
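
To give a flavor of what the entropy analysis means here: high Shannon entropy in a section often indicates packed or encrypted data. A minimal, generic sketch of the measurement (not Cumpyl's actual plugin API):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 .. 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Uniform byte distributions reach 8 bits/byte; repetitive data is near 0.
print(shannon_entropy(bytes(range(256)) * 16))  # → 8.0
print(shannon_entropy(b"\x00" * 4096))          # → 0.0
```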

Target Audience

Primary Users

* Malware Researchers: Analyzing suspicious binaries, understanding packing/obfuscation techniques
* Security Analysts: Forensic investigation, incident response, threat hunting
* Penetration Testers: Binary modification for evasion testing, security assessment
* Academic Researchers: Binary analysis studies, reverse engineering education

Secondary Users

* CTF Players: Reverse engineering challenges, binary exploitation competitions
* Security Tool Developers: Building custom analysis workflows, automated detection systems
* Incident Response Teams: Rapid binary triage, automated threat assessment

Skill Levels

* Beginners: Guided workflows, color-coded recommendations, copy-ready commands
* Intermediate: Plugin customization, batch processing, configuration management
* Advanced: Custom plugin development, API integration, enterprise deployment

Comparison

|Feature|Cumpyl|IDA Pro|Ghidra|Radare2|LIEF|Binary Ninja|
|:-|:-|:-|:-|:-|:-|:-|
|Cost|Free|$$$$|Free|Free|Free|$$$|
|Learning Curve|Easy|Steep|Steep|Very Steep|Moderate|Moderate|
|Interface|Rich CLI + HTML|GUI|GUI|CLI|API Only|GUI|
|Batch Processing|Built-in|Manual|Manual|Scripting|Custom|Manual|
|Reporting|Multi-format|Basic|Basic|None|None|Basic|
|Configuration|YAML-driven|Manual|Manual|Complex|Code-based|Manual|
|Plugin System|Standardized|Extensive|Available|Complex|None|Available|
|Cross-Platform|Yes|Yes|Yes|Yes|Yes|Yes|
|Binary Modification|Guided|Manual|Manual|Manual|Programmatic|Manual|
|Workflow Automation|Built-in|None|None|Scripting|Custom|None|

Edit: typo

/r/Python
https://redd.it/1mtxd3l
AI-based Synology Photos "lost folder" Thumbnails Generator

Synology Photos works with my deeply hierarchical photo structure, but does not create a thumbnail for intermediate folders, the ones that do not contain at least one picture directly.

So I wrote this Python project, which generates thumbnails in all parent folders without one.

**What My Project Does**

For instance, my collections are organized like this:

/volume3/photo
├── Famille/2025/25.08 - Vacation in My Hometown
├── Professional/2024/24.04 - Marketing Campaign
└── Personal/2023/23.02 - Hiking in Pyrenees

All intermediate levels (`/volume3/photo/Famille`, `/volume3/photo/Famille/2025`, ...) do NOT have a thumbnail generated by Synology Photos, and appear as "empty folders".

Running this program from the top level (e.g. `/volume3/photo/`) generates a `thumbnail.jpg` in every intermediate level.

That was the starting point; from there I played a little bit with some AI models:

* Recursively scans a folder for photos and videos
* Uses open-source AI models (via OpenCLIP) to pick four representative images (with optional randomness)
* Crops them to a uniform aspect ratio, centering on people as best as possible (OpenCV, MediaPipe models)
* Assembles them into a 2×2 collage
* Saves it as `thumbnail.jpg` in each intermediate folder
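
The center-crop step can be sketched in pure Python. This is an illustrative helper (not the project's actual code) that computes the crop box for a target aspect ratio around a focal point, such as a detected face:

```python
def crop_box(width, height, target_ratio, cx=None, cy=None):
    """Compute a (left, top, right, bottom) crop box with the given
    aspect ratio, centered on (cx, cy) when possible (e.g. a detected
    face), clamped to stay inside the image."""
    if cx is None:
        cx = width / 2
    if cy is None:
        cy = height / 2
    if width / height > target_ratio:
        # Image too wide: keep full height, trim width.
        new_w, new_h = height * target_ratio, height
    else:
        # Image too tall: keep full width, trim height.
        new_w, new_h = width, width / target_ratio
    left = min(max(cx - new_w / 2, 0), width - new_w)
    top = min(max(cy - new_h / 2, 0), height - new_h)
    return (round(left), round(top), round(left + new_w), round(top + new_h))

# A 4000x3000 photo cropped square around a face at (1000, 800):
print(crop_box(4000, 3000, 1.0, cx=1000, cy=800))  # → (0, 0, 3000, 3000)
```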

I know it is a big script to solve a very small problem, but I like using

/r/Python
https://redd.it/1mtr07d
Tuesday Daily Thread: Advanced questions

# Weekly Wednesday Thread: Advanced Questions 🐍

Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.

## How it Works:

1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.

## Guidelines:

* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.

## Recommended Resources:

* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.

## Example Questions:

1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the

/r/Python
https://redd.it/1mu2vyt
[Release] Syda – Open Source Synthetic Data Generator with AI + SQLAlchemy Support

I’ve released **Syda**, an open-source Python library for generating **realistic, multi-table synthetic/test data**.

Key features:

* **Referential Integrity** → no orphaned records (`product.category_id → category.id`)
* **SQLAlchemy Native** → generate synthetic data from your ORM models directly
* **Multiple Schema Formats** → YAML, JSON, dicts also supported
* **Custom Generators** → define business logic (tax, pricing, rules)
* **Multi-AI Provider** → works with OpenAI, Anthropic (Claude), others
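
As a generic illustration of the referential-integrity idea (not Syda's actual API): child rows should only ever reference keys that exist in the parent table.

```python
import random

def generate_tables(n_categories=3, n_products=10, seed=42):
    """Generate a parent table and a child table whose foreign keys
    are drawn only from existing parent primary keys."""
    rng = random.Random(seed)
    categories = [{"id": i, "name": f"category_{i}"} for i in range(1, n_categories + 1)]
    valid_ids = [c["id"] for c in categories]
    products = [
        {"id": i, "category_id": rng.choice(valid_ids)}
        for i in range(1, n_products + 1)
    ]
    return categories, products

categories, products = generate_tables()
parent_ids = {c["id"] for c in categories}
assert all(p["category_id"] in parent_ids for p in products)  # no orphans
```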

👉 GitHub: [https://github.com/syda-ai/syda](https://github.com/syda-ai/syda)
👉 Docs: [https://python.syda.ai/](https://python.syda.ai/)
👉 PyPI: [https://pypi.org/project/syda/](https://pypi.org/project/syda/)

Would love feedback from Python devs






/r/Python
https://redd.it/1mu76pd
Substack scraper

https://github.com/gitgithan/substack_scraper

What My Project Does

Scrapes Substack articles into HTML and Markdown

Target Audience

Substack Readers 

Comparison 
https://github.com/timf34/Substack2Markdown
That tool tries to automate login with a username and password stored in a config file, and sets a user agent to work around headless-browser detection.

My code is much shorter (~100 lines vs ~500), and no config file or credentials are needed, which reduces the risk of leaking passwords. Instead, it requires logging in manually in a headed browser (possibly solving a captcha). Login is a one-time step before the scraper goes through all the articles, and this approach is much more robust to hidden errors.

/r/Python
https://redd.it/1mu9cv8
How do you handle data collection in tables?

Hi,
I've been working on a few django projects lately, and as a blind user I really like finding ways to show data in tables. I don't mind giving non-defective eyeball folk the tools to see graphs, but for me being able to select the columns I can read or care about, and sort is really important.

That said, my thought is to just return JSON data to my Django template and let JS take over. I'm curious if there are libraries people prefer here, or if there's a cleaner way to do this while keeping pagination and the like intact.
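
One stdlib-only way to sketch the "JSON to the template" idea: paginate rows server-side and hand one page to the client as JSON. (In Django you would typically use `django.core.paginator.Paginator` and `JsonResponse`; this standalone sketch just shows the shape of the payload.)

```python
import json

def paginate(rows, page=1, per_page=25):
    """Return one page of rows plus the metadata a JS table widget
    needs to render sortable, paginated output."""
    total = len(rows)
    start = (page - 1) * per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": total,
        "num_pages": -(-total // per_page),  # ceiling division
        "rows": rows[start:start + per_page],
    }

rows = [{"name": f"item {i}", "count": i} for i in range(60)]
payload = json.dumps(paginate(rows, page=3))  # ready for the template
```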
Thanks,

/r/django
https://redd.it/1mu1mps
UVForge – Interactive Python project generator using uv package manager (just answer prompts!)

# What My Project Does

[UVForge](https://github.com/manursutil/uvforge) is a CLI tool that bootstraps a modern Python project in seconds using uv. Instead of writing config files or copying boilerplate, you just answer a few interactive prompts and UVForge sets up:

* `src/` project layout
* `pytest` with example tests
* `ruff` for linting
* optional Docker and GitHub Actions support
* a clean, ready-to-go structure
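
The generated layout is roughly the classic `src/` structure; here is a hypothetical sketch of scaffolding it with `pathlib` (illustrative only, not UVForge's code):

```python
from pathlib import Path

def scaffold(root: Path, name: str) -> list[Path]:
    """Create a minimal src/ layout with a pytest example test."""
    files = {
        root / "src" / name / "__init__.py": "",
        root / "tests" / "test_basic.py": (
            "def test_truth():\n    assert True\n"
        ),
        root / "pyproject.toml": (
            f'[project]\nname = "{name}"\nversion = "0.1.0"\n'
        ),
    }
    for path, content in files.items():
        path.parent.mkdir(parents=True, exist_ok=True)  # build dirs as needed
        path.write_text(content)
    return sorted(files)
```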

# Target Audience

* Beginners and advanced programmers who want to start coding quickly without worrying about setup.
* Developers who want a **“create-react-app” experience** for Python.
* Anyone who dislikes dealing with templating syntax or YAML files.

It’s not meant to replace production frameworks; it’s just a quick, friendly way to spin up well-structured Python projects.

# Comparison

The closest existing tool is **Cookiecutter**, which is very powerful but requires YAML/JSON templates and some upfront configuration. UVForge is different because it is:

* **Fully interactive**: answer prompts in your terminal, no template files needed.
* **Zero config to start**: works out of the box with modern Python defaults.
* **Lightweight**: minimal overhead, just install and run.

Would love feedback from the community, especially on what features or integrations you’d like to see added!

**Links**
GitHub: [https://github.com/manursutil/uvforge](https://github.com/manursutil/uvforge)

/r/Python
https://redd.it/1mugoi2
What is an ATS resume and why it's important

ATS-friendly resumes are designed to get past Applicant Tracking Systems, the software many companies use to filter candidates. Keeping your resume simple, using standard headings, avoiding images or complex formatting, and including relevant keywords from the job description can drastically increase your chances of being noticed. It's all about making your skills and experience easily readable for both the software and human recruiters.

If you want an ATS-friendly resume with a 95+ score, visit the link in the comment section.

/r/django
https://redd.it/1muelci
Swizzle: flexible multi-attribute access in Python

Ever wished you could just do `obj.yxz` and grab all three at once? I got a bit obsessed playing around with `__getattr__` and `__setattr__`, and somehow it turned into a tiny library.

# What my Project Does

Swizzle lets you grab or assign multiple attributes at once, and it works with regular classes, dataclasses, Enums, etc. By default, swizzled attributes return a `swizzledtuple` (like an enhanced `namedtuple`) that keeps the original class name and allows continuous swizzling.

```
import swizzle

# Example with custom separator
@swizzle(sep='_', setter=True)
class Person:
    def __init__(self, name, age, city, country):
        self.name = name
        self.age = age
        self.city = city
        self.country = country

p = Person("Jane", 30, "Berlin", "Germany")

# Get multiple attributes with separator
```
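The core trick the library builds on can be shown in a few lines of plain Python: a `__getattr__` fallback that splits the missing name and resolves each piece. This is a bare-bones sketch of the idea, not swizzle itself:

```python
class Swizzlable:
    """Resolve obj.a_b_c into (obj.a, obj.b, obj.c) via __getattr__,
    which Python only calls for names not found the normal way."""
    _sep = "_"

    def __getattr__(self, name):
        parts = name.split(self._sep)
        if len(parts) > 1:
            return tuple(getattr(self, p) for p in parts)
        raise AttributeError(name)

class Point(Swizzlable):
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

p = Point(1, 2, 3)
print(p.x_y)    # → (1, 2)
print(p.z_y_x)  # → (3, 2, 1)
```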


/r/Python
https://redd.it/1muhw70
My first open-source package: feedunify, a tool for fetching and standardizing data feeds.

I'm not an expert, but I've been learning a lot and wanted to share my first-ever open-source package. It's called `feedunify`, and I built it to teach myself about async programming, testing, and the whole process of publishing to PyPI.

**What My Project Does**

`feedunify` is a library that fetches and standardizes data from multiple sources. You give it a list of URLs (RSS feeds, YouTube channels, etc.), and it returns a single, clean list of Python objects with a predictable structure.

* Fetches data concurrently using `asyncio` and `httpx`.
* Parses RSS, Atom, and standard YouTube channel URLs.
* Standardizes all data into a clean `FeedItem` object using `pydantic`.
* Has a full test suite built with `pytest`.
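
The concurrent-fetch pattern at the heart of this is `asyncio.gather`; here is a stdlib-only sketch with stubbed fetchers standing in for the real `httpx` calls:

```python
import asyncio

async def fetch(url: str) -> dict:
    """Stand-in for an httpx GET + parse; sleeps instead of doing I/O."""
    await asyncio.sleep(0.01)
    return {"source": url, "items": [f"entry from {url}"]}

async def fetch_all(urls: list[str]) -> list[dict]:
    # All sources are fetched concurrently, not one after another.
    return await asyncio.gather(*(fetch(u) for u in urls))

feeds = asyncio.run(fetch_all([
    "https://example.com/rss",
    "https://example.com/atom",
]))
print([f["source"] for f in feeds])
```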

**Target Audience**

* Developers or hobbyists building simple data aggregation tools (like a news dashboard or a Discord bot).
* Anyone who wants to learn about `asyncio`, `pydantic`, and Python packaging, as it's a simple, real-world example.
* It's meant as a learning project, not a production-ready framework.

**Comparison**

The closest existing tools are powerful parsers like `feedparser`. `feedunify` is different because it's a higher-level orchestration tool. It uses `feedparser` under the hood but adds the layer of:

* **Concurrent fetching:** Pulls from all sources at once.
* **Source detection:** Automatically distinguishes between a normal

/r/Python
https://redd.it/1mukriv