uv-ship: a CLI tool for shipping with uv
Hello r/Python.
I know, I know, there are several release-bumping tools out there, but none integrate with uv the way I would like them to. They also feel kind of bloated for what I need them to do. I simply wanted to use `uv version` to update my project metadata, paired with a small pipeline that safeguards the process and ships the changes + version tag to the repo.
If you're curious, please check out **uv-ship**.
What My Project Does
>preflight checks: guard your release workflow by verifying branch, tags, and a clean working tree before shipping.
>changelog generation: auto-builds changelog sections from commits since the latest tag.
>one-shot release: stage, commit, tag, and push in a single step.
>dry-run mode: preview every action before making changes.
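Preflight checks of this sort mostly reduce to asking git a few questions. As a rough sketch of the clean-working-tree part (illustrative only, not uv-ship's actual code): `git status --porcelain` prints one line per modified or untracked file, so empty output means the tree is clean:

```python
import subprocess

def working_tree_is_clean(status_output: str) -> bool:
    """Porcelain status prints nothing when the working tree is clean."""
    return status_output.strip() == ""

def check_working_tree() -> bool:
    # One line per dirty/untracked file; empty stdout means safe to ship.
    out = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    return working_tree_is_clean(out)
```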
Target Audience
maintainers of uv-managed projects with strict release workflows.
Comparison
uv-ship is similar in scope to bump-my-version but it integrates with uv out-of-the-box. For example, if you use bump-my-version you need to set up the following workflow:
1. execute the version bump with `bump-my-version bump minor`
2. include a pre-commit hook that runs `uv sync`
3. tell bump-my-version that `pyproject.toml` and `uv.lock` need to be committed
4. create the tag and push it manually
bump-my-version offers automation with pre- and post-commit hooks, but it does not evaluate if the tag is safe to
/r/Python
https://redd.it/1ntnbgn
sparkenforce: Type Annotations & Runtime Schema Validation for PySpark DataFrames
sparkenforce is a PySpark type annotation package that lets you specify and enforce DataFrame schemas using Python type hints.
## What My Project Does
Working with PySpark DataFrames can be frustrating when schemas don’t match what you expect, especially when they lead to runtime errors downstream.
sparkenforce solves this by:
* Adding type annotations for DataFrames (columns + types) using Python type hints.
* Providing a `@validate` decorator to enforce schemas at runtime for function arguments and return values.
* Offering clear error messages when mismatches occur (missing/extra columns, wrong types, etc.).
* Supporting flexible schemas with `...`, optional columns, and even custom Python ↔ Spark type mappings.
Example:
```python
from sparkenforce import validate
from pyspark.sql import DataFrame, functions as fn
@validate
def add_length(df: DataFrame["firstname": str]) -> DataFrame["name": str, "length": int]:
return df.select(
df.firstname.alias("name"),
fn.length("firstname").alias("length")
)
```
If the input DataFrame doesn’t contain "firstname", you’ll get a `DataFrameValidationError` immediately.
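The runtime-validation idea itself is easy to picture. Below is a minimal illustration of the pattern (plain Python with a fake frame class, checking column names only — not how sparkenforce is implemented, which also validates types via the annotation syntax above):

```python
import functools

class DataFrameValidationError(Exception):
    """Raised when a frame is missing required columns (illustrative only)."""

def validate(required_columns):
    """Toy stand-in for the idea behind a @validate decorator:
    check the first argument exposes the required column names."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(df, *args, **kwargs):
            missing = set(required_columns) - set(df.columns)
            if missing:
                raise DataFrameValidationError(f"missing columns: {sorted(missing)}")
            return func(df, *args, **kwargs)
        return wrapper
    return decorator

class FakeFrame:
    """Hypothetical minimal frame: just a list of column names."""
    def __init__(self, columns):
        self.columns = list(columns)

@validate(["firstname"])
def add_length(df):
    return df
```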
## Target Audience
* PySpark developers who want stronger contracts between DataFrame transformations.
* Data engineers maintaining ETL pipelines, where schema changes often break things.
* Teams that want to make their PySpark code more self-documenting and easier to understand.
## Comparison
* Inspired by [dataenforce](https://github.com/CedricFR/dataenforce) (Pandas-oriented), but extended for PySpark DataFrames.
/r/Python
https://redd.it/1nu118l
Would you use a low-code GUI tool to build and publish Django REST API project
Hi,
In the last 2 years, I built 3 web apps using Django REST framework for backend.
I realised that most of the work, like defining models, auth, and simple CRUD APIs, is repetitive. Also, having a GUI to do all this could be great for configuring and visualising the database models and APIs.
This is how I imagine using such a GUI-based Django REST app generator:
Define your project and apps
Define and visualise database models
Configure user auth (from various options like username-password / google / x etc.)
Add basic CRUD APIs by simply defining request response payload structure
Add custom APIs
Publish with CICD
Another benefit would be that the generator produces good-quality, refactored code, keeping SOLID principles in mind.
Would you use such a tool to build Django REST based backend projects?
What other features would you need?
/r/django
https://redd.it/1nu4q5t
Crawlee for Python v1.0 is LIVE!
Hi everyone, our team just launched **Crawlee for Python 🐍** v1.0, an open source web scraping and automation library. We launched the beta version in Aug 2024 here, and got a lot of feedback. With new features like Adaptive crawler, unified storage client system, Impit HTTP client, and a lot of new things, the library is ready for its public launch.
What My Project Does
It's an open-source web scraping and automation library, which provides a unified interface for HTTP and browser-based scraping, using popular libraries like beautifulsoup4 and Playwright under the hood.
Target Audience
The target audience is developers who want to try a scalable crawling and automation library offering a suite of features that makes life easier than other tools. We launched the beta version a year ago, got a lot of feedback, worked on it with the help of early adopters, and launched Crawlee for Python v1.0.
New features
Unified storage client system: less duplication, better extensibility, and a cleaner developer experience. It also opens the door for the community to build and share their own storage client implementations.
Adaptive Playwright crawler: makes your crawls faster and cheaper, while still allowing you to reliably handle complex, dynamic websites. In practice, you get the best of both worlds: speed on simple
/r/Python
https://redd.it/1nu8tt6
WHAT is wrong with my static files?
Hello, all. I am trying to deploy my django app on Digital Ocean and I am having quite a bit of trouble doing so. I am following the How to Set Up a Scalable Django App with DigitalOcean Managed Databases and Spaces tutorial on the DO website, but I cannot seem to get my static files into my Spaces bucket. I have edited my settings.py file as follows:
```python
AWS_ACCESS_KEY_ID = '<key_id>'
AWS_SECRET_ACCESS_KEY = '<secret_key_id>'
AWS_STORAGE_BUCKET_NAME = '<storage_bucket_name>'
AWS_S3_ENDPOINT_URL = 'https://nyc3.digitaloceanspaces.com'
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}
AWS_LOCATION = 'static'
AWS_DEFAULT_ACL = 'public-read'

# Static files configuration
STATICFILES_STORAGE = '<app_name>.storage_backends.StaticStorage'
# STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATIC_URL = f"https://{AWS_STORAGE_BUCKET_NAME}.nyc3.digitaloceanspaces.com/static/"
STATIC_ROOT = 'static/'
```
With this, when I run `python manage.py collectstatic`, I get the output `134 static files copied to '/home/<username>/<project_name>/<project_name>/<project_name>staticfiles'`, with nothing sent to my Spaces bucket on DO. Can anyone see what I am doing wrong?
I have tried removing the STATIC_ROOT = 'static/' line, but that just causes the following error:
django.core.exceptions.ImproperlyConfigured: You're using the staticfiles app without having set the STATIC_ROOT setting to a filesystem path.
/r/djangolearning
https://redd.it/1ntyz88
Telelog: A high-performance diagnostic & visualization tool for Python, powered by Rust
GitHub Link: https://github.com/vedant-asati03/telelog
# What My Project Does
Telelog is a diagnostic framework for Python with a Rust core. It helps you understand how your code runs, not just what it outputs.
Visualizes Code Flow: Automatically generates flowcharts and timelines from your code's execution.
High-Performance: 5-8x faster than the built-in `logging` module.
Built-in Profiling: Find bottlenecks easily with `with logger.profile():`.
Smart Context: Adds persistent context (`user_id`, `request_id`) to all events.
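The `with logger.profile():` style of API is Telelog's own; the general shape of such a profiling context manager (a sketch of the idea, not Telelog's code) is just a timer wrapped around a block:

```python
import time
from contextlib import contextmanager

@contextmanager
def profile(label, sink):
    """Time the enclosed block and report (label, seconds) to `sink`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        sink.append((label, time.perf_counter() - start))

timings = []
with profile("expensive step", timings):
    sum(range(10_000))  # stand-in for real work
```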
# Target Audience
Developers debugging complex systems (e.g., data pipelines, state machines).
Engineers building performance-sensitive applications.
Anyone who wants to visually understand and document their code's logic.
# Comparison (vs. built-in logging)
Scope: `logging` is for text records. Telelog is an instrumentation framework with profiling & visualization.
Visualization: Telelog's automatic diagram generation is a unique feature.
Performance: Telelog's Rust core offers a significant speed advantage.
/r/Python
https://redd.it/1nu9n4l
What's the best approach to send mails at 1 day, 5 hours, 30 min, 15 min and 5 minutes before the scheduled time?
Hey everyone, I am working on a project and I want to send mails before the scheduled time... but the issue is that if I run the task every minute, it will run unnecessarily when there is no meeting scheduled.
Suggest some approaches if you have ever encountered something like this.
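One common alternative to polling: compute the reminder instants when the meeting is created and enqueue one task per instant (e.g. with Celery's `apply_async(eta=...)` or a scheduled job), so nothing runs when no meeting exists. Computing the instants is plain datetime arithmetic; a hypothetical sketch:

```python
from datetime import datetime, timedelta

# Offsets before the meeting at which reminder mails should fire.
REMINDER_OFFSETS = [
    timedelta(days=1),
    timedelta(hours=5),
    timedelta(minutes=30),
    timedelta(minutes=15),
    timedelta(minutes=5),
]

def reminder_times(meeting_at: datetime, now: datetime) -> list[datetime]:
    """Return only the reminder instants that are still in the future."""
    return [meeting_at - off for off in REMINDER_OFFSETS if meeting_at - off > now]
```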
/r/django
https://redd.it/1nu8xjj
Stories from running a workflow engine, e.g., Hatchet, in Production
Hi everybody! I find myself in need of a workflow engine (I'm DevOps, so I'll be using it and administering it), and it seems the Python space is exploding with options right now. I'm passingly familiar with Celery+Canvas and DAG-based tools such as Airflow, but the hot new thing seems to be Durable Execution frameworks like Temporal.io, DBOS, Hatchet, etc. I'd love to hear stories from people actually using and managing such things in the wild, as part of evaluating which option is best for me.
Just from reading over these projects' docs, I can give my initial impressions:
* Temporal.io - enterprise-ready, lots of operational bits and bobs to manage, seems to want to take over your entire project
* DBOS - way less operational impact, but also no obvious way to horizontally scale workers independent of app servers (which is sort of a key feature for me)
* Hatchet - evolving fast, Durable Execution/Workflow bits seem fairly recent, no obvious way to logically segment queues, etc. by tenant (Temporal has Namespaces, Celery+Canvas has Virtual Hosts in RabbitMQ, DBOS… might be leveraging your app database, so it inherits whatever you are doing there?)
Am I missing any of the big (Python) players? What has
/r/Python
https://redd.it/1nuaqe8
I made: Dungeon Brawl ⚔️ – Text-based Python battle game with attacks, specials, and healing
What My Project Does:
Dungeon Brawl is a text-based, turn-based battle game in Python. Players fight monsters using normal attacks, special moves, and healing potions. The game uses classes, methods, and the random module to handle combat mechanics and damage variability.
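The ingredients mentioned (classes plus the `random` module for damage variability) can be sketched in a few lines — this is an illustration of the pattern, not the actual dungeon-brawl code:

```python
import random

class Fighter:
    def __init__(self, name, hp, low, high):
        self.name, self.hp = name, hp
        self.low, self.high = low, high  # damage range for variability

    def attack(self, other, rng):
        dmg = rng.randint(self.low, self.high)
        other.hp -= dmg
        return dmg

rng = random.Random(0)  # seeded so the fight is reproducible
hero = Fighter("hero", 30, 3, 7)
slime = Fighter("slime", 12, 1, 4)
while hero.hp > 0 and slime.hp > 0:  # simple alternating turn loop
    hero.attack(slime, rng)
    if slime.hp > 0:
        slime.attack(hero, rng)
```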
Target Audience:
It’s a toy/learning project for Python beginners or hobbyists who want to see OOP, game logic, and input/output in action. Perfect for someone who wants a small but playable Python project.
Comparison:
Unlike most beginner Python games that are static or single-turn, Dungeon Brawl is turn-based with limited special attacks, healing, and randomized combat, making it more interactive and replayable than simple text games.
Check it out here: https://github.com/itsleenzy/dungeon-brawl/
/r/Python
https://redd.it/1nuc9fy
Pandas 2.3.3 released with Python 3.14 support
Pandas was the last major package in the Python data analysis ecosystem that needed to be updated for Python 3.14.
https://github.com/pandas-dev/pandas/releases/tag/v2.3.3
/r/Python
https://redd.it/1nu0gdd
Issue with cookiecutter-django
I'm trying to set up a new project and I get this error every time:
```
Installing python dependencies using uv...
[2025-09-30T23:06:29.460895400Z][docker-credential-desktop][W] Windows version might not be up-to-date: The system cannot find the file specified.
docker: Error response from daemon: create .: volume name is too short, names should be at least two alphanumeric characters.
See 'docker run --help'.
Error installing production dependencies: Command '['docker', 'run', '--rm', '-v', '.:/app', 'cookiecutter-django-uv-runner:latest', 'uv', 'add', '--no-sync', '-r', 'requirements/production.txt']' returned non-zero exit status 125.
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
```
Do any of you have any ideas? Am I doing anything wrong?
/r/django
https://redd.it/1nus9dw
Is raw SQL an anti-pattern / difficult to integrate?
Just curious, reading around the sub as I'm currently looking into django and bump into:
> Sure you can use raw SQL, but you really want to maintain that?
From [https://www.reddit.com/r/django/comments/17mpj2w/comment/k7mgrgo/](https://www.reddit.com/r/django/comments/17mpj2w/comment/k7mgrgo/) . Note to u/quisatz_harderah, I wanted to reply but i think the post is a bit old :)
I'm not too sure of the context for these queries, but I assume it's some sort of analytics (beyond a basic CRUD query).
My personal response is that yes, I would rather maintain raw SQL than SQL generated by an ORM in the context of analytics queries. I'm familiar with postgres, and I like sql. I can copy it into a db session easily and play around with it etc.
So my question is whether using raw SQL with django is particularly tricky / hacky, or if its just like sqlalchemy in that we can use `session.execute(text("string of sql"))` when needed. From a search it seems that I can use `from django.db import connection` and then `connection.execute( ... )` in a similar way to sqla.
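For reference, `django.db.connection` doesn't expose a SQLAlchemy-style `execute` on the connection itself; the documented pattern is a PEP 249-style cursor (`with connection.cursor() as cursor: cursor.execute(sql, params)`), plus `Manager.raw()` when results should map back to models. Since Django cursors follow the DB-API, the shape is the same as with stdlib `sqlite3`, used here as a stand-in so the sketch runs without a Django project:

```python
import sqlite3

# In Django this would be `from django.db import connection`.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(10.0,), (25.5,)])

# Same pattern as Django's `with connection.cursor() as cursor: ...`
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*), SUM(total) FROM orders WHERE total > ?", (5,))
count, total = cursor.fetchone()
```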
Even so, I'm curious as to how hard this is to integrate into django projects, or if
/r/django
https://redd.it/1nua8ff
How to annotate already annotated fields in Django?
I’m struggling with type hints and annotations in Django.
Let’s say I annotate a queryset with some calculated fields. Then I want to add *another annotation* on top of those previously annotated fields. In effect, the model is being “extended” with new fields.
But how do you correctly handle this in terms of typing? Using the base model (e.g., User) feels wrong, since the queryset now has additional fields. At the same time, creating a dataclass or TypedDict also doesn’t fit well, because it’s not a separate object — it’s still a queryset with annotations.
So: **what’s the recommended way to annotate already annotated fields in Django queries?**
```python
class User(models.Model):
    username = models.CharField(max_length=255, unique=True, null=True)
    first_name = models.CharField(max_length=255, verbose_name="имя")

class Message(models.Model):
    text = models.TextField()
    type = models.CharField(max_length=50)

class UserChainMessage(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="chain_message")
    message = models.ForeignKey(Message, on_delete=models.CASCADE)
    sent_at = models.DateTimeField(auto_now_add=True)
    id_message = models.IntegerField(null=True, blank=True)

class UserWithLatestMessage(TypedDict):
    latest_message_id: int

def get_users_for_mailing(filters: Q) -> QuerySet[UserWithLatestMessage]:
    return (
        User.objects.filter(filters)
        .annotate(latest_message_id=Max("chain_message__message_id"))
    )
```
With this code, mypy gives me the following error:
`Type argument "UserWithLatestMessage" of "QuerySet" must be a subtype of "Model" [type-var]`
If I change it to QuerySet[User], then later in code:
```python
for user in users:
    last_message = user.latest_message_id
```
I get:
`Cannot access attribute "latest_message_id" for class "User"`
So I'm stuck:
* TypedDict doesn't work because QuerySet[...] only
/r/django
https://redd.it/1nubgir
[D] Monthly Who's Hiring and Who wants to be Hired?
For Job Postings please use this template
>Hiring: [Location], Salary: [], [Remote | Relocation], [Full Time | Contract | Part Time] and [Brief overview, what you're looking for]
For Those looking for jobs please use this template
>Want to be Hired: [Location], Salary Expectation: [], [Remote | Relocation], [Full Time | Contract | Part Time], Resume: [Link to resume] and [Brief overview, what you're looking for]
Please remember that this community is geared towards those with experience.
/r/MachineLearning
https://redd.it/1nuwj5t
I built Poottu — an offline, privacy-first password manager in Python
Hey everyone — I wanted to share a project I’ve been working on recently: **Poottu**, a desktop password manager written in Python.
# What it does
At its core, Poottu is meant to be a **secure, offline, local vault** for credentials (usernames, passwords, URLs, notes).
* Fully **offline by default** — no telemetry or automatic cloud sync built in
* Clean, minimal GUI (using **PySide6**)
* Groups/categories to organize entries
* Live search across title, username, URL, notes
* Entry preview pane with “show password” option
* Context menu actions: copy username, password, URL, old password, notes
* Timed clipboard clearing (after N seconds) to reduce exposure
* Encrypted backup / restore of vault
* Password generator built in
* Keyboard shortcuts support
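The post doesn't show any code, but a built-in generator like the one listed above usually comes down to a few lines with Python's `secrets` module. This is a sketch of the general technique, not Poottu's actual implementation (the symbol set is my own choice):

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def generate_password(length: int = 20) -> str:
    """Return a random password of letters, digits, and symbols.

    secrets.choice is backed by the OS CSPRNG, which is what you want
    for credentials (random.choice is not).
    """
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Real managers typically also guarantee at least one character from each class; that check is omitted here for brevity.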
# Target audience
Who is Poottu for?
* **Privacy-focused users** who do not want their credentials stored in cloud services by default
* People who prefer **local, device-only control** over their vault
* Those who want a **lightweight password manager** with no vendor lock-in
# Comparison
Most existing password managers fall into two camps: **command-line tools** like `pass` or `gopass`, and **cloud-based managers** like Bitwarden, 1Password, or LastPass.
CLI tools are lightweight and fully offline, but they often feel unintuitive for non-technical users. Cloud-based solutions, on the other hand, are polished and offer seamless cross-device sync, but
/r/Python
https://redd.it/1nv01qm
Watch out for your commas!!!
You might already know the memes about forgetting to end a line with a semicolon (;). And it's kind of funny how Python doesn't fall into this problem.
You should, however, watch out for a missing comma in one particular scenario: a list literal that spans multiple lines.
EXAMPLE_CONST_LIST = [
    "main.py",  # <------ Make sure there is a trailing comma
    "__pycache__",
]
> Python does not throw an error here; it fails silently
## What Happened to me?
I recently had this issue where I forgot to end an element with a comma, and my program wasn't following the logic I wanted it to follow. And this was a simple little script, nothing fancy, not a HUGE project, just one file with a few lines:
import os

EXCEPTIONS = [
    "main.py"  # missing a comma here
    "__pycache__"
]

for file in os.listdir():
    if file in EXCEPTIONS:
        continue
    # Rest of the logic I wanted
Notice the missing comma; I couldn't deduce the problem for three minutes straight.
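For anyone who hasn't hit this before, what actually happens is implicit string concatenation: Python joins adjacent string literals at compile time, so the two entries silently fuse into one:

```python
# Adjacent string literals are joined at compile time ("implicit
# string concatenation"), so the two entries fuse into one.
EXCEPTIONS = [
    "main.py"      # missing comma
    "__pycache__"
]

print(EXCEPTIONS)       # ['main.py__pycache__']
print(len(EXCEPTIONS))  # 1, not 2 -- so "main.py" is never matched
```

Linters catch this: flake8's `implicit-str-concat` plugin and ruff's ISC rules both flag adjacent literals inside collections.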
/r/Python
https://redd.it/1nuw799
Flask performance bottlenecks: Is caching the only answer, or am I missing something deeper?
I love Flask for its simplicity and how quickly I can spin up an application. I recently built a small, course-management app with features like user authentication, role-based access control, and PDF certificate generation. It works perfectly in development, but I’m now starting to worry about its performance as the user base grows.
I know the standard advice for scaling is to implement caching—maybe using Redis or Flask-Caching—and to optimize database queries. I've already tried some basic caching strategies. However, I'm finding that my response times still feel sluggish when testing concurrent users.
The deeper issues I'm confronting are:
Gunicorn Workers: I'm deploying with Gunicorn and Nginx, but I'm unsure if I've configured the worker count optimally. What's the best practice for setting the number of Gunicorn workers for a standard I/O-bound Flask app?
External API Calls: In one part of the app, I rely on an external service (similar to how others here deal with Google Sheets API calls). Is the best way to handle this heavy I/O through asynchronous workers like gevent in Gunicorn, or should I be looking at background workers like Celery instead?
Monitoring: Without proper monitoring, it's hard to tell if the bottleneck is the database, my code, or the
/r/flask
https://redd.it/1nv1vhf
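On the worker-count question raised above: Gunicorn's documented rule of thumb is `(2 × CPU cores) + 1` workers, which a `gunicorn.conf.py` can compute at startup. A sketch using real Gunicorn settings, as a starting point rather than a tuned production config:

```python
# gunicorn.conf.py -- a starting point, not a tuned production config
import multiprocessing

# Gunicorn's documented rule of thumb for worker count.
workers = multiprocessing.cpu_count() * 2 + 1

# For I/O-bound apps (external API calls, slow clients), an async worker
# class lets each worker handle many requests concurrently; requires
# `pip install gunicorn[gevent]`.
worker_class = "gevent"
worker_connections = 1000

bind = "127.0.0.1:8000"
timeout = 30
```

Run with `gunicorn -c gunicorn.conf.py app:app`. Note that gevent helps while workers wait on I/O, but not with CPU-heavy work like PDF generation, which is where a Celery queue earns its keep; only measuring will tell which bottleneck this app actually has.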
Clarifying a few basics on user onboarding experience (engineering-wise) (rookie question).
I am trying to build a micro-SaaS tool (backend: Django, frontend: React, auth using Firebase).
During onboarding I have a specific flow, which I want users to see only the first time (i.e. when they sign up). I am using Firebase for Google and Apple sign-ups.
The question is: how do I store the "new user" status? It really doesn't make sense to store it as a field in the model, and storing it locally on the client would not be a good idea either, because if they log in from another device or clear the cache they would see the flow again. Any tips on how to handle these decisions? Thanks a lot!
/r/django
https://redd.it/1nv04b1
Logly 🚀 — a Rust-powered, super fast, and simple logging library for Python
What My Project Does
Logly is a logging library for Python that combines simplicity with high performance using a Rust backend. It supports:
Console and file logging
JSON / structured logging
Async background writing to reduce latency
Pretty formatting with minimal boilerplate
It’s designed to be lightweight, fast, and easy to use, giving Python developers a modern logging solution without the complexity of the built-in `logging` module.
Performance Highlights (v0.1.1)
File Logging (50,000 messages): Python `logging` 0.729s → Logly 0.205s (~3.5× faster)
Concurrent Logging (4 threads × 25,000 messages): Python `logging` 3.919s → Logly 0.405s (~9.7× faster)
Latency Microbenchmark (30,000 messages):
|Percentile|Python `logging`|Logly|Speedup|
|:-|:-|:-|:-|
|p50|0.014 ms|0.002 ms|7×|
|p95|0.029 ms|0.002 ms|14.5×|
|p99|0.043 ms|0.015 ms|2.9×|
> Note: Performance may vary depending on your OS, CPU, Python version, and system load. Benchmarks show up to 10× faster performance under high-volume or multi-threaded workloads, but actual results will differ based on your environment.
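For anyone who wants to sanity-check the stdlib baseline, the file-logging half of a benchmark like this is easy to reproduce. This sketch is my own harness, covering only the `logging` side (the post doesn't show Logly's API, so the comparison loop is left out):

```python
import logging
import tempfile
import time

def bench_stdlib_file_logging(n: int = 50_000) -> float:
    """Time n messages through a plain stdlib FileHandler, as a baseline."""
    logger = logging.getLogger("bench")
    logger.setLevel(logging.INFO)
    logger.propagate = False  # keep messages out of the root logger

    with tempfile.NamedTemporaryFile(suffix=".log", delete=False) as tmp:
        path = tmp.name
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    start = time.perf_counter()
    for i in range(n):
        logger.info("message %d", i)
    elapsed = time.perf_counter() - start

    logger.removeHandler(handler)
    handler.close()
    return elapsed
```

Absolute timings are machine-dependent, so treat the post's 0.729s figure as one data point rather than a constant.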
Target Audience
Python developers needing high-performance logging
Scripts, web apps, or production systems
Developers who want structured logging or async log handling without overhead
Comparison
Python `logging`: Logly is faster, simpler, and supports async background writing out of the box.
Loguru: Logly adds a Rust-powered backend for improved performance under high-load scenarios and better async file handling.
Structlog: Logly is simpler to
/r/Python
https://redd.it/1nv3tgp