Retry manager for arbitrary code block
There are about two pages of retry decorators on PyPI; I know about them. But I found one case which, as far as I can tell, is not covered by any of the existing retry libraries (correct me if I'm wrong).
I needed to retry an arbitrary block of code, without being limited to a lambda or a function.
So I wrote a library, loopretry, which does exactly that. It combines an iterator with a context manager to wrap any block in retry logic.
from loopretry import retries
import time

for retry in retries(10):
    with retry():
        # any code you want to retry in case of exception
        print(time.time())
        assert int(time.time()) % 10 == 0, "Not a round number!"
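For readers curious how the iterator-plus-context-manager pattern could work under the hood, here is a minimal sketch. It is illustrative only and is not loopretry's actual implementation; the delay parameter is my own addition, and it assumes at least one attempt is requested.

import contextlib
import time


def retries(attempts, delay=0.0):
    """Yield one context manager per attempt; stop as soon as one succeeds."""
    last_error = None
    for _ in range(attempts):
        succeeded = False

        @contextlib.contextmanager
        def attempt():
            nonlocal succeeded, last_error
            try:
                yield
                succeeded = True
            except Exception as exc:
                # Swallow the failure so the surrounding for-loop can try again.
                last_error = exc

        yield attempt
        if succeeded:
            return
        if delay:
            time.sleep(delay)
    # Every attempt failed: re-raise the last exception into the caller's loop.
    raise last_error

With a sketch like this, the example above behaves as expected: the block is retried until the assertion passes or the attempts run out, at which point the last exception propagates to the caller.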
Is it a novel approach or not?
Library code (any critique is highly welcome): on GitHub.
If you want to try it:
pip install loopretry
/r/Python
https://redd.it/1ohczpq
The State of Django 2025 is here – 4,600+ developers share how they use Django
/r/django
https://redd.it/1ohlesf
Looking for a python course that’s worth it
Hi, I am a BSBA major graduating this semester and have very basic experience with Python. I am looking for a course that's worth it and would give me a solid foundation. Thanks!
/r/Python
https://redd.it/1ohe75v
Lightweight Python Implementation of Shamir's Secret Sharing with Verifiable Shares
Hi r/Python!
I built a lightweight Python library for Shamir's Secret Sharing (SSS), which splits secrets (like keys) into shares, needing only a threshold to reconstruct. It also supports Feldman's Verifiable Secret Sharing to check share validity securely.
What my project does
Basically, you have a secret (a password, a key, an access token, an API token, the password for your crypto wallet, a secret formula or recipe, codes for nuclear missiles). You can split your secret into n shares among your friends, coworkers, partner, etc., and to reconstruct it you need at least k shares. For example: 5 shares in total, but at least 3 are needed to recover the secret. An impostor holding fewer than k shares learns nothing about the secret; for context, with 2 out of 3 shares he can't recover it even with unlimited computing power (unless he can break the discrete log problem, which is infeasible for current computers). If you choose not to use Feldman's scheme (which verifies the shares), your secret is safe even against unlimited computing power, even unlimited quantum computers: mathematically, with fewer than k shares it is impossible to recover the secret.
Features:
Minimal deps (pycryptodome), pure Python.
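For intuition, here is a tiny illustration of the math behind plain Shamir splitting and reconstruction: a random degree-(k-1) polynomial over a prime field, recovered by Lagrange interpolation at x = 0. This is a toy sketch, not this library's API, and real code would use a CSPRNG (e.g. secrets.randbelow) instead of random.

import random

P = 2**127 - 1  # a Mersenne prime, large enough for a demo field


def split(secret, n, k):
    """Return n shares of `secret`; any k of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret


shares = split(secret=123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789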
/r/Python
https://redd.it/1oh8yh4
Python Foundation goes ride or DEI, rejects government grant with strings attached
https://www.theregister.com/2025/10/27/pythonfoundationabandons15mnsf/
> The Python Software Foundation (PSF) has walked away from a $1.5 million government grant and you can blame the Trump administration's war on woke for effectively weakening some open source security.
/r/Python
https://redd.it/1ohqqgc
[R] PKBoost: Gradient boosting that stays accurate under data drift (2% degradation vs XGBoost's 32%)
I've been working on a gradient boosting implementation that handles two problems I kept running into with XGBoost/LightGBM in production:
1. Performance collapse on extreme imbalance (under 1% positive class)
2. Silent degradation when data drifts (sensor drift, behavior changes, etc.)
Key Results
Imbalanced data (Credit Card Fraud - 0.2% positives):
- PKBoost: 87.8% PR-AUC
- LightGBM: 79.3% PR-AUC
- XGBoost: 74.5% PR-AUC
Under realistic drift (gradual covariate shift):
- PKBoost: 86.2% PR-AUC (−2.0% degradation)
- XGBoost: 50.8% PR-AUC (−31.8% degradation)
- LightGBM: 45.6% PR-AUC (−42.5% degradation)
What's Different
The main innovation is using Shannon entropy in the split criterion alongside gradients. Each split maximizes:
Gain = GradientGain + λ·InformationGain
where λ adapts based on class imbalance. This explicitly optimizes for information gain on the minority class instead of just minimizing loss.
Combined with:
- Quantile-based binning (robust to scale shifts)
- Conservative regularization (prevents overfitting to majority)
- PR-AUC early stopping (focuses on minority performance)
The architecture is inherently more robust to drift without needing online adaptation.
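To make the split criterion concrete, here is a rough Python sketch of how a combined gradient-plus-entropy gain could be computed for one candidate split. This is illustrative only: PKBoost itself is implemented in Rust, and the λ schedule and regularization constant here are assumptions, not the project's exact formulation.

import numpy as np

def gradient_gain(g, h, left_mask, reg_lambda=1.0):
    """XGBoost-style gain from gradients g and hessians h for one candidate split."""
    def score(mask):
        return g[mask].sum() ** 2 / (h[mask].sum() + reg_lambda)
    parent = np.ones_like(left_mask, dtype=bool)
    return score(left_mask) + score(~left_mask) - score(parent)

def entropy(y):
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(y, left_mask):
    n_left, n_right = left_mask.sum(), (~left_mask).sum()
    if n_left == 0 or n_right == 0:
        return 0.0
    n = len(y)
    return (entropy(y)
            - (n_left / n) * entropy(y[left_mask])
            - (n_right / n) * entropy(y[~left_mask]))

def combined_gain(g, h, y, left_mask, positive_rate):
    # Assumed schedule: weight the entropy term more heavily as positives get rarer.
    lam = min(10.0, 0.01 / max(positive_rate, 1e-6))
    return gradient_gain(g, h, left_mask) + lam * information_gain(y, left_mask)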
Trade-offs
The good:
- Auto-tunes for your data (no hyperparameter search needed)
- Works out-of-the-box on extreme imbalance
- Comparable inference speed to XGBoost
The honest:
- ~2-4x slower training (45s vs 12s on 170K samples)
- Slightly behind on balanced data (use XGBoost there)
- Built in Rust, so less Python ecosystem integration
Why I'm Sharing
This started as
/r/MachineLearning
https://redd.it/1ohbdgu
I've been working on a gradient boosting implementation that handles two problems I kept running into with XGBoost/LightGBM in production:
1. Performance collapse on extreme imbalance (under 1% positive class)
2. Silent degradation when data drifts (sensor drift, behavior changes, etc.)
Key Results
Imbalanced data (Credit Card Fraud - 0.2% positives):
- PKBoost: 87.8% PR-AUC
- LightGBM: 79.3% PR-AUC
- XGBoost: 74.5% PR-AUC
Under realistic drift (gradual covariate shift):
\- PKBoost: 86.2% PR-AUC (−2.0% degradation)
\- XGBoost: 50.8% PR-AUC (−31.8% degradation)
\- LightGBM: 45.6% PR-AUC (−42.5% degradation)
What's Different
The main innovation is using Shannon entropy in the split criterion alongside gradients. Each split maximizes:
Gain = GradientGain + λ·InformationGain
where λ adapts based on class imbalance. This explicitly optimizes for information gain on the minority class instead of just minimizing loss.
Combined with:
\- Quantile-based binning (robust to scale shifts)
\- Conservative regularization (prevents overfitting to majority)
\- PR-AUC early stopping (focuses on minority performance)
The architecture is inherently more robust to drift without needing online adaptation.
Trade-offs
The good:
\- Auto-tunes for your data (no hyperparameter search needed)
\- Works out-of-the-box on extreme imbalance
\- Comparable inference speed to XGBoost
The honest:
\- \~2-4x slower training (45s vs 12s on 170K samples)
\- Slightly behind on balanced data (use XGBoost there)
\- Built in Rust, so less Python ecosystem integration
Why I'm Sharing
This started as
/r/MachineLearning
https://redd.it/1ohbdgu
Reddit
From the MachineLearning community on Reddit: [R] PKBoost: Gradient boosting that stays accurate under data drift (2% degradation…
Explore this post and more from the MachineLearning community
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1ohusug
Python mobile app
Hi, I just wanted to ask what I should build my finance tracker app with, since I want others to use it too, so I'm looking for some good options.
/r/Python
https://redd.it/1ohuito
A Flask-based service idea with Supabase DB and auth: any thoughts on this?
/r/flask
https://redd.it/1ohs5sz
How can I use Django with MongoDB and keep a workflow similar to using Django with PostgreSQL?
I'm working on a project where I want to use Django + Django Ninja + MongoDB. I'd like suggestions on whether this is a good stack choice. If you have already used these together and have experience with them, please share your suggestions.
/r/django
https://redd.it/1ohzim8
[D] For those who’ve published on code reasoning — how did you handle dataset collection and validation?
I’ve been diving into how people build datasets for code-related ML research — things like program synthesis, code reasoning, SWE-bench-style evaluation, or DPO/RLHF.
From what I’ve seen, most projects still rely on scraping or synthetic generation, with a lot of manual cleanup and little reproducibility.
Even published benchmarks vary wildly in annotation quality and documentation.
So I’m curious:
1. How are you collecting or validating your datasets for code-focused experiments?
2. Are you using public data, synthetic generation, or human annotation pipelines?
3. What’s been the hardest part — scale, quality, or reproducibility?
I’ve been studying this problem closely and have been experimenting with a small side project to make dataset creation easier for researchers (happy to share more if anyone’s interested).
Would love to hear what’s worked — or totally hasn’t — in your experience :)
/r/MachineLearning
https://redd.it/1ohge3t
The State of Django 2025 is here – 4,600+ developers share how they use Django
The results of the annual Django Developers Survey, a joint initiative by the Django Software Foundation and JetBrains PyCharm, are out!
Here’s what stood out to us from more than 4,600 responses:
* HTMX and Alpine.js are the fastest-growing JavaScript frameworks used with Django.
* 38% of developers now use AI to learn or improve their Django skills.
* 3 out of 4 Django developers have over 3 years of professional coding experience.
* 63% of developers already use type hints, and more plan to.
* 76% of developers use PostgreSQL as their database backend.
What surprised you most? Are you using HTMX, AI tools, or type hints in your projects yet?
(chart from the survey results attached in the original post)
Get the full breakdown with charts and analysis: [https://lp.jetbrains.com/django-developer-survey-2025/](https://lp.jetbrains.com/django-developer-survey-2025/)
/r/djangolearning
https://redd.it/1oi2pw5
About models and database engines
Hi, all. I'm developing an app for a company and their bureaucracy is killing me. So...
Can I develop an app with the default SQLite migrations and later deploy it on PostgreSQL simply by changing the DATABASES ENGINE in settings.py?
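For reference, the switch usually comes down to the DATABASES setting. Below is a sketch of an environment-driven settings.py; the DJANGO_DB / POSTGRES_* environment-variable names are illustrative, not from the post.

# settings.py sketch: same models and migrations, different backend per environment.
import os

if os.environ.get("DJANGO_DB") == "postgres":
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("POSTGRES_DB", "app"),
            "USER": os.environ.get("POSTGRES_USER", "app"),
            "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
            "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
            "PORT": os.environ.get("POSTGRES_PORT", "5432"),
        }
    }
else:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": BASE_DIR / "db.sqlite3",  # BASE_DIR as defined by startproject
        }
    }

Django's migrations themselves are database-agnostic, but SQLite is far more permissive than PostgreSQL, so it is worth running the migrations and the test suite against PostgreSQL before going live.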
/r/django
https://redd.it/1oi3yr9
Which linting rules do you always enable or disable?
I'm working on a Python LSP with a type checker and want to add some basic linting rules. So far I've worked on the rules from Pyflakes but was curious if there were any rules or rulesets that you always turn on or off for your projects?
Edit: thank you guys for sharing!
This is the project if you wanna take a look: https://github.com/stormlightlabs/beacon
These are the rules I've committed to so far
/r/Python
https://redd.it/1oi1dkm
I'm working on a Python LSP with a type checker and want to add some basic linting rules. So far I've worked on the rules from Pyflakes but was curious if there were any rules or rulesets that you always turn on or off for your projects?
Edit: thank you guys for sharing!
This is the project if you wanna take a look!
These are the rules I've committed to so far
/r/Python
https://redd.it/1oi1dkm
GitHub
GitHub - stormlightlabs/beacon: Python LSP & Type Checker
Python LSP & Type Checker. Contribute to stormlightlabs/beacon development by creating an account on GitHub.
Learning Django Migrations
Hi everyone!
I recently joined a startup team, where I am creating the backend using Django. The startup originally hired overseas engineers through Upwork who decided to use Django over other languages and frameworks. Our code isn't live yet, and whenever I make even the smallest change to a model, it blows up the migrations and gives me error after error, so I just wipe the local DB and the migrations and rebuild everything.
Obviously, I can't do this when the code is live and has real data in it. Two questions: 1) Is this a pain point you face, and is it always this messy, or does the 'mess' become manageable once you learn it? 2) What are some good resources that helped you improve your understanding of Django?
For context, I am a junior engineer and the only engineer at this startup, and I'm really anxious & stressed about how making updates to production is going to go if development is giving me such a hard time.
/r/django
https://redd.it/1ohjhhy
Rookie alert - Facing a few race conditions / performance issues
Hi,
I built a micro-SaaS tool (Django backend, React frontend) and am facing a bit of a race condition at times. I use Firebase for social login. Sometimes login takes a bit of time, but I have an internal redirect that sends the user back to the login form if the required login info isn't available.
It looks like it takes a couple of seconds to fetch the details from Firebase, and in the meantime the app simply goes back to the login page.
What are the best practices to handle these? Also what might be a good idea to measure some of the performance metrics?
P.S. I am a beginner-level coder (just getting started), so apologies in advance if this is a rookie question, and thanks a lot for any support.
(screenshots attached in the original post)
/r/django
https://redd.it/1oibnhn
IBM Flask App development KeyError
Hello, I am having an issue with a KeyError that won't go away, and I really don't understand why. I am new to Python and Flask and have been following the IBM course (with a little googling in between). Can someone help with this problem?
This is the error:
This is my app code
This is my server code
This is all available from the IBM course online. I am so confused and don't know what to do. I tried changing the code to only use requests, like this:
I changed the code, on the advice of an AI helper, to access keys with the .get() method to avoid the KeyError... but it still gives me the error.
I'm still getting the same error even after removing all traces of 'emotionPrediction' from my code.
emotionPrediction shows up as a nested dictionary in one of the first outputs; you have to format the output to show only the emotions, which it does when I use the above code. It's just not working in the app, which is what's confusing me.
This is the data before formatting, i.e. the response object before formatting:
Please let me know if there is any more info I can provide, and thanks in advance!
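For anyone hitting the same wall, the difference between bracket access and .get() is the usual source of this KeyError. The payload below is only a guess at the response shape, not the course's exact output:

# Bracket access raises KeyError for a missing key; .get() returns None instead.
response_json = {"emotionPredictions": [{"emotion": {"joy": 0.9, "anger": 0.01}}]}

value = response_json.get("emotionPrediction")  # None: note the missing trailing "s"
if value is None:
    # Fall back to the key that actually exists in this guessed payload.
    value = response_json["emotionPredictions"][0]["emotion"]
print(value)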
UPDATE: Thanks for your input everyone,
/r/flask
https://redd.it/1ohgl8h
Introducing Kanchi - Free Open Source Celery Monitoring
I just shipped https://kanchi.io - a free open source celery monitoring tool (https://github.com/getkanchi/kanchi)
What does it do
Previously, I used Flower, which most of you probably know, and it worked fine. But it lacked some features like Slack webhook integration, retries, orphan detection, and a live mode.
I also wanted a polished, modern look and feel with additional UX enhancements like retrying tasks, hierarchical args and kwargs visualization, and some basic stats about our tasks.
It also stores task metadata in a Postgres (or SQLite) database, so you have historical data even if you restart the instance. It’s still in an early state.
Comparison to alternatives
Just like Flower, Kanchi is free and open source. You can self-host it on your own infrastructure, and it's easy to set up via Docker.
Unlike Flower, it supports real-time task updates, has a workflow engine (where you can configure triggers, conditions, and actions), offers strong searching and filtering, supports environment filtering (prod, staging, etc.) and manual task retries, has built-in orphan task detection, and comes with basic stats.
Target Audience
Since it just reads data from your message broker, and it's working reliably, Kanchi can be used in production.
The next few releases will further target robustness and
/r/Python
https://redd.it/1oidpl8
The HTTP caching Python deserves
# What My Project Does
[Hishel](https://hishel.com/1.0/) is an HTTP caching toolkit for Python, which includes a **sans-io** caching implementation, **storages** for efficiently persisting requests/responses for later use, and integrations with your favorite Python HTTP tools such as HTTPX, requests, FastAPI, ASGI (for any ASGI-based library), GraphQL, and more!
Hishel uses **persistent storage** by default, so your cached responses survive program restarts.
After **2 years** and over **63 MILLION pip installs**, I released the first major version with tons of new features to simplify caching.
✨ Help Hishel grow! Give us a [star on GitHub](https://github.com/karpetrosyan/hishel) if you found it useful. ✨
# Use Cases:
HTTP response caching is something you can use **almost everywhere** to:
* Improve the performance of your program
* Work without an internet connection (offline mode)
* Save money and stop wasting API calls—make a single request and reuse it many times!
* Work even when your upstream server goes down
* Avoid unnecessary downloads when content hasn't changed (what I call "free caching"—it's completely free and can be configured to always serve the freshest data without re-downloading if nothing changed, like the browser's 304 Not Modified response)
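To illustrate the "free caching" idea from the last bullet, here is a sketch of conditional revalidation done by hand with plain httpx (deliberately not using Hishel's own API); the URL is a placeholder and the server may or may not send an ETag:

import httpx

with httpx.Client() as client:
    first = client.get("https://example.com/")
    etag = first.headers.get("ETag")

    # Revalidate: ask the server to send the body only if it changed.
    headers = {"If-None-Match": etag} if etag else {}
    second = client.get("https://example.com/", headers=headers)

    if second.status_code == 304:
        # Nothing was re-downloaded; reuse the body we already have.
        body = first.content
    else:
        body = second.content
    print(len(body))

A caching client automates exactly this dance (plus storage and freshness rules) so you never write it yourself.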
# QuickStart
First, download and install Hishel using pip:
pip: `pip install "hishel[httpx, requests, fastapi, async]"==1.0.0`
We've installed several integrations just for demonstration—you
/r/Python
https://redd.it/1oilkc1
django-modern-csrf: CSRF protection without tokens
I made a package that replaces Django's default CSRF middleware with one based on modern browser features (Fetch metadata request headers and Origin).
The main benefit: no more {% csrf_token %} in templates or csrfmiddlewaretoken on forms, and no X-CSRFToken headers to configure in your frontend. It's a drop-in replacement - just swap the middleware and you're done.
It works by checking the Sec-Fetch-Site header that modern browsers send automatically. According to caniuse, it's supported by 97%+ of browsers. For older browsers, it falls back to Origin header validation.
The implementation is based on Go's standard library approach (there's a great article by Filippo Valsorda about it).
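To illustrate the check described above, here is a minimal sketch of Sec-Fetch-Site validation with an Origin fallback. It is illustrative only and is not django-modern-csrf's actual middleware code.

from urllib.parse import urlparse

SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

def is_request_allowed(request):
    if request.method in SAFE_METHODS:
        return True
    sec_fetch_site = request.headers.get("Sec-Fetch-Site")
    if sec_fetch_site is not None:
        # Browsers set this automatically; only same-origin requests (or
        # browser-initiated "none", e.g. typing the URL) may change state.
        return sec_fetch_site in ("same-origin", "none")
    # Fallback for older browsers: compare the Origin header with the request host.
    origin = request.headers.get("Origin")
    if origin is None:
        return False
    return urlparse(origin).netloc == request.get_host()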
PyPI: https://pypi.org/project/django-modern-csrf/
GitHub: https://github.com/feliperalmeida/django-modern-csrf
Let me know if you have questions or run into issues.
/r/django
https://redd.it/1oihb4l