Recent Noteworthy Package Releases
Over the last 7 days, these are the significant upgrades and releases I have noticed in the Python package ecosystem.
**python-calamine 0.4.0** - Python binding for calamine, a Rust library for reading Excel and ODF files
**SeleniumBase 4.40.0** - A complete web automation framework for end-to-end testing
**pylance 0.31.0** - Python wrapper for the Lance columnar format
**PyAV 15.0.0** - Pythonic bindings for FFmpeg's libraries
**PEFT 0.16.0** - Parameter-Efficient Fine-Tuning
**CrewAI 0.140.0** - Cutting-edge framework for orchestrating role-playing, autonomous AI agents that collaborate to tackle complex tasks
**statsig-python-core 0.6.0** - Python bindings for the Statsig Core SDK
**haystack-experimental 0.11.0** - Experimental components and features for the Haystack LLM framework
**wandb 0.21.0** - A CLI and library for interacting with the Weights & Biases API
**fastmcp 2.10.0** - The fast, Pythonic way to build MCP servers
**feast 0.50.0** - The open-source feature store for AI/ML
**sentence-transformers 5.0.0** - Embeddings, retrieval, and reranking
**PaddlePaddle 3.1.0** - Parallel distributed deep learning
**pillow-heif 1.0.0** - Python interface for the libheif library
**bleak 1.0.0** - Bluetooth Low Energy platform Agnostic Klient
**browser-use 0.4** - Make websites accessible for AI agents
**PostHog 6.0.0** - Integrate PostHog into any Python application
/r/Python
https://redd.it/1lrgrs6
GitHub
Release v0.4.0 · dimastbk/python-calamine
What's Changed
feat: add support of merged cells by @dimastbk in #126
fix: improve docs for merged_cell_ranges, don't use namedtuple by @dimastbk in #127
fix(deps): update rust crate pyo3 ...
What is Jython and is it still relevant?
I had never seen it before until I opened up this book, which was published in 2010.
Is it still relevant and what has been created with it?
The book is called "Introduction to Computing and Programming in Python: A Multimedia Approach", 2nd edition, by Mark Guzdial and Barbara Ericson.
/r/Python
https://redd.it/1lr4o0b
Desto: A Web-Based tmux Session Manager for Bash/Python Scripts
Sharing a personal project called desto, a web-based session manager built with NiceGUI. It's designed to help you run and monitor bash and Python scripts, and is especially useful for long-running processes or automation tasks.
What My Project Does:
desto provides a centralized web dashboard to manage your scripts. Key features include:
Real-time system statistics directly on the dashboard.
Ability to run both bash and Python scripts, with each script launched within its own `tmux` session.
Live viewing and monitoring of script logs.
Functionality for scheduling scripts and chaining them together.
Sessions persist even after script completion, thanks to `tmux` integration, ensuring your processes remain active even if your connection drops.
Target Audience: This project is currently a personal development and learning project, but it's built with practical use cases in mind. It's suitable for:
Developers and system administrators looking for a simple, self-hosted tool to manage automation scripts.
Anyone who needs to run long-running Python or bash processes and wants an easy way to monitor their output, system stats, and ensure persistence.
Users who prefer a web interface for managing their background tasks over purely CLI-based solutions.
Comparison: While there are many tools for process management and automation, desto aims for a unique blend…
/r/Python
https://redd.it/1lrk2l8
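The tmux persistence described above boils down to launching each script inside a detached session. A minimal sketch of that mechanism (an illustrative helper, not desto's actual code; the session and script names are made up):

```python
import subprocess

def launch_in_tmux(session: str, command: str) -> None:
    """Start a command inside a detached tmux session so it survives
    disconnects; view it later with `tmux attach -t <session>`."""
    subprocess.run(
        ["tmux", "new-session", "-d", "-s", session, command],
        check=True,
    )

# launch_in_tmux("nightly-backup", "python backup.py")
```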
pyleak: pytest-plugin to detect asyncio event loop blocking and task leaks
**What** `pyleak` **does**
`pyleak` is a pytest plugin that automatically detects event loop blocking in your asyncio test suite. It catches synchronous calls that freeze the event loop (like `time.sleep()`, `requests.get()`, or CPU-intensive operations) and provides detailed stack traces showing exactly where the blocking occurs. Zero configuration required - just install and run your tests.
**The problem it solves**
Event loop blocking is the silent killer of async performance. A single `time.sleep(0.1)` in an async function can tank your entire application's throughput, but these issues hide during development and only surface under production load. Traditional testing can't detect these problems because the tests still pass - they just run slower than they should.
**Target audience**
This is a pytest-plugin for Python developers building asyncio applications. It's particularly valuable for teams shipping async web services, AI agent frameworks, real-time applications, and concurrent data processors where blocking calls can destroy performance under load but are impossible to catch reliably during development.
pip install pytest-pyleak
import pytest
@pytest.mark.no_leak
/r/Python
https://redd.it/1lrc6je
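The class of bug pyleak catches is easy to reproduce with plain asyncio: a synchronous sleep inside a coroutine serializes otherwise-concurrent tasks. A stdlib-only sketch of the problem (not pyleak code):

```python
import asyncio
import time

async def friendly_handler():
    await asyncio.sleep(0.05)  # yields to the event loop; tasks overlap

async def blocking_handler():
    time.sleep(0.05)  # synchronous sleep: freezes the whole loop (what pyleak flags)

async def main():
    start = time.perf_counter()
    # ten concurrent friendly handlers finish in roughly one sleep interval;
    # swapping in blocking_handler() would serialize them to ~0.5s
    await asyncio.gather(*(friendly_handler() for _ in range(10)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
```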
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators - A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1lrwxkg
Skylos: The python dead code finder (Updated)
Been working on Skylos, a Python static analysis tool that helps you find and remove dead code from your projects (again…). We are trying to build something that catches these issues faster and more accurately (although this is debatable, because different tools catch different things). The project was initially written in Rust, and it flopped; there were too many false positives (a coding-skills issue). Now the codebase is in Python. Benchmarks against other tools can be found in benchmark.md.
# What the project does:
Detects unreachable functions and methods
Finds unused imports
Identifies unused classes
Spots unused variables
Detects unused parameters
Pragma ignore (Newly added)
# So what has changed?
1. We have introduced pragma to ignore false positives
2. Cleaned up more false positives
3. Introduced, or at least attempted, better handling of dynamic frameworks like Flask and FastAPI
# Target Audience:
Python developers working on medium to large codebases
Teams looking to reduce technical debt
Open source maintainers who want to keep their projects clean
Anyone tired of manually searching for dead code
# Key Features:
```bash
# Basic usage
skylos /path/to/your/project
```
/r/Python
https://redd.it/1lrxr7b
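For a sense of what the detectors listed above look for, here is a small illustrative file containing one instance of each finding category (all names are made up; this is not Skylos code):

```python
import os   # unused import: flagged
import sys

def used() -> str:
    return sys.platform

def never_called():          # never invoked anywhere: unreachable function
    unused_var = 42          # unused variable
    return unused_var

class NeverInstantiated:     # unused class
    def method(self, unused_param):  # unused parameter
        return 1

platform = used()
```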
Did anyone receive this from NIPS?
Your co-author, Reviewer has not submitted their reviews for one or more papers assigned to them for review (or they submitted insufficient reviews). Please kindly note the Review deadline was on the 2nd July 11.59pm AOE.
===
My co-author has graduated and no longer works in academia. How can I handle this? It is not fair to reject my paper!
/r/MachineLearning
https://redd.it/1lrr5yy
Generating Synthetic Data for Your ML Models
I prepared a simple tutorial to demonstrate how to use synthetic data with machine learning models in Python.
https://ryuru.com/generating-synthetic-data-for-your-ml-models/
/r/Python
https://redd.it/1lrkjvc
WebPath: Yes yet another another url library but hear me out
Yeaps another url library. But hear me out. Read on first.
# What my project does
Extending the pathlib concept to HTTP:
```python
# before:
resp = requests.get("https://api.github.com/users/yamadashy")
data = resp.json()
name = data["name"]  # pray it exists
repos_url = data["repos_url"]
repos_resp = requests.get(repos_url)
repos = repos_resp.json()
first_repo = repos[0]["name"]  # more praying
```
```python
# after:
user = WebPath("https://api.github.com/users/yamadashy").get()
name = user.find("name", default="Unknown")
first_repo = (user / "repos_url").get().find("0.name", default="No repos")
```
Other stuff:
Request timing: GET /users → 200 (247ms)
Rate limiting: .with_rate_limit(2.0)
Pagination with cycle detection
Debugging the api itself with .inspect()
Caching that strips auth headers automatically
What makes it different vs existing libraries:
requests + jmespath/jsonpath: Need 2+ libraries
httpx: Similar base nav but no json navigation or debugging integration
furl + requests: Not sure if we're in the same boat but this is more for url building ..
# Target audience
For people who:
Build scripts that consume APIs (stock prices, crypto prices, GitHub stats, etc.)
Get frustrated debugging
/r/Python
https://redd.it/1lr8d7t
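The dotted-path `.find("0.name", default=...)` lookup shown above can be approximated in a few lines of plain Python. This illustrative reimplementation (not WebPath's actual code) shows the safe-navigation idea:

```python
def find(data, path, default=None):
    """Minimal dotted-path lookup over nested dicts/lists: numeric path
    segments index lists, other segments key into dicts; any miss
    returns the default instead of raising."""
    current = data
    for part in str(path).split("."):
        try:
            current = current[int(part)] if isinstance(current, list) else current[part]
        except (KeyError, IndexError, ValueError, TypeError):
            return default
    return current

repos = [{"name": "repomix"}, {"name": "other"}]
first = find(repos, "0.name", default="No repos")
missing = find({}, "missing.key", default="Unknown")
```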
I benchmarked 4 Python text extraction libraries so you don't have to (2025 results)
TL;DR: Comprehensive benchmarks of Kreuzberg, Docling, MarkItDown, and Unstructured across 94 real-world documents. Results might surprise you.
## 📊 Live Results: https://goldziher.github.io/python-text-extraction-libs-benchmarks/
---
## Context
As the author of Kreuzberg, I wanted to create an honest, comprehensive benchmark of Python text extraction libraries. No cherry-picking, no marketing fluff - just real performance data across 94 documents (~210MB) ranging from tiny text files to 59MB academic papers.
Full disclosure: I built Kreuzberg, but these benchmarks are automated, reproducible, and the methodology is completely open-source.
---
## 🔬 What I Tested
### Libraries Benchmarked:
- Kreuzberg (71MB, 20 deps) - My library
- Docling (1,032MB, 88 deps) - IBM's ML-powered solution
- MarkItDown (251MB, 25 deps) - Microsoft's Markdown converter
- Unstructured (146MB, 54 deps) - Enterprise document processing
### Test Coverage:
- 94 real documents: PDFs, Word docs, HTML, images, spreadsheets
- 5 size categories: Tiny (<100KB) to Huge (>50MB)
- 6 languages: English, Hebrew, German, Chinese, Japanese, Korean
- CPU-only processing: No GPU acceleration for fair comparison
- Multiple metrics: Speed, memory usage, success rates, installation sizes
---
## 🏆 Results Summary
### Speed Champions 🚀
1. Kreuzberg: 35+ files/second, handles everything
2. Unstructured: Moderate speed, excellent reliability
3. MarkItDown: Good on simple docs, struggles with complex files
4. Docling: Often 60+ minutes per file (!!)
### Installation Footprint 📦
- Kreuzberg: 71MB, 20 dependencies ⚡
- Unstructured:
/r/Python
https://redd.it/1ls6hj5
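The files-per-second and success-rate metrics above can be measured with a simple harness. This is a simplified sketch of the methodology, not the benchmark suite's actual code:

```python
import time
from pathlib import Path

def benchmark(extract, files):
    """Time an extraction callable over a corpus; report throughput
    and success rate. `extract` is any text-extraction function."""
    start = time.perf_counter()
    ok = 0
    for path in files:
        try:
            extract(path)
            ok += 1
        except Exception:
            pass  # failures lower the success rate but still count toward timing
    elapsed = time.perf_counter() - start
    return {
        "files_per_second": len(files) / elapsed if elapsed else float("inf"),
        "success_rate": ok / len(files) if files else 0.0,
    }

# usage with a trivial stand-in extractor over placeholder paths
stats = benchmark(lambda p: str(p), [Path(f"doc{i}.txt") for i in range(10)])
```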
How to record system audio from django website ?
Hi, I am working on a real-time AI lecture/class note-taker.
For that I was trying to record system audio, but it doesn't seem to work. I am using Python's Django framework; can anyone help me?
/r/django
https://redd.it/1lrais1
Is this really the right way to pass parameters from React?
Making a simple application which is meant to send a list to django as a parameter for a get. In short, I'm sending a list of names and want to retrieve any entry that uses one of these names.
The only way I was able to figure out how to do this was to first convert the list to a string and then convert that string back into a JSON in the view. So it looks like this
React:
```js
api/myget/?names=${JSON.stringify(list_of_names)}
```
Django:
```python
list_of_names = json.loads(request.query_params['list_of_names'])
```
this feels very redundant to me. Is this the way people typically would pass a list?
/r/djangolearning
https://redd.it/1lpw4xs
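One common lighter-weight alternative is a comma-separated query parameter (`?names=alice,bob,carol`), which avoids the JSON round-trip entirely. A sketch, where the `Entry` model name is illustrative:

```python
def parse_names(raw: str) -> list[str]:
    """Split a comma-separated ?names= value, dropping empties and whitespace."""
    return [n.strip() for n in raw.split(",") if n.strip()]

# In a DRF view this would pair with:
#   names = parse_names(request.query_params.get("names", ""))
#   Entry.objects.filter(name__in=names)
names = parse_names("alice, bob,,carol")
```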
Robyn now supports Server Sent Events
For the unaware, Robyn is a super fast async Python web framework.
Server Sent Events were one of the most requested features and Robyn finally supports it :D
Let me know what you think and if you'd like to request any more features.
Release Notes - https://github.com/sparckles/Robyn/releases/tag/v0.71.0
/r/Python
https://redd.it/1ls89sy
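Robyn's server-side API is documented in the release notes linked above; on the wire, SSE is just a line-oriented `text/event-stream` format. A minimal illustrative parser (not Robyn code) shows what clients receive:

```python
def parse_sse(lines):
    """Yield (event, data) tuples from text/event-stream lines:
    `event:` sets the event name, `data:` lines accumulate the payload,
    and a blank line terminates one event."""
    event, data = "message", []
    for line in lines:
        if line.startswith("event:"):
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            data.append(line.split(":", 1)[1].strip())
        elif line == "":
            if data:
                yield event, "\n".join(data)
            event, data = "message", []

stream = ["event: tick", "data: 1", "", "data: hello", ""]
events = list(parse_sse(stream))
```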
An analytic theory of creativity in convolutional diffusion models.
https://arxiv.org/abs/2412.20292
/r/MachineLearning
https://redd.it/1lsipgp
Sunday Daily Thread: What's everyone working on this week?
# Weekly Thread: What's Everyone Working On This Week? 🛠️
Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!
## How it Works:
1. Show & Tell: Share your current projects, completed works, or future ideas.
2. Discuss: Get feedback, find collaborators, or just chat about your project.
3. Inspire: Your project might inspire someone else, just as you might get inspired here.
## Guidelines:
Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.
## Example Shares:
1. Machine Learning Model: Working on a ML model to predict stock prices. Just cracked a 90% accuracy rate!
2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!
Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟
/r/Python
https://redd.it/1lsnrbz
For running Python scripts on schedule or as APIs, what do you use?
Just curious, if you’ve written a Python script (say for scraping, data cleaning, sending reports, automating alerts, etc.), how do you usually go about:
1. Running it on a schedule (daily, hourly, etc)?
2. Exposing it as an API (to trigger remotely or integrate with another tool/app)?
Do you:
Use GitHub Actions or cron?
Set up Flask/FastAPI + deploy somewhere like Render?
Use Replit, AWS Lambda, or something else?
Also: would you ever consider paying (like $5–10/month) for a tool that lets you just upload your script and get:
A private API endpoint
Auth + input support
Optional scheduling (like “run every morning at 7 AM”) all without needing to write YAML or do DevOps stuff?
I’m trying to understand what people prefer. Would love your thoughts! 🙏
/r/Python
https://redd.it/1lsgsqn
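For the two questions above, the low-tech baseline is cron for scheduling plus a thin HTTP wrapper for remote triggering. A stdlib-only sketch (function and path names are illustrative):

```python
# Scheduling: a crontab entry, e.g. run every morning at 7 AM:
#   0 7 * * * /usr/bin/python3 /path/to/report.py
# API: wrap the script's entry point in a tiny stdlib HTTP endpoint.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_report():
    return {"status": "ok"}  # stand-in for the real script

class TriggerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/run":
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(run_report()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("0.0.0.0", 8000), TriggerHandler).serve_forever()
```

Hosted alternatives like Render, AWS Lambda, or GitHub Actions trade this hand-rolled wiring for platform configuration.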
What's your take on Celery vs django-qstash for background tasks
Hello guys, I'm currently working on a personal project and would like to know your thoughts and advice on handling background tasks in django.
My use cases include:
1. Sending transactional emails in the background
2. A few periodic tasks
Celery is super powerful and flexible, but it requires running a persistent worker which can get tricky or expensive on some platforms like Render. On the other hand, QStash lets you queue tasks and have them POST to your app without a worker — great for simpler or cost-sensitive deployments.
Have you tried both? What are the trade-offs of adopting django-qstash?
/r/django
https://redd.it/1lsneon
Web push notifications from Django. Here's the tutorial.
https://youtu.be/grSfBbYuJ0I?feature=shared
/r/django
https://redd.it/1lsrgvl
YouTube
JS client, SW and Test Messages | Django Web Push Notifications Part-1 | Ishaan Topkar
Learn how to send web push notifications from Django. This is part 1 of the tutorial: setting up a JavaScript client and a service worker, and sending test messages from Firebase. Part 2 coming soon.
Source Code (JS client and SW):
https://github.com/topCodegeek/fcm…