[R] We were wrong about SNNs. The bottleneck isn't binary/sparsity, it's frequency.
TL;DR: The paper reveals that the performance gap between SNNs and ANNs stems not from information loss caused by binary spike activations, but from the intrinsic low-pass filtering of spiking neurons.
Paper: https://arxiv.org/pdf/2505.18608
Repo (please ⭐️ if useful): https://github.com/bic-L/MaxForme
The Main Story:
For years, it's been widely believed that SNNs' performance gap comes from "information loss due to binary/sparse activations." This paper challenges that view: it finds that spiking neurons essentially act as low-pass filters at the network level, so high-frequency components dissipate quickly and feature representations lose effectiveness. Think of SNNs as having "astigmatism": they see a coarse overall image but cannot clearly discern local details.
Highlighted Results:
1. In a Spiking Transformer on CIFAR-100, simply replacing Avg-Pool (low-pass) with Max-Pool (high-pass) as the token mixer boosted accuracy by +2.39% (79.12% vs. 76.73%); a sketch of this swap follows the list.
2. Max-Former fixes this "astigmatism" with the very lightweight Max-Pool and depthwise convolution (DWC) operations, achieving 82.39% (+7.58%) on ImageNet with 30% less energy.
3. Max-ResNet achieves +2.25% on CIFAR-10 and +6.65% on CIFAR-100 by simply adding two Max-Pool operations.
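For intuition, here is a minimal sketch of the Avg-Pool to Max-Pool swap in a pooling-style token mixer (not the paper's code; the PoolFormer-style residual form and the module name are assumptions):

```python
import torch
import torch.nn as nn

class PoolTokenMixer(nn.Module):
    """Pooling-based token mixer; pool="max" preserves high-frequency detail."""
    def __init__(self, pool: str = "max", kernel_size: int = 3):
        super().__init__()
        cls = nn.MaxPool2d if pool == "max" else nn.AvgPool2d
        self.pool = cls(kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x):
        # x: (B, C, H, W) token map; residual form pool(x) - x, as in PoolFormer
        return self.pool(x) - x

x = torch.randn(1, 64, 8, 8)
print(PoolTokenMixer("max")(x).shape)  # torch.Size([1, 64, 8, 8])
```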
This work provides a new perspective on the performance bottlenecks of SNNs. It suggests that the path to optimizing SNNs may not simply
/r/MachineLearning
https://redd.it/1on7ow7
🆕 ttkbootstrap-icons 3.1 — Stateful Icons at Your Fingertips 🎨💡
Hey everyone — I’m excited to announce that v3.1 of ttkbootstrap-icons brings major enhancements to its icon system.
## 💫 What’s new
### Stateful icons
You can now map icons to widget states — hover, pressed, selected, disabled — without manually swapping images.
If you just want to map the icon to the themed button states, it's simple:

```python
from ttkbootstrap_icons import BootstrapIcon  # import path assumed from the package name

button = ttk.Button(root, text="Home")
# map the icon to the styled button states
BootstrapIcon("house").map(button)
```
> BTW... this works with vanilla styled Tkinter as well. :-)
If you want to get fancier...

```python
import ttkbootstrap as ttk
from ttkbootstrap_icons import BootstrapIcon  # import path assumed from the package name

root = ttk.Window("Demo", themename="flatly")

btn = ttk.Button(root, text="Home")
btn.pack(padx=20, pady=20)

icon = BootstrapIcon("house")
# swap the icon on hover, and change its color when pressed
icon.map(btn, statespec=[("hover", "#0af"), ("pressed", {"name": "house-fill", "color": "green"})])

root.mainloop()
```
✅ Icons automatically track your widget’s theme foreground color unless you explicitly override it.
✅ Fully supports all icon sets in `ttkbootstrap-icons`.
✅ Works seamlessly with existing ttkbootstrap themes and styles.
---
## ⚙️ Under the hood
- Introduces `StatefulIconMixin`, integrated into the base `Icon` class.
- Uses `ttk.Style.map(..., image=...)` to apply per-state images dynamically (sketched below).
- Automatically generates derived child styles like `house-house-fill-16.my.TButton` if you don't specify a subclass.
- Falls back to the original untinted icon for unmatched states (the empty-state `''` entry).
- Default `mode="merge"` allows incremental icon-state changes without overwriting existing style maps.
/r/Python
https://redd.it/1on22u9
Pyrefly: Type Checking 1.8 Million Lines of Python Per Second
How do you type-check 1.8 million lines of Python per second? Neil Mitchell explains how Pyrefly (a new Python type checker) achieves this level of performance.
Python's optional type system has grown increasingly sophisticated since type annotations were introduced in 2014, now featuring generics, subtyping, flow types, inference, and field refinement. This talk explores how Pyrefly models and validates this complex type system, the architectural choices behind it, and the performance optimizations that make it blazingly fast.
Full talk on Jane Street's youtube channel: https://www.youtube.com/watch?v=Q8YTLHwowcM
Learn more: https://pyrefly.org
/r/Python
https://redd.it/1oncd2l
Approved: PEP 798: Unpacking in Comprehensions & PEP 810: Explicit lazy imports
Today, two PEPs were approved by the Steering Council (quick illustrations of each below):
- https://discuss.python.org/t/pep-798-unpacking-in-comprehensions/99435/60
- https://discuss.python.org/t/pep-810-explicit-lazy-imports/104131/466
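For a sense of what each enables, here are minimal illustrations based on the proposals (this syntax requires a future CPython release that ships these PEPs):

```python
# PEP 798: iterable/dict unpacking directly inside comprehensions.
lists = [[1, 2], [3, 4]]
flat = [*xs for xs in lists]                   # [1, 2, 3, 4]
merged = {**d for d in ({"a": 1}, {"b": 2})}   # {"a": 1, "b": 2}

# PEP 810: explicitly lazy imports, resolved on first use.
lazy import json         # binds the name without doing the import work yet
print(json.dumps(flat))  # first use triggers the actual import
```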
/r/Python
https://redd.it/1ongpc9
PKBoost v2 is out! An entropy-guided boosting library with a focus on drift adaptation and multiclass/regression support.
Hey everyone in the ML community,
I wanted to start by saying a huge thank you for all the engagement and feedback on PKBoost so far. Your questions, tests, and critiques have been incredibly helpful in shaping this next version. I especially want to thank everyone who took the time to run benchmarks, particularly in challenging drift and imbalance scenarios.
For context, here are the previous posts:
Post 1
Post 2
I'm really excited to announce that PKBoost v2 is now available on GitHub. Here’s a rundown of what's new and improved:
Key New Features
Shannon Entropy Guidance: We've introduced a mutual-information weighted split criterion (sketched after this list). It helps the model prioritize features that are truly informative, which has proven especially useful on highly imbalanced datasets.
Auto-Tuning: To make things easier, there's now dataset profiling and automatic selection for hyperparameters like learning rate, tree depth, and MI weight.
Expanded Support for Multi-Class and Regression: We've added One-vs-Rest for multiclass boosting and a full range of regression capabilities, including Huber loss for outlier handling.
Hierarchical Adaptive Boosting (HAB): This is a new partition-based ensemble method. It uses k-means clustering to train specialist models on different segments of the data. It also includes drift detection, so only the affected parts of
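To make the entropy-guided criterion concrete, here is an illustrative sketch (not PKBoost's actual code; the blend and the `mi_weight` name are assumptions based on the description above):

```python
import numpy as np

def mutual_information(split_mask: np.ndarray, y: np.ndarray) -> float:
    """I(split; y) in nats, for a binary split indicator and binary labels."""
    mi = 0.0
    for s in (0, 1):
        for c in (0, 1):
            p_joint = np.mean((split_mask == s) & (y == c))
            if p_joint > 0:
                p_s = np.mean(split_mask == s)
                p_c = np.mean(y == c)
                mi += p_joint * np.log(p_joint / (p_s * p_c))
    return float(mi)

def split_score(grad_gain: float, split_mask, y, mi_weight: float = 0.3) -> float:
    """Blend the usual gradient gain with how informative the split is about y."""
    return grad_gain + mi_weight * mutual_information(np.asarray(split_mask), np.asarray(y))

# A split that isolates most of the rare positive class carries high mutual
# information, so it scores well even when its raw gradient gain is modest.
mask = np.array([1, 1, 1, 0, 0, 0, 0, 0])
labels = np.array([1, 1, 0, 0, 0, 0, 0, 0])
print(split_score(grad_gain=0.5, split_mask=mask, y=labels))
```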
/r/MachineLearning
https://redd.it/1on8y3y
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1onsfns
AidMap - Crowdsourced Map for first-aid kits & AEDs
https://github.com/dankeg/AidMap
/r/flask
https://redd.it/1onsrai
Just got $5K AWS credits approved for my startup
Didn’t expect this to still work in 2025, but I just got **$5,000 in AWS credits** approved for my small startup.
We’re not in YC or any accelerator, just a verified startup with:
* a **website**
* a **business email**
* and an actual product in progress
It took around 2–3 days to get verified, and the credits were added directly to the AWS account.
So if you’re building something and have your own domain, there’s still a valid path to get AWS credits even if you’re not part of Activate.
If anyone’s curious or wants to check if they’re eligible, DM me and I can share the steps.
/r/django
https://redd.it/1onmrq5
Intercom — Open-Source WebRTC Audio & Video Intercom System in Python
Hi Friends,
I just finished an open-source project called Intercom. It turns any computer with a microphone, speakers, and webcam into a remote intercom. You can talk, listen, and watch in real time through your browser using WebRTC.
Repo: https://github.com/zemendaniel/intercom
# What My Project Does
Intercom allows a single user to remotely stream video and audio from a Linux machine to a browser. The user can monitor a space and communicate in real-time with anyone watching. It uses WebRTC for low-latency streaming and Python/Quart as the backend.
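As a rough illustration of how a Quart + aiortc pairing typically handles WebRTC signaling (a hedged sketch, not the project's actual code; the route and field names are assumptions):

```python
from quart import Quart, request, jsonify
from aiortc import RTCPeerConnection, RTCSessionDescription

app = Quart(__name__)
pcs = set()  # keep references so peer connections aren't garbage-collected

@app.post("/offer")
async def offer():
    # The browser POSTs an SDP offer; the server answers with its own SDP.
    params = await request.get_json()
    pc = RTCPeerConnection()
    pcs.add(pc)

    # A real app would attach microphone/webcam tracks here, e.g. via
    # aiortc.contrib.media.MediaPlayer, before answering.
    await pc.setRemoteDescription(
        RTCSessionDescription(sdp=params["sdp"], type=params["type"])
    )
    await pc.setLocalDescription(await pc.createAnswer())
    return jsonify({"sdp": pc.localDescription.sdp, "type": pc.localDescription.type})

if __name__ == "__main__":
    app.run()
```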
# Target Audience
- Hobbyists or developers who want a self-hosted intercom system
- People looking for a lightweight, Python-based WebRTC solution
- Small deployments where 1 user streams and 1 viewer watches
# Comparison
Unlike commercial tools like Zoom or Jitsi, Intercom is:
- Self-hosted: no third-party servers required for video/audio, except an optional TURN relay.
- Lightweight & Python-powered: easy to read, modify, and extend.
- Open-source, GPLv3 licensed: guarantees users can use, modify, and redistribute freely.
# Features
🔊 Two-way audio communication
🎥 Live video streaming
🔐 Password-protected single-user login
⚙️ Built with Python, Quart, and aiortc
🧩 Uses Coturn for TURN/STUN relay support
🖥️ Easy deployment on Linux (Ubuntu/Debian)
# Tech Stack
- Python 3.11+
- Quart (async web framework)
- aiortc (WebRTC + media handling)
- Hypercorn
/r/Python
https://redd.it/1onnwlx
In which cases do you use custom middleware?
I understand what middleware is and how it works. But in Django, middleware is global in nature. So, in most projects with versatile requirements, especially role-based ones, why would anyone want to use global middleware for anything other than logging?
As a developer, when have you felt the need to use custom global middlewares?
I really want to understand its use cases so I can better prepare this topic.
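For example, is something like this the kind of use case you mean? (My own hedged sketch of a request-tagging middleware; the names are made up.)

```python
import time
import uuid

class RequestMetaMiddleware:
    """Tag every request with a correlation ID and record its duration."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        request.correlation_id = uuid.uuid4().hex  # visible to all views
        start = time.monotonic()
        response = self.get_response(request)
        response["X-Correlation-ID"] = request.correlation_id
        response["X-Response-Time-ms"] = str(int((time.monotonic() - start) * 1000))
        return response
```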
Thanks a lot.
/r/djangolearning
https://redd.it/1onq17b
What are some of the most interesting Django projects you worked on?
What are some of the most interesting Django projects you worked on? Be they in a professional or personal capacity. Be mindful if using a professional example not to divulge anything considered sensitive.
/r/django
https://redd.it/1onfdra
[Showcase] trendspyg - Python library for Google Trends data (pytrends replacement)
What My Project Does
trendspyg retrieves real-time Google Trends data with two approaches:

- RSS Feed (0.2s): fast trends with news articles, images, and sources
- CSV Export (10s): 480 trends with filtering (time periods, categories, regions)
```python
# pip install trendspyg
from trendspyg import download_google_trends_rss

# Get trends with news context in <1 second
trends = download_google_trends_rss('US')
print(f"{trends[0]['trend']}: {trends[0]['news_articles'][0]['headline']}")
# Output: "xrp: XRP Price Faces Death Cross Pattern"
```
Key features:

- 📰 News articles (3-5 per trend) with sources
- 📸 Images with attribution
- 🌍 114 countries + 51 US states
- 📊 4 output formats (dict, DataFrame, JSON, CSV)
- ⚡ 188,000+ configuration options

---
Target Audience

Production-ready for:

- Data scientists: Multiple output formats, 24 automated tests, 92% RSS coverage
- Journalists: 0.2s response time for breaking news validation with credible sources
- SEO/Marketing: Free alternative saving $300-1,500/month vs commercial APIs
- Researchers: Mixed-methods ready (RSS = qualitative, CSV = quantitative)

Stability: v0.2.0, tested on Python 3.8-3.12, CI/CD pipeline active

---
Comparison
vs. pytrends (archived April 2025)
/r/Python
https://redd.it/1oo8fka
CoreSpecViewer: An open-source hyperspectral core scanning platform
# [CoreSpecViewer](https://github.com/Russjas/CoreSpecViewer/tree/main)
This is my first serious Python repo, where I have actually built something rather than just "learn to code" projects.
It is pretty niche, a GUI for hyperspectral core scanning workflows, but I am pretty pleased with it.
I hope that I have set it up in such a way that I can add pages with extra functionality and support for additional instrument manufacturers.
If anyone is nerdy enough to want to play with it, free data can be downloaded from:
Happy to receive all comments and criticisms, particularly if anyone does try it on data and breaks it!
**What my project does:**
This is a platform for opening raw hyperspectral core scanning data and performing the corrections and processing necessary for interpretation. It also handles all loading and saving of data, including products.
**Target Audience**
Principally geologists working with drill core. This data is becoming more and more available, but there is limited choice in commercial applications, and most open-source solutions require command-line use or scripting.
**Comparison**
This is similar to many open-source Python libraries, and uses them extensively, but it is the only desktop-based GUI platform.
/r/Python
https://redd.it/1oo8lpf
How often does Python allocate?
Recently a tweet blew up that went along the lines of 'I will never forgive Rust for making me think to myself "I wonder if this is allocating" whenever I'm writing Python now', to which almost everyone jokingly responded, "it's Python, of course it's allocating".
I wanted to see how true this was, so I did some digging into the CPython source and wrote a blog post about my findings. I focused specifically on allocations of the `PyLongObject` struct, the object that is created for every integer.
I noticed some interesting things:
1. There were a lot of allocations
2. CPython was actually reusing a lot of memory from a freelist (see the snippet after this list)
3. Even if it _did_ allocate, the underlying memory allocator was a pool allocator backed by an arena, meaning there were actually very few calls to the OS to reserve memory
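A quick way to observe both effects from Python itself (CPython-specific behavior, not a language guarantee):

```python
# Small ints (-5..256) are preallocated singletons, so no allocation happens.
a, b = 256, 256
print(a is b)       # True on CPython

# Larger ints are heap-allocated, but freed PyLongObjects can be reused,
# so back-to-back temporaries often land at the same address.
n = 10_000
x = id(n + 1)       # the temporary result is freed right after id() returns
y = id(n + 2)       # ...and its slot is frequently handed straight back
print(x == y)       # often True on CPython (an implementation detail)
```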
Feel free to check out the blog post and let me know your thoughts!
/r/Python
https://redd.it/1ooe0g4
Type-safe, coroutine-based, purely functional algebraic effects in Python.
Hi gang. I'm a huge statically typed functional programming fan, and I have been working on a functional effect system for python for some years in multiple different projects.
With the latest release of my project https://github.com/suned/stateless, I've added direct integration with asyncio, which has been a major goal since I first started the project. Happy to take feedback and questions. Also, let me know if you want to try it out, either professionally or in your own projects!
What My Project Does
Enables type-safe, functional effects in Python, without monads.
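To give a flavor of the general idea, here is a toy illustration of coroutine-based effects (not stateless's actual API): an effectful function yields requests, and a separate, pure interpreter decides how to fulfil them.

```python
from typing import Generator

class ReadLine:
    """An effect: a request for a line of input, with no I/O attached."""

def greet() -> Generator[ReadLine, str, str]:
    name = yield ReadLine()   # ask the interpreter for input
    return f"Hello, {name}!"

def run_with_input(effectful, line: str) -> str:
    """A pure interpreter that fulfils ReadLine with a canned answer."""
    gen = effectful()
    request = next(gen)       # receive the ReadLine effect
    assert isinstance(request, ReadLine)
    try:
        gen.send(line)        # fulfil it
    except StopIteration as stop:
        return stop.value     # the generator's return value

print(run_with_input(greet, "World"))  # Hello, World!
```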
Target Audience
Functional Python Enthusiasts.
/r/Python
https://redd.it/1oolq4o
Real time execution?
Hello my wonderful reddit pythonists!
I have for you a question:
Is there any existing solution that effectively achieves real-time output of every line as I type?
Some background:
I am a mechanical engineer (well, a final-year student) and often do many different calculations and model systems in software. I find that "calculators" often don't quite hit the level of flexibility I'd like to see; think Qalculate, for example. Essentially, what I desire is a calculator where I can define variables, write equations, display plots, etc., and be able to change an earlier variable and have everything below it update in real time.
Note: I am NOT new to python/programming. Talk dirty (technical) to me if you must.
What I have already explored:
Jupyter - Cell-based; fine for some calculations where there may be a long-running step (think meshing or heavy iteration). Doesn't output all results, only the last, without a bunch of print() statements. Requires re-running all cells if an earlier variable is updated.
Marimo - Closer than Jupyter. Still cell-based but updates dynamically. This is pretty close but still not there, as it only seems to update dynamically with Marimo UI elements (like scroll bars).
/r/Python
https://redd.it/1oolpva
Weak Incentives (Py3.12+) — typed, stdlib‑only agent toolkit
What My Project Does
Weak Incentives is a lean, stdlib‑first runtime for side‑effect‑free background agents in Python. It composes dataclass‑backed prompt trees that render deterministic Markdown, parses strict JSON, and records plans/tool calls/staged edits in a session ledger with reducers, rollback, a sandboxed VFS, planning tools, and optional Python‑eval (via asteval). Adapters (OpenAI/LiteLLM) are optional and add structured output + tool orchestration.
Target Audience
Python developers building LLM agents or automation who want reproducibility/auditability, typed I/O, and minimal dependencies (Python 3.12+).
Comparison
Most frameworks emphasize graph schedulers/optimizers or pull in heavy deps. Weak Incentives centers deterministic prompt composition and fail‑closed structured outputs, with a built‑in session/event model (reducers, rollback) and sandboxed VFS/planning; it works provider‑free for rendering/state and adds adapters only when you evaluate.
Source Code:
https://github.com/weakincentives/weakincentives
/r/Python
https://redd.it/1oohs41
pyro-mysql v0.1.8: a fast MySQL client library
What My Project Does
pyro-mysql is a fast sync/async MySQL library backed by Rust
Repo
https://github.com/elbaro/pyro-mysql/
Bench
https://github.com/elbaro/pyro-mysql/blob/main/BENCHMARK.md
- For small sync SELECTs, `pyro-mysql` is 40% faster than `mysqlclient`.
- For small async SELECTs, `pyro-mysql` is 30% faster than `aiomysql`.
- For large SELECTs, `pyro-mysql` (async) is 3x faster than `aiomysql`/`asyncmy`.
- An experimental `wtx` backend (not included in v0.1.8) is 5x faster than `aiomysql`.
- For sync INSERTs, `pyro-mysql` is 50% faster than `mysqlclient`.
- For async INSERTs, `pyro-mysql` is 20% slower than `aiomysql`.

Target Audience: the library aims to be production-ready.

Comparison: see the previous post.
v0.1.8 adds SQLAlchemy support with the following dialects:

- mysql+pyro_mysql://
- mysql+pyro_mysql_async://
- mariadb+pyro_mysql://
- mariadb+pyro_mysql_async://
It is tested against related test suites from the sqlalchemy repo.
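A minimal usage sketch based on the dialect names above (connection details are placeholders):

```python
from sqlalchemy import create_engine, text

# Sync engine through the pyro_mysql dialect; swap in real credentials.
engine = create_engine("mysql+pyro_mysql://user:password@localhost:3306/mydb")

with engine.connect() as conn:
    print(conn.execute(text("SELECT VERSION()")).scalar_one())
```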
/r/Python
https://redd.it/1oo4uuk
Optimizing filtered vector queries from tens of seconds to single-digit milliseconds in PostgreSQL
We actively use pgvector in a production setting for maintaining and querying HNSW vector indexes used to power our recommendation algorithms. A couple of weeks ago, however, as we were adding many more candidates into our database, we suddenly noticed our query times increasing linearly with the number of profiles, which turned out to be a result of incorrectly structured and overly complicated SQL queries.
Turns out that I hadn't fully internalized how filtering vector queries really worked. I knew vector indexes were fundamentally different from B-trees, hash maps, GIN indexes, etc., but I had not understood that they were essentially incompatible with more standard filtering approaches in the way that they are typically executed.
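To make that concrete, here is a hedged sketch of the query shape this usually leads to (illustrative table and column names, not the post's code): let the HNSW index drive the ordering and widen its candidate pool so post-filtering still returns enough rows.

```python
import psycopg  # psycopg 3

query_embedding = "[0.1, 0.2, 0.3]"  # placeholder vector literal

with psycopg.connect("dbname=app") as conn, conn.cursor() as cur:
    # More candidates out of the HNSW index means filters discard fewer rows.
    cur.execute("SET hnsw.ef_search = 100")
    cur.execute(
        """
        SELECT id
        FROM profiles
        WHERE is_active                    -- cheap post-filter on a plain column
        ORDER BY embedding <=> %s::vector  -- cosine distance served by HNSW
        LIMIT 10
        """,
        (query_embedding,),
    )
    print(cur.fetchall())
```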
I searched through Google to page 10 and beyond with various different searches, but struggled to find thorough examples addressing the issues I was facing in real production scenarios that I could use to ground my expectations and guide my implementation.
Now I've written a blog post about some of the best practices I learned for filtering vector queries using pgvector with PostgreSQL, based on all the information I could find: thoroughly tried and tested, and currently deployed in production. In it I try to provide:
-
/r/Python
https://redd.it/1ooy326