Using OOP interfaces in Python
I mainly code in the data space. I’m trying to wrap my head around interfaces. I get what they are and, in principle, how they work. However, they seem pretty useless to me, and most of the functions/methods I write don’t seem to benefit from one. Does anyone have any good examples they can share?
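To make the question concrete, here is one hedged example of where an interface earns its keep in data code: any function that should work against several data sources. This uses `typing.Protocol` (structural typing, no inheritance needed); the class and function names are made up for illustration:

```python
from typing import Protocol

class DataSource(Protocol):
    """Interface: anything with a fetch() returning rows of dicts."""
    def fetch(self) -> list[dict]: ...

class ListSource:
    """One concrete implementation; a CsvSource or ApiSource would fit too."""
    def __init__(self, rows: list[dict]) -> None:
        self.rows = rows

    def fetch(self) -> list[dict]:
        return self.rows

def row_count(source: DataSource) -> int:
    # Works with ANY source that satisfies the protocol --
    # the pipeline never needs to know where the rows come from.
    return len(source.fetch())

print(row_count(ListSource([{"a": 1}, {"b": 2}])))  # 2
```

The payoff shows up when you swap a `ListSource` in tests for a database or API client in production without touching `row_count`.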
/r/Python
https://redd.it/1lvrkpg
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1lvyekp
Django monolith + microservice (chat) setup — need input on auth flow
We built a Django + DRF monolithic SaaS app about 3 years ago that handles:
* User authentication (CustomUser)
* Subscription plans via Razorpay
* Users sign up, pay, and access features
Now we want to add a **chat feature** that interacts with **WhatsApp Web**. Here's our current plan:
* Create a separate **chat microservice** hosted on another subdomain (new VM)
* Use **React** frontend + **Django/DRF + Postgres** backend
* The chat microservice will:
* Use the **existing monolith for authentication**
* Maintain its **own database** for chat-related models
* Have a model like `ExternalCustomUser` which stores the `UUID` of the user from the monolith
The **React frontend** will interact with:
1. **Monolith backend** (for login/auth only)
2. **Chat microservice backend** (for all chat features)
# My questions:
1. Since login happens only once via the monolith, is the **authentication latency** negligible and acceptable?
2. After login, when the React app sends the auth token to the chat microservice, will the **chat DRF backend need to validate that token with the monolith on every request**, or is there a cleaner way to handle this?
3. Also, since the chat microservice doesn’t have a native `User` model (only an `ExternalCustomUser` with UUIDs), how should I handle `request.user` in DRF views?
/r/django
https://redd.it/1lw4pa4
Is this safe to use ?
Hi everyone, I am curious about the code below.
re_path(r'^media/(?P<path>.*)$', serve, {'document_root': settings.MEDIA_ROOT}),
It usually solves my problem when I turn off debug mode in my Django system. Is it safe?
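For context on why Django's documentation warns against `django.views.static.serve` in production: the view is described as grossly inefficient and not hardened for production use, so the usual advice is to serve `MEDIA_ROOT` from nginx/Apache instead. The core safety property any file-serving route must preserve is path containment; a stdlib sketch of that check (illustrative only, not Django's actual implementation):

```python
import os

def is_within_root(document_root: str, requested_path: str) -> bool:
    """True only if the resolved target stays inside document_root."""
    target = os.path.normpath(os.path.join(document_root, requested_path))
    return os.path.commonpath([document_root, target]) == document_root

print(is_within_root("/srv/media", "uploads/cat.png"))     # True
print(is_within_root("/srv/media", "../settings_secret"))  # False
```

Django's `serve` does normalize paths, so the bigger production concerns are performance and the missing web-server-level protections; serving `/media/` via an nginx `alias` is the standard fix.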
/r/django
https://redd.it/1lvjzdy
I built a minimal, type-safe dependency injection container for Python
Hey everyone,
Coming from a Java background, I’ve always appreciated the power and elegance of the Spring framework’s dependency injection. However, as I began working more with Python, I noticed that most DI solutions felt unnecessarily complex. So, I decided to build my own: Fusebox.
What My Project Does Fusebox is a lightweight, zero-dependency dependency injection (DI) container for Python. It lets you register classes and inject dependencies using simple decorators, making it easy to manage and wire up your application’s components without any runtime patching or hidden magic. It supports both class and function injection, interface-to-implementation binding, and automatic singleton caching.
Target Audience Fusebox is intended for Python developers who want a straightforward, type-safe way to manage dependencies—whether you’re building production applications, prototypes, or even just experimenting with DI patterns. If you appreciate the clarity of Java’s Spring DI but want something minimal and Pythonic, this is for you.
Comparison Most existing Python DI libraries require complex configuration or introduce heavy abstractions. Fusebox takes a different approach: it keeps things simple and explicit, with no runtime patching, metaclass tricks, or bulky config files. Dependency registration and injection are handled with just two decorators: `@component` and `@inject`.
Links:
[GitHub](https://github.com/ftbits/fusebox)
PyPI
Feedback, suggestions, and PRs are very welcome!
/r/Python
https://redd.it/1lw78pn
Implementing an in-app mailbox
I want to make an in-app mailbox using a combination of Socket.IO and Flask (the frontend being HTML, CSS, and JS).
Is it possible? If yes, are there any tutorials and resources I can follow ...
/r/flask
https://redd.it/1lw6ym4
[P] PrintGuard - SOTA Open-Source 3D print failure detection model
Hi everyone,
As part of my dissertation for my Computer Science degree at Newcastle University, I investigated how to enhance the current state of 3D print failure detection.
Current approaches such as Obico’s “Spaghetti Detective” utilise a vision-based machine learning model trained to detect only spaghetti-related defects, with slow throughput on edge devices (<1 FPS on a 2GB Raspberry Pi 4B), making it neither edge-deployable nor real-time, and unable to capture a wide plethora of defects. Whilst their model can be inferred locally, it’s expensive to run, using a lot of compute, and is typically inferred over their paid cloud service, which introduces potential privacy concerns.
My research led to the creation of a new vision-based ML model focused on edge deployability, so that it can be deployed for free on cheap, local hardware. I used a modified ShuffleNetV2 backbone encoding images for a Prototypical Network to ensure it runs in real time with minimal hardware requirements (averaging 15 FPS on the same 2GB Raspberry Pi, a >40x improvement over Obico’s model). My benchmarks also indicate enhanced precision, with an averaged 2x improvement in precision and recall over Spaghetti Detective.
My model is completely free to use, open-source, private, deployable anywhere and outperforms
/r/MachineLearning
https://redd.it/1lw8lvh
Find the vulnerability in this view
https://preview.redd.it/qhez8asry1cf1.png?width=1770&format=png&auto=webp&s=181c3a46f3d43e7bbbb5b6e4dd6c62ec732bac32
I'm going to start a series to help people find vulnerable code.
There are multiple vulnerabilities in this snippet, but the riddle below points to one particular vulnerability. The source of this finding was Corgea's scanner.
The Riddle: I’m the kind of guest who stays well past the welcome. You could say I’ve got an open door policy, coming and going without much fuss, whether day or night, rain or shine. Though my hosts don't lock the gate, they let me linger far longer than I should. Who am I?
The code that's cut off in the image is irrelevant to the vulnerability.
/r/django
https://redd.it/1lwdew4
Open source django project
Hello Django developers!
I've created an open-source repository for a press and media system. I've set up the basic structure, so if you're looking to practice or contribute to an open-source project, you're welcome to join us here: press_media_api 😀
/r/djangolearning
https://redd.it/1lu93wr
I am trying to brainstorm ways of hiding slow API requests from the user
I have a Django app deployed on Heroku. It's basically a composite app that relies on several APIs brought together for some data visualizations and modeling.
So far, my requests and methods are quick enough to work at the user level. This was true until I started to build the most recent feature.
The new feature is a GIS visualization relying on a few datapoints from about 200 different APIs, from the same entity. Basically I do:
for item_id in ids:
    data.needed = get_data(f"someAPI.com/{item_id}")
I am currently just updating a "result" field in the model with the data every time I make those requests. Then I use that result field to populate my template.
Now obviously this is very time consuming and expensive when done at the user level, so now I am looking into caching. I know Django has a cache system for templates, which is pretty easy to use. Unfortunately, the value from this feature depends on the data being as up-to-date as possible. This means I can't just cache the template. I need to run these backend requests frequently, but hide the actual request time from the user.
My first hunch (if
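One hedged sketch of the usual answer (serve whatever is cached immediately, and let a background job such as Celery beat or a scheduled management command do the 200 slow fetches) is a stale-while-revalidate cache. The class below is illustrative only and not tied to Django's cache framework:

```python
import time

class StaleWhileRevalidate:
    """Serve cached data instantly; report when a background refresh is due."""

    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None, True               # miss: nothing to serve yet
        value, stored_at = entry
        stale = (time.monotonic() - stored_at) > self.ttl
        return value, stale                 # hit: serve now, refresh if stale

    def put(self, key, value) -> None:
        self._store[key] = (value, time.monotonic())

cache = StaleWhileRevalidate(ttl_seconds=300)
cache.put("gis-results", {"points": 200})
value, needs_refresh = cache.get("gis-results")  # instant, still fresh
```

The view always returns `value` immediately; when `needs_refresh` is true, it enqueues the refresh instead of blocking the user on it.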
/r/django
https://redd.it/1lwmpvi
Admin panel is not styled in Unfold in production when serving static files through nginx
[screenshots: admin panel in production vs. in development]
I am serving static files through nginx in production, but even when DEBUG is True in production, the admin panel is not styled.
I don't understand what is happening. If someone has done this, please help.
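A common cause (offered as a guess, since the config isn't shown) is that `collectstatic` was never run, or the nginx `location` doesn't point at `STATIC_ROOT`, so Unfold's CSS 404s. A minimal sketch, with hypothetical paths:

```nginx
# After deploying, collect Unfold's (and the admin's) assets into STATIC_ROOT:
#     python manage.py collectstatic --noinput
# Then let nginx serve them (paths below are placeholders):
location /static/ {
    alias /var/www/myproject/staticfiles/;  # must match settings.STATIC_ROOT
}
```

Checking the browser's network tab for 404s on `/static/unfold/...` usually confirms which side is misconfigured.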
/r/django
https://redd.it/1lwebjr
PicTex, a Python library to easily create stylized text images
Hey r/Python,
For the last few days, I've been diving deep into a project that I'm excited to share with you all. It's a library called PicTex, and its goal is to make generating text images easy in Python.
You know how sometimes you just want to take a string, give it a cool font, a nice gradient, maybe a shadow, and get a PNG out of it? I found that doing this with existing tools like Pillow or OpenCV can be surprisingly complex. You end up manually calculating text bounds, drawing things in multiple passes... it's a hassle.
So, I built PicTex for that. You have a fluent, chainable API to build up a style, and then just render your text.
from pictex import Canvas, LinearGradient, FontWeight
# You build a 'Canvas' like a style template
canvas = (
    Canvas()
    .font_family("path/to/your/Poppins-Bold.ttf")
    .font_size(120)
    .padding(40, 60)
    .background_color(LinearGradient(colors=["#2C3E50", "#4A00E0"]))
    .background_radius(30)
    .color("white")
    .add_shadow(offset=(2, 2), blur_radius=5, color="black")
)
# Then just render whatever text you want with that style
image = canvas.render("Hello, r/Python!")
image.save("hello_reddit.png")
That's it! It automatically calculates the canvas size, handles the layout, and gives you a nice image object
/r/Python
https://redd.it/1lwjsar
Friday Daily Thread: r/Python Meta and Free-Talk Fridays
# Weekly Thread: Meta Discussions and Free Talk Friday 🎙️
Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!
## How it Works:
1. Open Mic: Share your thoughts, questions, or anything you'd like related to Python or the community.
2. Community Pulse: Discuss what you feel is working well or what could be improved in the /r/python community.
3. News & Updates: Keep up-to-date with the latest in Python and share any news you find interesting.
## Guidelines:
All topics should be related to Python or the /r/python community.
Be respectful and follow Reddit's Code of Conduct.
## Example Topics:
1. New Python Release: What do you think about the new features in Python 3.11?
2. Community Events: Any Python meetups or webinars coming up?
3. Learning Resources: Found a great Python tutorial? Share it here!
4. Job Market: How has Python impacted your career?
5. Hot Takes: Got a controversial Python opinion? Let's hear it!
6. Community Ideas: Something you'd like to see us do? Tell us!
Let's keep the conversation going. Happy discussing! 🌟
/r/Python
https://redd.it/1lwscnp
I cannot deploy my web service no matter what!
I am doing a simple Python-based project with a basic frontend. It never seems to get deployed at all!
When Railway tries to build my project, I get this error:
The executable `gunicorn` could not be found.
But here’s the thing — `gunicorn==23.0.0` is definitely listed in my requirements.txt, and I’ve already committed and pushed everything.
# ✅ My Setup:
Python: 3.11 (same locally and on Railway)
Flask: 3.0.3
Gunicorn: 23.0.0 (listed in requirements.txt)
requirements.txt is at the repo root
I created a Procfile with this: `web: gunicorn app:app`
My main file is `app.py`, and my Flask object is `app = Flask(__name__)`
Even tried adding a `runtime.txt` with `python-3.11.9`
# ❗ What I've Tried:
Regenerated requirements.txt using `pip freeze`
Checked that `gunicorn` actually appears in it
Used `echo` / `Out-File` to correctly make the Procfile
Confirmed everything is committed and pushed
Tried a clean re-deploy on Railway (including "Deploy from GitHub" again)
# ❓ Still... Railway skips installing gunicorn!
In the build logs, I don’t see anything like `Collecting gunicorn` — so obviously it’s not getting picked up, even though it's in the file.
# 💡 Any ideas?
Is there something I’m missing?
Do I need to tell Railway explicitly to
/r/flask
https://redd.it/1lw6xq0
html-to-markdown v1.6.0 Released - Major Performance & Feature Update!
I'm excited to announce html-to-markdown v1.6.0 with massive performance improvements and v1.5.0's comprehensive HTML5 support!
🏃♂️ Performance Gains (v1.6.0)
- ~2x faster with optimized ancestor caching
- ~30% additional speedup with automatic lxml detection
- Thread-safe processing using context variables
- Unified streaming architecture for memory-efficient large document processing
🎯 Major Features (v1.5.0 + v1.6.0)
- Complete HTML5 support: all modern semantic, form, table, media, and interactive elements
- Metadata extraction: automatic title/meta tag extraction as markdown comments
- Highlighted text support: <mark> tag conversion with multiple styles
- SVG & MathML support: visual elements preserved or converted
- Ruby text annotations: East Asian typography support
- Streaming processing: memory-efficient handling of large documents
- Custom exception classes: better error handling and debugging
📦 Installation
pip install "html-to-markdown[lxml]" # With performance boost
pip install html-to-markdown # Standard installation
🔧 Breaking Changes
- Parser auto-detects lxml when available (previously defaulted to html.parser)
- Enhanced metadata extraction enabled by default
Perfect for converting complex HTML documents to clean Markdown with blazing performance!
GitHub: https://github.com/Goldziher/html-to-markdown
PyPI: https://pypi.org/project/html-to-markdown/
/r/Python
https://redd.it/1lwzlti
json-numpy - Lossless JSON Encoding for NumPy Arrays & Scalars
Hi r/Python!
A couple of years ago, I needed to send NumPy arrays to a JSON-RPC API and designed my own implementation. Then, I thought it could be of use to other developers and created a package for it!
---
# What My Project Does
`json-numpy` is a small Python module that enables lossless JSON serialization and deserialization of NumPy arrays and scalars. It's designed as a drop-in replacement for the built-in `json` module and provides:
* `dumps()` and `loads()` methods
* Custom `default` and `object_hook` functions to use with the standard `json` module or any JSON libraries that support it
* Monkey patching for the `json` module to enable support in third-party code
`json-numpy` is type-hinted, tested across multiple Python versions, and follows [Semantic Versioning](https://semver.org/).
Quick usage demo:
import numpy as np
import json_numpy
arr = np.array([0, 1, 2])
encoded_arr_str = json_numpy.dumps(arr)
# {"__numpy__": "AAAAAAAAAAABAAAAAAAAAAIAAAAAAAAA", "dtype": "<i8", "shape": [3]}
decoded_arr = json_numpy.loads(encoded_arr_str)
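The `default`/`object_hook` pair mentioned above is the standard `json` module's extension mechanism; here is a stdlib-only illustration of that same pattern using complex numbers as a stand-in for NumPy types (not json-numpy's actual encoding):

```python
import json

def encode_custom(obj):
    """Called by json.dumps for objects it can't serialize natively."""
    if isinstance(obj, complex):
        return {"__complex__": True, "re": obj.real, "im": obj.imag}
    raise TypeError(f"not JSON serializable: {type(obj)!r}")

def decode_custom(d):
    """Called by json.loads for every decoded dict; rebuild tagged values."""
    if d.get("__complex__"):
        return complex(d["re"], d["im"])
    return d

text = json.dumps({"z": 1 + 2j}, default=encode_custom)
roundtrip = json.loads(text, object_hook=decode_custom)
print(roundtrip["z"])  # (1+2j)
```

Because these hooks are a standard interface, a library like json-numpy can plug its own pair into any JSON library that accepts them.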
---
# Target Audience
My project is intended to help **developers** and **data scientists** use their NumPy data anywhere they need to use JSON, for example: APIs (JSON-RPC), configuration files, or logging data.
It is **NOT** intended for
/r/Python
https://redd.it/1lx167z
Hi r/Python!
A couple of years ago, I needed to send NumPy arrays to a JSON-RPC API and designed my own implementation. Then, I thought it could be of use to other developers and created a package for it!
---
# What My Project Does
`json-numpy` is a small Python module that enables lossless JSON serialization and deserialization of NumPy arrays and scalars. It's designed as a drop-in replacement for the built-in `json` module and provides:
* `dumps()` and `loads()` methods
* Custom `default` and `object_hook` functions to use with the standard `json` module or any JSON libraries that support it
* Monkey patching for the `json` module to enable support in third-party code
`json-numpy` is typed-hinted, tested across multiple Python versions and follows [Semantic Versioning](https://semver.org/).
Quick usage demo:
import numpy as np
import json_numpy
arr = np.array([0, 1, 2])
encoded_arr_str = json_numpy.dumps(arr)
# {"__numpy__": "AAAAAAAAAAABAAAAAAAAAAIAAAAAAAAA", "dtype": "<i8", "shape": [3]}
decoded_arr = json_numpy.loads(encoded_arr_str)
---
# Target Audience
My project is intended to help **developers** and **data scientists** use their NumPy data anywhere they need to use JSON, for example: APIs (JSON-RPC), configuration files, or logging data.
It is **NOT** intended for
/r/Python
https://redd.it/1lx167z
Pure Python cryptographic tool for long-term secret storage - Shamir's Secret Sharing + AES-256-GCM
Been working on a Python project that does mathematical secret splitting for protecting critical stuff like crypto wallets, SSH keys, backup encryption keys, etc. Figured the r/Python community might find the implementation interesting.
**Links:**
* GitHub: [https://github.com/katvio/fractum](https://github.com/katvio/fractum)
* Security docs: [https://fractum.katvio.com/security-architecture/](https://fractum.katvio.com/security-architecture/)
# What the Project Does
So basically, Fractum takes your sensitive files and mathematically splits them into multiple pieces using Shamir's Secret Sharing + AES-256-GCM. The cool part is you can set it up so you need like 3 out of 5 pieces to get your original file back, but having only 2 pieces tells an attacker literally nothing.
It encrypts your file first, then splits the encryption key using some fancy polynomial math. You can stash the pieces in different places - bank vault, home safe, with family, etc. If your house burns down or you lose your hardware wallet, you can still recover everything from the remaining pieces.
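For the curious, the "fancy polynomial math" is Shamir's threshold scheme: the key becomes the constant term of a random polynomial of degree `threshold - 1`, each share is a point on that polynomial, and Lagrange interpolation at x=0 recovers the key. A minimal sketch over a prime field (illustrative only, not Fractum's implementation; the prime and helper names are my own):

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def _eval_poly(coeffs, x):
    """Horner evaluation of coeffs[0] + coeffs[1]*x + ... mod PRIME."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split(secret, threshold, num_shares):
    """Split an integer secret into points; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, num_shares + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 yields the constant term: the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = split(key, threshold=3, num_shares=5)
assert combine(shares[:3]) == key   # any 3 of the 5 shares recover the key
assert combine(shares[1:4]) == key
```

With fewer than `threshold` points, every possible secret remains equally consistent with the shares, which is why 2-of-5 reveals nothing.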
# Target Audience
This is meant for real-world use, not just a toy project:
* Security folks managing infrastructure secrets
* Crypto holders protecting wallet seeds
* Sysadmins with backup encryption keys they can't afford to lose
* Anyone with important stuff that needs to survive disasters/theft
* Teams that need emergency recovery credentials
Built it with production security standards since I was
/r/Python
https://redd.it/1lx2cz9
Python code Understanding through Visualization
With memory\_graph you can better understand and debug your Python code through data visualization. The visualization shines a light on concepts like:
* references
* mutable vs immutable data types
* function calls and variable scope
* sharing data between variables
* shallow vs deep copy
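As a quick taste of the last two concepts, here is what shallow vs deep copy does to shared data (stdlib only, independent of memory_graph):

```python
import copy

outer = [[1, 2], [3, 4]]
shallow = copy.copy(outer)      # new outer list, but inner lists are shared
deep = copy.deepcopy(outer)     # fully independent copy of everything

outer[0].append(99)
print(shallow[0])  # [1, 2, 99] -- the inner list is shared with outer
print(deep[0])     # [1, 2]     -- the deep copy is unaffected
```

memory_graph's visualization makes exactly this kind of sharing visible as arrows between nodes.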
Target audience:
Useful for beginners to learn the right mental model to think about Python data, but also advanced programmers benefit from visualized debugging.
How to use:
You can generate a visualization with just a single line of code:
```python
import memory_graph as mg

tuple1 = (4, 3, 2)   # immutable
tuple2 = tuple1
tuple2 += (1,)       # rebinds tuple2 to a new tuple; tuple1 is unchanged
list1 = [4, 3, 2]    # mutable
list2 = list1
list2 += [1]         # mutates in place; list1 sees the change too
mg.show(mg.stack())  # show a graph of the call stack
```
IDE integration:
🚀 But you get the best debugging experience with [memory\_graph](https://github.com/bterwijn/memory_graph) integrated in your IDE:
* Visual Studio Code
* Cursor AI
* PyCharm
🎥 See the Quick Intro video for the setup.
/r/Python
https://redd.it/1lx367g
PyData Amsterdam 2025 (Sep 24-26) Program is LIVE
Hey all, The PyData Amsterdam 2025 Program is LIVE, check it out: https://amsterdam.pydata.org/program. Come join us from September 24-26 to celebrate our 10-year anniversary this year! We look forward to seeing you onsite!
/r/Python
https://redd.it/1lx64eg
aiosqlitepool - SQLite async connection pool for high-performance
If you use SQLite with asyncio (FastAPI, background jobs, etc.), you might notice performance drops when your app gets busy.
Opening and closing connections for every query is fast but not free, and SQLite’s concurrency model allows only one writer at a time.
I built [aiosqlitepool](https://github.com/slaily/aiosqlitepool) to help with this. It’s a small, MIT-licensed library that:
* Pools and reuses connections (avoiding open/close overhead)
* Keeps SQLite’s in-memory cache “hot” for faster queries
* Allows your application to process significantly more database queries per second under heavy load
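For intuition, the core pooling idea (open connections once, hand them out per query, return them afterwards) can be sketched with the stdlib alone. This is a simplified illustration of the pattern, not aiosqlitepool's actual API:

```python
import asyncio
import sqlite3

class SimplePool:
    """Keep N open SQLite connections and hand them out via an asyncio.Queue."""

    def __init__(self, path, size=4):
        self._q = asyncio.Queue()
        for _ in range(size):
            # check_same_thread=False lets a pooled connection move between tasks
            self._q.put_nowait(sqlite3.connect(path, check_same_thread=False))

    async def acquire(self):
        # Waits if all connections are currently checked out
        return await self._q.get()

    def release(self, conn):
        self._q.put_nowait(conn)

async def main():
    pool = SimplePool(":memory:", size=2)
    conn = await pool.acquire()
    try:
        value = conn.execute("SELECT 1 + 1").fetchone()[0]
    finally:
        pool.release(conn)  # connection stays open for the next caller
    return value

print(asyncio.run(main()))  # 2
```

A real pool like aiosqlitepool additionally handles connection health checks and async drivers; the win over open-per-query is that the connection's page cache survives between queries.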
Officially released on [PyPI](https://pypi.org/project/aiosqlitepool/).
Enjoy! :))
/r/Python
https://redd.it/1lx3njh