html2pic: transform basic HTML & CSS to an image, without a browser (experimental)
Hey everyone,
For the past few months, I've been working on a personal graphics library called [PicTex](https://github.com/francozanardi/pictex). As an experiment, I got curious to see if I could build a lightweight HTML/CSS to image converter on top of it, without the overhead of a full browser engine like Selenium or Playwright.
**Important**: this is a proof-of-concept, and a large portion of the code was generated with AI assistance (primarily Claude) to quickly explore the idea. It's definitely not production-ready and likely has plenty of bugs and unhandled edge cases.
I'm sharing it here to show what I've been exploring; maybe it could be useful for someone.
Here's the link to the repo: [https://github.com/francozanardi/html2pic](https://github.com/francozanardi/html2pic)
---
### What My Project Does
`html2pic` takes a subset of HTML and CSS and renders it into a PNG, JPG, or SVG image, using Python + Skia. It also uses BeautifulSoup4 for HTML parsing and tinycss2 for CSS parsing.
Here’s a basic example:
```python
from html2pic import Html2Pic
html = '''
<div class="card">
<div class="avatar"></div>
<div class="user-info">
<h2>pictex_dev</h2>
<p>@python_renderer</p>
</div>
</div>
'''
css = '''
.card {
font-family: "Segoe UI";
display: flex;
align-items: center;
gap: 16px;
padding: 20px;
}
'''
```
/r/Python
https://redd.it/1neuyit
D Math foundations to understand Convergence proofs?
Good day everyone, recently I've become interested in proofs of convergence for federated (and non-federated) algorithms, something like what's seen in Appendix A of the FedProx paper (one page of it attached below).
I managed to go through the proof once and learned things like the first-order convexity condition from random blogs, but I don't think I'll be able to do serious math with hack jobs like that. I need to get my math foundations up to a level where I can write such a proof intuitively.
So my question is: what resources should I study to get my math foundations up to par? Convex Optimization by Boyd doesn't cover convergence analysis at all, and even the convex optimization books that do don't use expectations over the iterations to prove convergence. Thanks for your time.
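For reference, the first-order convexity condition mentioned above, and the typical shape of the expectation-over-iterations statement such proofs end with (constants vary by paper; this is the standard fixed-step bound for SGD on a convex objective with averaged iterate), look like:

```latex
% First-order convexity condition: a differentiable f is convex iff
f(y) \ge f(x) + \nabla f(x)^\top (y - x) \quad \forall x, y

% Typical convergence statement: fixed step \eta, averaged iterate \bar{x}_K,
% stochastic gradients with variance \sigma^2
\mathbb{E}\!\left[f(\bar{x}_K)\right] - f(x^\star)
  \le \frac{\lVert x_0 - x^\star \rVert^2}{2\eta K} + \frac{\eta \sigma^2}{2}
```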
https://preview.redd.it/481lxdf47lof1.png?width=793&format=png&auto=webp&s=6771d3ffe8a533155aa145b2ec691181a30968b9
/r/MachineLearning
https://redd.it/1nehy84
16 reproducible bugs every Django learner hits (and how to fix them before they grow)
When I was learning Django and tried to connect it with modern AI workflows (RAG, embeddings, async tasks), I kept hitting weird bugs. Each time I patched one, another came back in a different form.
So I built a Problem Map: a catalog of 16 reproducible failure modes. It's written as a learning tool: you can open any item, see the minimal diagnosis, and apply a fix without needing extra SDKs or infra.
### why it matters for Django learners
* Early projects often fail silently: OCR splits headers wrong, pgvector returns the "nearest" vector that is semantically wrong, or Celery starts before your index is ready.
* With this map, you can see the bug class before it happens. The fix is small but structural, and once applied, the same bug doesn't reappear.
* It's not a black box: each page is a step-by-step explainer, so you understand *why* the fix works.
### before vs after
* Before: patch after output, firefight each bug, add regex, rerankers, and tool hacks. Ceiling: ~70–80% stability.
* After: run acceptance checks before output (ΔS, λ, coverage); only stable states generate. The ceiling moves to 90–95%+, and fixes stay permanent.
### quick use
You don't need to read everything at once; just keep the map bookmarked.
/r/djangolearning
https://redd.it/1nd8xr5
D Larry Ellison: “Inference is where the money is going to be made.”
In Oracle’s recent call, Larry Ellison said something that caught my attention:
“All this money we’re spending on training is going to be translated into products that are sold — which is all inferencing. There’s a huge amount of demand for inferencing… We think we’re better positioned than anybody to take advantage of it.”
It’s striking to see a major industry figure frame inference as the real revenue driver, not training. Feels like a shift in narrative: less about who can train the biggest model, and more about who can serve it efficiently, reliably, and at scale.
Is the industry really moving in this direction, or will training still dominate the economics for years to come?
/r/MachineLearning
https://redd.it/1nfav96
Update: Should I give away my app to my employer for free?
Link to original post - https://www.reddit.com/r/Python/s/UMQsQi8lAX
Hi, since my post gained a lot of attention the other day and I received a lot of messages and questions on the thread, I thought I'd give an update.
I didn't make it clear in my previous post, but I developed this app in my own time, using company resources.
I spoke to a friend on the HR team, and he explained that a similar scenario happened a few years ago. Someone built an automation tool for Outlook which managed a mailbox receiving 500+ emails a day (dealing/contract notes); he worked on a fund pricing team and only needed to view a few of those emails a day, but realised the mailbox was a mess. He took the idea to senior management and presented the cost savings and benefits. Once it was deployed, he was offered shares in the company, and then a cash bonus once a year of realised savings was achieved.
I've been advised by my HR friend to approach senior management with my proposal, explain that I've already spoken to my manager, detail the cost savings I can make, and ask for a salary increase to provide ongoing
/r/Python
https://redd.it/1nf57hb
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators - A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1nfiys8
Flowfile - An open-source visual ETL tool, now with a Pydantic-based node designer.
Hey r/Python,
I built Flowfile, an open-source tool for creating data pipelines both visually and in code. Here's the latest feature: Custom Node Designer.
# What My Project Does
Flowfile creates bidirectional conversion between visual ETL workflows and Python code. You can build pipelines visually and export to Python, or write Python and visualize it. The Custom Node Designer lets you define new visual nodes using Python classes with Pydantic for settings and Polars for data processing.
# Target Audience
Production-ready tool for data engineers who work with ETL pipelines. Also useful for prototyping and teams that need both visual and code representations of their workflows.
# Comparison
* **Alteryx**: Proprietary, expensive. Flowfile is open-source.
* **Apache NiFi**: Java-based, requires infrastructure. Flowfile is pip-installable Python.
* **Prefect/Dagster**: Orchestration-focused. Flowfile focuses on visual pipeline building.
# Custom Node Example
```python
import polars as pl
from flowfile_core.flowfile.node_designer import (
    CustomNodeBase, NodeSettings, Section,
    ColumnSelector, MultiSelect, Types
)

class TextCleanerSettings(NodeSettings):
    cleaning_options: Section = Section(
        title="Cleaning Options",
```
/r/Python
https://redd.it/1nff4dw
I built a from-scratch Python package for classic Numerical Methods (no NumPy/SciPy required!)
Hey everyone,
Over the past few months I've been building a Python package called `numethods`, a small but growing collection of classic numerical algorithms implemented 100% from scratch. No NumPy, no SciPy, just plain Python floats and lists of lists.
The idea is to make algorithms transparent and educational, so you can actually see how LU decomposition, power iteration, or RK4 are implemented under the hood. This is especially useful for students, self-learners, or anyone who wants a deeper feel for how numerical methods work beyond calling library functions.
https://github.com/denizd1/numethods
# 🔧 What’s included so far
Linear system solvers: LU (with pivoting), Gauss–Jordan, Jacobi, Gauss–Seidel, Cholesky
Root-finding: Bisection, Fixed-Point Iteration, Secant, Newton’s method
Interpolation: Newton divided differences, Lagrange form
Quadrature (integration): Trapezoidal rule, Simpson’s rule, Gauss–Legendre (2- and 3-point)
Orthogonalization & least squares: Gram–Schmidt, Householder QR, LS solver
Eigenvalue methods: Power iteration, Inverse iteration, Rayleigh quotient iteration, QR iteration
SVD (via eigen-decomposition of AᵀA)
ODE solvers: Euler, Heun, RK2, RK4, Backward Euler, Trapezoidal, Adams–Bashforth, Adams–Moulton, Predictor–Corrector, Adaptive RK45
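As a taste of the from-scratch style (this is my own illustrative sketch, not the package's actual API), a dependency-free bisection root-finder can be written as:

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b] by interval halving.

    Requires f(a) and f(b) to have opposite signs, which guarantees
    a root in between (intermediate value theorem).
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        # Keep the half-interval whose endpoints still bracket the root
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)
print(root)  # ≈ 1.41421356… (√2)
```

Each iteration halves the bracket, so the error shrinks by a factor of two per step: slow but unconditionally robust, which is exactly the kind of trade-off these classic methods make visible.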
# ✅ Why this might be useful
Great for teaching/learning numerical methods step by step.
Good reference for people writing their own solvers in C/Fortran/Julia.
Lightweight, no dependencies.
Consistent object-oriented API (`.solve()`, `.integrate()`, etc.)
# 🚀 What’s next
PDE solvers (heat, wave, Poisson with finite differences)
More optimization
/r/Python
https://redd.it/1nexoe8
Thanks r/Python community for reviewing my project Ducky all in one networking tool!
Thanks to this community I received some feedback about Ducky, which I posted here last week. The project got 42 stars on GitHub, along with some comments on enhancements for Ducky. I'm thankful to everyone who viewed the post and went to see the source code; huge thanks to you all.
What Ducky Does:
Ducky is a desktop application that consolidates the essential tools of a network engineer or security enthusiast into a single, easy-to-use interface. Instead of juggling separate applications for terminal connections, network scanning, and diagnostics, Ducky provides a unified workspace to streamline your workflow. Its core features include a tabbed terminal (SSH, Telnet, Serial), an SNMP-powered network topology mapper, a port scanner, and a suite of security utilities like a CVE lookup and hash calculator.
Target Audience:
Ducky is built for anyone who works with network hardware and infrastructure. This includes:
Network Engineers & Administrators: For daily tasks like configuring switches and routers, troubleshooting connectivity, and documenting network layouts.
Cybersecurity Professionals: For reconnaissance tasks like network discovery, port scanning, and vulnerability research.
Students & Hobbyists: For those learning networking (e.g., for CompTIA Network+ or CCNA), Ducky provides a free, hands-on tool to explore and interact with real or virtual network devices.
/r/Python
https://redd.it/1nfdhlu
Every Python Built-In Function Explained
Hi there, I just wanted to learn more about Python, and I had this crazy idea of covering every built-in function in the language. Hope you learn something new. Any feedback is welcome; the video is shared purely with the intention of helping others learn.
Here's the explanation
/r/Python
https://redd.it/1nfphsi
What auth/security do you prefer for an API in Django?
Hi all, I have been working on a Django app and have come to a point where I need to make a decision.
Which should I use?
1. Django (SessionAuthentication)
- Here I was facing issues with CSRF (is CSRF good to have, or a must-have?)
2. django-allauth with dj-rest-auth, using token-based auth or JWT
If I use JWT, which is more secure:
- sending the refresh token in the response body, or
- sending the refresh token in a cookie (via headers)?
I just want to make an informed decision with help from you experienced devs.
Please enlighten me.
/r/django
https://redd.it/1nfqoib
Django deployed on Render gets me forbidden error in post
I recently deployed a Django backend on Render and a React frontend on Vercel. Locally everything worked perfectly. After deployment, the GET request I call on the homepage also worked fine, but POST requests give me a Forbidden error. Looking into it further, it's a CSRF error: from React I have to POST with a CSRF token attached. For every API call I use a small file called apiClient.js (an API client that fetches data from the backend, attaches CSRF tokens to non-GET requests, retries on 403 by refreshing the token, and always returns JSON). The problem is that I never get the csrftoken in the first place: if I print document.cookie, it is always empty. I've been trying to solve this for the past few days (tried ChatGPT, Gemini, DeepSeek) without success. Please help me fix this, or share how you solved it if you've hit the same issue.
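For anyone hitting the same wall: when the frontend (Vercel) and backend (Render) live on different sites, the browser will not send the backend's cookies at all unless they are marked `Secure` with `SameSite=None`, and CORS must allow credentials. A hedged sketch of the relevant Django settings (the origin URLs are placeholders, and the `CORS_*` settings assume the django-cors-headers package is installed):

```python
# settings.py (sketch): cross-site cookies for a Vercel frontend + Render backend.
# Replace the placeholder origin with your real deployment URL.

CSRF_TRUSTED_ORIGINS = ["https://your-app.vercel.app"]

# django-cors-headers (assumed added to INSTALLED_APPS and MIDDLEWARE)
CORS_ALLOWED_ORIGINS = ["https://your-app.vercel.app"]
CORS_ALLOW_CREDENTIALS = True  # required for the browser to send cookies cross-site

# Cookies must be Secure and SameSite=None to travel between different sites
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_SAMESITE = "None"
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = "None"
```

Note that even with this, JavaScript on the Vercel origin still cannot read a cookie set for the Render domain (which is why `document.cookie` is empty), so a common pattern is to fetch the token from a small backend endpoint (e.g. one that returns `django.middleware.csrf.get_token(request)`) and echo it back in the `X-CSRFToken` header on POSTs.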
/r/djangolearning
https://redd.it/1nbd0ss
Django needs a REST story
https://forum.djangoproject.com/t/django-needs-a-rest-story/42814
/r/django
https://redd.it/1nfssaq
Built a simple, open-source test planner your team can start using today
https://kingyo-demo.pages.dev
/r/django
https://redd.it/1nfovqz
Seeking better opportunities - Advice needed!
Hi everyone,
I'm a Full-Stack developer from Spain with over 4 years of experience, mainly working with Django and Python. I'm currently the sole tech lead on a project, working remotely. While I love what I do, I feel a bit stuck due to the relatively low salaries in Spain and limited growth opportunities.
I'm looking for advice on how to transition to better opportunities abroad (ideally remote or in another country with a stronger tech scene). Has anyone made a similar move? What platforms, strategies, or skills would you recommend to stand out internationally? Any tips on navigating visas or finding remote roles with higher pay?
Thanks in advance for any advice!
/r/django
https://redd.it/1nfdx6j
Announcing iceoryx2 v0.7: Fast and Robust Inter-Process Communication (IPC) Library
Hello hello,
I am one of the maintainers of the open-source zero-copy middleware iceoryx2, and we’ve just released iceoryx2 v0.7 which comes with Python language bindings. That means you can now use fast zero-copy communication directly in Python. Here is the full release blog: [https://ekxide.io/blog/iceoryx2-0-7-release/](https://ekxide.io/blog/iceoryx2-0-7-release/)
With iceoryx2 you can communicate between different processes, send data with publish-subscribe, build more complex request-response streams, or orchestrate processes using the event messaging pattern with notifiers and listeners.
We’ve prepared a set of Python examples here: [https://github.com/eclipse-iceoryx/iceoryx2/tree/main/examples/python](https://github.com/eclipse-iceoryx/iceoryx2/tree/main/examples/python)
On top of that, we invested some time into writing a detailed getting started guide in the iceoryx2 book: [https://ekxide.github.io/iceoryx2-book/main/getting-started/quickstart.html](https://ekxide.github.io/iceoryx2-book/main/getting-started/quickstart.html)
And one more thing: iceoryx2 lets Python talk directly to C, C++ and Rust processes - without any serialization or binding overhead. Check out the cross-language publish-subscribe example to see it in action: [https://github.com/eclipse-iceoryx/iceoryx2/tree/main/examples](https://github.com/eclipse-iceoryx/iceoryx2/tree/main/examples)
So in short:
* **What My Project Does:** Zero-Copy Inter-Process Communication
* **Target Audience:** Developers building distributed systems, plugin-based applications, or safety-critical and certifiable systems
* **Comparison:** Provides a high-level, service-oriented abstraction over low-level shared-memory system calls
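To see what that comparison means concretely, here is the kind of low-level primitive a zero-copy transport is built on: a named shared-memory segment that two processes map into their address space. This is a plain-stdlib sketch of the underlying mechanism, not iceoryx2's API; iceoryx2 layers service discovery, publish-subscribe, and safe lifecycle management on top of primitives like this.

```python
from multiprocessing import shared_memory

# Create a named shared-memory segment. A "publisher" writes bytes
# directly into the mapped buffer -- no serialization, no copy into
# a socket buffer.
shm = shared_memory.SharedMemory(create=True, size=64)
shm.buf[:5] = b"hello"

# A "subscriber" in another process would attach to the same segment
# by name and read the bytes in place.
reader = shared_memory.SharedMemory(name=shm.name)
payload = bytes(reader.buf[:5])

# Manual cleanup -- exactly the kind of lifecycle bookkeeping a
# middleware handles for you.
reader.close()
shm.close()
shm.unlink()
```

Doing this by hand gets error-prone fast (naming, synchronization, cleanup after crashes), which is the gap the service-oriented abstraction fills.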
/r/Python
https://redd.it/1nfvo8y
Help! How should I approach writing code for this?
I have a product and product_img table relation (one-to-many).
If the client sends a form containing data for both product and product_img in a single request, what approach (or standard) should I use?
Should I extract the text and images separately, feed them to their own serializers, and save them?
Or should I use a nested serializer?
/r/django
https://redd.it/1nfn641
MathFlow: an easy-to-use math library for python
Project Site: [https://github.com/cybergeek1943/MathFlow](https://github.com/cybergeek1943/MathFlow)
In the process of doing research for my paper [Combinatorial and Gaussian Foundations of Rational Nth Root Approximations](https://doi.org/10.48550/arXiv.2508.14095) (on arXiv), I created this library to address the pain points I felt when using SymPy and SciPy separately. I wanted something lightweight, easy to use (exploratory), and something that would support numerical methods more easily. Hence, I created this lightweight wrapper that provides a hybrid symbolic-numerical interface over symbolic and numerical backends. It is backward compatible with SymPy. In short, this enables much faster analysis of symbolic math expressions by providing both numerical and traditional symbolic methods in the same interface. I have also added numerical methods that neither SymPy nor SciPy have (Padé approximations, numerical roots, etc.). The main goal of this project is to provide a tool with as small a learning curve as possible, letting users focus on the math they are doing.
# Core features
* **🔒 Operative Closure**: Mathematical operations return new Expression objects by default
* **⚡ Mutability Control**: Choose between immutable (default) and mutable expressions for different workflows
* **🔗 Seamless Numerical Integration**: Every symbolic expression has a `.n` attribute providing numerical methods without manual lambdification (uses cached lambdified
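For context, the `.n` attribute appears to automate the manual lambdification step you would otherwise do yourself. In plain SymPy (not MathFlow's API), that step looks roughly like this:

```python
import sympy as sp

# Build a symbolic expression, then lambdify it by hand to get a fast
# numerical callable -- the boilerplate MathFlow's `.n` is said to cache
# and hide.
x = sp.symbols("x")
expr = (x + 1) ** 2

f = sp.lambdify(x, sp.expand(expr))  # compile to a plain Python callable
value = f(3)                         # evaluate numerically: (3 + 1)^2 = 16
```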
/r/Python
https://redd.it/1nfyq8o
Interactive Relationship-Aware Vector Search for Jupyter
# 🧬 RudraDB-Opin: Interactive Relationship-Aware Vector Search for Jupyter
**Turn your notebook into an intelligent research assistant that discovers hidden connections.**
# Perfect for Interactive Research
Working in Jupyter? Tired of losing track of related papers, connected concepts, and follow-up ideas? RudraDB-Opin brings **relationship-aware search** directly to your interactive Python environment.
# Beyond Similarity Search
Traditional vector search in notebooks: "Find papers similar to this one"
**RudraDB-Opin**: "Find papers similar to this one + cited works + follow-up research + related methodologies + prerequisite concepts"
# 🎯 Built for Research Workflows
# Interactive Discovery
* **Multi-hop exploration** - Start with one paper, discover research chains
* **Relationship visualization** - See how your documents connect
* **Dynamic relationship building** - Add connections as you discover them
* **Auto-dimension detection** - Works with any embedding model instantly
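The multi-hop exploration idea can be illustrated with a small conceptual sketch (this is not RudraDB-Opin's actual API, just the traversal idea): starting from a seed paper, walk outward along explicit relationships up to a fixed number of hops.

```python
from collections import deque

# Hypothetical relationship graph: each paper lists papers it connects to
# (citations, follow-ups, shared methodology, etc.).
relationships = {
    "paper_A": ["paper_B", "paper_C"],
    "paper_B": ["paper_D"],
    "paper_C": [],
    "paper_D": [],
}

def multi_hop(seed, max_hops=2):
    """Breadth-first walk: collect everything within `max_hops` links."""
    found, queue = set(), deque([(seed, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth >= max_hops:
            continue
        for neighbor in relationships.get(node, []):
            if neighbor not in found:
                found.add(neighbor)
                queue.append((neighbor, depth + 1))
    return found

discovered = multi_hop("paper_A")  # B and C (1 hop), D (2 hops)
```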
# Research Organization Made Easy
* **Hierarchical relationships** - Literature reviews → Key papers → Specific methods
* **Temporal connections** - Research progression over time
* **Causal links** - Problem → Methodology → Solution → Applications
* **Cross-references** - Related work and citations
* **Thematic clustering** - Group by research themes automatically
# 🔬 Research Use Cases
**Literature Review**: Start with key paper → Auto-discover entire research lineage
**Knowledge Base**: Build searchable repository of papers with intelligent connections
**Research Planning**: Map
/r/IPython
https://redd.it/1nfrf3p
SplitterMR: a modular library for splitting & parsing documents
Hey guys, I just released **SplitterMR**, a library I built because none of the existing tools quite did what I wanted for slicing up documents cleanly for LLMs / downstream processing.
If you often work with **mixed document types** (PDFs, Word, Excel, Markdown, images, etc.) and **need flexible, reliable splitting/parsing**, this might be useful.
This library supports **multiple input formats**, e.g., text, Markdown, PDF, Word / Excel / PowerPoint, HTML / XML, JSON / YAML, CSV / TSV, and even images.
Files can be read using **MarkItDown** or **Docling**, so this is perfect if you are using those frameworks with your current applications.
On the splitting side, it supports **many different splitting strategies**: not only by number of characters but also by tokens, schema keys, semantic similarity, and other techniques. You can even develop your own splitter using the Base object, and the same goes for the Readers!
In addition, **you can process the graphical resources of your documents (e.g., photos) using VLMs** (OpenAI, Gemini, HuggingFace, etc.), so you can extract the text or caption them!
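To make the character-based strategy concrete, here is a minimal sketch of a fixed-size splitter with overlap (hypothetical names, not SplitterMR's actual classes; see the repo for its real Splitter API):

```python
def split_by_chars(text, chunk_size=20, overlap=5):
    """Slice `text` into windows of `chunk_size` characters, where each
    chunk repeats the last `overlap` characters of the previous one so
    context isn't lost at chunk boundaries."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_by_chars("abcdefghij" * 5, chunk_size=20, overlap=5)
```

Token-based, schema-based, and semantic splitters follow the same shape: same input, different boundary-picking logic.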
# What’s new / what’s good in the latest release
* Stable Version **1.0.0** is out.
* Supports **more input formats / more robust readers**.
* **Stable API** for the Reader abstractions so
/r/Python
https://redd.it/1ng2h8x