Feedback for an orchestration project
I have a project in mind that I want feedback about.
The project consists of:
- A server with a REST API
- Multiple agents, each with a REST API
Both REST APIs will be built with flask-restful.
Communication is initiated by the server over an SSL connection, and the agent responds. The server asks the agent to execute commands such as reporting status, changing the configuration of a specific application, and restarting that application; the agent does the actual execution.
The data is not real-time, so there is no need to use WebSockets.
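For concreteness, here is a minimal sketch of what an agent-side endpoint could look like with flask-restful (the resource names, routes, and payloads are assumptions for illustration, not part of the original plan):

from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class AppStatus(Resource):
    def get(self, app_name):
        # The agent would inspect the real application here.
        return {"app": app_name, "status": "running"}

class AppRestart(Resource):
    def post(self, app_name):
        # The agent performs the actual restart and reports the outcome.
        return {"app": app_name, "restarted": True}, 202

api.add_resource(AppStatus, "/apps/<string:app_name>/status")
api.add_resource(AppRestart, "/apps/<string:app_name>/restart")

if __name__ == "__main__":
    # TLS would typically be terminated by a reverse proxy in front of the agent,
    # or passed via Flask's ssl_context in development.
    app.run()

The server side would then just be an HTTPS client calling these endpoints on each agent.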
But I can't wrap my head around the following:
- Is it wise to have a multi-agent architecture with REST APIs on both sides, or is there a better way?
- In the case of multiple agents that potentially generate a lot of traffic, should I use a message broker, and how would it fit in with the REST APIs?
- What else do I need to take into consideration? (I have already thought about authentication and authorization, which will be token-based with ACLs.)
/r/flask
https://redd.it/1mexvj3
Forget metaclasses; Python's __init_subclass__ is all you really need
Think you need a metaclass? You probably just need __init_subclass__, Python's underused subclass hook.
Most people reach for metaclasses when customizing subclass behaviour. But in many cases, __init_subclass__ is exactly what you need, and it's been built into Python since 3.6.
What is __init_subclass__?
It's a hook that gets automatically called on the base class whenever a new subclass is defined. Think of it like a class-level __init__, but for subclassing, not instancing.
# Why use it?
Validate or register subclasses
Enforce class-level interfaces or attributes
Automatically inject or modify subclass properties
Avoid the complexity of full metaclasses
# Example: Plugin Auto-Registration
class PluginBase:
    plugins = []

    # __init_subclass__ runs on the base class each time a subclass is defined.
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        print(f"Registering: {cls.__name__}")
        PluginBase.plugins.append(cls)

class PluginA(PluginBase): pass
class PluginB(PluginBase): pass

print(PluginBase.plugins)
Output:
Registering: PluginA
Registering: PluginB
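As a second illustration of the "enforce class-level interfaces or attributes" use case listed above, here is a minimal sketch (not from the original post) that rejects subclasses missing a required attribute:

class HandlerBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Every concrete handler must declare an event_name class attribute.
        if not hasattr(cls, "event_name"):
            raise TypeError(f"{cls.__name__} must define an 'event_name' class attribute")

class ClickHandler(HandlerBase):
    event_name = "click"

# Defining a subclass without event_name would raise TypeError at class-definition time.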
/r/Python
https://redd.it/1mevs3i
Need help with venv in vscode
Does anyone have a good tutorial on this? I made my virtual environment on my desktop and started the project, but I'm having trouble opening the virtual environment in VS Code. Do you know what the next step is? PyCharm usually has an option for this.
/r/django
https://redd.it/1mf0v3b
Pip 25.2: Resumable Downloads By Default
This week pip 25.2 has been released. It's a small release, but the biggest change is that resumable downloads, introduced in 25.1, have been enabled by default.
Resumable downloads will retry the download from the point where the connection was dropped within the same install or download command (though not across multiple commands). This has been a long-standing feature request from users who have slow and/or unreliable internet, especially now that some packages are multiple GB in size.
Richard, one of the pip maintainers, has again done an excellent write up: https://ichard26.github.io/blog/2025/07/whats-new-in-pip-25.2/
The full changelog is here: https://github.com/pypa/pip/blob/main/NEWS.rst#252-2025-07-30
One thing not obvious from either is that the upgrade to resolvelib 1.2.0 significantly improves most pathological resolutions, reducing the time it takes pip to find a valid resolution for the requirements. There is more work to do here; I will continue to try to find improvements in my spare time.
/r/Python
https://redd.it/1mf0cnh
Flask x SocketIO appears to be buffering socket.emit()'s with a 10 second pause when running on gevent integrated server
So I am trying to make a (relatively small) webapp production ready by moving off of the built-in WSGI server, and am encountering some issues with flask-socketio and gevent integration. I don't have my heart set on this integration, but it was the easiest to implement first, and the issues I'm experiencing feel more like I'm doing something wrong than a failing of the tooling itself.
With gevent installed, the issue I'm having is that while the server logs that messages are being sent as soon as they arrive, the frontend shows them arriving in ~10s bursts. That is to say, the server logs messages emitted in a smooth stream, but the frontend shows no messages for roughly a 5 to 10 second pause, then shows all of the messages arriving at the same time.
The built-in WSGI server does not seem to have this issue; messages arrive as soon as they are logged as sent.
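Not a diagnosis of this particular setup, but one detail that commonly produces buffered emits under gevent is skipping monkey-patching; purely for reference, a minimal sketch of a gevent-backed Flask-SocketIO app:

# Monkey-patch the standard library before any other imports so blocking I/O
# cooperates with gevent's event loop.
from gevent import monkey
monkey.patch_all()

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode="gevent")

@socketio.on("ping")
def handle_ping(data):
    # Emitted messages are pushed to connected clients as they are sent.
    socketio.emit("pong", {"echo": data})

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)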
I'm pretty confident I'm simply doing something wrong, but I'm not sure what. What follows is a non-exhaustive story of what I've tried, how things work currently, and where I'm at. I'd like to switch over
/r/flask
https://redd.it/1mf1n3h
MCP-Agent - Python Open Source Framework for building AI agents with native MCP support
Hi r/Python - I wanted to share something that my team and I built for agent builders using Python.
We've spent the last 6 months working on MCP-Agent - an open-source Python framework for building AI agents using the Model Context Protocol (MCP) for tool calls and structured agent-to-agent communication and orchestration.
Model Context Protocol (MCP) is a protocol that standardizes how LLMs interact with tools, memory, and prompts. This allows you to connect to Slack and GitHub, which means you can now ask an LLM to summarize all your GitHub issues, prioritize them by urgency, and post the summary to Slack.
What does our project do?
MCP-Agent is a developer-friendly, open-source framework for building and orchestrating AI agents with MCP as the core communication protocol. It is a simple but powerful library built with the fundamental building blocks for agentic systems outlined by Anthropic's Building effective agents post.
This makes it easy for Python developers to create workflows like:
Supabase to GitHub typesync agent
Agents with chat-based browser usage
Deep research agents
Target audience
We've designed this library with production in mind, with features like:
Integration into Temporal for long-running agentic workflows
OTEL telemetry to connect to your own observability tools
YAML-based configurations for defining connections
/r/Python
https://redd.it/1mf0cih
[D] Simple Questions Thread
Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead!
Thread will stay alive until next one so keep posting after the date in the title.
Thanks to everyone for answering questions in the previous thread!
/r/MachineLearning
https://redd.it/1meysr1
Why I Chose Django Instead of Microservices for My Cloud Dev Platform
Hey everyone,
I wanted to share my experience building Onix Enviro, a cloud development platform that lets users run full development environments directly in the browser. It provides container-based workspaces, port forwarding, templates for popular stacks like Flask or Node.js, and a browser-based VS Code editor.
At first, I thought microservices were the right approach. So I built the first version using:
* **FastAPI** for backend services
* **Svelte** for the frontend
* **Keycloak** for authentication
* **REST APIs** for communication between services
* **Kubernetes** for orchestrating everything, even in local development
* Everything deployed with Docker containers
Technically it worked, but it quickly became a nightmare.
* **Authentication was one of the hardest parts**: I went through a lot of trial and error trying to secure services. OAuth2 proxies were clunky and hard to manage across multiple apps.
* **Dev workflow**: Local development required running Kubernetes clusters, which made the setup heavy and slow. Just spinning things up could take 5 to 10 minutes.
* **Debugging pain**: Every issue meant digging through logs across multiple pods and services to find the root cause.
* **Slower iteration**: Even small features like template selection required updates across several services and configs.
* **Too much infrastructure**: I spent more time maintaining the system than improving
/r/django
https://redd.it/1mf951v
I have a Django website that allows tutors to schedule sessions for later dates with students, who can then book the sessions. I'm using Celery workers to create and schedule tasks that change a session's is_expired to true after the date and time set by the tutor.
I have noticed that every time I start my development server I also have to manually start my Celery workers in order to get that effect. What will I need to do when my website is in production mode, and do you know of any other alternative ways to do this?
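For reference, here is a minimal sketch of the scheduling pattern described above (the app, model, and field names are assumptions, not the poster's actual code):

from celery import shared_task
from django.utils import timezone

@shared_task
def expire_session(session_id):
    # Imported lazily so the task module does not depend on app loading order.
    from sessions_app.models import Session  # hypothetical app/model
    Session.objects.filter(pk=session_id, end_datetime__lte=timezone.now()).update(is_expired=True)

# When the tutor creates a session, schedule the flip for its end time:
# expire_session.apply_async(args=[session.pk], eta=session.end_datetime)

In production the Celery worker (and a beat scheduler, if used) typically runs as its own long-lived process under something like systemd, supervisord, or a separate container, alongside the web server process.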
/r/django
https://redd.it/1mf0i4a
several users saving records to the database
I have a small internal service in my company built on Django, and now more people are using it (up to 20). It's run by gunicorn on Linux. Recently I've been receiving reports that, despite saving a form, the record isn't in the database. This is a rare occurrence, and almost every user has reported it. Theoretically there's a message confirming the record was created, but I don't fully trust users when they say the message definitely appeared. Can this generally happen, and does the number of users matter? If several users use the same form from different places, can there be collisions, for example a record created for user A but the confirmation message shown to user B? Could it be due to different browsers? In other words, could the reason lie somewhere other than the Django service?
def new_subscription_with_fake_card(request):
    plate = ""
    content = {}
    # Check if the "create new subscription" button was pressed
    if request.POST.get("button_create_new_subscription") is not None:
/r/django
https://redd.it/1me6kqu
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures \- Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators \- A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1mfc9h3
🐍 [Feedback] DJ Maker – Generate Full Django CRUD Apps & DRF APIs with One Command!
Hey Django community! 👋
I’m excited to share an open-source tool that has greatly improved my Django workflow:
🚀 [**DJ Maker – GitHub**](https://github.com/giacomo/dj-maker)
A lightweight, early-stage yet powerful Django code generator that instantly creates full apps – including models, views, URLs, templates, and even **Django REST Framework APIs** – right from the command line.
# 🔧 Why DJ Maker?
**✨ Key Features:**
* 🔁 Full **CRUD** app scaffolding (models, views, urls, templates)
* ⚙️ Support for **api, default and advanced routes**
* 🎨 Auto-generated **Bootstrap 5** HTML templates
* 💻 Beautiful CLI with interactive prompts (powered by Rich and Typer)
* 🧪 Preview and `--dry-run` modes to visualize before generating
* ✅ **91% test coverage**, built with best practices in mind
* 📦 Built-in API namespacing, versioning, and DRF router support
I'd love to hear your feedback, get a star ⭐, or even see a PR! Got feature ideas or suggestions? Drop a comment – I’m actively maintaining it 😄
**I hope you'll join this adventure – give it a spin and let me know what you think!**
**PyPI package:** [https://pypi.org/project/dj-maker/](https://pypi.org/project/dj-maker/)
/r/django
https://redd.it/1mfq7xr
Help For Resource
Hi everyone, I'm a working professional and I just joined a company where I need to work on Django. I come from the JS and Express ecosystem; can someone point me to the best resources to learn Django in a fast and effective way? FYI, I already know basic Python.
/r/djangolearning
https://redd.it/1mf7zvr
Sunday Daily Thread: What's everyone working on this week?
# Weekly Thread: What's Everyone Working On This Week? 🛠️
Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!
## How it Works:
1. Show & Tell: Share your current projects, completed works, or future ideas.
2. Discuss: Get feedback, find collaborators, or just chat about your project.
3. Inspire: Your project might inspire someone else, just as you might get inspired here.
## Guidelines:
Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.
## Example Shares:
1. Machine Learning Model: Working on a ML model to predict stock prices. Just cracked a 90% accuracy rate!
2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!
Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟
/r/Python
https://redd.it/1mg53kt
Anyone using GPT-4o + RAG to generate Django ORM queries? Struggling with hallucinations
Hi all,
I'm working on an internal project at my company where we're trying to connect a large language model (GPT-4o via OpenAI) to our Django-based web application. I’m looking for advice on how to improve accuracy and reduce hallucinations in the current setup.
Context:
Our web platform is a core internal tool developed with Django + PostgreSQL, and it tracks the technical sophistication of our international teams. We use a structured evaluation matrix that assesses each company across various criteria.
The platform includes data such as:
• Companies and their projects
• Sophistication levels for each evaluation criterion
• Discussion threads (like a forum)
• Tasks, attachments, and certifications
We’re often asked to generate ad hoc reports based on this data.
The idea is to build a chatbot assistant that helps us write Django ORM querysets in response to natural language questions like:
“How many companies have at least one project with ambition marked as ‘excellent’?”
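For illustration only, the kind of queryset the assistant would need to emit for that question might look like the line below (the model and field names are assumptions, not the real schema):

# Count companies that have at least one related project with ambition "excellent".
Company.objects.filter(projects__ambition="excellent").distinct().count()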
Eventually, we’d like the assistant to run these queries (against a non-prod DB, of course) and return the actual results — but for now, the first step is generating correct and usable querysets.
What we’ve built so far:
• We’ve populated OpenAI’s vector store with the Python files from our Django app (mainly the
/r/django
https://redd.it/1mfqmjk
I built webpath to eliminate API boilerplate
I built webpath for myself. I showcased it here last time and got some feedback, so I implemented it. Anyway, it uses httpx and jmespath under the hood.
So, why not just use requests or httpx + jmespath separately?
You can, but this removes all the long boilerplate code that you need to write in your entire workflow.
Instead of manually performing separate steps, you chain everything into a command:
1. Build a URL with / just like pathlib.
2. Make your request.
3. Query the nested JSON from the res object.
Before (more procedural: step 1 do this, step 2 do that, step 3 do blah blah blah)
import httpx
import jmespath

response = httpx.get("https://api.github.com/repos/duriantaco/webpath")
response.raise_for_status()
data = response.json()
owner = jmespath.search("owner.login", data)
print(f"Owner: {owner}")
After (more declarative, state your intent, what you want)
owner = Client("https://api.github.com").get("repos", "duriantaco", "webpath").find("owner.login")
print(f"Owner: {owner}")
It handles other things like auto-pagination and caching also. Basically, i wrote this for myself to stop writing plumbing code and focus on the data.
Less boilerplate.
# Target audience
Anyone dealing with apis
If you like to contribute or
/r/Python
https://redd.it/1mg61fl
But really, why use ‘uv’?
Overall, I think uv does a really good job at accomplishing its goal of being a net improvement on Python’s tooling. It works well and is fast.
That said, as a consumer of Python packages, I interact with uv maybe 2-3 times per month. Otherwise, I’m using my already-existing Python environments.
So, the questions I have are: Does the value provided by uv justify having another tool installed on my system? Why not just stick with Python tooling and accept ‘pip’ or ‘venv’ will be slightly slower? What am I missing here?
Edit: Thanks to some really insightful comments, I’m convinced that uv is worthwhile - even as a dev who doesn’t manage my project’s build process.
/r/Python
https://redd.it/1mfd3ww
Built Fixie: AI Agent Debugger using LangChain + Ollama
Just finished building **Fixie**, an AI-powered debugging assistant that uses multiple specialized agents to analyze Python code, detect bugs, and suggest fixes. Thought I'd share it here for feedback and to see if others find it useful! It's fast, private (runs locally), and built with modularity in mind.
**What My project does:**
* **Multi-agent workflow**: Three specialized AI agents (SyntaxChecker, LogicReasoner, FixSuggester) work together
* **Intelligent bug detection**: Finds syntax errors, runtime issues, and identifies exact line numbers
* **Complete fix suggestions**: Provides full corrected code, not just hints
* **Confidence scoring**: Tells you how confident the AI is about its fix
* **Local & private**: Uses Ollama with Llama 3.2 - no data sent to external APIs
* **LangGraph orchestration**: Proper agent coordination and state management
🎯 **Target Audience**
Fixie is aimed at:
* Intermediate to advanced Python developers who want help debugging faster
* Tinkerers and AI builders exploring multi-agent systems
* Anyone who prefers **local, private AI tools** over cloud-based LLM APIs
It’s functional enough for light production use, but still has some rough edges.
🔍 **Comparison**
Unlike tools like GitHub Copilot or ChatGPT plugins:
* Fixie runs **entirely locally** — no API calls, no data sharing
* Uses a **multi-agent architecture**, with each agent focusing on a specific task
# Example output:
--- Fixie AI Debugger ---
/r/Python
https://redd.it/1mg6cew
Snob: Only run tests that matter, saving time and resources.
What the project does:
Most of the time, running your full test suite is a waste of time and resources, since only a portion of the files has changed since your last CI run / deploy.
Snob speeds up your development workflow and reduces CI testing costs dramatically by analyzing your Python project's dependency graph to intelligently select which tests to run based on code changes.
Target audience:
Python developers tired of long iteration cycles / CI runs.
Comparison:
I don't know of any real alternatives to this that aren't testrunner specific.
Github: https://github.com/alexpasmantier/snob
/r/Python
https://redd.it/1mgf5mu
ANN django-smart-ratelimit v0.8.0: Circuit Breaker Pattern for Enhanced Reliability
Major Features
Circuit Breaker Pattern: automatic failure detection and recovery for all backends
Exponential Backoff: smart recovery timing that increases delay on repeated failures
Built‑in by Default: all rate limiting automatically includes circuit breaker protection
Zero Configuration: works out‑of‑the‑box with sensible defaults
Full Customization: global settings, backend‑specific config, or disable if needed
Quality & Compatibility
50+ new tests covering scenarios & edge cases
Complete mypy compliance and thread‑safe operations
Minimal performance overhead and zero breaking changes
Install
pip install django-smart-ratelimit==0.8.0
Links
GitHub → https://github.com/YasserShkeir/django-smart-ratelimit
Looking forward to your feedback and real‑world performance stories!
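For readers unfamiliar with the pattern the features above refer to, here is a minimal, library-agnostic sketch of a circuit breaker with exponential backoff; it is illustrative only and not django-smart-ratelimit's actual implementation:

import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, base_delay=1.0, max_delay=60.0):
        self.failure_threshold = failure_threshold
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.failures = 0
        self.opened_at = None  # monotonic timestamp of when the circuit opened

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            # The recovery delay grows exponentially with consecutive failures.
            delay = min(self.base_delay * 2 ** (self.failures - 1), self.max_delay)
            if time.monotonic() - self.opened_at < delay:
                raise RuntimeError("circuit open: backend call skipped")
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        else:
            # A success closes the circuit and resets the backoff.
            self.failures = 0
            self.opened_at = None
            return result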
/r/django
https://redd.it/1mgf25x