Run background tasks in Django with zero external dependencies. Here's an update on my library, django-async-manager.
Hey Django community!
I've posted here before about **django-async-manager**, a library I've been developing, and I wanted to share an update on its progress and features.
**What is django-async-manager?**
It's a lightweight, database-backed task queue for Django that provides a Celery-like experience without external dependencies. Perfect for projects where you need background task processing but don't want the overhead of setting up Redis, RabbitMQ, etc.
**✨ New Feature: Memory Management**
The latest update adds memory limit capabilities to prevent tasks from consuming too much RAM. This is especially useful for long-running tasks or when working in environments with limited resources.
# Task with Memory Limit
@background_task(memory_limit=512)  # Limit to 512 MB
def memory_intensive_task():
    # This task will be terminated if it exceeds 512 MB
    large_data = process_large_dataset()
    return analyze_data(large_data)
# Key Features
* **Simple decorator-based API** - Just add `@background_task` to any function
* **Task prioritization** - Set tasks as low, medium, high, or critical priority
* **Multiple queues** - Route tasks to different workers
* **Task dependencies** - Chain tasks together
* **Automatic retries** - With configurable exponential backoff
* **Scheduled tasks** - Cron-like scheduling for periodic tasks
* **Timeout control** - Prevent tasks from running too long
* **Memory limits** - Stop tasks from consuming too much RAM
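As a hedged sketch of how several of these options might combine on a single task: the keyword names below (other than `memory_limit`, which appears in the example above) are assumptions for illustration and may not match the library's actual API, so check the project README.

```python
# Hypothetical parameter names -- only memory_limit is shown in the post above.
@background_task(
    memory_limit=512,   # MB, as in the example above
    priority="high",    # assumed name: low / medium / high / critical
    queue="reports",    # assumed name: route to a dedicated worker queue
    max_retries=3,      # assumed name: retries with exponential backoff
    timeout=300,        # assumed name: seconds before the task is stopped
)
def nightly_report():
    ...
```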
/r/django
https://redd.it/1lhxz7r
I made Flask-Squeeze which minifies and compresses responses!
https://github.com/mkrd/Flask-Squeeze
/r/flask
https://redd.it/1lhn2yu
[P] I made a website to visualize machine learning algorithms + derive math from scratch
/r/MachineLearning
https://redd.it/1lhtkr4
Fenix: I built an algorithmic trading bot with CrewAI, Ollama, and Pandas.
Hey r/Python,
I'm excited to share a project I've been passionately working on, built entirely within the Python ecosystem: Fenix Trading Bot. The post was removed earlier for missing some sections, so here is a more structured breakdown.
GitHub Link: https://github.com/Ganador1/FenixAI_tradingBot
# What My Project Does
Fenix is an open-source framework for algorithmic cryptocurrency trading. Instead of relying on a single strategy, it uses a crew of specialized AI agents orchestrated by CrewAI to make decisions. The workflow is:
1. It scrapes data from multiple sources: news feeds, social media (Twitter/Reddit), and real-time market data.
2. It uses a Visual Agent with a vision model (LLaVA) to analyze screenshots of TradingView charts, identifying visual patterns.
3. A Technical Agent analyzes quantitative indicators (RSI, MACD, etc.).
4. A Sentiment Agent reads news/social media to gauge market sentiment.
5. The analyses are passed to Consensus and Risk Management agents that weigh the evidence, check against user-defined risk parameters, and make the final BUY, SELL, or HOLD decision.
The entire AI analysis runs 100% locally using Ollama, ensuring privacy and zero API costs.
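To make the orchestration concrete, here is a rough sketch of what a CrewAI crew along these lines could look like. The agent roles, task wording, and the BTC/USDT pair are illustrative assumptions rather than Fenix's actual code, and the local Ollama LLM wiring is omitted.

```python
from crewai import Agent, Task, Crew, Process

# Each Agent would normally also receive an `llm` pointing at a local Ollama
# model; that configuration is omitted here.
technical_analyst = Agent(
    role="Technical Analyst",
    goal="Assess RSI, MACD and other indicators for the given pair",
    backstory="Quantitative analyst focused on momentum and trend signals.",
)
sentiment_analyst = Agent(
    role="Sentiment Analyst",
    goal="Gauge market mood from news and social media",
    backstory="Tracks crypto news feeds and social chatter.",
)
consensus_manager = Agent(
    role="Consensus and Risk Manager",
    goal="Weigh the analysts' reports against risk limits and decide BUY, SELL or HOLD",
    backstory="Risk-aware portfolio manager with strict position limits.",
)

tasks = [
    Task(
        description="Analyse the latest indicator snapshot for BTC/USDT.",
        expected_output="A short technical assessment with a directional bias.",
        agent=technical_analyst,
    ),
    Task(
        description="Summarise current market sentiment for BTC.",
        expected_output="A sentiment score with supporting evidence.",
        agent=sentiment_analyst,
    ),
    Task(
        description="Combine the analyses, check risk parameters, and output BUY, SELL or HOLD.",
        expected_output="A single decision with a one-paragraph rationale.",
        agent=consensus_manager,
    ),
]

crew = Crew(
    agents=[technical_analyst, sentiment_analyst, consensus_manager],
    tasks=tasks,
    process=Process.sequential,  # analyses run first, the decision task last
)
print(crew.kickoff())
```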
# Target Audience
This project is aimed at:
Python Developers & AI Enthusiasts who want to see a real-world, complex application of modern Python libraries like CrewAI, Ollama, Pydantic, and Selenium working together. It serves as a great case study for building multi-agent systems.
Algorithmic Traders & Quants who are looking for a flexible, open-source framework that goes beyond
/r/Python
https://redd.it/1li8id5
sodalite - an open source media downloader with a pure python backend
I made this as a passion project; I hope you'll like it :) If you do, please star it! I built it as part of a hackathon and I'd appreciate the support.
What my project does
It detects a link you paste from a supported service, parses it via a network request and serves the file through a FastAPI backend.
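To make that flow concrete, here is a rough sketch of the pattern under stated assumptions: the endpoint name, the allow-list, and the direct proxying below are illustrative, not sodalite's actual code.

```python
from urllib.parse import urlparse

import httpx
from fastapi import FastAPI, HTTPException, Response

app = FastAPI()

# Hypothetical allow-list of supported services.
SUPPORTED_HOSTS = {"example-media-site.com"}

@app.get("/download")
async def download(url: str) -> Response:
    host = urlparse(url).hostname or ""
    if host not in SUPPORTED_HOSTS:
        raise HTTPException(status_code=400, detail="Unsupported service")
    # A real downloader would first resolve the pasted link to the service's
    # media URL; here we simply fetch the URL and relay the response body.
    async with httpx.AsyncClient(follow_redirects=True) as client:
        upstream = await client.get(url)
        upstream.raise_for_status()
    return Response(
        content=upstream.content,
        media_type=upstream.headers.get("content-type", "application/octet-stream"),
    )
```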
Intended audience
Mostly anyone who's willing to host this; production use, I guess?
Repo link
https://github.com/oterin/sodalite
/r/Python
https://redd.it/1li6ek4
pandas/python functions (pushing and calling dataframe)
Hello all,
I am fairly new to Python, so I am having difficulty with the following.
I wanted to create a dimension table in a separate file, push a few of its columns to SQL, and also be able to pull a few other columns into another Python file, where I would merge them with that dataframe (basically creating ID keys).
But I am having difficulties doing that; it gives me a long error at the point where I call it in the other file: `product_table = Orders_product()`.
Could someone point me in the right direction?
Product table:
import pandas as pd
from MySQL import getmysqlengine  # local helper module

# getting the file
File = r"ExcelFilePath"
Sheet = "Orders"
df = pd.read_excel(File, sheet_name=Sheet)
product_columns = ["Product Category", "Product Sub-Category", "Product Container", "Product Name"]

def Orders_product():
    # cleaning text / dropping duplicates
    df_products = df[product_columns].copy()
    for product_col in product_columns:
        df_products[product_col] = df_products[product_col].str.strip()
    df_products['ProductKeyJoin'] = df_products[product_columns].agg('|'.join, axis=1)
    return df_products  # return the frame so it can be imported and merged elsewhere
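For reference, a minimal sketch of calling the function from a second file, assuming the code above is saved as a module named products.py in the same folder (the file and variable names here are assumptions):

```python
# other_file.py -- hypothetical file name
from products import Orders_product

product_table = Orders_product()      # the cleaned product dimension frame
print(product_table.head())

# To merge, build the same 'ProductKeyJoin' column on the other dataframe,
# then: merged = orders_df.merge(product_table, on="ProductKeyJoin", how="left")
```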
/r/Python
https://redd.it/1lhyni4
I built a new package for processing documents for LLM applications: SplitterMR
Hi!
Over the past few months, I've been mulling over the idea of making a Python library. I work as an AI engineer, and I was a little tired of having to reinvent the wheel every time I had to build a RAG pipeline to process documents: chunking, reading, image processing, etc.
So I started working on a personal project and developed a library that converts the files you pass in to Markdown and then chunks them easily. I have called it SplitterMR. It has support for Docling, MarkItDown, and PDFPlumber; it can split tables, describe images using VLMs, split text recursively, or split by tokens. It is very simple to use!
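As a rough illustration of the recursive, size-bounded splitting idea mentioned above (this is generic Python, not SplitterMR's actual API; the separator list and size limit are arbitrary assumptions):

```python
def recursive_split(text, max_chars=500, separators=("\n\n", "\n", ". ", " ")):
    """Split text on the coarsest separator that keeps chunks under max_chars."""
    if len(text) <= max_chars:
        return [text]
    for sep in separators:
        parts = text.split(sep)
        if len(parts) == 1:
            continue  # separator not present, try a finer one
        chunks, current = [], ""
        for part in parts:
            candidate = f"{current}{sep}{part}" if current else part
            if len(candidate) <= max_chars:
                current = candidate
            else:
                if current:
                    chunks.append(current)
                current = part
        if current:
            chunks.append(current)
        # A single part may still be too long; recurse to use finer separators.
        return [piece for chunk in chunks
                for piece in recursive_split(chunk, max_chars, separators)]
    # No separator found: hard-cut into fixed-size windows.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

sample = "Chunking keeps each piece of a long document under a size budget. " * 40
for chunk in recursive_split(sample, max_chars=300):
    print(len(chunk), chunk[:40], "...")
```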
It's still in development, and I need to keep working on it, but if you could take a look at it in the meantime and tell me how it goes, I'd appreciate it :)
The code repository is: https://github.com/andreshere00/Splitter_MR/, and the PyPI package is published here: https://pypi.org/project/splitter-mr/
I've also published a documentation site with several plug-and-play examples so you can try them out and take a look: https://andreshere00.github.io/Splitter_MR/
And as I said, I'm here for anything. Let me know!
/r/Python
https://redd.it/1liepo1
[Showcase] leetfetch – A CLI tool to fetch and organize your LeetCode submissions
**GitHub**: [https://github.com/Rage997/leetfetch](https://github.com/Rage997/leetfetch)
**Example output repo**: [https://github.com/Rage997/LeetCode](https://github.com/Rage997/LeetCode)
# What It Does
**leetfetch** is a command-line Python tool that downloads all your LeetCode submissions and problem descriptions using your browser session (no password or API key needed). It groups them by problem and language, and creates Markdown summaries.
# Target Audience
Anyone who solves problems on LeetCode and wants to:
* Back up their work
* Track progress locally or on GitHub
# How It’s Different
Compared to other tools, leetfetch:
* Uses the current GraphQL API
* Filters by accepted (or all) submissions
* Generates a clean, browsable folder structure
# Example Usage
# Download accepted Python3 submissions
python3 main.py --languages python3
# Download all submissions in all languages
python3 main.py --no-only-accepted --all-languages
# Only fetch problems not yet saved
python3 main.py --sync
No credentials needed in the tool itself – you just need to be signed in to LeetCode in your browser.
Let me know what you think.
/r/Python
https://redd.it/1liej6o
django-hstore-field, An easy to use postgres hstore field that is based on django-hstore-widget
Hello everyone,
Today I released django-hstore-field, an easy-to-use Postgres hstore field based on `django-hstore-widget`.
This project is built on the Stencil.js framework and uses web components.
# 🧐 Usage:
# yourapp/models.py
from django.db import models
from django_hstore_field import HStoreField

class ExampleModel(models.Model):
    data = HStoreField()
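For completeness, a minimal admin registration sketch; whether the bundled widget is picked up automatically on plain registration is an assumption on my part, so check the project README for the exact setup.

```python
# yourapp/admin.py -- assumes the hstore widget is applied automatically
# once the model is registered; see the project README for specifics.
from django.contrib import admin

from .models import ExampleModel

admin.site.register(ExampleModel)
```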
# 🚀 Features:
* Drop-in replacement for `django.contrib.postgres.HStoreField`
* Leverages Postgres hstore to give developers a key:value widget in the admin.
* Includes an admin panel widget to input and visualize the data.
* Has error detection to prevent malformed JSON in the widget.
* Has a fallback JSON textarea (the same one shipped with Django's default implementation).
* The widgets use the same style as the admin panel.
* Only one [file](https://github.com/baseplate-admin/django-hstore-field/blob/master/src/django_hstore_field/fields.py).
# ⚖ Comparison with other projects:
django-postgres-extensions: As far as I checked, it does not offer a built-in admin panel widget. It also doesn't align with my philosophy of "do one thing and do it well".
# 😎 Example:
(Screenshot: the field rendered using django-hstore-field)
Thank you all! If you like the project, a ⭐ would be much appreciated.
/r/django
https://redd.it/1lig4t8
[D] Conceptually / on a code basis: why does PyTorch work with CUDA out of the box with minimal setup, while TensorFlow requires all sorts of dependencies?
Hopefully this question doesn't break rule 6.
When I first learned machine learning, we primarily used TensorFlow on platforms like Google Colab or cloud platforms like Databricks, so I never had to worry about setting up Python or TensorFlow environments myself.
Now that I’m working on personal projects, I want to leverage my gaming PC to accelerate training using my GPU. Since I’m most familiar with the TensorFlow model training process, I started off with TensorFlow.
But my god—it was such a pain to set up. As you all probably know, getting it to work often involves very roundabout methods, like using WSL or setting up a Docker dev container.
Then I tried PyTorch, and realized how much easier it is to get everything running with CUDA. That got me thinking: conceptually, why does PyTorch require minimal setup to use CUDA, while TensorFlow needs all sorts of dependencies and is just generally a pain to get working?
/r/MachineLearning
https://redd.it/1lialoj
I made a FOSS feature rich Python template with SOTA tools, security, CI/CD, yet easy to use
## Introduction
Hey, I created a FOSS Python library template with features I have never seen elsewhere (especially in Python development), and which IMO is the most comprehensive while staying focused on usability: template setup is one click plus one `pdm setup` command locally, and after that only `src`, `tests`, and `pyproject.toml` should be of your concern. But I'll let you be the judge.
> GitHub repository: https://github.com/open-nudge/opentemplate
Feedback, questions, and ideas are all welcome, either here or in the GitHub [discussions](https://github.com/open-nudge/opentemplate/discussions) or [issues](https://github.com/open-nudge/opentemplate/issues) (if you find any bugs). Thanks in advance!
- This was posted previously, but I'm reposting because I think I did a very poor job describing what it does. Hopefully I did better this time, but [here](https://www.reddit.com/r/Python/comments/1lelh8a/opentemplate_foss_python_template_focused_on/) is the original anyway.
Also thanks to [u/wyattxdev](https://www.reddit.com/user/wyattxdev/) and his template [here](https://www.reddit.com/r/Python/comments/1lcz532/a_modern_python_project_cookiecutter_template/) for a great showcase of how to present a project correctly!
- __This post is also featured on `r/cybersecurity` subreddit__ (focused more on the security side of things, but feel free to check it out if you are interested): https://www.reddit.com/r/cybersecurity/comments/1lim3k5/i_made_a_foss_python_template_with_cicd_security/
## TLDR Overview
- [__Truly open source__](https://open-nudge.github.io/opentemplate/template/about/philosophy): no tokens, no fees, no premium plans, open source software only
- [__State of the art__](https://open-nudge.github.io/opentemplate/template/details): best checkers for Python, YAML, Markdown, prose, and more unified
- [__Easy to use__](https://open-nudge.github.io/opentemplate/template/quickstart/usage): clone templated repo, run `pdm
/r/Python
https://redd.it/1lim6fb
[R] Reinforcement Learning Teachers of Test Time Scaling
TL;DR: The raw outputs of our new 7B RL model provide stronger distillation and cold-starting than the filtered and post-processed reasoning traces of orders-of-magnitude larger LMs such as DeepSeek-R1.
How did we achieve this result? We turned the RL task on its head. Rather than training to solve challenging problems from scratch, we optimize our models to generate clear, step-by-step "explanations" to "teach" their students, providing both the problem’s question and its solution already in their input prompt.
This makes the RL training task much easier and also directly aligned with downstream distillation, allowing us to train tiny 7B teachers, boosting the performance of even larger 32B students.
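To make the setup concrete, here is a rough sketch of the kind of teacher prompt described above; the wording and layout are my assumptions, not the paper's exact format.

```python
def build_teacher_prompt(question: str, solution: str) -> str:
    # The teacher sees both the question and the correct solution up front,
    # and is trained to produce an explanation that helps a student.
    return (
        "You are a teacher. The problem and its correct solution are given below.\n"
        f"Problem: {question}\n"
        f"Solution: {solution}\n"
        "Explain, step by step, how to reach this solution so a student could "
        "reproduce the reasoning on their own."
    )

print(build_teacher_prompt("What is 17 * 24?", "408"))
```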
If you are interested to learn more, please check out our new work:
Paper: https://arxiv.org/abs/2506.08388
Blog: https://sakana.ai/rlt/
Open source code: https://github.com/SakanaAI/RLT
If you have any questions, please ask them below or feel free to get in touch, any discussion is more than welcome :)
/r/MachineLearning
https://redd.it/1lid95g
How to implement multi-tenancy with django-tenants for my SaaS ?
Hey devs,
I'm building a SaaS healthcare CRM targeting small or solo medical practices. I want each clinic (tenant) to have its own isolated database schema using django-tenants.
So far, I’ve done the following (a minimal sketch of these models is shown below):
* Created a Clinic model using TenantMixin and set auto_create_schema = True
* Added a Domain model for routing using DomainMixin
* Created a custom User model for each tenant
* Installed and configured django-tenants
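For context, a minimal sketch of the tenant and domain models described above, plus the "create the tenant, then work inside its schema" flow asked about in question 1 below. Field names and the exact signup flow are assumptions, so treat it as a starting point rather than a reference implementation.

```python
# A minimal sketch assuming django-tenants; app layout, field names and the
# onboarding flow shown here are illustrative only.
from django.db import models
from django_tenants.models import TenantMixin, DomainMixin
from django_tenants.utils import schema_context


class Clinic(TenantMixin):
    name = models.CharField(max_length=100)
    created_on = models.DateField(auto_now_add=True)
    auto_create_schema = True  # create the Postgres schema when the clinic is saved


class Domain(DomainMixin):
    # DomainMixin already provides the domain, tenant and is_primary fields
    pass


def onboard_clinic(name: str, schema_name: str, subdomain: str) -> Clinic:
    """Register a clinic, give it a subdomain, then work inside its schema."""
    clinic = Clinic(schema_name=schema_name, name=name)
    clinic.save()  # schema is created here because auto_create_schema is True
    Domain.objects.create(domain=subdomain, tenant=clinic, is_primary=True)
    with schema_context(clinic.schema_name):
        # Queries in this block run against the clinic's schema, e.g. creating
        # the first staff user if the User model is tenant-specific.
        pass
    return clinic
```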
But I still have questions to clarify the right implementation:
1. How should I structure the signup process?
Should I register the tenant (clinic), then switch to that schema and create users?
2. Should the user model be shared (in the public schema) or be tenant-specific?
I need users (doctors/staff) to be isolated per clinic.
3. How can I make sure user login works correctly and is scoped to the right schema?
4. What's the best way to handle domain/subdomain routing for tenants (e.g. clinic1.mycrm.com, clinic2.mycrm.com)?
5. Any example repo, best practices, or gotchas I should be aware of?
I’d love to get some feedback or code architecture examples from anyone who’s implemented a similar setup. My goal is to keep tenant data fully isolated and support a clean onboarding experience for new clinics.
Thanks a lot in advance!
/r/django
https://redd.it/1lilzpg
If someone has a PDF for Django, please send it
I am learning Django, and the YouTube tutorials are good but they don't explain much. CBVs are considered best practice, but many YouTube tutorials are old, and the newer ones just don't cover CBVs that much. If you have a PDF, please send it to me.
/r/djangolearning
https://redd.it/1lin4io
How to export editing history of a model
Hi bro,
I have a Django web app
1. How can I export the add/edit history of a model? I want to export the full history of all objects of a specific model.
https://preview.redd.it/lfswatd76s8f1.png?width=917&format=png&auto=webp&s=fb1cab85d7175af24eadf6d0ba591b2f32af0f79
2. How can I export a user's activity history?
Thank you very much
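If the history in the screenshot is the Django admin's built-in history (an assumption on my part), a minimal export sketch using django.contrib.admin.models.LogEntry, run from a Django shell or management command, could look like this; the app and model names are placeholders. Filtering LogEntry by user instead of content_type gives a per-user activity export.

```python
import csv

from django.contrib.admin.models import LogEntry
from django.contrib.contenttypes.models import ContentType

# Placeholder app/model names -- replace with your own.
content_type = ContentType.objects.get(app_label="yourapp", model="yourmodel")
entries = LogEntry.objects.filter(content_type=content_type).select_related("user")

with open("model_history.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "user", "object", "action", "change message"])
    for entry in entries:
        writer.writerow([
            entry.action_time,
            entry.user,
            entry.object_repr,
            entry.get_action_flag_display(),
            entry.change_message,
        ])
```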
/r/django
https://redd.it/1liyuto
Interview Advice for fresher role as backend Django Developer ( AWS is a plus )
Greetings to everyone,
I received an email saying there is an interview scheduled for the upcoming Wednesday, 26 June 2025. This is my first interview; it is technical round 1 (there are 2+ hour rounds). I am a bit nervous right now and wanted to ask for resources or topics to prepare well for these interviews. The opening is for freshers, hiring for Django + AWS.
About my resume: I have listed two internships (both frontend-based), two Django projects, and three certifications (AWS, Django, React).
People here always help students, so I came straight here to ask.
Thank you.
For those who work in a similar position: what do interviewers expect from you in such an interview?
/r/django
https://redd.it/1linndz
C++ in JupyterLite (WebAssembly) — Interpreting C++ in the Web
https://blog.jupyter.org/c-in-jupyter-interpreting-c-in-the-web-c9d93542f20b
/r/IPython
https://redd.it/1lj57en
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1liwla2
Is it a good idea to use debug_toolbar to learn the ORM and SQL?
I have recently found out about this tool, and it has enormously helped me understand the ORM and the SQL magic behind it.
/r/django
https://redd.it/1lj5ccd
htmx accessibility gaps: data and recommendations
https://wagtail.org/blog/htmx-accessibility-gaps-data-and-recommendations/
/r/django
https://redd.it/1lj9vci