Adding Vite to Django for a Modern Front End with React and Tailwind CSS (Part of "Modern JavaScript for Django Developers")
Hey folks! Author of the "Modern JavaScript for Django Developers" series here. I'm back with a fresh new guide.
When I first wrote about using modern JavaScript with Django back in 2020, the state-of-the-art in terms of tooling was Webpack and Babel. But—as we know well—the JavaScript world doesn't stay still for long.
About a year ago, I started using Vite as a replacement for Webpack on all my projects, and I'm super happy with the change. It's faster, easier to set up, and lets you do some very nice things like auto-refresh your app when JS/CSS changes.
I've finally taken the time to revisit my "Modern JavaScript for Django Developers" series and update it with what I feel is the best front end setup for Django projects in 2025. There are three parts to this update:
* A brand new guide: [Adding Vite to Django, so you can use Modern JavaScript, React, and Tailwind CSS](https://www.saaspegasus.com/guides/modern-javascript-for-django-developers/integrating-javascript-pipeline-vite/)
* A video walkthrough: [Setting up a Django project with Vite, React, and Tailwind CSS](https://youtu.be/GztJ1h6ZXA0)
* An open-source repository with the finished product you can use to start a new project: [https://github.com/saaspegasus/django-vite-tailwind-starter/](https://github.com/saaspegasus/django-vite-tailwind-starter/)
Hope this is useful, and let me know if you have any questions or feedback!
/r/django
https://redd.it/1p5g975
SaaS Pegasus
Adding Vite to Django, so you can use Modern JavaScript, React, and Tailwind CSS
The nuts and bolts of integrating a Vite front-end pipeline into a Django project—and using it to create some simple "Hello World" applications with Vite, React, and Tailwind CSS.
JupyterLab 4.5 and Notebook 7.5 are available!
https://blog.jupyter.org/jupyterlab-4-5-and-notebook-7-5-are-available-1bcd1fa19a47
/r/IPython
https://redd.it/1p5lfm6
Medium
JupyterLab 4.5 and Notebook 7.5 are available!
JupyterLab 4.5 has been released! This new minor release of JupyterLab includes 51 new features and enhancements, 81 bug fixes, 44…
GeoPolars is unblocked and moving forward
TL;DR: GeoPolars extends Polars the way GeoPandas extends Pandas. It was blocked by upstream issues on the Polars side, but those have now been resolved. Development is restarting!
GeoPolars is a high-performance library designed to extend the Polars DataFrame library for use with geospatial data. Written in Rust with Python bindings, it utilizes the GeoArrow specification for its internal memory model to enable efficient, multithreaded spatial processing. By leveraging the speed of Polars and the zero-copy capabilities of Arrow, GeoPolars aims to provide a significantly faster alternative to existing tools like GeoPandas, though it is currently considered a prototype.
Development on the project is officially resuming after a period of inactivity caused by upstream technical blockers. The project was previously stalled waiting for Polars to support "Extension Types," a feature necessary to persist geometry type information and Coordinate Reference System (CRS) metadata within the DataFrames. With the Polars team now actively implementing support for these extension types, the primary hurdle has been removed, allowing the maintainers to revitalize the project and move toward a functional implementation.
The immediate roadmap focuses on establishing a stable core architecture before expanding functionality. Short-term goals include implementing Arrow data conversion between the underlying Rust
/r/Python
https://redd.it/1p5dtvn
GitHub
GitHub - geopolars/geopolars: Geospatial extensions for Polars
Geospatial extensions for Polars. Contribute to geopolars/geopolars development by creating an account on GitHub.
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1p5xih6
Writing comprehensive integration tests for Django applications
https://www.honeybadger.io/blog/django-integration-testing/
/r/django
https://redd.it/1p5slzm
Honeybadger Developer Blog
Django integration testing
Learn how to write Django integration tests for user authentication, workflows, and external services like Cloudinary.
[P] Feedback/Usage of SAM (Segment Anything)
Hi folks!
I'm one of the maintainers of Pixeltable. We are looking to provide built-in support for SAM (Segment Anything), and I'd love to chat with people who are using it on a daily/weekly basis about what their workflows look like.
Pixeltable is unique in that it provides an API/DataFrame/engine to manipulate video/frames/arrays/JSON as first-class data types, among other things, which makes working with SAM outputs/masks programmatically very convenient.
Feel free to reply here/DM me or others :)
Thanks and really appreciated!
/r/MachineLearning
https://redd.it/1p5xmhw
GitHub
GitHub - pixeltable/pixeltable: Pixeltable — Data Infrastructure providing a declarative, incremental approach for multimodal AI…
Pixeltable — Data Infrastructure providing a declarative, incremental approach for multimodal AI workloads. - pixeltable/pixeltable
CORS Error in my Flask | React web app
Hey, everyone, how's it going?
I'm getting a CORS error in a web application I'm developing for my portfolio.
I'm trying to use CORS in my <app.py> and in my endpoint, but the error persists.
I think it is a simple error, but I am trying several ways to solve it, without success!
Right now, I have this code in my <app.py>, above my blueprint:
# imports
from flask_cors import CORS

def create_app():
    app = Flask(__name__)
    ...
    CORS(app)
    ...

if __name__ == '__main__':
    ...
PS: I read something about the possibility of it being a Swagger-related error, but I don't know if that makes sense.
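One way to narrow a problem like this down is to bypass flask-cors and set the header yourself with a plain `after_request` hook, which mirrors the simplest thing flask-cors does. If this works, the issue is in how flask-cors is wired into the app factory. A sketch only; the route name is made up:

```python
from flask import Flask

def create_app():
    app = Flask(__name__)

    @app.after_request
    def add_cors_headers(response):
        # Mirror flask-cors' simplest behavior: allow any origin
        # on every response. Tighten this for real deployments.
        response.headers["Access-Control-Allow-Origin"] = "*"
        response.headers["Access-Control-Allow-Headers"] = "Content-Type"
        return response

    @app.route("/api/ping")
    def ping():
        return {"status": "ok"}

    return app
```

If the header shows up here but not with flask-cors, check that `CORS(app)` is called on the same app instance the blueprint is registered on, before the app is returned from the factory.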
/r/flask
https://redd.it/1p5j53r
MovieLite: A MoviePy alternative for video editing that is up to 4x faster
Hi r/Python,
I love the simplicity of MoviePy, but it often becomes very slow when doing complex things like resizing or mixing multiple videos. So, I built MovieLite.
This started as a module inside a personal project where I had to migrate away from MoviePy due to performance bottlenecks. I decided to extract the code into its own library to help others with similar issues. It is currently in early alpha, but stable enough for my internal use cases.
Repo: https://github.com/francozanardi/movielite
### What My Project Does
MovieLite is a library for programmatic video editing (cutting, concatenating, text overlays, effects). It delegates I/O to FFmpeg but handles pixel processing in Python.
It is designed to be CPU-optimized, using Numba to speed up pixel-heavy operations. Note that it is not GPU-optimized and currently only supports exporting to MP4 (H.264).
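For context, the kind of per-pixel work such libraries spend their time in looks like this in plain NumPy (illustrative only, not MovieLite's actual API; Numba's role is JIT-compiling the loops that can't be expressed as cleanly as this vectorized form):

```python
import numpy as np

def fade_frame(frame: np.ndarray, alpha: float) -> np.ndarray:
    """Scale pixel intensities of an RGB frame by `alpha`, vectorized
    so a whole 720p frame is processed in one C-level pass."""
    out = frame.astype(np.float32) * alpha
    return np.clip(out, 0, 255).astype(np.uint8)
```

A naive Python double loop over a 1280x720x3 array would make ~2.7 million interpreter-level operations per frame; this is the bottleneck that either vectorization or Numba removes.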
### Target Audience
This is for Python Developers doing backend video automation who find MoviePy too slow for production. It is not a full-featured video editor replacement yet, but a faster tool for the most common automation tasks.
### Comparison & Benchmarks
The main difference is performance. Here are real benchmarks comparing MovieLite vs. MoviePy (v2.x) on a 1280x720 video at 30fps.
These tests were run using 1 single process, and the
/r/Python
https://redd.it/1p5vkia
GitHub
GitHub - francozanardi/movielite: Performance-focused Python video editing library. Alternative to MoviePy, powered by Numba.
Performance-focused Python video editing library. Alternative to MoviePy, powered by Numba. - francozanardi/movielite
How to do resource provisioning
I have developed a study platform in Django, and I'm hosting it for the first time.
I'm aware of how much storage I will need, but I don't know how many CPU cores, how much RAM, and how much bandwidth I'll need.
/r/django
https://redd.it/1p673l9
API tracing with Django and Nginx
Hi everyone,
I’m trying to measure the exact time spent in each stage of my API request flow — starting from the browser, through Nginx, into Django, then the database, and back out through Django and Nginx to the client.
Essentially, I want to capture timestamps and time intervals for:
* When the browser sends the request
* When Nginx receives it
* When Django starts processing it
* Time spent in the database
* Django response time
* Nginx response time
* When the browser receives the response
Is there any Django package or best practice that can help log these timing metrics end-to-end? Currently I have to manually add timestamps in the Nginx conf file, in Django middleware, and before and after the fetch call in the frontend.
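No single package covers all seven stages, but the server-side stages can be stamped in one place with a middleware that emits a `Server-Timing` header, which browser dev tools display alongside their own network timings. A framework-agnostic WSGI sketch of the idea (in Django the idiomatic home would be a middleware class):

```python
import time

def timing_middleware(app):
    """Wrap a WSGI app and report server-side latency via a
    Server-Timing response header."""
    def wrapped(environ, start_response):
        start = time.perf_counter()

        def timed_start_response(status, headers, exc_info=None):
            # Measured when the app calls start_response, i.e. time to
            # the start of the response, before the body is streamed.
            elapsed_ms = (time.perf_counter() - start) * 1000
            headers = list(headers) + [
                ("Server-Timing", f"app;dur={elapsed_ms:.1f}")
            ]
            return start_response(status, headers, exc_info)

        return app(environ, timed_start_response)
    return wrapped
```

Nginx can append its own numbers via `$request_time`/`$upstream_response_time` in a log format, and the browser's `performance.getEntriesByType("resource")` supplies the client-side timestamps, so all three tiers line up in one request trace.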
Thanks!
/r/django
https://redd.it/1p64pzt
modeltranslation
As the title states;
how do you guys handle `modeltranslation` as of 2025?
/r/django
https://redd.it/1p6d4wq
Django project flow for understanding
I am developing a project and learning Django REST Framework in parallel.
Currently, I have comfortably created models, a CustomUser (with AbstractBaseUser), and a corresponding CustomUserManager, which will work with JWT auth. I have also implemented djangorestframework-simplejwt for obtaining a token pair. At this point I am at a standstill as to how I should proceed. I also have some confusion regarding CustomUser and CustomUserManager; while studying, I stumbled upon some extra info, such as that there are forms and admin to be customized as well for a custom user.
I am also wondering how I will verify the user with the obtained JWT token for other functionality.
I need help understanding the general flow for DRF + JWT; detailed answers to the confusions above are appreciated.
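On the verification question: with DRF you normally add simplejwt's `JWTAuthentication` to your authentication classes and protect views with `IsAuthenticated`; the library then validates the token on every request, and you never hand-roll it. Purely to demystify what that validation does mechanically, here is a stdlib-only sketch of HS256 verification (for illustration only, never for production):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_hs256(token: str, secret: bytes) -> dict:
    """Recompute the HMAC over header.payload and compare it to the
    token's signature; return the claims if they match."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

Once the signature (and, in real libraries, the expiry claim) checks out, the `user_id` claim is used to load the user, which is how an access token "verifies the user" on each request.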
Thanks in advance.
/r/djangolearning
https://redd.it/1p6f4bj
uvlink – A CLI to keep .venv in a centralized cache for uv
# GitHub Repo
* [https://github.com/c0rychu/uvlink](https://github.com/c0rychu/uvlink)
# What My Project Does
This tiny Python CLI tool `uvlink` keeps `.venv` out of cloud-synced project directories by storing the real env in a centralized cache and symlinking it from the project.
Basically, I'm trying to solve this `uv` issue: [https://github.com/astral-sh/uv/issues/1495](https://github.com/astral-sh/uv/issues/1495)
# Target Audience (e.g., Is it meant for production, just a toy project, etc.)
It is perfect for `uv` users who sync code to Dropbox, Google Drive, or iCloud. Only your source code syncs, not gigabytes of .venv dependencies.
# Comparison (A brief comparison explaining how it differs from existing alternatives.)
* venvlink: It claims that it only supports Windows.
* uv-workon: It basically does the opposite; it creates symlinks at a centralized link back to the project's virtual environment.
Unless `uv` supports this natively in the future, I'm not aware of a good publicly available solution (except for switching to `poetry`).
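For readers curious about the mechanics, the core trick is small enough to sketch with the stdlib (the cache layout below is illustrative, not uvlink's actual scheme):

```python
import hashlib
from pathlib import Path

def link_venv(project_dir: Path, cache_root: Path) -> Path:
    """Create the real env directory in a central cache and symlink it
    into the project as .venv, so cloud sync tools see only the link."""
    # Key the cache entry on the project's absolute path so two projects
    # with the same name don't collide.
    key = hashlib.sha256(str(project_dir.resolve()).encode()).hexdigest()[:12]
    real_env = cache_root / f"{project_dir.name}-{key}"
    real_env.mkdir(parents=True, exist_ok=True)

    link = project_dir / ".venv"
    if link.is_symlink() or link.exists():
        raise FileExistsError(f"{link} already exists")
    link.symlink_to(real_env, target_is_directory=True)
    return real_env
```

`uv` then resolves `.venv` through the symlink as usual, while Dropbox/iCloud only ever see a tiny link entry instead of gigabytes of site-packages.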
Any feedback is welcome :)
/r/Python
https://redd.it/1p662t0
GitHub
GitHub - c0rychu/uvlink: storing venv in a system-wise cache and symbolic link them back into project file
storing venv in a system-wise cache and symbolic link them back into project file - c0rychu/uvlink
How good can NumPy get?
I was reading this article while doing some research on optimizing my code and came across something I found interesting (I am a beginner lol).
For creating a simple binary column (like an IF/ELSE) in a 1 million-row Pandas DataFrame, the common `df.apply(lambda...)` method was apparently 49.2 times slower than using `np.where()`.
I always treated `df.apply()` as the standard, efficient way to run element-wise operations.
Is this massive speed difference common knowledge?
Why is the gap so huge? Is it purely due to Python's row-wise iteration vs. NumPy's C-compiled vectorization, or are there other factors at play (like memory management or overhead)?
Have any of you hit this bottleneck?
I'm trying to understand the underlying mechanics better
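The mechanics are easy to see side by side (a sketch; the 49.2x figure is the article's, and the row count here is shrunk to keep the demo quick):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"value": rng.integers(0, 100, 100_000)})

# Row-wise: one Python-level function call per element, with boxing
# of each value into a Python object.
flag_apply = df["value"].apply(lambda v: 1 if v >= 50 else 0)

# Vectorized: a single C-level pass over the contiguous column buffer.
flag_where = np.where(df["value"].to_numpy() >= 50, 1, 0)

# Same result, wildly different cost per element.
assert (flag_apply.to_numpy() == flag_where).all()
```

The gap is almost entirely interpreter overhead: `apply` pays for a function call, attribute lookups, and object boxing per row, while `np.where` does one branch-free pass in compiled code over a contiguous array, which also plays far better with CPU caches.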
/r/Python
https://redd.it/1p65vcm
Medium
Stop Using Lambda for Conditional Column Creation in Pandas! Use this instead.
Speed vs Familiarity!
Breaking Django convention? Using a variable key in template to acces a dict value
I have an application that tracks working hours. Users will make entries for a work day. Internally, an entry is made up of UserEntry and UserEntryItem. UserEntry has the date, amongst other things. UserEntryItems are made of a ForeignKey to a WorkType and a field for the actual hours.
The data from these entries will be displayed in a table. This table is dynamic, since different workers have different WorkTypeProfiles, and a WorkTypeProfile can change over time: a worker might do general services plus driving services but eventually go back to just general services.
So tables will have different columns depending on who and when. The way I want to solve this is: build up an index of columns which is just a list of column handles. The table has a header row and a footer row with special content. The body rows are all the same in structure, just with different values.
For top and bottom row, I want to build a dictionary with key = one of the column handles, and value = what goes into the table cell. For the body, I want to build a list of dictionaries with each dictionary representing one row.
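One convention-friendly alternative to variable-key dict lookups in templates (a sketch of the idea, not the poster's code): resolve the keys in the view, handing the template plain ordered lists it can walk with nested `{% for %}` loops and no custom filter.

```python
def rows_for_template(columns, row_dicts):
    """Convert {handle: value} row dicts into ordered cell lists, using
    the column index to fix the order, so the template only iterates."""
    return [[row.get(col, "") for col in columns] for row in row_dicts]
```

The other common route is a tiny `get_item` template filter that calls `dict.get`, which is widely used but does push lookup logic into the template layer.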
In order to build the
/r/django
https://redd.it/1p6knne
DAG-style sync engine in Django
Project backstory: I had an existing WooCommerce website. Then I opened a retail store and added a Clover POS system and needed to sync data between them. There were not any commercial off the shelf syncing options that I could find that would fit my specific use case. So I created a simple python script that connects to both APIs and syncs data between them. Problem solved! But then I wanted to turn my long single script into some kind of auditable task log.
So I created a DAG-style sync engine which runs in Django. It is a database-driven task routing system controlled by a Django front end. It consists of an orchestrator, which determines the sequence of tasks, and a dispatcher for task routing. Each sync job is initiated by essentially writing a row with queued status into the sync command table with the DAG name and initial payload. Django signals are used to fire the orchestrator and dispatcher, and the task steps are run in Celery. It also features a built-in idempotency guard so each step can be fully replayed/restarted.
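The idempotency-guard idea is worth spelling out, since it is what makes replays safe. A minimal sketch (an in-memory set stands in for what would be a database table in an engine like the one described):

```python
class IdempotencyGuard:
    """Each (command_id, step_name) pair runs at most once; re-dispatching
    a command after a crash simply skips the steps that already completed."""

    def __init__(self):
        self._done = set()  # a DB table with a unique constraint in practice

    def run_step(self, command_id, step_name, step_fn):
        key = (command_id, step_name)
        if key in self._done:
            return "skipped"
        result = step_fn()
        # Mark done only after the step succeeds, so failures are retried.
        self._done.add(key)
        return result
```

With the completion record living in the same database transaction as the step's side effects, a whole DAG run can be restarted from the top without double-applying any step.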
I have deployed this
/r/django
https://redd.it/1p6hgid
Spent a bunch of time choosing between Loguru, Structlog and native logging
Python's native logging module is just fine, but modern options like Loguru and Structlog are eye-catching. As someone who wants to use the best tooling to make my life easy, I agonized over choosing one... perhaps a little too much (I'd rather expend calories now than end up in production hell trying to wrangle logs).
I've boiled down what I've learnt to the following:
- I read some good advice here on r/Python: switch to a third-party library only when you need something the native libraries can't do. This basically holds true.
- Loguru's (the most popular third-party option) value prop (zero config, prioritized dev experience) is much less appealing in the age of AI coding. AI can handle writing config boilerplate with the native logging module.
- What kills Loguru is that it isn't OpenTelemetry-compatible. Meaning if you are using it for a production or production-intent codebase, Loguru really shouldn't be an option.
- Structlog feels like a more powerful and fully featured option, but it brings the need to learn and understand a new system. Plus it still needs a custom "processor" to integrate with OTEL.
- Structlog's biggest value prop -
/r/Python
https://redd.it/1p6qy1e
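On the "the boilerplate is manageable" point above: the stdlib config really is small. A minimal `dictConfig` sketch (the handler writes to a StringIO here only so the output is capturable; a real app would use stdout or a file):

```python
import io
import logging
import logging.config

stream = io.StringIO()
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        # Structured-ish key=value output without any third-party library.
        "kv": {"format": "%(levelname)s logger=%(name)s msg=%(message)s"},
    },
    "handlers": {
        "memory": {
            "class": "logging.StreamHandler",
            "stream": stream,  # sys.stdout in a real app
            "formatter": "kv",
        },
    },
    "root": {"level": "INFO", "handlers": ["memory"]},
})

logging.getLogger("app").info("user logged in")
print(stream.getvalue().strip())
# → INFO logger=app msg=user logged in
```

This is also the shape that OpenTelemetry's Python logging integration hooks into, which is the compatibility argument the post makes against Loguru.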
Naming things in really complex situations, and as codebase size increases
Naming has become a real challenge for me. It’s easy when I’m following a YouTube tutorial and building mock projects, but in real production projects it gets difficult. In the beginning it’s manageable, but as the project grows, naming things becomes harder.
For example, I have various formatters. A formatter takes a database object—basically a Django model instance—and formats it. It’s similar to a serializer, though I have specific reasons to create my own instead of using the built-in Python or Django REST Framework serializers. The language or framework isn’t the main point here; I’m mentioning them only for clarity.
So I create one formatter that returns some structured data. Then I need another formatter that returns about 80% of the same data, but with slight additions or removals. There might be an order formatter, then another order formatter with user data, another one without the “order received” date, and so on. None of this reflects my actual project—it’s not e-commerce but an internal tool I can’t discuss in detail—but it does involve many formatters for different use cases. Depending on the role, I may need to send different versions of order data with certain fields blank. This is only the formatter
/r/django
https://redd.it/1p7ayhs
i18n with AI?
i18n for Django apps is a lot of tough work. I am wondering if anyone here knows any good AI tools to speed this process up? I am talking about automatically generating the translations after running `makemessages`.
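Whatever tool does the translating, the plumbing is usually the same: find the untranslated `msgid`s in the `.po` files that `makemessages` produced and fill in their `msgstr`s. A minimal sketch of that step, assuming a hypothetical `translate` callable (e.g. backed by an LLM API) and modeling entries as a plain `msgid -> msgstr` mapping:

```python
def fill_untranslated(entries, translate):
    """entries: dict mapping msgid -> msgstr, where '' means untranslated.
    translate: hypothetical callable taking a list of msgids and returning
    a parallel list of translations (e.g. an LLM API wrapper)."""
    filled = dict(entries)
    todo = [msgid for msgid, msgstr in entries.items() if not msgstr]
    for msgid, translation in zip(todo, translate(todo)):
        filled[msgid] = translation
    return filled
```

In practice you would read and write the actual `.po` files with a library such as polib rather than a dict, but the batch-and-fill shape stays the same.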
/r/django
https://redd.it/1p7crih
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1p7nn45