ChanX: Type-Safe WebSocket Framework for Django and FastAPI
# What My Project Does
ChanX is a batteries-included WebSocket framework that works with both Django Channels and FastAPI. It eliminates the boilerplate and repetitive patterns in WebSocket development by providing:
- Automatic message routing using Pydantic discriminated unions - no more if-else chains
- Type safety with full mypy/pyright support and runtime Pydantic validation
- Auto-generated AsyncAPI 3.0 documentation - like OpenAPI/Swagger but for WebSockets
- Channel layer integration for broadcasting messages across servers with Redis
- Event system to trigger WebSocket messages from anywhere in your application (HTTP views, Celery tasks, management commands)
- Built-in authentication with Django REST framework permissions support
- Comprehensive testing utilities for both frameworks
- Structured logging with automatic request/response tracing
The same decorator-based API works for both Django Channels and FastAPI:
from typing import Literal
from chanx.messages.base import BaseMessage
from chanx.core.decorators import wshandler, channel
from chanx.channels.websocket import AsyncJsonWebsocketConsumer # Django
# from chanx.fastchannels.websocket import AsyncJsonWebsocketConsumer # FastAPI
class ChatMessage(BaseMessage):
    action: Literal["chat"] = "chat"
    payload: str

@channel(name="chat")
class ChatConsumer(AsyncJsonWebsocketConsumer):  # consumer body truncated in the original post
    ...
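For readers new to the routing idea above, here is a minimal sketch of how Pydantic discriminated unions dispatch raw JSON to the right message class. It uses plain Pydantic v2 only (not ChanX's own API), and the second message type is invented purely for illustration:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter


class ChatMessage(BaseModel):
    action: Literal["chat"] = "chat"
    payload: str


class PingMessage(BaseModel):  # hypothetical second message type
    action: Literal["ping"] = "ping"


# "action" is the discriminator: Pydantic inspects it and validates the raw
# dict against the matching model, which is what replaces if/else dispatch.
AnyMessage = Annotated[Union[ChatMessage, PingMessage], Field(discriminator="action")]
adapter = TypeAdapter(AnyMessage)

msg = adapter.validate_python({"action": "chat", "payload": "hi"})
assert isinstance(msg, ChatMessage)
```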
/r/Python
https://redd.it/1o5ro8i
Pyrefly eats CPU like nobody's business.
So I recently tried out the pyrefly and ty type checkers/LSPs in my ML project. While ty wasn't as useful with its errors and imports, pyrefly was great in that department. The only problem with the latter was that it sent CPU use to near 100% the whole time it ran.
This was worse than even rust-analyzer, notorious for being a heavyweight tool, which only uses a ton of CPU on startup and then runs on low CPU (while using a ton of RAM).
Is there some configuration for pyrefly I was missing, or is this a bug - and if it's the latter, should I report it?
Or even worse, is this intended behavior? If so, pyrefly will remain unusable to anyone without a really beefy computer, making it completely useless for me. Hopefully not, though, because I can't have an LSP using over 90% CPU while it runs in the background on my laptop.
/r/Python
https://redd.it/1o66tho
I built JSONxplode, a complex JSON flattener
I built this tool in Python and I hope it will help the community.
This code flattens deep, messy and complex JSON data into a simple tabular form without the need to provide a schema.
So all you need to do is:
    from jsonxplode import flatten
    flattened_json = flatten(messy_json_data)
Once this code is finished with the JSON file, none of the objects or arrays will be left unpacked.
You can install it with: pip install jsonxplode
Code and proper documentation can be found at:
https://github.com/ThanatosDrive/jsonxplode
https://pypi.org/project/jsonxplode/
In the post I shared on the data engineering subreddit, these were some of the questions and the answers I provided:
Why did I build this code? Because none of the current JSON flatteners properly handle deep, messy and complex JSON files without having to read into the JSON file and define its schema.
How does it deal with edge cases such as out-of-scope duplicate keys? There is a column key counter that increments the column name if it notices that a row contains two of the same column.
How does it deal with empty values - does it use None or a blank string? Data is returned as a list of dictionaries.
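To make the dot-notation idea concrete, here is a tiny hand-rolled flattener illustrating the general technique; it is not jsonxplode's implementation, and jsonxplode's exact key and duplicate-handling rules may differ:

```python
def flatten_record(obj, prefix=""):
    """Flatten nested dicts/lists into one dict with dot-separated keys."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten_record(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten_record(value, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = obj
    return flat


print(flatten_record({"user": {"name": "Ada", "tags": ["a", "b"]}}))
# {'user.name': 'Ada', 'user.tags.0': 'a', 'user.tags.1': 'b'}
```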
/r/Python
https://redd.it/1o69cvi
How to use annotate for DB optimization
Hi, I posted a popular comment a couple of days ago on a post asking what advanced Django topics to focus on: https://www.reddit.com/r/django/comments/1o52kon/comment/nj6i2hs/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
I mentioned annotate as being low-hanging fruit for optimization, and the top response to my comment was a question asking for details about it. It's a bit involved to respond to that question, and I figured it would get lost in the archive, so this post is a more thorough explanation of the concept that will reach more people who want to read about it.
Here is an annotate I pulled from real production code that I wrote a couple of years ago while refactoring crusty 10+ year old code from Django 1.something:
    from django.db.models import Exists, OuterRef

    def cities(self, location=None, filter_value=None):
        entity_location_lookup = {f'{self.city_field_lookup()}_id': OuterRef('pk')}
        cities = City.objects.annotate(
            has_active_entities=Exists(
                self.get_queryset().filter(**entity_location_lookup),
            ),
        ).filter(has_active_entities=True)
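To unpack what that annotate does, here is the same pattern written against a concrete (hypothetical) Restaurant model instead of the dynamic lookup, which is easier to read:

```python
from django.db.models import Exists, OuterRef

# Restaurant is a made-up model for illustration. Exists() compiles to a
# correlated SQL EXISTS subquery: for each City row, "is there at least one
# matching active restaurant?" - so the whole thing stays a single query
# instead of one query per city.
active_restaurants = Restaurant.objects.filter(city_id=OuterRef("pk"), is_active=True)

cities = City.objects.annotate(
    has_active_entities=Exists(active_restaurants),
).filter(has_active_entities=True)
```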
/r/django
https://redd.it/1o6jepy
How to prevent TransactionTestCase from truncating all tables?
For my tests, I copy down the production database, and I use the live server test case (LiveServerTestCase) because my frontend is an SPA, so I need Playwright to drive a browser in the tests.
The challenge is that once the liveserver testcase is done, all my data is blown away, because as the docs tell us, "A TransactionTestCase resets the database after the test runs by truncating all tables."
That's fine for CI, but when testing locally it means I have to keep restoring my database manually. Is there any way to stop it from truncating tables? It seems needlessly annoying that it truncates *all* data!
I tried serialized_rollback=True, but this didn't work. I tried googling around for this, but most of the results I get are folks who are having trouble because their database is *not* reset after a test.
/r/django
https://redd.it/1o6ky68
Tutorial: Building Real-Time WebSocket Apps with Django Channels and ChanX
Hi everyone, I created a hands-on tutorial for learning how to build WebSocket applications with Django Channels using modern best practices. If you're interested in adding real-time features to your Django projects or learning about WebSockets, this might help.
# What You'll Build
The tutorial walks you through building a complete real-time chat application with multiple features:
- Real-time chat functionality with message broadcasting
- AI assistant chat system with streaming responses
- Notification system that updates in real-time
- Background task processing with WebSocket notifications
- Complete testing setup for WebSocket endpoints
# What You'll Learn
Throughout the tutorial, you'll learn:
- Setting up Django Channels with Redis (a minimal config sketch follows this list)
- Creating WebSocket consumers with automatic message routing
- Using Pydantic for type-safe message validation
- Broadcasting messages to groups of users
- Handling channel layer events
- Testing WebSocket consumers properly
- Generating automatic API documentation for WebSockets
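As a taste of the Channels setup the tutorial covers, here is a minimal sketch of the standard Redis channel layer configuration and a group broadcast; the group and event names are placeholders, and ChanX adds its own conveniences on top of this:

```python
# settings.py - standard Django Channels configuration (host/port are placeholders)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("127.0.0.1", 6379)]},
    },
}

# Broadcasting to a group from synchronous code (e.g. a view or a Celery task):
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
async_to_sync(channel_layer.group_send)(
    "chat_room_1",                              # placeholder group name
    {"type": "chat.message", "payload": "hi"},  # "type" maps to a consumer handler
)
```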
# Tutorial Structure
The tutorial uses a Git repository with checkpoints at each major step. This means you can:
- Start from any point if you're already familiar with the basics
- Compare your code with the reference implementation
- Reset to a checkpoint if you get stuck
- See exactly what changes at each step
Tutorial link: https://chanx.readthedocs.io/en/latest/tutorial-django/prerequisites.html
# About ChanX
The tutorial uses ChanX, which is a framework I built on top of Django Channels to
/r/djangolearning
https://redd.it/1o5rgd5
[D] Only 17 days given to review 5 papers for ICLR 2026...
The paper assignments for ICLR 2026 came in today and I was assigned 5 papers to review. The review deadline is 31st October. I am not sure if this is the normal time period, but it seems very little. Last year I was assigned 2 papers and was able to write detailed and constructive reviews.
/r/MachineLearning
https://redd.it/1o6hs2w
Fermi Paradox Flask App
I was going through some older GitHub repos last week and found an early Python program I put together. It was a tutorial from a book, on Fermi's Paradox. As I looked through it, it was TERRIBLE!!! No imports, not modular, completely done in the CLI. It was cool to see how far I have come. So I decided to refactor it into a Flask app. I have it at MVP right now. Still a work in progress, but would love to hear what people think!!
Live App: https://fermi-paradox-project.com/
/r/flask
https://redd.it/1o6j7uy
[D] Why are Monte Carlo methods more popular than Polynomial Chaos Expansion for solving stochastic problems?
I feel like MC methods are king for reinforcement learning and the like, but PCEs are often cited as being more accurate and efficient. Recently, while working on some heavy physics-focused problems, I've found that a lot of the folks in Europe use PCE more. Anyone have any thoughts as to why one is more popular? If you want a fun deep dive, polynomial chaos (or polynomial chaos expansion) has been a fun random-stats rabbit hole.
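For readers who have not met it, PCE represents a random response as a spectral expansion in polynomials orthogonal with respect to the input distribution (a standard textbook formulation, not anything specific to this post):

```latex
% Truncated polynomial chaos expansion of a random output Y = f(\xi)
Y \;\approx\; \sum_{i=0}^{P} c_i \, \Psi_i(\boldsymbol{\xi}),
\qquad
c_i \;=\; \frac{\mathbb{E}\!\left[\, Y \, \Psi_i(\boldsymbol{\xi}) \,\right]}
               {\mathbb{E}\!\left[\, \Psi_i(\boldsymbol{\xi})^{2} \,\right]}
```

Here the Ψ_i are polynomials matched to the distribution of the random inputs ξ (Hermite for Gaussian inputs, Legendre for uniform ones), so once the deterministic coefficients c_i are computed, moments and sensitivities follow analytically instead of being estimated by brute-force Monte Carlo sampling.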
/r/MachineLearning
https://redd.it/1o62zfe
Starting a new project with flask_security and flask_sqlalchemy_lite
I have done several Flask projects in the past, so I am not a rookie. I recently started a new project that requires role-based access control with fine-grained permissions, so I naturally thought about using flask_security now that it is a Pallets project. I am also planning to use flask_sqlalchemy_lite (NOT flask_sqlalchemy). I've built some parts of it, but when I went to build tests I could not get them to work, so I went looking on GitHub for examples of real-world applications that use flask_security with roles, and I found precisely none. I spent an hour or so trying to get Copilot to construct some tests, and it was completely confused by the documentation for flask_sqlalchemy and flask_sqlalchemy_lite, so it kept recommending code that doesn't work. The complete lack of training data is probably the problem here, along with the confusingly close but incompatible APIs.
This has caused me to question my decision to use Flask at all, since the support libraries for security and database are so poorly documented and apparently have no serious apps that use them. I'm now thinking of going with Django instead. Does anyone know of a real-world example that uses the combination of
/r/flask
https://redd.it/1o6w7e6
Python 3.15 Alpha Released
https://docs.python.org/3.15/whatsnew/3.15.html
Summary – Release highlights
- [PEP 799](https://peps.python.org/pep-0799/): [A dedicated profiling package for organizing Python profiling tools](https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-sampling-profiler)
- **PEP 686**: Python now uses UTF-8 as the default encoding (see the snippet after this list)
- [PEP 782](https://peps.python.org/pep-0782/): [A new PyBytesWriter C API to create a Python bytes object](https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-pep782)
- Improved error messages
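A quick illustration of what the PEP 686 change means in practice (a sketch; the filename is arbitrary):

```python
# On Python 3.15, text-mode open() defaults to UTF-8 on every platform,
# instead of the locale's encoding (previously e.g. cp1252 on Windows).
with open("notes.txt", "w") as f:      # same as encoding="utf-8"
    f.write("héllo")

with open("notes.txt") as f:           # reads back correctly everywhere
    print(f.read())
```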
/r/Python
https://redd.it/1o6jivj
Which frontend framework suits Django best?
Simple as that. Which frontend framework is it best to pair Django with? I know plain HTML, CSS and JS, and I think it's best for me to learn a framework, both for getting a job and for being “better”.
/r/djangolearning
https://redd.it/1o747ht
[P] Nanonets-OCR2: An Open-Source Image-to-Markdown Model with LaTeX, Tables, flowcharts, handwritten docs, checkboxes & More
We're excited to share Nanonets-OCR2, a state-of-the-art suite of models designed for advanced image-to-markdown conversion and Visual Question Answering (VQA).
🔍 Key Features:
- LaTeX Equation Recognition: Automatically converts mathematical equations and formulas into properly formatted LaTeX syntax. It distinguishes between inline (`$...$`) and display (`$$...$$`) equations.
- Intelligent Image Description: Describes images within documents using structured `<img>` tags, making them digestible for LLM processing. It can describe various image types, including logos, charts, graphs and so on, detailing their content, style, and context.
- Signature Detection & Isolation: Identifies and isolates signatures from other text, outputting them within a `<signature>` tag. This is crucial for processing legal and business documents.
- Watermark Extraction: Detects and extracts watermark text from documents, placing it within a `<watermark>` tag.
- Smart Checkbox Handling: Converts form checkboxes and radio buttons into standardized Unicode symbols (`☐`, `☑`, `☒`) for consistent and reliable processing.
- Complex Table Extraction: Accurately extracts complex tables from documents and converts them into both markdown and HTML table formats.
- Flow charts & Organisational charts: Extracts flow charts and organisational charts as [mermaid](https://huggingface.co/nanonets/Nanonets-OCR2-1.5B-exp/blob/main/mermaid.js.org) code.
- Handwritten Documents: The model is trained on handwritten documents across multiple languages.
- Multilingual: The model is trained on documents in multiple languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Arabic, and many more.
- Visual Question Answering (VQA): The model is designed to provide
/r/MachineLearning
https://redd.it/1o7160j
GIL-free Python and thread safety
For the free-threaded (GIL-free) build of Python 3.14 to be usable, shouldn't Python libraries (or the underlying C infrastructure) also be rewritten to become thread-safe?
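For context on why thread safety comes up, here is a minimal sketch of the kind of shared-state race that pure-Python code (and C extensions) must guard against once threads genuinely run in parallel; nothing here is specific to any particular library:

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        # Without the lock, the read-modify-write of "counter += 1" can
        # interleave across threads and lose updates under free threading.
        with lock:
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock held
```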
/r/Python
https://redd.it/1o71ejn
Steps to update Django?
Hi all, I have a Django project that I worked on from 2022 to 2023. It's Django version 4.1 and has about 30+ packages that I haven't updated since 2023.
I'm thinking of updating it to Django 5.2, and maybe even Django 6 in December.
Looking through it, there are a lot of older dependencies, like django-allauth at version 0.51.0 while version 65.0.0 is out now, etc.
I just updated my Python version to 3.13, and now I'm going through all the dependencies to see if I still need them.
How do you normally approach a Django update? Do you update the Django version first, and then go through all your packages one by one to make sure everything is still compatible? Do you use something like this auto-update library? https://django-upgrade.readthedocs.io/en/latest/
Am I supposed to first update Django from 4.1 --> 5.2 --> 6?
All experiences/opinions/suggestions/tips welcome! Thanks in advance!
/r/django
https://redd.it/1o733qz
Recommending prek - the necessary Rust rewrite of pre-commit
Hi peeps,
I want to recommend the tool prek to all of you. It is a Rust rewrite of the established Python tool pre-commit, which is widely used. pre-commit is a great tool, but it suffers from several limitations:
1. It's pretty slow (although it's surprisingly fast for being written in Python)
2. The maintainer (asottile) has made it very clear that he is not willing to introduce monorepo support or other advanced features (e.g. parallelization) asked for over the years
I have been following this project (now called prek) from its inception, and it has evolved both very fast and very well. I am now using it across multiple projects, e.g. in Kreuzberg, both locally and in CI, and it brings at least a 10x speed improvement (linting and autoupdate commands!)
So, I warmly recommend this tool, and do show your support for prek by giving it a star!
/r/Python
https://redd.it/1o77mip
Built a Tool to Sync GitHub Issues to Linear – Feedback Welcome!
Hey everyone,
Target Audience: Useful for technical support engineers, dev leads, or anyone managing projects via GitHub and Linear.
What my project does
I’ve built a tool that automatically syncs GitHub issues into Linear tickets. The idea is to reduce the manual overhead of copy-pasting or re-creating issues across platforms, especially when you're using GitHub for external collaboration (e.g., open source, customer bug reports) and Linear for internal planning and prioritization.
You can find it here:
🔗 https://github.com/olaaustine/github-issues-linear
The README is fairly detailed and should help you get it running quickly — it's currently packaged as a customizable Docker container, so setup should be straightforward if you’re familiar with containers.
🧪 Status:
The project is still in early development, so it’s very much a WIP. But it works, and I’m actively iterating on it. The goal is to make it reliable enough for daily use and eventually extend support to other issue trackers beyond Linear.
I’d really appreciate any thoughts or ideas – even if it’s just a quick reaction. Thanks!
/r/Python
https://redd.it/1o77t5h
Please suggest interesting features for this "generic" movie lookup MVP
Hello all,
I just finished building a small hobby project called LetsDiscussMoviez — a minimal web app where you can look up movies and view basic ratings/data (IMDb, Rotten Tomatoes, etc.). It’s currently very generic in functionality — you can browse and view movies, but that’s about it.
Now I need your help:
Instead of turning it into “just another IMDb clone”, I want to add one or two unique, fun or useful features that make it worth visiting regularly.
So — what would you love to see in a movie lookup site?
Some half-baked ideas I’m considering:
- “Recommend me a movie like ___ but ___” (mashup-style filters)
- Discussion threads under each movie
- “People who loved this also hated that” - reverse recommendations, maybe?
- AI-generated summaries / trivia / character breakdowns
- Polls like “Better ending: Fight Club vs Se7en?”
Question for you:
What feature would make you bookmark this site or come back often?
Could be fun, social, niche, or even chaotic — I’m open to weird ideas.
Appreciate any feedback!
/r/django
https://redd.it/1o78vso
Is there an out-of-the-box django-allauth with a beautiful frontend?
I’ve been working on a project with django-allauth for several weeks. It provides an easy way to integrate with third-party OAuth 2 providers. I’ve finished beautifying some of the templates, like login and signup, but it seems like there are still a few I should work on even though I won’t use all of them.
Is there a way to block some of its URLs, like the inactive-user ones?
Or is there a batteries-included package with a sleek style for all the templates?
/r/django
https://redd.it/1o6xh5p