Saturday Daily Thread: Resource Request and Sharing!
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators - A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1m9ezt4
YouTube
Data Structures and Algorithms in Python - Full Course for Beginners
A beginner-friendly introduction to common data structures (linked lists, stacks, queues, graphs) and algorithms (search, sorting, recursion, dynamic programming) in Python. This course will help you prepare for coding interviews and assessments.
Stop trying to catch exceptions when it's OK to let your program crash
Just found this garbage in our prod code:

```python
except Exception as e:
    logger.error(json.dumps({"reason": "something unexpected happened", "exception": str(e)}))
    return False
```
This is in an AWS Lambda that runs as the authorizer in API Gateway. Simply letting the Lambda crash would be an automatic rejection, which is the desired behavior.
But now the error is obfuscated, and I have to modify and rebuild to include more information so I can actually figure out what is going on. And for what? What benefit does catching this exception give? Nothing. Just logging that something unexpected happened. Wow, great.
And now I don't get to glance at Lambda failures to see if issues are occurring. Now I have to add more assert statements to make sure that a test success is an actual success. Cringe.
Stop doing this. Let your program crash.
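A pattern that keeps the failure visible is to catch only exceptions you can say something specific about, log, and re-raise so the invocation still fails and API Gateway still rejects. A minimal sketch (`authorize` and its `KeyError` are hypothetical stand-ins, not the actual prod code):

```python
import json
import logging

logger = logging.getLogger(__name__)

def authorize(event):
    # Hypothetical stand-in for the real authorizer logic.
    return {"principalId": event["sub"]}

def handler(event, context=None):
    try:
        return authorize(event)
    except KeyError as exc:
        # Log something specific, then re-raise: the invocation still
        # fails, so API Gateway still rejects the request automatically.
        logger.error(json.dumps({"reason": "missing claim", "key": str(exc)}))
        raise
```

The `raise` is the point: you keep the structured log line without swallowing the failure signal that the Lambda error metrics rely on.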
/r/Python
https://redd.it/1m96wmi
is this a bad start
After seeing an ad for a website that claims to create apps using AI, I gave it a try. But the result wasn’t what I wanted, so I downloaded the full code (Python) and ran it locally.
At first, I had no idea what I was doing. I used ChatGPT to help me make changes, but I ran into many issues and errors. Still, over time I started to understand things like file paths, libraries, and how the code was structured.
Eventually, I got used to the workflow: give the code to AI, get suggestions, and apply them locally. This process made me curious, so I decided to start learning Python from scratch. Surprisingly, it’s not as hard as I thought.
What do you think about this approach? Any tips or advice for someone going down this path?
/r/flask
https://redd.it/1m74k3m
Flask Web Development
Guys, I would like to have some suggestions from you regarding topics that you would like me to explore in Flask India Blogs. This is my small contribution to giving back to the community.
/r/flask
https://redd.it/1m9pbk5
Questions about Django Security in 2025 (Django 5.1.x+)
Hello. Over the past few months I've gotten more and more paranoid with data/network security and I've been working on locking down my digital life (even made an ethernet kill switch for a few machines). I've been working with django for a few years now and I'd like to bump up my security protocols for my live and public instances, but have a few questions before I do too much work.
1. There is a library out there called django-defender that I recently learned about (link); the last release was in 2024. This library basically makes it so malicious actors can't brute-force a login to the admin dashboard: after X attempts, it locks the account. The idea sounds intriguing to me, but it's been over a year since the last release, and I was wondering if anyone has used this with Django 5.1, and if this library is even relevant now in mid-2025? If not, are there any alternatives that you have worked with that get the job done?
2. I recently got 2 Yubikeys (one for backup), and I would really like to learn how to do FIDO2/U2F to add another layer of
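The lockout behavior described in item 1 is mostly bookkeeping; a toy sketch of the idea (not django-defender's actual implementation, which persists counters in Redis so they survive across processes):

```python
import time
from collections import defaultdict

class LoginThrottle:
    """Toy lockout: block a username after `limit` failures within `window` seconds."""

    def __init__(self, limit=5, window=300, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.failures = defaultdict(list)

    def record_failure(self, username):
        self.failures[username].append(self.clock())

    def is_locked(self, username):
        cutoff = self.clock() - self.window
        # Drop failures older than the window, then compare to the limit.
        recent = [t for t in self.failures[username] if t >= cutoff]
        self.failures[username] = recent
        return len(recent) >= self.limit
```

An in-process dict like this resets on every deploy and doesn't work across multiple workers, which is exactly why the real library uses a shared store.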
/r/django
https://redd.it/1m8xuqn
GitHub: jazzband/django-defender - a simple, fast, reusable Django app that blocks brute-force login attempts.
Using Django Float fields vs Decimal/Integer fields
I saw a thread that I couldn’t comment on and thought someone may need this knowledge in the future.
People in past threads argued that they didn't see any benefit to using float fields.
I’ve written extremely long calculation functions that I use to perform some inverse kinematics on earthmoving machinery components.
Imagine an ExcavatorBoom model with dimension fields like x_a, y_a, x_b etc.
I have a property called "matrix" that uses numpy to build a coordinate matrix (a numpy array) from the input coordinates. The problem was that I had to convert each and every field to a float.
I initially used decimal fields for the dimensions, masses and everything else really because in the 3 years that I have been coding, it never occurred to me to look up if float fields even existed in Django. Extreme tunnel vision…
So within each calculation, I needed to convert every single input to a float (I counted over 135 conversions per calculation).
This means testing my calcs took 4-5 days of debugging.
So I ended up converting all decimal and integer fields to float fields and deleted all float conversions in my calculation methods. This made my code infinitely cleaner and easier
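The friction comes from `Decimal` refusing to mix with binary floats; a minimal illustration (the field name is hypothetical):

```python
from decimal import Decimal

x_a = Decimal("1500.5")      # what a DecimalField hands back
try:
    x_a * 9.81               # Decimal * float raises TypeError
    raised = False
except TypeError:
    raised = True

# So every DecimalField input to a numpy/float calculation needs this:
force = float(x_a) * 9.81

# A FloatField hands back a plain float, so no conversion is needed:
x_a_float = 1500.5
force_from_float = x_a_float * 9.81
```

The tradeoff: floats give up exact decimal representation, so `DecimalField` remains the right choice for money-like values; for physical quantities fed into numpy, `FloatField` avoids the conversion tax.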
/r/django
https://redd.it/1m9vv6x
First repository (Appointment booking system)
https://github.com/AtharvaManale/Appointment-Booking
/r/flask
https://redd.it/1m9v9c7
GitHub: AtharvaManale/Appointment-Booking-System - an appointment booking system for the Smartvolt company webpage.
Erys: A Terminal Interface for Jupyter Notebooks
I recently built a TUI tool called Erys that lets you open, edit, and run Jupyter Notebooks entirely from the terminal. It came out of the frustration of having to open a GUI just to comfortably interact with and edit notebook files. Given the impressive rendering capabilities of modern terminals and Textualize.io's Textual library, which helps build great, interactive terminal UIs, I decided to build Erys.
What My Project Does
Erys is a TUI for editing, executing, and interacting with Jupyter Notebooks directly from your terminal. It uses the Textual library for creating the interface and `jupyter_client` for managing Python kernels. Some cool features are:
- Interactive cell manipulation: split, merge, move, collapse, and change cell types.
- Syntax highlighting for Python, Markdown, and more.
- Background code cell execution.
- Markup rendering of ANSI-escaped text output, resulting in pretty error messages, JSON, and more.
- Markdown cell rendering.
- Rendering image and HTML output from code cell execution using Pillow and the `webbrowser` module.
- Works as a lightweight editor for source code and text files.
Code execution uses the Python environment in which Erys is launched and requires `ipykernel` to be installed.
In the future, I would like to add code completion
/r/Python
https://redd.it/1ma0852
[P] Sub-millisecond GPU Task Queue: Optimized CUDA Kernels for Small-Batch ML Inference on GTX 1650
Over the past month, I’ve been working on writing high-throughput, low-latency CUDA kernels for small-batch inference workloads typical in real-time ML use cases (e.g., finance, RL serving).
Despite running on a GTX 1650 (consumer laptop GPU), I achieved:
93,563 ops/sec
0.011 ms median latency
7.3× speedup over PyTorch (float32 GEMV)
30–40% faster than cuBLAS batched GEMV (in small-batch regime)
This was done by hand-optimizing a set of three core kernels:
Batched GEMV
Softmax
Vector elementwise ops (e.g., affine transforms)
# Engineering Highlights:
float4 vectorization with proper alignment checks
128-byte staged shared memory blocks (with padding for bank-conflict mitigation)
Thread-per-output-element grid strategy
Aggressive loop unrolling and warp-aware memory access
Benchmarked with CUDA events, median+IQR over 1,000 trials
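The median+IQR methodology transfers to any timing harness; a CPU-side analogue of the CUDA-events measurement (the real benchmark uses CUDA events on-device, this sketch uses `time.perf_counter`):

```python
import statistics
import time

def bench(fn, trials=1000):
    # Time each call in milliseconds, then report median and IQR so a few
    # outlier runs (page faults, clock ramp-up) don't skew the result.
    times = []
    for _ in range(trials):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1e3)
    q1, med, q3 = statistics.quantiles(times, n=4)
    return {"median_ms": med, "iqr_ms": q3 - q1}
```

Reporting median rather than mean matters at these scales: a single scheduler hiccup can be 100x the median latency and would dominate an average.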
# Why it matters:
cuBLAS (and by extension PyTorch) is heavily tuned for large-batch throughput, but small-batch latency suffers. For real-time systems (e.g., financial models or reinforcement learning), this is a major bottleneck.
This kernel suite shows that even with modest hardware, you can cut inference latency significantly below PyTorch/cuBLAS levels through architecture-aware programming.
# Links:
[GitHub source & benchmark code](https://github.com/shreshthkapai/cuda_latency_benchmark)
Full write-up on Medium
Would love to hear feedback from others doing similar work—especially around kernel tuning strategies, warp divergence handling, and memory hierarchy tradeoffs.
/r/MachineLearning
https://redd.it/1m9vauo
GitHub: shreshthkapai/cuda_latency_benchmark - high-performance CUDA kernels for real-time, low-latency financial inference, optimized for both consumer and datacenter GPUs.
Polylith: a Monorepo Architecture
Project name: The Python tools for the Polylith Architecture
# What My Project Does
The main use case is to support Microservices (or apps) in a Monorepo and easily share code between the services. You can use Polylith with uv, Poetry, Hatch, Pixi, or any of your favorite packaging & dependency management tools.
Polylith is an Architecture with tooling support. The architecture is about writing small & reusable Python components - building blocks - that are very much like LEGO bricks. Features are built by composing bricks. It’s really simple. The tooling adds visualization of the Monorepo, templating for creating new bricks and CI-specific features (such as determining which services to deploy when code has changed).
# Target Audience
Python developer teams that develop and maintain services using a Microservice setup.
# Comparison
There are similar solutions, such as uv workspaces or the Pants build system. Polylith adds the Architecture and Organization of a Monorepo. All code in a Polylith setup - yes, all Python code - is available for reuse. All code lives in the same virtual environment. This means you have one set of linting and typing rules, and run all code with the same versions of dependencies.
This fits very well with REPL Driven Development and interactive Notebooks.
Recently,
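The CI-specific feature mentioned above (determining which services to deploy when code has changed) boils down to a reverse lookup from changed paths to projects. A rough sketch of the idea, not Polylith's actual tooling; the `components/<brick>/...` layout and the project-to-brick mapping are assumptions for illustration:

```python
def affected_projects(changed_files, project_bricks):
    # project_bricks maps each project to the set of bricks it composes,
    # e.g. {"api_service": {"auth", "db"}, "worker": {"db", "queue"}}.
    affected = set()
    for path in changed_files:
        parts = path.split("/")
        if len(parts) >= 2 and parts[0] == "components":
            brick = parts[1]
            affected.update(p for p, bricks in project_bricks.items() if brick in bricks)
    return affected
```

With a mapping like that, a CI job can diff the commit range, feed the changed paths in, and deploy only the services whose bricks actually changed.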
/r/Python
https://redd.it/1m9so08
Sunday Daily Thread: What's everyone working on this week?
# Weekly Thread: What's Everyone Working On This Week? 🛠️
Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!
## How it Works:
1. Show & Tell: Share your current projects, completed works, or future ideas.
2. Discuss: Get feedback, find collaborators, or just chat about your project.
3. Inspire: Your project might inspire someone else, just as you might get inspired here.
## Guidelines:
Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.
## Example Shares:
1. Machine Learning Model: Working on an ML model to predict stock prices. Just cracked a 90% accuracy rate!
2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!
Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟
/r/Python
https://redd.it/1ma85ub
CSRF cookie set but not sent with POST request in frontend (works with curl)
Hey everyone,
I'm stuck with a frustrating CSRF issue and could really use some help. This has been bugging me for two days straight.
### 🧱 Project Setup
- **Backend** (Django, running locally at `localhost:8000` and exposed via Ngrok):
```
https://0394b903a90d.ngrok-free.app/
```
- **Frontend** (Vite/React, running on a different machine at `localhost:5173` and also exposed via Ngrok):
```
https://6226c43205c9.ngrok-free.app/
```
---
### ✅ What’s Working
1. **CSRF GET request from frontend**:
- Frontend sends a request to:
`https://0394b903a90d.ngrok-free.app/api/accounts/csrf/`
- Response includes:
```
set-cookie: csrftoken=CSsCzLxxuYy2Nn4xq0Dabrg0aZdtYShy; expires=...; SameSite=None; Secure
```
- The cookie **shows up in the network tab**, but it is not accessible via JavaScript: `document.cookie` only exposes cookies for the page's own origin, and this cookie belongs to the backend's Ngrok domain (it is not HttpOnly).
- Backend view:
```python
def get_csrf_token(request):
    allow_all = getattr(settings, 'CORS_ALLOW_ALL_ORIGINS', 'NOT_FOUND')
    allowed_list = getattr(settings, 'CORS_ALLOWED_ORIGINS', 'NOT_FOUND')
    return JsonResponse({
```
/r/django
https://redd.it/1m9y71m
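For reference, the pieces a cross-origin POST has to carry back are small; a minimal helper (the endpoint path and cookie name follow the post above, the checks described are standard Django CSRF behavior):

```python
def csrf_headers(cookies, referer):
    # Django's CsrfViewMiddleware compares the csrftoken cookie against the
    # X-CSRFToken header, and on HTTPS also checks Origin/Referer against
    # the host and CSRF_TRUSTED_ORIGINS.
    token = cookies.get("csrftoken")
    if not token:
        raise ValueError("GET /api/accounts/csrf/ first so the cookie is set")
    return {"X-CSRFToken": token, "Referer": referer}
```

The other common culprit in "works with curl, fails in the browser" setups: the frontend must send the request with credentials included (`credentials: 'include'` in fetch), otherwise the browser never attaches the cross-site cookie at all.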
Python 3.14: time for a release name?
I know we don't have release names, but if it's not called "Pi-thon" it's gonna be such a missed opportunity. There will only be one version 3.14 ever...
/r/Python
https://redd.it/1ma6dbd
Karaoke maker python project
Hi,
I tried some of the karaoke video makers, but from what I've seen, they use speech-to-text to time the lyrics. However, I am lazy and wondered why we can't just use the already-timed lyrics on musixmatch and lrclib. The only drawback is that most of them are timed per line as opposed to per word, but that was an okay compromise for me.
So I (vibe) coded this simple python workflow that takes everything from a search query or youtube url to a karaoke video. It goes like this:
search term or url -> download mp3 -> split vocals/instrumental using nomadkaraoke/python-audio-separator -> get synced lyrics using moehmeni/syncedlyrics -> convert to subtitles -> burn subtitles over the instrumental for the final video
here's the project: el-tahir/karaoke, and here is an example of the generated video: https://youtu.be/vKunrdRmMCE?si=xsyavSAVk43t5GnB
I would love some feedback, especially from experienced devs!!
What My Project Does:
creates karaoke videos from a search term or youtube url.
Target Audience:
just a toy project
Comparison:
Instead of trying to use speech-to-text to time lyrics, it uses already synced lyrics from sources like musixmatch and lrclib.
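The "convert to subtitles" step in the pipeline above is the only part that needs no external service; a rough LRC-to-SRT sketch (not the project's actual code, and the 3-second duration assigned to the final line is an arbitrary assumption):

```python
import re

def lrc_to_srt(lrc):
    # Parse "[mm:ss.xx] text" LRC lines into (seconds, text) pairs.
    entries = []
    for line in lrc.splitlines():
        m = re.match(r"\[(\d+):(\d+(?:\.\d+)?)\](.*)", line)
        if m:
            entries.append((int(m.group(1)) * 60 + float(m.group(2)), m.group(3).strip()))

    def fmt(t):
        # SRT timestamps look like 00:00:01,500 (comma before milliseconds).
        ms = int(round((t - int(t)) * 1000))
        h, rem = divmod(int(t), 3600)
        mnt, s = divmod(rem, 60)
        return f"{h:02}:{mnt:02}:{s:02},{ms:03}"

    cues = []
    for i, (start, text) in enumerate(entries):
        # Each cue ends where the next begins; the last one gets 3 seconds.
        end = entries[i + 1][0] if i + 1 < len(entries) else start + 3.0
        cues.append(f"{i + 1}\n{fmt(start)} --> {fmt(end)}\n{text}\n")
    return "\n".join(cues)
```

The resulting `.srt` can then be burned over the instrumental track with ffmpeg's subtitle filter.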
/r/Python
https://redd.it/1m9zwfh
GitHub: nomadkaraoke/python-audio-separator - easy-to-use stem separation (e.g. instrumental/vocals) from the CLI or as a Python package, using a variety of pre-trained models (primarily from UVR).
Speech-to-speech conversational agent
Has anyone been able to build a conversational AI app? I'm looking for affordable speech-to-speech APIs. I came across Hume AI's EVI 3 APIs, but it's been frustrating to say the least, as I haven't been successful. I also implemented Deepgram for transcripts, then sent them to OpenAI for a text response, and then used OpenAI text-to-speech, but I'm looking for an affordable speech-to-speech workflow. OpenAI's conversational APIs are expensive, so anything other than that. Any suggestions? Django integration is what's needed. Thanks.
/r/django
https://redd.it/1madeki
I would like to integrate my cookiecutter django with my vite+react+tanstackrouter frontend.
Is there a way to do it cleanly? I think allauth complicates things a lot, but I've only recently started using cookiecutter django. How do I configure it to use JWT?
/r/django
https://redd.it/1machm5
Save form data with a foreign key added?
I have a model, `Division`, which is one section of a `Tournament` and is created via `Division(tournament=tournament, name=name)`. I want to add divisions to a tournament via a form embedded in the tournament detail view (`Add division: ____ [submit]`), so the `AddDivisionForm` has a single field for the division name.
I'm having trouble figuring out how I retrieve the parent tournament when the form is submitted (the `???` in the code below), i.e. how I pass the tournament id between the `get_context_data` and `post` calls:
```python
class TournamentDetailView(TemplateView):
    template_name = "director/tournament_detail.html"

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        tournament = Tournament.objects.get(pk=context["pk"])
        context["object"] = tournament
        context["form"] = AddDivisionForm()
        return context

    def post(self, request, *args, **kwargs):
        form = AddDivisionForm(request.POST)
        if form.is_valid():
            name = form.cleaned_data["name"]
```
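One way to recover the parent tournament in `post` without threading it through the context: Django class-based views store the URL-captured kwargs on `self.kwargs`, available in every method. A sketch under the assumption that the URL pattern captures `pk` (e.g. `path("tournament/<int:pk>/", ...)`):

```python
from django.shortcuts import redirect

class TournamentDetailView(TemplateView):
    # ... template_name and get_context_data as above ...

    def post(self, request, *args, **kwargs):
        # self.kwargs holds the values captured by the URL pattern
        tournament = Tournament.objects.get(pk=self.kwargs["pk"])
        form = AddDivisionForm(request.POST)
        if form.is_valid():
            name = form.cleaned_data["name"]
            Division.objects.create(tournament=tournament, name=name)
        # redirect back to the detail page after a successful POST
        return redirect(request.path)
```

The same `self.kwargs["pk"]` lookup would also work in `get_context_data`, so both methods can resolve the tournament independently.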
/r/django
https://redd.it/1m9hhj9
Seeking AI Video Translation English-to-German (Dubbing/Voiceover) – Approx. 5400 Min./Month – Cost-Efficient Solutions & GitHub Projects?
Hello, dear community!
I'm looking for effective, cost-efficient AI tools for translating video from English to German. I need roughly 5,400 minutes of video per month, primarily for dubbing/voiceover.
Which commercial providers would you recommend at this volume, taking price-performance ratio and pricing models into account?
Are there also any interesting open-source projects on GitHub in this direction that could be used or adapted at such a volume? Perhaps solutions one could self-host?
I'm also open to project ideas where such tools could be put to effective use at this scale.
Looking forward to your recommendations and insights!
/r/Python
https://redd.it/1malx1e
Any good pygame tutorials?
I really need a short, clear Pygame tutorial. Watched Clear Code, but his explanations feel too long and I forget details. Any recommendations?
/r/Python
https://redd.it/1ma72k4
📊 Check Out djangokpi: A Work-in-Progress KPI Management Package for Django!
Hey everyone! 👋
I'm excited to share my ongoing project, **djangokpi**, a Django package designed for creating, tracking, and managing Key Performance Indicators (KPIs) in your projects.
### Current Status:
While the package is still under active development and not yet ready for production use, I’m thrilled to announce that the KPI cards API is ready for preview!
### Features (WIP):
- Define Custom KPIs: Tailor KPIs to fit your project's needs.
- Track Performance Over Time: Monitor KPI evolution (in progress).
- Flexible Configuration: Easy integration into existing Django projects.
- Django Admin Support: Manage KPIs via the Django admin interface or API.
### Preview the KPI Cards:
Check out the API for KPI cards and see how it can enhance your project!
### Installation:
To install, use pip:

    pip install django_kpi

Add it to your INSTALLED_APPS and include the URLs in your project!
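That wiring might look like the following; the app label `django_kpi` matches the repo name, but the URL module path is an assumption (check the repository's README for the exact names):

```python
# settings.py
INSTALLED_APPS = [
    # ...
    "django_kpi",  # assumed app label, matching the package name
]

# urls.py
from django.urls import include, path

urlpatterns = [
    path("kpi/", include("django_kpi.urls")),  # assumed URL module
]
```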
### Contribution:
I'm looking for contributors! If you're interested, please submit a pull request or open an issue with your ideas.
Check it out on GitHub and let me know your thoughts! Any feedback is appreciated as I work to improve it!
Thanks! 😊
/r/django
https://redd.it/1maozo5
GitHub: M97Chahboun/django_kpi, a Django package designed to create flexible Key Performance Indicators (KPIs) for your projects: define, track, and manage KPIs with ease.