What's a good host for Django now?
I was planning to use Heroku because I thought it was free, but it was not. Are there any good free hosting options for Django websites right now (pros and cons would be great too)? Thank you!
It would also be nice if the suggestions included database hosting.
/r/django
https://redd.it/1ogbye8
[P] Cutting Inference Costs from $46K to $7.5K by Fine-Tuning Qwen-Image-Edit
Wanted to share some learnings we had optimizing and deploying Qwen-Image-Edit at scale to replace Nano-Banana. The goal was to generate a product catalogue of 1.2m images, which would have cost $46k with Nano-Banana or GPT-Image-Edit.
Qwen-Image-Edit being Apache 2.0 allows you to fine-tune it and apply a few tricks, like compilation, a lightning LoRA, and quantization, to cut costs.
The base model takes ~15 s to generate an image, which would mean 1,200,000 × 15 / 3600 = 5,000 compute hours.
Compiling the PyTorch graph and applying a lightning LoRA cut inference down to ~4 s per image, which resulted in ~1,333 compute hours.
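Those figures are easy to sanity-check; a quick sketch (the 15 s and 4 s latencies are from the post, the rest is plain arithmetic):

```python
# Back-of-the-envelope GPU budget for the 1.2M-image catalogue.
IMAGES = 1_200_000

def compute_hours(seconds_per_image: float, images: int = IMAGES) -> float:
    """Total compute hours at a given per-image latency."""
    return images * seconds_per_image / 3600

base = compute_hours(15)   # base model: 5,000 h
tuned = compute_hours(4)   # compiled + lightning LoRA: ~1,333 h
print(f"base: {base:.0f} h, optimized: {tuned:.0f} h")
```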
I'm a big fan of open source models, so wanted to share the details in case it inspires you to own your own weights in the future.
https://www.oxen.ai/blog/how-we-cut-inference-costs-from-46k-to-7-5k-fine-tuning-qwen-image-edit
/r/MachineLearning
https://redd.it/1ogud3r
Looking for a mentor for a Python + Vue project (paid)
I've taken on an academic website project:
PostgreSQL database
+Django
+Vue
Frontend, backend, database: the code is basically all there.
The problem is this is my first time doing full-stack work, and I have no experience.
I can't tell what's wrong in the routing and frontend/backend configuration, and fetching data from the database fails.
Looking for an experienced mentor for online guidance.
Details to be discussed.
WeChat: G_L_M_H
/r/django
https://redd.it/1ogl0xt
Monday Daily Thread: Project ideas!
# Weekly Thread: Project Ideas 💡
Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.
## How it Works:
1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.
## Guidelines:
* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.
## Example Submissions:
## Project Idea: Chatbot
**Difficulty**: Intermediate
**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar
**Description**: Create a chatbot that can answer FAQs for a website.
**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)
## Project Idea: Weather Dashboard
**Difficulty**: Beginner
**Tech Stack**: HTML, CSS, JavaScript, API
**Description**: Build a dashboard that displays real-time weather information using a weather API.
**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)
## Project Idea: File Organizer
**Difficulty**: Beginner
**Tech Stack**: Python, File I/O
**Description**: Create a script that organizes files in a directory into sub-folders based on file type.
**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)
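The file-organizer idea above fits in a few lines; one possible take (folder-per-extension, just a starting point):

```python
from pathlib import Path
import shutil

def organize(directory: str) -> None:
    """Move each file into a sub-folder named after its extension."""
    root = Path(directory)
    for path in list(root.iterdir()):  # snapshot first: we create folders below
        if path.is_file():
            ext = path.suffix.lstrip(".").lower() or "no_extension"
            dest = root / ext
            dest.mkdir(exist_ok=True)
            shutil.move(str(path), str(dest / path.name))
```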
Let's help each other grow. Happy
/r/Python
https://redd.it/1ogzye9
Meta: Limiting project posts to a single day of the week?
Given that this subreddit is currently being overrun by "here's my new project" posts (with varying levels of LLM involvement), would it be a good idea to move all those posts to a single day? (Similar to what other subreddits do with Show-off Saturdays, for example.)
It'd greatly reduce the noise during the week, and actual content and interesting posts might get decent attention instead of drowning in the constant stream of projects.
Currently the last eight posts under "New" on this subreddit are about projects, before the post about backwards compatibility in libraries, a post that actually created a good discussion and presented a different viewpoint.
A quick guess seems to be that currently at least 80-85% of all posts are of the type "here's my new project".
/r/Python
https://redd.it/1oguael
ttkbootstrap-icons 2.0 supports 8 new icon sets! material, font-awesome, remix, fluent, etc...
I'm excited to announce that ttkbootstrap-icons 2.0 has been released and now supports 8 new icon sets.
The icon sets are extensions and can be installed as needed for your project. Bootstrap icons are included by default, but you can now install the following icon providers:
```
pip install ttkbootstrap-icons-fa       # Font Awesome (Free)
pip install ttkbootstrap-icons-fluent   # Fluent System Icons
pip install ttkbootstrap-icons-gmi      # Google Material Icons
pip install ttkbootstrap-icons-ion      # Ionicons v2 (font)
pip install ttkbootstrap-icons-lucide   # Lucide Icons
pip install ttkbootstrap-icons-mat      # Material Design Icons (MDI)
pip install ttkbootstrap-icons-remix    # Remix Icon
pip install ttkbootstrap-icons-simple   # Simple Icons (community font)
pip install ttkbootstrap-icons-weather  # Weather Icons
```
After installing, run `ttkbootstrap-icons` from your command line and you can preview and search for icons in any installed icon provider.
israel-dryer/ttkbootstrap-icons: Font-based icons for Tkinter/ttkbootstrap with a built-in Bootstrap set and installable providers
/r/Python
https://redd.it/1oh3x1p
[D] Google PhD Fellowship recipients 2025
Google has just announced the 2025 recipients.
What are the criteria to get this fellowship?
https://research.google/programs-and-events/phd-fellowship/recipients/
/r/MachineLearning
https://redd.it/1ogy6z9
Duron - Durable async runtime for Python
Hi r/Python!
I built **Duron**, a lightweight **durable execution runtime** for Python async workflows. It provides replayable execution primitives that can work standalone or serve as building blocks for complex workflow engines.
**GitHub:** [https://github.com/brian14708/duron](https://github.com/brian14708/duron)
## What My Project Does
Duron helps you write Python async workflows that can pause, resume, and continue even after a crash or restart.
It captures and replays async function progress through deterministic logs and pluggable storage backends, allowing consistent recovery and integration with custom workflow systems.
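The log-and-replay mechanism that paragraph describes can be illustrated with a toy sketch (this is NOT Duron's actual API, just the core idea):

```python
# Each step's result is appended to a durable log. On restart, steps that
# already ran are answered from the log instead of re-executed, so the
# workflow resumes exactly where it stopped.
def run_step(log: list, index: int, fn, *args):
    if index < len(log):
        return log[index]        # replayed: return the recorded result
    result = fn(*args)           # first run: execute and record
    log.append(result)
    return result

log = []
a = run_step(log, 0, lambda: 2 + 2)        # executes, records 4
b = run_step(log, 1, lambda x: x * 10, a)  # executes, records 40
# Simulated restart: the same step replays from the log; fn is ignored.
a_again = run_step(log, 0, lambda: -1)
assert (a, b, a_again) == (4, 40, 4)
```

A real runtime additionally persists the log to storage and requires step functions to be deterministic, which is what makes replay consistent.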
## Target Audience
* Embedding simple durable workflows into applications
* Building custom durable execution engines
* Exploring ideas for interactive, durable agents
## Comparison
Compared to temporal.io or restate.dev:
- Focuses purely on Python async runtime, not distributed scheduling or other languages
- Keeps things lightweight and embeddable
- Experimental features: tracing, signals, and streams
---
Still early-stage and experimental — any feedback, thoughts, or contributions are very welcome!
/r/Python
https://redd.it/1ohbuhl
The PSF has withdrawn $1.5 million proposal to US government grant program
In January 2025, the PSF submitted a proposal to the US government National Science Foundation under the Safety, Security, and Privacy of Open Source Ecosystems program to address structural vulnerabilities in Python and PyPI. It was the PSF’s first time applying for government funding, and navigating the intensive process was a steep learning curve for our small team to climb. Seth Larson, PSF Security Developer in Residence, serving as Principal Investigator (PI) with Loren Crary, PSF Deputy Executive Director, as co-PI, led the multi-round proposal writing process as well as the months-long vetting process. We invested our time and effort because we felt the PSF’s work is a strong fit for the program and that the benefit to the community if our proposal were accepted was considerable.
We were honored when, after many months of work, our proposal was recommended for funding, particularly as only 36% of new NSF grant applicants are successful on their first attempt. We became concerned, however, when we were presented with the terms and conditions we would be required to agree to if we accepted the grant. These terms included affirming the statement that we “do not, and will not during the term of this financial
/r/Python
https://redd.it/1ohh6v2
Retry manager for arbitrary code block
There are about two pages of retry decorators on PyPI. I know about them. But I found one case which is not covered by the other retry libraries (correct me if I'm wrong).
I needed to retry an arbitrary block of code, not just a lambda or a function.
So I wrote a library, loopretry, which does this. It combines an iterator with a context manager to wrap any block in retry logic.
```python
from loopretry import retries
import time

for retry in retries(10):
    with retry():
        # any code you want to retry in case of exception
        print(time.time())
        assert int(time.time()) % 10 == 0, "Not a round number!"
```
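For readers curious how an iterator plus a context manager can cooperate like this, here is a minimal sketch of the idea (not the actual loopretry source):

```python
from contextlib import contextmanager

def retries(n: int):
    """Yield up to n attempt context managers; stop early on success."""
    state = {"ok": False, "last_exc": None}

    @contextmanager
    def attempt():
        try:
            yield
            state["ok"] = True       # block finished without raising
        except Exception as exc:     # swallow the failure, remember it
            state["last_exc"] = exc

    for _ in range(n):
        yield attempt
        if state["ok"]:
            return                   # success: the for loop ends here
    raise state["last_exc"]          # every attempt failed: re-raise
```

The generator only learns whether the with-block succeeded after yielding, which is why the two pieces have to be combined: neither an iterator nor a context manager alone can both count attempts and observe the block's outcome.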
Is it a novel approach or not?
Library code (any critique is highly welcome) is on GitHub.
If you want to try it: `pip install loopretry`
/r/Python
https://redd.it/1ohczpq
The State of Django 2025 is here – 4,600+ developers share how they use Django
/r/django
https://redd.it/1ohlesf
Looking for a python course that’s worth it
Hi, I am a BSBA major graduating this semester and have very basic experience with Python. I am looking for a course that's worth it and would give me a solid foundation. Thanks!
/r/Python
https://redd.it/1ohe75v
Lightweight Python Implementation of Shamir's Secret Sharing with Verifiable Shares
Hi r/Python!
I built a lightweight Python library for Shamir's Secret Sharing (SSS), which splits secrets (like keys) into shares, needing only a threshold to reconstruct. It also supports Feldman's Verifiable Secret Sharing to check share validity securely.
What my project does
Basically you have a secret (a password, a key, an access token, an API token, the password for your crypto wallet, a secret formula or recipe, codes for nuclear missiles). You can split the secret into n shares among your friends, coworkers, partner, etc., and reconstructing it requires at least k shares (for example, 5 shares total but at least 3 needed to recover the secret). An impostor holding fewer than k shares learns nothing about the secret: with 2 out of 3 shares, they cannot recover it even with unlimited computing power, unless they can break the discrete log problem, which is infeasible for current computers. And if you choose not to use Feldman's share-verification scheme, the guarantee is information-theoretic: with fewer than k shares, recovering the secret is mathematically impossible, even with unlimited quantum computers.
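For intuition, the whole split/reconstruct cycle fits in a short sketch (illustration only, over a fixed prime field; use an audited library for real secrets):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field must be larger than the secret

def split(secret: int, n: int, k: int):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0), i.e. the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(12345, n=5, k=3)
assert reconstruct(shares[:3]) == 12345  # any 3 of the 5 shares suffice
```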
Features:
Minimal deps (pycryptodome), pure Python.
/r/Python
https://redd.it/1oh8yh4
Python Foundation goes ride or DEI, rejects government grant with strings attached
https://www.theregister.com/2025/10/27/pythonfoundationabandons15mnsf/
> The Python Software Foundation (PSF) has walked away from a $1.5 million government grant and you can blame the Trump administration's war on woke for effectively weakening some open source security.
/r/Python
https://redd.it/1ohqqgc
[R] PKBoost: Gradient boosting that stays accurate under data drift (2% degradation vs XGBoost's 32%)
I've been working on a gradient boosting implementation that handles two problems I kept running into with XGBoost/LightGBM in production:
1. Performance collapse on extreme imbalance (under 1% positive class)
2. Silent degradation when data drifts (sensor drift, behavior changes, etc.)
Key Results
Imbalanced data (Credit Card Fraud - 0.2% positives):
- PKBoost: 87.8% PR-AUC
- LightGBM: 79.3% PR-AUC
- XGBoost: 74.5% PR-AUC
Under realistic drift (gradual covariate shift):
- PKBoost: 86.2% PR-AUC (−2.0% degradation)
- XGBoost: 50.8% PR-AUC (−31.8% degradation)
- LightGBM: 45.6% PR-AUC (−42.5% degradation)
What's Different
The main innovation is using Shannon entropy in the split criterion alongside gradients. Each split maximizes:
Gain = GradientGain + λ·InformationGain
where λ adapts based on class imbalance. This explicitly optimizes for information gain on the minority class instead of just minimizing loss.
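As a sketch of that criterion (notation from the post; PKBoost's actual Rust implementation may differ in details):

```python
import math

def entropy(p: float) -> float:
    """Shannon entropy of a binary label distribution."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gradient_gain(gl, hl, gr, hr, reg=1.0):
    """XGBoost-style gain from left/right gradient and hessian sums."""
    score = lambda g, h: g * g / (h + reg)
    return 0.5 * (score(gl, hl) + score(gr, hr) - score(gl + gr, hl + hr))

def information_gain(p_parent, p_left, w_left, p_right, w_right):
    """Entropy reduction from splitting the parent node."""
    return entropy(p_parent) - (w_left * entropy(p_left) + w_right * entropy(p_right))

def split_gain(grad_part, info_part, lam):
    # lam would be adapted to the class imbalance in PKBoost
    return grad_part + lam * info_part
```

A split that cleanly separates the classes earns a full bit of information gain even when its loss reduction is modest, which is what nudges the trees toward minority-class structure.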
Combined with:
- Quantile-based binning (robust to scale shifts)
- Conservative regularization (prevents overfitting to majority)
- PR-AUC early stopping (focuses on minority performance)
The architecture is inherently more robust to drift without needing online adaptation.
Trade-offs
The good:
- Auto-tunes for your data (no hyperparameter search needed)
- Works out-of-the-box on extreme imbalance
- Comparable inference speed to XGBoost
The honest:
- ~2-4x slower training (45s vs 12s on 170K samples)
- Slightly behind on balanced data (use XGBoost there)
- Built in Rust, so less Python ecosystem integration
Why I'm Sharing
This started as
/r/MachineLearning
https://redd.it/1ohbdgu
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1ohusug
Python mobile app
Hi, I just wanted to ask what to build my finance tracker app with, since I want others to use it too. I'm looking for some good options.
/r/Python
https://redd.it/1ohuito
A Flask-based service idea with Supabase DB and auth: any thoughts on this?
/r/flask
https://redd.it/1ohs5sz
How can I use Django with MongoDB and keep a workflow similar to Django with PostgreSQL?
I'm working on a project where I want to use Django + Django Ninja + MongoDB. I'd like suggestions on whether this is a good stack. If you have already used these and have experience with them, please share your suggestions.
/r/django
https://redd.it/1ohzim8
I’m working on a project where I want to use the Django + Django ninja + MongoDb. I want a suggestions on this if I choose a good stack or not. If someone already has used these and have experience on them. Please provide suggestions on this?
/r/django
https://redd.it/1ohzim8
Reddit
From the django community on Reddit
Explore this post and more from the django community
[D] For those who've published on code reasoning: how did you handle dataset collection and validation?
I’ve been diving into how people build datasets for code-related ML research — things like program synthesis, code reasoning, SWE-bench-style evaluation, or DPO/RLHF.
From what I’ve seen, most projects still rely on scraping or synthetic generation, with a lot of manual cleanup and little reproducibility.
Even published benchmarks vary wildly in annotation quality and documentation.
So I’m curious:
1. How are you collecting or validating your datasets for code-focused experiments?
2. Are you using public data, synthetic generation, or human annotation pipelines?
3. What’s been the hardest part — scale, quality, or reproducibility?
I’ve been studying this problem closely and have been experimenting with a small side project to make dataset creation easier for researchers (happy to share more if anyone’s interested).
Would love to hear what’s worked — or totally hasn’t — in your experience :)
/r/MachineLearning
https://redd.it/1ohge3t