[D] Anyone using smaller, specialized models instead of massive LLMs?
My team’s realizing we don’t need a billion-parameter model to solve our actual problem; a smaller custom model works faster and cheaper. But there’s so much hype around "bigger is better." Curious what others are using for production cases.
/r/MachineLearning
https://redd.it/1o2334q
Single Source of Truth - Generating ORM, REST, GQL, MCP, SDK and Tests from Pydantic
# What My Project Does
I built an extensible AGPL-3.0 Python server framework on FastAPI and SQLAlchemy after getting sick of writing the same thing 4+ times in different ways. It takes your Pydantic models and automatically generates:
The ORM models with relationships
The migrations
FastAPI REST endpoints (CRUD - including batch, with relationship navigation and field specifiers)
GraphQL schema via Strawberry (including nested relationships)
MCP (Model Context Protocol) integration
SDK for other projects
Pytest tests for all of the above
Coming Soon: External API federation from third-party APIs directly into your models (including into the GQL schema) - early preview screenshot
# Target Audience
Anyone who's also tired of writing the same thing 4 different ways and wants to ship ASAP.
# Comparison
Most tools solve one piece of this problem:
`SQLModel` generates SQLAlchemy models from Pydantic but doesn't handle REST/GraphQL/tests
`FastAPI-utils`/`FastAPI-CRUD` generate REST endpoints but require manual GraphQL and testing setup
Strawberry/Graphene extensions generate GraphQL schemas but require separate REST endpoints and ORM definitions
Hasura/PostGraphile auto-generate GraphQL from databases but aren't Python-native and don't integrate with your existing Pydantic models
This framework generates all of it - ORM, REST, GraphQL, SDK, and tests - from a single Pydantic definition. The API federation feature also
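The "single source of truth" idea can be sketched with nothing but the standard library: one model definition drives several generated artifacts. This is a toy illustration of the concept, not the framework's actual API (the names and the use of dataclasses instead of Pydantic are my own):

```python
from dataclasses import dataclass, fields

# Map Python annotations to SQL column types for DDL generation.
PY_TO_SQL = {int: "INTEGER", str: "TEXT", float: "REAL"}

@dataclass
class User:
    id: int
    name: str
    score: float

def to_ddl(model) -> str:
    """Artifact 1: derive a CREATE TABLE statement from the model's fields."""
    cols = ", ".join(f"{f.name} {PY_TO_SQL[f.type]}" for f in fields(model))
    return f"CREATE TABLE {model.__name__.lower()} ({cols})"

def validate(model, data: dict):
    """Artifact 2: derive a validator from the same fields (names and types)."""
    for f in fields(model):
        if not isinstance(data.get(f.name), f.type):
            raise TypeError(f"{f.name} must be {f.type.__name__}")
    return model(**data)

print(to_ddl(User))  # CREATE TABLE user (id INTEGER, name TEXT, score REAL)
```

The selling point of this style is that the ORM table, the validator, the endpoints, and the tests can never drift apart, because they are all derived from the same field list.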
/r/Python
https://redd.it/1o29byq
Automation code quality, performance and security
Hey guys,
I have a project coming up in the near future that will involve reviewing a lot of Python automation code for quality, maintainability, security, and performance. I'm looking for recommendations for study materials to help me prepare.
Thank you!
/r/Python
https://redd.it/1o286bq
Is SQLAlchemy really worth it?
As someone who knows SQL well enough, SQLAlchemy feels like a pain to learn and understand. All I want is a SQL builder that lets me write general SQL-like syntax and translates it to multiple dialects like MySQL, PostgreSQL, or SQLite. I want to build a medium-sized website, and whenever I open the SQLAlchemy page I feel overwhelmed by the sheer number of features, some of which look like magic to me, leaving me asking "why that?" and so on. Is it really worth sticking with SQLAlchemy for, let's say, a job opening, or should I make my life easier by using another library (or even writing my own)?
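Worth noting: SQLAlchemy Core (as opposed to the ORM) is essentially the dialect-translating query builder described here, and you can use it without touching the ORM layer. A minimal sketch of compiling one query expression to several dialects (table and column names are made up):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects import mysql, postgresql, sqlite

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# One query expression...
query = select(users.c.name).where(users.c.id == 1).limit(1)

# ...compiled for different backends (LIMIT syntax and parameter
# style differ per dialect):
for dialect in (sqlite.dialect(), postgresql.dialect(), mysql.dialect()):
    print(query.compile(dialect=dialect))
```

If all you need is cross-dialect SQL generation, Core alone may be enough; the ORM and its session machinery are the part that feels like magic, and they are optional.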
/r/flask
https://redd.it/1o1ul17
Friday Daily Thread: r/Python Meta and Free-Talk Fridays
# Weekly Thread: Meta Discussions and Free Talk Friday 🎙️
Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!
## How it Works:
1. Open Mic: Share your thoughts, questions, or anything you'd like related to Python or the community.
2. Community Pulse: Discuss what you feel is working well or what could be improved in the /r/python community.
3. News & Updates: Keep up-to-date with the latest in Python and share any news you find interesting.
## Guidelines:
All topics should be related to Python or the /r/python community.
Be respectful and follow Reddit's Code of Conduct.
## Example Topics:
1. New Python Release: What do you think about the new features in Python 3.11?
2. Community Events: Any Python meetups or webinars coming up?
3. Learning Resources: Found a great Python tutorial? Share it here!
4. Job Market: How has Python impacted your career?
5. Hot Takes: Got a controversial Python opinion? Let's hear it!
6. Community Ideas: Something you'd like to see us do? tell us.
Let's keep the conversation going. Happy discussing! 🌟
/r/Python
https://redd.it/1o2lz3l
How do I make abstract tests not execute?
I made a mixin containing two tests that a lot of test classes will inherit. The mixin inherits from TestCase, which I believe makes sense because tests are written inside it. The thing is, I would like said tests not to be executed when I run my test suite, because they throw errors: not every attribute they try to access is defined before they are inherited by child classes.
I could skip those tests, but then I get a bunch of "S" marks in the terminal when I run my tests, which I don't find pretty, as those skipped tests are not meant to be executed (it's not a temporary thing). I could make them not inherit from TestCase, but then PyCharm will cry, throwing warnings at every "assert" method in said tests.
So what should I do?
EDIT:
I solved this by making my Mixin classes not inherit from TestCase but ABC instead. I then defined the methods and attributes that raised warnings with "@abstractmethod" and "@property".
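Roughly the pattern the EDIT describes, as a minimal stand-alone sketch (class and method names are invented): the mixin inherits from ABC instead of TestCase, so the runner never collects it, and `@abstractmethod` declares the attributes concrete subclasses must supply.

```python
import unittest
from abc import ABC, abstractmethod

class CrudTestMixin(ABC):
    """Not a TestCase subclass, so the test runner never collects it."""

    @property
    @abstractmethod
    def model(self):
        """Concrete test classes must provide the object under test."""

    def test_str(self):
        # Shared check; only runs in subclasses that are real TestCases.
        self.assertIsInstance(str(self.model), str)

class WidgetTests(CrudTestMixin, unittest.TestCase):
    @property
    def model(self):
        return object()  # stand-in for a real model instance
```

Only `WidgetTests.test_str` is discovered; the mixin contributes the test body without ever being run on its own, and no "S" skip markers appear.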
/r/djangolearning
https://redd.it/1o1t8x8
Ergonomic Concurrency
**Project name:** Pipevine
**Project link:** [https://github.com/arrno/pipevine](https://github.com/arrno/pipevine)
**What My Project Does**
Pipevine is a lightweight async pipeline and worker-pool library for Python.
It helps you compose concurrent dataflows with backpressure, retries, and cancellation, without all the asyncio boilerplate.
**Target Audience**
Developers who work with data pipelines, streaming, or CPU/IO-bound workloads in Python.
It’s designed to be production-ready but lightweight enough for side projects and experimentation.
**How to Get Started**
pip install pipevine

```python
import asyncio
from pipevine import Pipeline, work_pool

@work_pool(buffer=10, retries=3, num_workers=4)
async def process_data(item, state):
    # Your processing logic here
    return item * 2

@work_pool(buffer=5, retries=1)
async def validate_data(item, state):
    if item < 0:
        raise ValueError("Negative values not allowed")
    return item

# Create and run pipeline (continuation reconstructed from the stages above)
pipe = Pipeline(range(100)) >> process_data >> validate_data
```
/r/Python
https://redd.it/1o2n119
[R] DeepSeek 3.2's sparse attention mechanism
https://github.com/deepseek-ai/DeepSeek-V3.2-Exp/blob/main/DeepSeek_V3_2.pdf
The new DeepSeek model uses a novel sparse attention mechanism, with a lightning indexer and a token selection mechanism. Please feel free to discuss in this thread :)
Are there any open-source implementations of this (e.g., in PyTorch) that can be used for training transformers from scratch? The DeepSeek implementation involves the FlashMLA kernel, which seems rather complex.
https://github.com/deepseek-ai/FlashMLA/pull/98
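For discussion purposes, the core idea (a cheap indexer scores all keys from low-dimensional projections, then each query attends only to its top-k selected tokens) can be sketched in a few lines of NumPy. This is a toy, unmasked, single-head illustration of the selection mechanism, not DeepSeek's actual architecture or the FlashMLA kernel:

```python
import numpy as np

def sparse_attention(q, k, v, idx_q, idx_k, top_k):
    """Toy sparse attention: indexer scores pick top-k keys per query."""
    # 1. Lightweight indexer scores from low-dimensional projections (cheap).
    index_scores = idx_q @ idx_k.T                       # (T, T)
    # 2. Token selection: top-k key indices per query.
    sel = np.argsort(-index_scores, axis=-1)[:, :top_k]  # (T, top_k)
    # 3. Full attention restricted to the selected tokens only.
    out = np.empty_like(q)
    for t in range(q.shape[0]):
        ks, vs = k[sel[t]], v[sel[t]]                    # (top_k, d) each
        logits = ks @ q[t] / np.sqrt(q.shape[-1])
        w = np.exp(logits - logits.max())
        w /= w.sum()
        out[t] = w @ vs
    return out

rng = np.random.default_rng(0)
T, d, d_idx = 8, 16, 4
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
idx_q, idx_k = (rng.standard_normal((T, d_idx)) for _ in range(2))
out = sparse_attention(q, k, v, idx_q, idx_k, top_k=4)
print(out.shape)  # (8, 16)
```

The win is that the O(T²) work happens only in the tiny indexer dimension, while the expensive full-width attention is O(T · top_k).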
/r/MachineLearning
https://redd.it/1o2pzxk
Python/flask alternative to NextCloud
Hi friends! After working for several months, my group of friends and I have finally released a version we consider “good” of our alternative to NextCloud: OpenHosting, a 100% open-source solution.
If you have any feedback, requests, or questions, don't hesitate to let us know!
The project is available on GitHub: https://github.com/Ciela2002/openhosting/tree/main
Seven months ago, we posted another article introducing the project, which was still very much in beta.
We focused mainly on security, because I have to admit that when I started this project, I had no idea what I was doing.
I thought it was going to be “super easy” LOL, yeah... so easy.
/r/flask
https://redd.it/1o2prso
2025 Malcolm Tredinnick Memorial Prize awarded to Tim Schilling
https://www.djangoproject.com/weblog/2025/oct/10/malcolm-prize-awarded-to-tim-schilling/?2
/r/django
https://redd.it/1o2uprl
Django Project
Posted by Sarah Abderemane & Thibaud Colas on Oct. 10, 2025
uv cheatsheet with most common/useful commands
I've been having lots of fun using Astral's uv and also teaching it to friends and students, so I decided to create a cheatsheet with the most common/useful commands.
uv cheatsheet with most common/useful commands
I included sections about
- project creation;
- dependency management;
- project lifecycle & versioning;
- installing/working with tools;
- working with scripts;
- uv's interface for pip and venv; and
- some meta & miscellaneous commands.
The link above takes you to a page with all these sections as regular tables and to high-resolution/print-quality downloadable files you can get for yourself from the link above.
I hope this is helpful for you and if you have any feedback, I'm all ears!
/r/Python
https://redd.it/1o2viq3
django-allauth - Accounts app deep dive
https://www.youtube.com/watch?v=5a_I_HaKSTw
/r/django
https://redd.it/1o30o18
PipeFunc: Build Lightning-Fast Pipelines with Python: DAGs Made Easy
Hey r/Python!
I'm excited to share pipefunc ([github.com/pipefunc/pipefunc](https://github.com/pipefunc/pipefunc)), a Python library designed to make building and running complex computational workflows fast and easy. If you've ever dealt with intricate dependencies between functions, struggled with parallelization, or wished for a simpler way to create and manage DAG pipelines, pipefunc is here to help.
What My Project Does:
pipefunc empowers you to easily construct Directed Acyclic Graph (DAG) pipelines in Python. It handles:
1. Automatic Dependency Resolution: pipefunc automatically determines the correct execution order of your functions, eliminating manual dependency management.
2. Lightning-Fast Execution: With minimal overhead (around 10 µs per function call), pipefunc ensures your pipelines run super fast.
3. Effortless Parallelization: pipefunc automatically parallelizes independent tasks, whether on your local machine or a SLURM cluster. It supports any concurrent.futures.Executor!
4. Intuitive Visualization: Generate interactive graphs to visualize your pipeline's structure and understand data flow.
5. Simplified Parameter Sweeps: pipefunc's mapspec feature lets you easily define and run N-dimensional parameter sweeps, which is perfect for scientific computing, simulations, and hyperparameter tuning.
6. Resource Profiling: Gain insights into your pipeline's performance with detailed CPU, memory, and timing reports.
7. Caching: Avoid redundant computations with multiple caching backends.
8. Type Annotation Validation: Ensures type consistency across your pipeline to catch errors
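The automatic dependency resolution idea at the heart of libraries like this can be illustrated with only the standard library's `graphlib` (Python 3.9+). This is the concept, not pipefunc's actual API; the stage names and wiring are invented:

```python
from graphlib import TopologicalSorter

# Each stage declares which upstream outputs it needs; the graph, not the
# call order, determines execution. (Illustrative sketch, not pipefunc.)
def load():
    return [1, 2, 3]

def double(load):
    return [x * 2 for x in load]

def total(double):
    return sum(double)

stages = {"load": load, "double": double, "total": total}
deps = {"load": set(), "double": {"load"}, "total": {"double"}}

# Topological order guarantees every dependency runs before its consumers.
results = {}
for name in TopologicalSorter(deps).static_order():
    kwargs = {d: results[d] for d in deps[name]}
    results[name] = stages[name](**kwargs)

print(results["total"])  # 12
```

A real pipeline library layers parallel execution, caching, and sweeps on top of exactly this ordering step.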
/r/Python
https://redd.it/1o3323m
How to use async functions in Celery with Django and connection pooling
https://mrdonbrown.blogspot.com/2025/10/using-async-functions-in-celery-with.html
/r/django
https://redd.it/1o2kkqg
Blogspot
Using Async Functions in Celery with Django Connection Pooling
We have a Django application and wanted to start writing async code. Thanks to recent Django versions, you can, but if you also use Celery, ...
Vision Agents 0.1
First steps here, we've just released 0.1 of Vision Agents. https://github.com/GetStream/Vision-Agents
What My Project Does
The idea is that it makes it super simple to build vision agents, combining fast models like YOLO with Gemini/OpenAI realtime. We're going for low latency and a completely open SDK, so you can use any vision model or video edge network.
Here's an example of running live video through Yolo and then passing it to Gemini
```python
agent = Agent(
    edge=getstream.Edge(),
    agent_user=agent_user,
    instructions="Read @golfcoach.md",
    llm=openai.Realtime(fps=10),
    # llm=gemini.Realtime(fps=1),  # Careful with FPS, can get expensive
    processors=[ultralytics.YOLOPoseProcessor(model_path="yolo11n-pose.pt")],
)
```
Target Audience
Vision AI is like ChatGPT in 2022. It's really fun to see how it works and what's possible: anything from live coaching, to sports, to physical therapy, robotics, drones, etc. But it's not production quality yet. Gemini and OpenAI both hallucinate a ton for vision AI. It seems close to being viable though; it's especially fun to have it describe your surroundings.
Comparison
Similar to Livekit
/r/Python
https://redd.it/1o2yh3k
Connecting Cloud Apps to Industrial Equipment with Tailscale
https://wedgworth.dev/connecting-cloud-apps-to-industrial-equipment-with-tailscale/
/r/django
https://redd.it/1o2p9ll
Wedgworth Technology
Connecting Cloud Apps to Industrial Equipment with Tailscale
How to bridge the gap between cloud-based Django apps and on-premise equipment with Tailscale
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators - A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1o3h3uy
Looking for an easy and free way to deploy a small Flask + SQLAlchemy app (SQLite DB) for students
Hey everyone! 👋
I’m a Python tutor currently teaching Flask to my students. As part of our lessons, we built a small web app using Flask + SQLAlchemy with an internal SQLite database. You can check the project here:
👉 https://github.com/Chinyiskan/Flask-Diary
In the course, they recommend PythonAnywhere, but honestly, it feels a bit too complex for my students to set up — especially for beginners.
I’m looking for a free and modern platform (something like Vercel for Node.js projects) that would allow an easy and straightforward deployment of this kind of small Flask app.
Do you have any suggestions or workflows that you’ve found simple for students to use and understand?
Thanks in advance for any ideas or recommendations 🙏
/r/flask
https://redd.it/1o3cu0j
SPDL - Scalable and Performant Data Loading
Hi Python community,
Inspired by recent showcases on pipeline libraries ([Pipevine](https://www.reddit.com/r/Python/comments/1o2n119/ergonomic_concurrency/), [pipefunc](https://www.reddit.com/r/Python/comments/1o3323m/pipefunc_build_lightningfast_pipelines_with/)), I’d like to share my project: **SPDL (Scalable and Performant Data Loading)**.
# What My Project Does
SPDL is designed to address the data loading bottleneck in machine learning (ML) and AI training pipelines. You break down data loading into discrete tasks with different constraints (network, CPU, GPU transfer etc) and construct a pipeline, and SPDL executes them efficiently. It features a task execution engine (pipeline abstraction) built on asyncio, alongside an independent I/O module for media processing.
# Resources:
* **Repo:** [https://github.com/facebookresearch/spdl](https://github.com/facebookresearch/spdl)
* **Documentation:** [SPDL Docs](https://facebookresearch.github.io/spdl/main/)
* **PyPI:**
* Install with `pip install spdl`
* [spdl-core](https://pypi.org/project/spdl-core/) (no dependency)
* [spdl-io](https://pypi.org/project/spdl-io/) (requires NumPy, and optionally PyTorch / Numba / Jax)
* **arXiv:** [2504.20067](https://arxiv.org/abs/2504.20067)
# Target Audience
ML practitioners whose focus is model training rather than software engineering. It is production-ready.
# Core Principles
* **High Throughput & Efficiency:** SPDL maximizes data loading speed and minimizes CPU/memory overhead to keep GPUs busy.
* **Flexibility:** The pipeline abstraction is highly customizable, allowing users to tailor the structure to their environment, data, and requirements.
* **Observability:** SPDL provides runtime statistics for each pipeline component, helping users identify bottlenecks and optimize performance.
* **Intuitive Construction:** Pipelines are easy to
/r/Python
https://redd.it/1o396tf
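The task-decomposition idea behind SPDL (separate stages with different constraints, connected by bounded queues so a slow stage applies backpressure upstream) can be sketched with plain asyncio. This is the concept only, not SPDL's actual API; the stage names are invented:

```python
import asyncio

async def producer(out_q):
    # Stage 1: "download" (network-bound in a real data loader).
    for i in range(5):
        await out_q.put(i)
    await out_q.put(None)  # sentinel: end of stream

async def transform(in_q, out_q):
    # Stage 2: "decode" (CPU-bound in a real data loader).
    while (item := await in_q.get()) is not None:
        await out_q.put(item * item)
    await out_q.put(None)

async def main():
    # Bounded queues are the backpressure: a full queue blocks the producer.
    q1, q2 = asyncio.Queue(maxsize=2), asyncio.Queue(maxsize=2)
    results = []

    async def consumer():
        while (item := await q2.get()) is not None:
            results.append(item)

    await asyncio.gather(producer(q1), transform(q1, q2), consumer())
    return results

print(asyncio.run(main()))  # [0, 1, 4, 9, 16]
```

SPDL's contribution is layering thread/process execution, retries, and per-stage statistics on top of this kind of staged structure so the GPU never waits on I/O.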
Having issues connecting to a Flask API in server
So, I have a web app deployed on Render, with a backend Flask API in the same server, which uses a Postgresql located in Supabase.
Locally, everything works fine: the front connects to the back and waits for a response. But on Render, when the front calls a GET endpoint on the back, it instantly receives a 200 response (or a 304 if it's not the first time that same call is made), and then the back processes the call; I know that because I can see the database get updated with that data. From the browser I can also see that right before getting the 200 or the 304, I get a net::ERR_CONNECTION_REFUSED response.
I've been checking what it could be, and saw it could be CORS, which I think is configured correctly, or that I had to specify the host of the API when running it, by setting it to 0.0.0.0.
But still, nothing works...
Thanks in advance!
/r/flask
https://redd.it/1o1ph7m