Flask Error
```python
from flask import Flask

app = Flask(__name__)  # was Flask(name): the dunder underscores are required

@app.route("/")
def home():
    return "Offline Flask is working!"

if __name__ == "__main__":  # was: if name == "main"
    print("Starting Flask server...")
    app.run(debug=True)
```
After running it I tried http://127.0.0.1:5000/ in the browser and it is not showing anything.
I am new to this and tried a simple thing.
/r/flask
https://redd.it/1lsjhhl
Deepface authentication - library and demo site
I recently published, under the MIT License, a Django app for face recognition authentication using DeepFace and pgvector. It's intended for audiences where the same group of people authenticate frequently without remembering their passwords, or want minimal keyboard usage. It uses the camera built into your laptop or screen, in the same way you might use MS Teams, Google Meet, or WhatsApp.
It works fine with a good CPU, but will fly with a GPU.
I would probably use it with the default settings, but there are options you can experiment with in different environments. Because it uses pgvector, which is not yet indexed but can be indexed very simply, it should be possible to support many thousands of users.
GitHub stars and comments appreciated.
https://github.com/topiaruss/django-deepface
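For readers curious what the pgvector side amounts to, here is a minimal pure-Python sketch of a nearest-embedding lookup (pgvector's `<=>` cosine-distance operator does this in SQL; the `best_match` helper, the toy 3-dimensional embeddings, and the 0.4 threshold are hypothetical illustrations, not django-deepface's actual API):

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity, the metric behind pgvector's <=> operator
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def best_match(probe, enrolled, threshold=0.4):
    # enrolled: {username: embedding}; return the closest user under threshold
    best_user, best_dist = None, float("inf")
    for user, emb in enrolled.items():
        d = cosine_distance(probe, emb)
        if d < best_dist:
            best_user, best_dist = user, d
    return (best_user, best_dist) if best_dist <= threshold else (None, best_dist)

enrolled = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
user, dist = best_match([0.9, 0.1, 0.0], enrolled)
```

With an index (pgvector supports IVFFlat and HNSW), this linear scan becomes an approximate nearest-neighbour query, which is what makes "many thousands of users" feasible.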
/r/django
https://redd.it/1lu7hou
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1lua5dh
Radiate - evolutionary/genetic algorithm engine
Hello! For the past 5 or so years I've been building `radiate`, a genetic/evolutionary algorithm engine written in Rust. Over the past few months I've been working on a Python wrapper (using PyO3 for the core Rust code) and have reached a point where I think it's worth sharing.
**What my project does**:
* Traditional genetic algorithm implementation.
* Single & Multi-objective optimization support.
* Neuroevolution (graph-based representation - [evolving neural networks](http://www.scholarpedia.org/article/Neuroevolution)) support. Similar to [NEAT](https://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf).
* Genetic programming support ([tree-based representation](https://en.wikipedia.org/wiki/Gene_expression_programming#:~:text=In%20computer%20programming%2C%20gene%20expression,much%20like%20a%20living%20organism.))
* Built-in support for parallelism.
* Extensive selection, crossover, and mutation operators.
* Opt-in speciation for maintaining diversity.
* Novelty search support. (This isn't available for python quite yet, I'm still testing it out in rust, but its looking promising - coming soon to py)
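Radiate's Python API isn't shown in the post, so here is a generic, hedged sketch of the loop such an engine runs: truncation selection, one-point crossover, and bit-flip mutation on a toy "one-max" problem. All names and parameters are illustrative, not radiate's:

```python
import random

random.seed(42)

GENOME_LEN = 12  # one-max: maximize the number of 1-bits

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.1):
    # flip each bit independently with probability `rate`
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # one-point crossover at a random cut
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (keeps elites)
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

A real engine like radiate adds the rest of the feature list on top of this skeleton: pluggable operators, speciation, multi-objective ranking, and parallel fitness evaluation.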
**Target Audience**
Production-ready EA/GA problems.
**Comparison** I think the closest existing package is [PyGAD](https://pygad.readthedocs.io/en/latest/). I've used PyGAD before and it was fantastic, but I needed something a little more general purpose. Hence, radiate's python package was born.
**Source Code**
* [Github](https://github.com/pkalivas/radiate).
* [User Guide](https://pkalivas.github.io/radiate/).
* [Python specific examples](https://github.com/pkalivas/radiate/tree/master/py-radiate/examples).
I know EA/GAs have a somewhat niche community within the AI/ML ecosystem, but hopefully some find it useful. Would love to hear any thoughts, criticisms, or suggestions!
/r/Python
https://redd.it/1lu8wvp
Tired of forgetting local git changes? I built a tool to track the status of all your local repos at
As someone who juggles many small projects—both personal and for clients—I often find myself with dozens of local git repositories scattered across my machine. Sometimes I forget about changes I made in a repo I haven’t opened in a few days, and that can lead to lost time or even lost work.
To solve this, I built gits-statuses: a simple tool that gives you a bird’s-eye view of the status of all your local git repositories.
It scans a directory (recursively) and shows you which repos have uncommitted changes, unpushed commits, or are clean. It’s a quick way to stay on top of your work and avoid surprises.
There are two versions:
Python: cross-platform and easy to integrate into scripts or cron jobs
PowerShell: great for Windows users who want native terminal integration
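A minimal sketch of what such a scanner does, assuming `git status --porcelain` output (illustrative, not gits-statuses' actual code; detecting unpushed commits would additionally need `git status --porcelain --branch` or `git rev-list @{u}..`):

```python
import subprocess
from pathlib import Path

def classify(porcelain: str) -> str:
    # classify a repo from its `git status --porcelain` output
    lines = [l for l in porcelain.splitlines() if l.strip()]
    if not lines:
        return "clean"
    if any(l.startswith("??") for l in lines):
        return "untracked files"
    return "uncommitted changes"

def repo_summary(repo: Path) -> str:
    out = subprocess.run(
        ["git", "-C", str(repo), "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    return classify(out)

def scan(root: Path):
    # a repo is any directory containing a .git directory
    return {p.parent: repo_summary(p.parent) for p in root.rglob(".git") if p.is_dir()}
```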
Check it out here: https://github.com/nicolgit/gits-statuses
Feedback and contributions are welcome!
/r/Python
https://redd.it/1luiz8o
[D] Stop building monolithic AI agents - Pipeline of Agents pattern
Context: Needed to build scan → attack → report workflow for cybersecurity. First attempt was typical "everything in one graph" disaster.
The mess: One LangGraph trying to do everything. Unmaintainable. Untestable. Classic big ball of mud but with AI.
The fix: Pipeline of Agents
* Sequential execution with clean interfaces
* State isolation between child graphs
* Each agent independently developable/testable
* Follows actual software engineering principles
Technical details: Used LangGraph wrapper nodes to convert parent state to child state. Only pass minimal required data. No global state sharing.
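A hedged pure-Python sketch of the wrapper-node idea, with plain functions standing in for LangGraph child graphs: `wrap` projects the parent state down to the keys each child needs and merges the child's output back. All names here are hypothetical:

```python
def scan_agent(state):
    # child graph: sees only the keys it needs, returns only its own output
    return {"findings": [f"open port on {h}" for h in state["hosts"]]}

def report_agent(state):
    return {"report": "; ".join(state["findings"])}

def wrap(agent, picks):
    # wrapper node: parent state -> minimal child state -> merge output back.
    # No global state sharing: each child sees only `picks`.
    def node(parent_state):
        child_state = {k: parent_state[k] for k in picks}
        parent_state.update(agent(child_state))
        return parent_state
    return node

PIPELINE = [wrap(scan_agent, ["hosts"]), wrap(report_agent, ["findings"])]

def run(parent_state):
    for node in PIPELINE:
        parent_state = node(parent_state)
    return parent_state

result = run({"hosts": ["10.0.0.5"]})
```

Because each agent only depends on its declared input keys, it can be unit-tested in isolation, which is the maintainability win the post describes.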
Result: Actually maintainable AI architecture that doesn't make you hate your life.
Full breakdown with Python implementation: https://vitaliihonchar.com/insights/how-to-build-pipeline-of-agents
Question: Are others finding similar patterns necessary as AI systems get more complex?
/r/MachineLearning
https://redd.it/1lumxa6
[P] Using LDV-style compression to create an innovation machine
I'm experimenting with a method to increase the conceptual density of ideas by compressing science and engineering concepts into minimal-vocabulary statements using the Longman Defining Vocabulary (LDV) - the core 2,000 building block words of the English language.
The hypothesis: reducing lexical complexity increases the chance that a language model will recombine latent structural similarities between otherwise distant concepts, when prompted accordingly (I've got a whole program on these prompts as well).
That is, I'm trying to build a genuine innovation machine, bit by byte.
Rather than maximizing fluency, the goal is to preserve mechanistic structure using ~2,000 basic English words. This trades off precision and abstraction in favor of semantic alignment, similar to how concept bottlenecks work in neuro-symbolic systems.
The Why:
LLMs today are surprisingly poor at discovering cross-domain connections. When pushed, they tend to revert to well-trodden academic hallucinations, the kinds you find in introductions and conclusions of academic papers.
A compressed lexical environment, like LDV, exposes the mechanical spine of each idea. The hope is that this makes unexpected adjacencies more accessible.
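As a toy illustration of the constraint, a compressed statement can be mechanically checked against the defining vocabulary (the small set below is a stand-in for the real ~2,000-word LDV):

```python
# stand-in for the Longman Defining Vocabulary (~2,000 words in reality)
LDV = {"a", "with", "lets", "water", "out", "slowly", "bucket", "hole",
       "button", "go", "from", "one", "part", "to", "another"}

def out_of_vocabulary(sentence, vocab=LDV):
    # words outside the defining vocabulary mark where compression failed
    words = [w.strip(".,!?\"'").lower() for w in sentence.split()]
    return [w for w in words if w and w not in vocab]
```

So "A bucket with a hole lets water out slowly." passes, while a jargon phrasing like "A solenoid valve modulates flow." is flagged on every content word and must be re-expressed mechanically.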
Examples:
LDV-style input: 3 mechanisms
1. “A bucket with a hole lets water out slowly.”
→ time-delay or pressure bleed-off
2. “A button lets water go from one part to another.”
→ valve or
/r/MachineLearning
https://redd.it/1lun8s3
PatchMind: A CLI tool that turns Git repos into visual HTML insight. no cloud, no bloat
# What My Project Does
**PatchMind** is a modular Python CLI tool that analyzes local Git repos and generates a self-contained HTML report. It highlights patch-level diffs, file tree changes, file history, risk scoring, and blame info — all visual, all local, and no third-party integrations required.
# Target Audience
Primarily intended for developers who want fast, local insight into codebase evolution — ideal for solo projects, CI pipelines, or anyone sick of clicking through slow Git web UIs just to see what changed.
# Comparison
Unlike tools like GitHub’s diff viewer or GitKraken, PatchMind is entirely local and focused on generating reports you can keep or archive. There’s no sync, no telemetry, and no server required — just run it in your terminal, open the HTML, and you’re good.
It’s also **zero-config**, supports **risk scoring**, and can show **inline blame summaries** alongside patch details.
**How Python Is Involved**
The entire tool is written in Python 3.10+, using:
* `GitPython` for Git interaction
* `jinja2` for templating HTML
* `pyyaml`, `rich`, and `pytest` for config, CLI output, and tests
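For flavor, a report generator of this kind boils down to filling an HTML template with per-file rows. The sketch below uses the stdlib's `string.Template` instead of `jinja2` so it stays self-contained; the row fields and risk labels are hypothetical, not PatchMind's actual schema:

```python
from string import Template

PAGE = Template("""<html><body>
<h1>Patch report for $repo</h1>
<ul>
$rows
</ul>
</body></html>""")

ROW = Template("<li><code>$path</code> (risk: $risk)</li>")

def render(repo, changes):
    # changes: list of (path, risk) pairs, e.g. gathered from a git diff walk
    rows = "\n".join(ROW.substitute(path=p, risk=r) for p, r in changes)
    return PAGE.substitute(repo=repo, rows=rows)

html = render("demo-repo", [("app.py", "high"), ("README.md", "low")])
```

The self-contained-HTML approach is what makes the reports archivable: no server, just write `html` to a file and open it.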
**Install:**
pip install patchmind
**Source Code:**
🌐 [GitHub - Darkstar420/patchmind](https://github.com/Darkstar420/patchmind)
Let me know what you think — or just use it and never look back. It’s under Apache-2.0, so
/r/Python
https://redd.it/1luifni
How to learn Django?
Do I follow the documentation, a YouTube series, or something else? I have been following the Python roadmap on roadmap.sh and I am planning on learning Django as my main framework for Python.
P.S.: I suck at reading documentation, so please also suggest how to read documentation.
/r/django
https://redd.it/1luvtga
[D] Favorite ML paper of 2024?
What were the most interesting or important papers of 2024?
/r/MachineLearning
https://redd.it/1luvynh
Lost Chapter of Automate the Boring Stuff: Audio, Video, and Webcams
https://inventwithpython.com/blog/lost-av-chapter.html
The third edition of Automate the Boring Stuff with Python is now available for purchase or to read for free online. It has updated content and several new chapters, but one chapter that was left on the cutting room floor was "Working with Audio, Video, and Webcams". I present the 26-page rough draft chapter in this blog, where you can learn how to write Python code that records and plays multimedia content.
/r/Python
https://redd.it/1luv77k
Beginner's Guide
Hello! I have finished learning Python. I want to make a website with Django. Please recommend a beginner's guide: books or web resources that thoroughly discuss website creation. I liked A Complete Beginner's Guide to Django (2017) by Vitor Freitas; I completed the whole thing and deployed one training site. But maybe there are more up-to-date instructions/books. Thank you! P.S. The Django documentation is always open.
/r/django
https://redd.it/1lv7g7l
[R] Adopting a human developmental visual diet yields robust, shape-based AI vision
Happy to announce an exciting new project from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. An exciting case where brain inspiration profoundly changed and improved deep neural network representations for computer vision.
Link: https://arxiv.org/abs/2507.03168
The idea: instead of high-fidelity training from the get-go (the de facto gold standard), we simulate the visual development from newborns to 25 years of age by synthesising decades of developmental vision research into an AI preprocessing pipeline (Developmental Visual Diet - DVD).
We then test the resulting DNNs across a range of conditions, each selected because they are challenging to AI:
1. shape-texture bias
2. recognising abstract shapes embedded in complex backgrounds
3. robustness to image perturbations
4. adversarial robustness.
We report a new SOTA on shape-bias (reaching human level), outperform AI foundation models in terms of abstract shape recognition, show better alignment with human behaviour upon image degradations, and improved robustness to adversarial noise - all with this one preprocessing trick.
This is observed across all conditions tested, and generalises across training datasets and multiple model architectures.
We are excited about this, because DVD may offer a resource-efficient path toward safer, perhaps more human-aligned AI vision. This work suggests that biology, neuroscience, and psychology have much to offer
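The paper's exact developmental schedule isn't given in the post, but the core preprocessing idea can be caricatured as age-dependent blur that fades as the simulated observer matures. This is a hypothetical linear schedule for illustration only; the actual DVD curves are synthesised from decades of developmental vision data:

```python
def blur_sigma(age_years, sigma_newborn=4.0, adult_age=25.0):
    # toy schedule: blur strength decays linearly from newborn acuity
    # to full adult acuity (sigma 0) by age 25
    if age_years >= adult_age:
        return 0.0
    return sigma_newborn * (1.0 - age_years / adult_age)

# during training, each image would be blurred with the sigma for the
# simulated age of the current training stage
schedule = [blur_sigma(a) for a in (0, 5, 12.5, 25)]
```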
/r/MachineLearning
https://redd.it/1luz9wu
I've written a post about async/await. Could someone with deep knowledge check the Python sections?
I realized a few weeks ago that many of my colleagues do not understand async/await clearly, so I wrote a blog post to present the topic a bit in depth. That being said, while I've written a fair bit of Python, Python is not my main language, so I'd be glad if someone with a deep understanding of the implementation of async/await/Awaitable/coroutines in Python could double-check.
https://yoric.github.io/post/quite-a-few-words-about-async/
Thanks!
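For readers who want the one-screen version of the topic, a minimal asyncio example showing why awaiting lets two waits overlap on a single thread:

```python
import asyncio
import time

async def fetch(name, delay):
    # awaiting suspends this coroutine so the event loop can run others
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # both sleeps run concurrently, so total time is ~0.1 s, not ~0.2 s
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
```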
/r/Python
https://redd.it/1lv3u6i
Local labs for real-time data streaming with Python (Kafka, PySpark, PyFlink)
I'm part of the team at [Factor House](https://factorhouse.io/), and we've just open-sourced a new set of free, hands-on labs to help Python developers get into real-time data engineering. The goal is to let you build and experiment with production-inspired data pipelines (using tools like Kafka, Flink, and Spark) all on your local machine, with a strong focus on Python.
You can stop just reading about data streaming and start building it with Python today.
🔗 **GitHub Repo:** [https://github.com/factorhouse/examples/tree/main/fh-local-labs](https://github.com/factorhouse/examples/tree/main/fh-local-labs)
We wanted to make sure this was genuinely useful for the Python community, so we've added practical, Python-centric examples.
**Here's the Python-specific stuff you can dive into:**
* 🐍 **Producing & Consuming from Kafka with Python (Lab 1):** This is the foundational lab. You'll learn how to use Python clients to produce and consume Avro-encoded messages with a Schema Registry, ensuring data quality and handling schema evolution—a must-have skill for robust data pipelines.
* 🐍 **Real-time ETL with PySpark (Lab 10):** Build a complete Structured Streaming job with `PySpark`. This lab guides you through ingesting data from Kafka, deserializing Avro messages, and writing the processed data into a modern data lakehouse table using Apache Iceberg.
* 🐍 **Building Reactive Python Clients (Labs 11
/r/Python
https://redd.it/1lvbdd4
What terminal is recommended?
Hello. I'm pretty new to this and have been searching for good terminals. What kind of terminal would you recommend for beginners on Windows?
/r/Python
https://redd.it/1lvdaj6
Python Hackathon Backend for rapid development and Feedback-Driven shipping
Within a year, I had the opportunity to participate in hackathons organized by Mistral, OpenAI, and DeepMind in Paris. Drawing on that experience, I've created a Python backend boilerplate that incorporates my feedback to streamline collaboration and urgent deployments.

What it does: This open-source GitHub template provides a standardized, efficient boilerplate to speed up setup, collaboration, and rapid deployment during hackathons. It includes several essential components: the uv package manager complemented by intuitive pre-configured make commands, FastAPI for rapid and modular API development, and Pydantic for robust data validation and type checking. It also ships custom instructions optimized for agent tools like Cline and GitHub Copilot, enhancing both developer productivity and code comprehension. Comprehensive unit tests and a minimal Continuous Integration (CI) configuration ensure seamless integration and prevent faulty code from being merged into production.

Target audience: This project targets software developers and AI engineers, particularly those preparing for their first hackathon, whether working individually or within a team. It caters to developers looking for rapid prototyping and deployment solutions, emphasizing efficiency, maintainability, and robustness under tight deadlines and challenging operational conditions.

Comparison: While numerous boilerplate templates and rapid-deployment solutions exist in the Python ecosystem, this template distinguishes itself by specifically addressing the common
/r/Python
https://redd.it/1lvfbxl
Problems with rabbitmq and pika
Hi everyone, I am building a microservice in Flask. I need this microservice to connect as a consumer to a simple RabbitMQ queue. The message is sent correctly, but the consumer does not print anything. If the app is reloaded by Flask (after an edit), it prints the body of the last message correctly. I don't know what the problem is.
```python
# app.py
from flask import Flask
import threading
from components.message_listener import consume
from controllers.data_processor_rest_controller import measurements_bp
from repositories.pollution_measurement_repository import PollutionMeasurementsRepository
from services.measurement_to_datamap_converter_service import periodic_task
import os

app = Flask(__name__)
PollutionMeasurementsRepository()

def config_amqp():
    threading.Thread(target=consume, daemon=True).start()

if __name__ == "__main__":
    config_amqp()
    app.register_blueprint(measurements_bp)
    app.run(host="0.0.0.0", port=8080)
```

```python
# message_listener.py
import pika
import time

def callback(ch, method, properties, body):
```
/r/flask
https://redd.it/1lve5rn
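The symptom (messages only showing up after a reload) often points to the consumer thread starting in the wrong process when Flask's reloader is active, or to the blocking connection setup racing the web server; that diagnosis is an assumption, since the post's `consume()` body is truncated. The daemon-consumer pattern itself can be sketched with the standard library, using `queue.Queue` as a stand-in for the RabbitMQ channel so the sketch runs without a broker (with pika you would open a `BlockingConnection` and call `channel.start_consuming()` inside `consume()` instead):

```python
# Minimal sketch of the daemon-consumer pattern from the post.
# queue.Queue stands in for the RabbitMQ channel so this runs
# anywhere; pika and the broker are deliberately omitted.
import queue
import threading

messages = queue.Queue()          # stand-in for the Rabbit queue
received = []                     # what the consumer has seen

def consume():
    # Blocking consume loop, like pika's start_consuming().
    while True:
        body = messages.get()     # blocks until a message arrives
        if body is None:          # sentinel to stop the sketch
            break
        received.append(body)
        messages.task_done()

# Start the consumer BEFORE the (blocking) app.run() call,
# exactly as config_amqp() does in the post.
t = threading.Thread(target=consume, daemon=True)
t.start()

# Simulate a producer publishing two messages, then stopping.
messages.put(b"pm2.5=12")
messages.put(b"pm10=30")
messages.put(None)
t.join(timeout=2)

print(received)   # both bodies were consumed, in publish order
```

Note that a daemon thread dies with the main process, so in the real app anything started in `config_amqp()` only keeps consuming while `app.run()` is alive; with `debug=True`, Flask's reloader runs the module in a child process, which is a common reason a thread started in the parent appears to print nothing.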