Tired of forgetting local git changes? I built a tool to track the status of all your local repos at a glance
As someone who juggles many small projects—both personal and for clients—I often find myself with dozens of local git repositories scattered across my machine. Sometimes I forget about changes I made in a repo I haven’t opened in a few days, and that can lead to lost time or even lost work.
To solve this, I built gits-statuses: a simple tool that gives you a bird’s-eye view of the status of all your local git repositories.
It scans a directory (recursively) and shows you which repos have uncommitted changes, unpushed commits, or are clean. It’s a quick way to stay on top of your work and avoid surprises.
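This is not the tool's actual implementation, but the core idea can be sketched in a few lines of Python: walk a directory tree looking for `.git` folders, then classify each repo from the output of `git status --porcelain=v1 -b` (the helper names here are mine, not the project's):

```python
import os
import subprocess

def find_repos(root):
    """Yield paths that contain a .git directory."""
    for dirpath, dirnames, _ in os.walk(root):
        if ".git" in dirnames:
            dirnames.clear()  # don't descend into the repo itself
            yield dirpath

def classify(porcelain_output):
    """Classify a repo from `git status --porcelain=v1 -b` output.
    The first line is the branch header ('## main...origin/main [ahead 1]');
    any following non-empty line is an uncommitted change."""
    lines = porcelain_output.splitlines()
    branch = lines[0] if lines else ""
    if any(not l.startswith("#") for l in lines[1:] if l.strip()):
        return "uncommitted changes"
    if "[ahead" in branch:
        return "unpushed commits"
    return "clean"

def repo_status(path):
    out = subprocess.run(
        ["git", "-C", path, "status", "--porcelain=v1", "-b"],
        capture_output=True, text=True, check=True,
    ).stdout
    return classify(out)
```

A real scanner would also handle detached HEADs and repos with no upstream, but the parse-and-classify loop above is the whole trick.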
There are two versions:

* **Python**: cross-platform and easy to integrate into scripts or cron jobs
* **PowerShell**: great for Windows users who want native terminal integration
Check it out here: https://github.com/nicolgit/gits-statuses
Feedback and contributions are welcome!
/r/Python
https://redd.it/1luiz8o
D Stop building monolithic AI agents - Pipeline of Agents pattern
Context: Needed to build scan → attack → report workflow for cybersecurity. First attempt was typical "everything in one graph" disaster.
The mess: One LangGraph trying to do everything. Unmaintainable. Untestable. Classic big ball of mud but with AI.
The fix: Pipeline of Agents

* Sequential execution with clean interfaces
* State isolation between child graphs
* Each agent independently developable/testable
* Follows actual software engineering principles
Technical details: Used LangGraph wrapper nodes to convert parent state to child state. Only pass minimal required data. No global state sharing.
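The post links the full LangGraph implementation; the wrapper-node idea itself can be sketched framework-free. Each stage gets its own narrow state type, and a wrapper maps the parent pipeline state down to the child's input and copies results back out (all names below are illustrative, not from the article):

```python
from dataclasses import dataclass, field

@dataclass
class PipelineState:          # parent state for the whole pipeline
    target: str
    findings: list = field(default_factory=list)
    report: str = ""

@dataclass
class ScanState:              # child state: only what the scan agent needs
    target: str
    findings: list = field(default_factory=list)

def scan_agent(state: ScanState) -> ScanState:
    # stand-in for a real scanning sub-graph
    state.findings.append(f"open port on {state.target}")
    return state

def scan_wrapper(parent: PipelineState) -> PipelineState:
    child = ScanState(target=parent.target)   # pass minimal required data in
    result = scan_agent(child)
    parent.findings.extend(result.findings)   # map results back to the parent
    return parent

state = scan_wrapper(PipelineState(target="10.0.0.5"))
print(state.findings)  # ['open port on 10.0.0.5']
```

Because `scan_agent` only ever sees `ScanState`, it can be developed and unit-tested in isolation, which is the point of the pattern.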
Result: Actually maintainable AI architecture that doesn't make you hate your life.
Full breakdown with Python implementation: https://vitaliihonchar.com/insights/how-to-build-pipeline-of-agents
Question: Are others finding similar patterns necessary as AI systems get more complex?
/r/MachineLearning
https://redd.it/1lumxa6
Project Using LDV-style compression to create an innovation machine
I'm experimenting with a method to increase the conceptual density of ideas by compressing science and engineering concepts into minimal-vocabulary statements using the Longman Defining Vocabulary (LDV) - the core 2,000 building block words of the English language.
The hypothesis: reducing lexical complexity increases the chance that a language model will recombine latent structural similarities between otherwise distant concepts, when prompted accordingly (I've got a whole program of prompts for this as well).
That is, I'm trying to build a genuine innovation machine, bit by byte.
Rather than maximizing fluency, the goal is to preserve mechanistic structure using ~2,000 basic English words. This trades off precision and abstraction in favor of semantic alignment, similar to how concept bottlenecks work in neuro-symbolic systems.
The Why:
LLMs today are surprisingly poor at discovering cross-domain connections. When pushed, they tend to revert to well-trodden academic hallucinations, the kinds you find in introductions and conclusions of academic papers.
A compressed lexical environment, like LDV, exposes the mechanical spine of each idea. The hope is that this makes unexpected adjacencies more accessible.
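As a toy illustration of the constraint (my sketch, not the author's code; the real LDV has about 2,000 words, not the handful below), a compressed statement can be validated against an allowed-word set:

```python
# A tiny stand-in for the Longman Defining Vocabulary (the real list has ~2,000 words).
LDV = {"a", "bucket", "with", "hole", "lets", "water", "out", "slowly",
       "button", "go", "from", "one", "part", "to", "another"}

def within_vocabulary(sentence, vocab=LDV):
    """Return the words that fall outside the allowed defining vocabulary."""
    words = sentence.lower().strip(".").split()
    return [w for w in words if w not in vocab]

print(within_vocabulary("A bucket with a hole lets water out slowly."))  # []
print(within_vocabulary("A reservoir discharges fluid gradually."))
# -> ['reservoir', 'discharges', 'fluid', 'gradually']
```

A rewriting loop would keep paraphrasing a concept until this check returns an empty list, forcing every idea down to its mechanical spine.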
Examples:
LDV-style input: 3 mechanisms
1. “A bucket with a hole lets water out slowly.”
→ time-delay or pressure bleed-off
2. “A button lets water go from one part to another.”
→ valve or
/r/MachineLearning
https://redd.it/1lun8s3
PatchMind: A CLI tool that turns Git repos into visual HTML insight. no cloud, no bloat
# What My Project Does
**PatchMind** is a modular Python CLI tool that analyzes local Git repos and generates a self-contained HTML report. It highlights patch-level diffs, file tree changes, file history, risk scoring, and blame info — all visual, all local, and no third-party integrations required.
# Target Audience
Primarily intended for developers who want fast, local insight into codebase evolution — ideal for solo projects, CI pipelines, or anyone sick of clicking through slow Git web UIs just to see what changed.
# Comparison
Unlike tools like GitHub’s diff viewer or GitKraken, PatchMind is entirely local and focused on generating reports you can keep or archive. There’s no sync, no telemetry, and no server required — just run it in your terminal, open the HTML, and you’re good.
It’s also **zero-config**, supports **risk scoring**, and can show **inline blame summaries** alongside patch details.
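I don't know PatchMind's actual scoring formula, but a risk score of this kind is typically a weighted blend of churn signals. A hypothetical sketch (the weights, signal names, and scale are mine, not the project's):

```python
def risk_score(lines_added, lines_deleted, files_touched, touches_tests=False):
    """Toy churn-based risk heuristic on a 0-100 scale (illustrative only)."""
    churn = lines_added + lines_deleted
    score = min(churn / 5, 60)          # raw churn, capped
    score += min(files_touched * 5, 30) # change spread across files, capped
    if not touches_tests:
        score += 10                     # untested changes are riskier
    return round(min(score, 100))

print(risk_score(120, 30, 4))                   # large untested change -> 60
print(risk_score(5, 2, 1, touches_tests=True))  # small tested change -> 6
```

The real tool presumably folds in blame and history data too; the point is just that "risk" here is a cheap static heuristic, not a model.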
**How Python Is Involved**
The entire tool is written in Python 3.10+, using:
* `GitPython` for Git interaction
* `jinja2` for templating HTML
* `pyyaml`, `rich`, and `pytest` for config, CLI output, and tests
**Install:**
pip install patchmind
**Source Code:**
🌐 [GitHub - Darkstar420/patchmind](https://github.com/Darkstar420/patchmind)
Let me know what you think — or just use it and never look back. It’s under Apache-2.0, so
/r/Python
https://redd.it/1luifni
How to learn Django?
Do I follow the documentation, a YouTube series, or something else? I have been following the Python roadmap on roadmap.sh and I am planning on learning Django as my main framework for Python.
P.S.: I'm bad at reading documentation, so suggestions on how to approach reading docs would also help.
/r/django
https://redd.it/1luvtga
Favorite ML paper of 2024? D
What were the most interesting or important papers of 2024?
/r/MachineLearning
https://redd.it/1luvynh
Lost Chapter of Automate the Boring Stuff: Audio, Video, and Webcams
https://inventwithpython.com/blog/lost-av-chapter.html
The third edition of Automate the Boring Stuff with Python is now available for purchase or to read for free online. It has updated content and several new chapters, but one chapter that was left on the cutting room floor was "Working with Audio, Video, and Webcams". I present the 26-page rough draft chapter in this blog, where you can learn how to write Python code that records and plays multimedia content.
/r/Python
https://redd.it/1luv77k
Beginner's Guide
Hello! I have finished learning Python and want to build a website with Django. Please recommend a beginner's guide: books or web resources that thoroughly cover website creation. I liked A Complete Beginner's Guide to Django (2017) by Vitor Freitas; I completed the whole thing and deployed one training site. But maybe there are more up-to-date instructions/books. Thank you! P.S. The Django documentation is always open.
/r/django
https://redd.it/1lv7g7l
R Adopting a human developmental visual diet yields robust, shape-based AI vision
Happy to announce an exciting new project from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. An exciting case where brain inspiration profoundly changed and improved deep neural network representations for computer vision.
Link: https://arxiv.org/abs/2507.03168
The idea: instead of high-fidelity training from the get-go (the de facto gold standard), we simulate the visual development from newborns to 25 years of age by synthesising decades of developmental vision research into an AI preprocessing pipeline (Developmental Visual Diet - DVD).
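The paper's exact schedule is in the preprint; the flavor of such a preprocessing pipeline can be sketched as a curriculum where image degradation (e.g. blur simulating infant acuity) decays as simulated age increases. The constants below are illustrative, not the paper's:

```python
import math

def blur_sigma(age_years, sigma_newborn=4.0, decay=0.35):
    """Gaussian-blur sigma simulating visual acuity at a given simulated age.
    Newborn vision is heavily blurred; acuity improves roughly exponentially."""
    return sigma_newborn * math.exp(-decay * age_years)

# A training curriculum would sample ages from 0 to 25 and degrade each
# input image with the corresponding blur before it reaches the network.
for age in (0, 1, 5, 25):
    print(f"age {age:>2}: sigma = {blur_sigma(age):.2f}")
```

The actual DVD pipeline combines several such developmental curves (acuity, color sensitivity, contrast), but each is a schedule of this shape applied at preprocessing time.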
We then test the resulting DNNs across a range of conditions, each selected because they are challenging to AI:
1. shape-texture bias
2. recognising abstract shapes embedded in complex backgrounds
3. robustness to image perturbations
4. adversarial robustness.
We report a new SOTA on shape-bias (reaching human level), outperform AI foundation models in terms of abstract shape recognition, show better alignment with human behaviour upon image degradations, and improved robustness to adversarial noise - all with this one preprocessing trick.
This is observed across all conditions tested, and generalises across training datasets and multiple model architectures.
We are excited about this because DVD may offer a resource-efficient path toward safer, perhaps more human-aligned AI vision. This work suggests that biology, neuroscience, and psychology have much to offer
/r/MachineLearning
https://redd.it/1luz9wu
I've written a post about async/await. Could someone with deep knowledge check the Python sections?
I realized a few weeks ago that many of my colleagues do not understand async/await clearly, so I wrote a blog post to present the topic a bit in depth. That said, while I've written a fair bit of Python, Python is not my main language, so I'd be glad if someone with a deep understanding of the implementation of async/await/Awaitable/coroutines in Python could double-check it.

https://yoric.github.io/post/quite-a-few-words-about-async/
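For readers who want a concrete starting point before the longer post, here is a minimal illustration of the core guarantee: `await` is a suspension point where control returns to the event loop, so concurrent tasks interleave on a single thread.

```python
import asyncio

async def worker(name, delay, log):
    log.append(f"{name} start")
    await asyncio.sleep(delay)   # suspension point: the loop runs other tasks
    log.append(f"{name} done")

async def main():
    log = []
    # Both workers run concurrently on one thread; no locks needed,
    # because interleaving only happens at await points.
    await asyncio.gather(worker("a", 0.02, log), worker("b", 0.01, log))
    return log

log = asyncio.run(main())
print(log)  # ['a start', 'b start', 'b done', 'a done']
```

Note that "b" finishes before "a" despite starting second; its shorter sleep releases the loop first, which is exactly the cooperative scheduling the post discusses.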
Thanks!
/r/Python
https://redd.it/1lv3u6i
Local labs for real-time data streaming with Python (Kafka, PySpark, PyFlink)
I'm part of the team at [Factor House](https://factorhouse.io/), and we've just open-sourced a new set of free, hands-on labs to help Python developers get into real-time data engineering. The goal is to let you build and experiment with production-inspired data pipelines (using tools like Kafka, Flink, and Spark) all on your local machine, with a strong focus on Python.
You can stop just reading about data streaming and start building it with Python today.
🔗 **GitHub Repo:** [https://github.com/factorhouse/examples/tree/main/fh-local-labs](https://github.com/factorhouse/examples/tree/main/fh-local-labs)
We wanted to make sure this was genuinely useful for the Python community, so we've added practical, Python-centric examples.
**Here's the Python-specific stuff you can dive into:**
* 🐍 **Producing & Consuming from Kafka with Python (Lab 1):** This is the foundational lab. You'll learn how to use Python clients to produce and consume Avro-encoded messages with a Schema Registry, ensuring data quality and handling schema evolution—a must-have skill for robust data pipelines.
* 🐍 **Real-time ETL with PySpark (Lab 10):** Build a complete Structured Streaming job with `PySpark`. This lab guides you through ingesting data from Kafka, deserializing Avro messages, and writing the processed data into a modern data lakehouse table using Apache Iceberg.
* 🐍 **Building Reactive Python Clients (Labs 11
/r/Python
https://redd.it/1lvbdd4
What terminal is recommended?
Hello. I'm pretty new to this and have been searching for a good terminal. What terminals would you recommend for beginners on Windows?
/r/Python
https://redd.it/1lvdaj6
Python Hackathon Backend for rapid development and Feedback-Driven shipping
Within a year, I had the opportunity to participate in hackathons organized by Mistral, OpenAI, and DeepMind in Paris. Leveraging this experience, I’ve created a Python-backend boilerplate incorporating my feedback to streamline collaboration and urgent deployments.
What it does: This open-source GitHub template provides a standardized and efficient boilerplate to expedite setup, collaboration, and rapid deployment during hackathons. The template encompasses several essential components, including the uv package manager complemented by intuitive pre-configured make commands, FastAPI for rapid and modular API development, and Pydantic for robust data validation and type checking. Additionally, it incorporates custom instructions optimized for agent tools like Cline and GitHub Copilot, enhancing both developer productivity and code comprehension. Comprehensive unit tests and a minimal Continuous Integration (CI) configuration ensure seamless integration and prevent merging faulty code into production.

Target audience: This project is specifically targeted at software developers and AI engineers, particularly those preparing for their first hackathon experience, whether working individually or within a collaborative team environment. It caters to developers looking for rapid prototyping and deployment solutions, emphasizing efficiency, maintainability, and robustness under tight deadlines and challenging operational conditions.
Comparison: While numerous boilerplate templates and rapid-deployment solutions exist in the Python ecosystem, this template distinguishes itself by specifically addressing the common
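I haven't copied the template's actual Makefile, but the uv-plus-make setup it describes typically looks something like this (target names and module paths are illustrative):

```makefile
install:   ## create the venv and install dependencies with uv
	uv sync

run:       ## start the FastAPI app with hot reload
	uv run uvicorn app.main:app --reload

test:      ## run the unit test suite
	uv run pytest

lint:      ## static checks
	uv run ruff check .
```

The appeal under hackathon time pressure is that every teammate runs the same two or three memorized commands regardless of how the environment is wired underneath.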
/r/Python
https://redd.it/1lvfbxl
Problems with rabbitmq and pika
Hi everyone, I am creating a microservice in Flask. I need this microservice to connect as a consumer to a simple RabbitMQ queue. The message is sent correctly, but the consumer does not print anything. If the app is rebuilt by Flask (after an edit), it prints the body of the last message correctly. I don't know what the problem is.
# app.py

```python
from flask import Flask
import threading
from components.message_listener import consume
from controllers.data_processor_rest_controller import measurements_bp
from repositories.pollution_measurement_repository import PollutionMeasurementsRepository
from services.measurement_to_datamap_converter_service import periodic_task
import os

app = Flask(__name__)
PollutionMeasurementsRepository()

def config_amqp():
    threading.Thread(target=consume, daemon=True).start()

if __name__ == "__main__":
    config_amqp()
    app.register_blueprint(measurements_bp)
    app.run(host="0.0.0.0", port=8080)
```

# message_listener.py

```python
import pika
import time

def callback(ch, method, properties, body):
```
/r/flask
https://redd.it/1lve5rn
D Trains a human activity or habit classifier, then concludes "human cognition captured." What could go wrong?
A screenshot of the article's title as published in Nature. It reads: "A foundation model to predict and capture human cognition"
The fine-tuning dataset, from the paper: "trial-by-trial data from more than 60,000 participants performing in excess of 10,000,000 choices in 160 experiments."
An influential author in the author list is clearly trolling. It is rare to see an article conclusion that is about anticipating an attack from other researchers. They write "This could lead to an 'attack of the killer bees', in which researchers in more-conventional fields would fiercely critique or reject the new model to defend their established approaches."
What are the ML community's thoughts on this?
/r/MachineLearning
https://redd.it/1lvcs2k
Best options for deploying Flask app for a non-techie
I have just built a Flask app on my home desktop. It uses a MySQL database and integrates with a payment widget that uses webhooks as part of its payment confirmation. Other than this it is fairly straightforward: some pandas, some form data collection.
In terms of hosting, I need it to be on all the time, but I anticipate it will not have heavy traffic, nor will the space requirement be particularly large. I would like to integrate it into my existing website, i.e. access the app via my existing website's URL.
Some cost to host is fine, but low is better, particularly given low usage and space requirements.
I am not particularly technical, so ease of deployment is quite important for me.
Please could you suggest some possible services / strategies I could employ to deploy this.
TIA
/r/flask
https://redd.it/1luxr0d
How to delete saved sessions if I'm using flask-session with sqlalchemy?
I'm currently using flask-session with SQLAlchemy, and I'd like to delete all the sessions stored in my database when a user sends a specific request to an endpoint on my server. I thought I could use session.clear() for that, but it's not working.

This is my repo if you want to see it
/r/flask
https://redd.it/1lux7t4
Free-threaded (multicore, parallel) Python will be fully supported starting Python 3.14!
Python has had experimental support for a free-threaded (GIL-free) interpreter since 3.13.
Starting in Python 3.14, it will be fully supported as non-experimental: https://docs.python.org/3.14/whatsnew/3.14.html#whatsnew314-pep779
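Existing threading code needs no changes to benefit; the difference is that on a free-threaded build, CPU-bound threads like the ones below can actually run in parallel on multiple cores instead of interleaving under the GIL. A minimal sketch:

```python
import threading

def partial_sum(start, stop, out, idx):
    """CPU-bound work: sum of squares over a slice of the range."""
    out[idx] = sum(i * i for i in range(start, stop))

# Split 0..1_000_000 across four threads.
results = [0] * 4
threads = [
    threading.Thread(target=partial_sum,
                     args=(i * 250_000, (i + 1) * 250_000, results, i))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(results)
print(total == sum(i * i for i in range(1_000_000)))  # True
```

On a GIL build this runs correctly but no faster than one thread; on a free-threaded build the four workers can occupy four cores. (Python 3.13+ exposes `sys._is_gil_enabled()` to check which build you are on.)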
/r/Python
https://redd.it/1lvjrgj
Devs in Johannesburg
Hi All,
The company I work for is hiring devs in Johannesburg South Africa.
Specifically a Senior Developer and Jnr Developer who can be in office in Johannesburg.
But we are struggling to find good hires. Does anyone know where to find the best Django devs in Joburg?
Cheers!
/r/django
https://redd.it/1lvkmxb