Tired of static reports? I built a CLI War Room for live C2 tracking.
Hi everyone! 👋
I work in cybersecurity, and I've always been frustrated by static malware analysis reports. They tell you a file is malicious, but they don't give you the "live" feeling of the attack.
So, I spent the last few weeks building ZeroScout. It’s an open-source CLI tool that acts as a Cyber Defense HQ right in your terminal.
🎥 What does it actually do?
Instead of just scanning a file, it:
1. Live War Room: Extracts C2 IPs and simulates the network traffic on an ASCII World Map in real-time.
2. Genetic Attribution: Uses ImpHash and code analysis to identify the APT Group (e.g., Lazarus, APT28) even if the file is a 0-day.
3. Auto-Defense: It automatically writes **YARA** and **SIGMA** rules for you based on the analysis.
4. Hybrid Engine: Works offline (Local Heuristics) or online (Cloud Sandbox integration).
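To illustrate the Auto-Defense idea: generating a minimal YARA rule from extracted IOCs can be as simple as string templating. This is an illustrative sketch only (not ZeroScout's actual generator; the rule name and IOCs are made up):

```python
def make_yara_rule(name: str, iocs: list[str]) -> str:
    """Render a minimal YARA rule from extracted IOC strings.
    Illustrative sketch only -- not ZeroScout's actual code."""
    lines = [f"rule {name}", "{", "    strings:"]
    for i, ioc in enumerate(iocs):
        lines.append(f'        $ioc{i} = "{ioc}"')
    lines += ["    condition:", "        any of ($ioc*)", "}"]
    return "\n".join(lines)

print(make_yara_rule("Suspected_C2", ["evil-c2.example.com", "185.220.0.1"]))
```

Real generators also emit metadata (hashes, attribution) and escape strings properly, but the core is just templating over the analysis results.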
📺 Demo Video: https://youtu.be/P-MemgcX8g8
💻 Source Code:
It's fully open-source (MIT License). I’d love to hear your feedback or feature requests!
👉 **GitHub:** https://github.com/SUmidcyber/ZeroScout
If you find it useful, a ⭐ on GitHub would mean the world to me!
Thanks for checking it out.
/r/Python
https://redd.it/1pbcpbj
Learning AI/ML as a CS Student
Hello there!
I'm curious about how AI works under the hood, and that curiosity drives me to learn AI/ML.
As I researched the topic I found various roadmaps, but they overwhelmed me. Some say learn XYZ, some say ABC, and the list goes on.
But there were some things common to all of them:
1. Python
2. pandas
3. NumPy
4. Matplotlib
5. seaborn
After that, they diverge.
I've started the journey and have Python, pandas, and NumPy almost done, but now I'm confused 😵 about what to learn next.
Please guide me on what I should actually learn.
I've seen lots of working professionals and experienced developers here; I hope you guys will help 😃
/r/Python
https://redd.it/1pbam48
Mp4-To-Srv3 - Convert Video Into Colored-Braille Subtitles For YouTube
### What My Project Does
This project converts an MP4 video, or a single PNG image (useful for thumbnails), into YouTube's internal SRV3 subtitle format, rendering frames as colored braille characters.
It optionally takes an SRT file and uses it to overlay subtitles onto the generated output.
For each braille character, the converter selects up to 8 subpixels to approximate brightness and assigns an average 12-bit color. This is not color-optimal but is very fast.
For better color control you can stack up to 8 layers per frame; colors are grouped by brightness and file size grows accordingly.
Resolutions up to 84 rows are supported (portrait mode required above 63 rows). Higher resolutions reduce FPS quadratically, so the tool applies motion blur to maintain motion perception when frames are skipped.
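The subpixel-to-braille step follows the standard Unicode braille dot layout (block U+2800), where each of the 8 dots in a 2×4 cell maps to one bit of the codepoint. A minimal sketch of that mapping (the project's actual thresholding and color handling will differ):

```python
# Standard Unicode braille dot-to-bit layout for a 2x4 cell:
# dots 1-3 are column 0 rows 0-2, dots 4-6 are column 1 rows 0-2,
# dots 7-8 are row 3 of columns 0 and 1.
BRAILLE_BITS = {
    (0, 0): 0x01, (0, 1): 0x02, (0, 2): 0x04, (0, 3): 0x40,
    (1, 0): 0x08, (1, 1): 0x10, (1, 2): 0x20, (1, 3): 0x80,
}

def braille_char(block, threshold=128):
    """block: 4 rows x 2 cols of grayscale values (0-255).
    Raise a dot for every subpixel brighter than the threshold."""
    code = 0x2800  # base of the Braille Patterns block
    for row in range(4):
        for col in range(2):
            if block[row][col] > threshold:
                code |= BRAILLE_BITS[(col, row)]
    return chr(code)
```

For example, an all-bright block yields ⣿ (U+28FF) and an all-dark block the blank braille cell U+2800.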
Demo video cycling weekly through multiple examples (Never Gonna Give You Up, Bad Apple, Plants vs. Zombies, Minecraft and Geometry Dash): https://youtu.be/XtfY7RMEPIg
(files: https://github.com/nineteendo/yt-editor-public)
Source code: https://github.com/nineteendo/Mp4-To-Srv3
(Fork of https://github.com/Nachtwind1/Mp4-To-Srt)
### Target Audience
- Anyone experimenting with ASCII/Unicode rendering or nonstandard video encodings
- Hobbyists interested in creative visualizations or color quantization experiments
- Not intended for production encoding, mainly an experimental and creative tool
### Comparison
Compared to the original Mp4-To-Srt:
- Outputs full SRV3 with colored braille rendering
- Supports layered color control, motion blur,
/r/Python
https://redd.it/1pblbal
Show & Tell: Python lib to track logging costs by file:line (find expensive statements in production)
What My Project Does
LogCost is a small Python library + CLI that shows which specific logging calls in your code (file:line) generate the most log data and cost.
It:
- wraps the standard logging module (and optionally print)
- aggregates per call site: {file, line, level, message_template, count, bytes}
- estimates cost for GCP/AWS/Azure based on current pricing
- exports JSON you can analyze via a CLI (no raw log payloads stored)
- works with logging.getLogger() in plain apps, Django, Flask, FastAPI, etc.
The main question it tries to answer is:
“for this Python service, which log statements are actually burning most of the logging budget?”
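The per-call-site aggregation can be sketched with a custom `logging.Handler` (an illustrative sketch only, not LogCost's actual implementation):

```python
import logging
from collections import defaultdict

class CallSiteAggregator(logging.Handler):
    """Count records and emitted bytes per (file, line, level, template).
    Illustrative sketch -- not LogCost's actual code."""

    def __init__(self) -> None:
        super().__init__()
        self.stats = defaultdict(lambda: {"count": 0, "bytes": 0})

    def emit(self, record: logging.LogRecord) -> None:
        # record.msg is the raw template ("user %s logged in"),
        # record.getMessage() is the formatted payload that gets shipped.
        key = (record.pathname, record.lineno, record.levelname, record.msg)
        site = self.stats[key]
        site["count"] += 1
        site["bytes"] += len(record.getMessage().encode("utf-8"))

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
agg = CallSiteAggregator()
log.addHandler(agg)
```

Keying on the template rather than the rendered message is what lets a single noisy statement surface even when its arguments vary on every call.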
Repo (MIT): [https://github.com/ubermorgenland/LogCost](https://github.com/ubermorgenland/LogCost)
———
Target Audience
Python developers running services in production (APIs, workers, web apps) where cloud logging cost is non‑trivial.
People in small teams/startups who both:
- write the Python code, and
- feel the CloudWatch / GCP Logging bill.
Platform/SRE/DevOps engineers supporting Python apps who get asked “why are logs so expensive?” and need a more concrete answer than “this log group is big”.
It’s intended for real production use (we run it on live services), not just a toy, but you can also point it at local/dev traffic to get a feel for your log patterns.
———
Comparison (How it
/r/Python
https://redd.it/1pblatk
AI Agent from scratch: Django + Ollama + Pydantic AI - A Step-by-Step Guide
Hey-up Reddit. I’m excited to share my latest project with you, a detailed, step-by-step guide on building a basic AI agent using Django, Ollama, and Pydantic AI.
I’ve broken down the entire process, making it accessible even if you’re just starting with Python. In the first part I'll show you how to:
- Set up a Django project with Django Ninja for rapid API development.
- Integrate your local Ollama engine.
- Use Pydantic AI to manage your agent’s context and tool calls.
- Build a functional AI agent in just a few lines of code!
This is a great starting point for anyone wanting to experiment with local LLMs and build their own AI agents from scratch.
Read the full article **here**.
In the next part I'll be diving into memory management – giving your agent the ability to remember past conversations and interactions.
Looking forward to your comments!
/r/django
https://redd.it/1pbbjdw
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1pbt718
Want to ship a native-like launcher for your Python app? Meet PyAppExec
Hi all
I'm the developer of PyAppExec, a lightweight cross-platform bootstrapper/launcher that helps you distribute Python desktop applications almost like native executables, without freezing them with PyInstaller / cx_Freeze / Nuitka. Those are great tools for many use cases, but sometimes you need another approach.
# What My Project Does
Instead of packaging a full Python runtime and dependencies into a big bundled executable, PyAppExec automatically sets up the environment (and any third-party tools if needed) on first launch, keeps your actual Python sources untouched, and then runs your entry script directly.
PyAppExec consists of two components: an installer and a bootstrapper.
The installer scans your Python project, detects the entry point (supports various layouts such as src/-based or flat modules), generates a .ini config, and copies the launcher (CLI or GUI) into place.
🎥 Short demo GIF:
https://github.com/hyperfield/pyappexec/blob/v0.4.0/resources/screenshots/pyappexec.gif
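The first-launch bootstrap idea can be sketched in a few lines of stdlib Python (illustrative only; PyAppExec's actual installer does considerably more, e.g. entry-point detection and config generation):

```python
import os
import subprocess
import venv
from pathlib import Path

def ensure_env(app_dir: Path) -> Path:
    """On first launch, create a virtualenv next to the app and install
    requirements.txt into it if present. Sketch only."""
    env_dir = app_dir / ".venv"
    bindir = "Scripts" if os.name == "nt" else "bin"
    exe = "python.exe" if os.name == "nt" else "python"
    python = env_dir / bindir / exe
    if not python.exists():
        req = app_dir / "requirements.txt"
        venv.create(env_dir, with_pip=req.exists())
        if req.exists():
            subprocess.check_call([str(python), "-m", "pip", "install", "-r", str(req)])
    return python

def run_app(app_dir: Path, entry: str) -> int:
    """First launch sets up the environment; later launches just run."""
    python = ensure_env(app_dir)
    return subprocess.call([str(python), str(app_dir / entry)])
```

Because the sources stay untouched on disk, updating the app is just replacing the scripts; only dependency changes trigger any install work.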
# Target Audience
PyAppExec is intended for developers who want to distribute Python desktop applications to end-users without requiring them to provision Python and third-party environments manually, but also without freezing the app into a large binary.
Ideal use cases:
- Lightweight distribution requirements (small downloads)
- Deploying Python apps to non-technical users
- Tools that depend on external binaries
- Apps that update frequently and need fast iteration
# Comparison With Alternatives
Freezing tools
/r/Python
https://redd.it/1pbp679
i built a key-value DB in python with a small tcp server
Hello everyone! I'm a CS student currently studying databases, and to practice I tried implementing a simple key-value DB in Python, with a TCP server that supports multiple clients. (I'm a Redis fan.)
My goal isn't performance but understanding the internal mechanisms (command parsing, concurrency, persistence, etc.).
At the moment it only supports lists and hashes, but I'd like to add more data structures.
I also implemented a system that saves the data to an external file every 30 seconds, and I'd like to optimize it.
If anyone wants to take a look, leave some feedback, or even contribute, I'd really appreciate it 🙌
the repo is:
https://github.com/edoromanodev/photondb
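For anyone curious what the core of such a server looks like, here is a simplified stdlib-only sketch of the idea (not PhotonDB's actual code): a command dispatcher plus a threaded line-based TCP handler.

```python
import socketserver
import threading

DB = {}
LOCK = threading.Lock()  # serialize access across client threads

def execute(parts):
    """Minimal Redis-like command dispatch (illustrative only)."""
    cmd = parts[0].upper()
    if cmd == "SET" and len(parts) == 3:
        with LOCK:
            DB[parts[1]] = parts[2]
        return "OK"
    if cmd == "GET" and len(parts) == 2:
        with LOCK:
            return DB.get(parts[1], "(nil)")
    return "ERR unknown command"

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # one line per command: "SET key value" / "GET key"
        for line in self.rfile:
            parts = line.decode().split()
            if parts:
                self.wfile.write((execute(parts) + "\n").encode())

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("127.0.0.1", 6380), Handler) as srv:
        srv.serve_forever()
```

Keeping parsing (`execute`) separate from transport (`Handler`) makes it easy to unit-test commands without opening sockets, and to add new data structures later.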
/r/Python
https://redd.it/1pbpb8w
Introducing NetSnap - Linux net/route/neigh cfg & stats -> python without hardcoded kernel constants
What the project does: NetSnap generates python objects or JSON stdout of everything to do with networking setup and stats, routes, rules and neighbor/mdb info.
Target Audience: Those needing a stable, cross-distro, cross-kernel way to get everything to do with kernel networking setup and operations, that uses the runtime kernel as the single source of truth for all major constants -- no duplication as hardcoded numbers in python code.
Announcing a comprehensive, maintainable open-source python programming package for pulling nearly all details of Linux networking into reliable and broadly usable form as objects or JSON stdout.
Link here: https://github.com/hcoin/netsnap
From configuration to statistics, NetSnap uses the fastest available APIs: RTNetlink and Generic Netlink. NetSnap can function either standalone, generating JSON output, or as a library providing Python 3.8+ objects. It provides deep visibility into network interfaces, routing tables, neighbor tables, multicast databases, and routing rules through direct kernel communication via CFFI. It is more maintainable than alternatives because it avoids any hard-coded duplication of numeric constants; this improves portability across distros and kernel releases, since the kernel running on each system is the single source of truth for all symbolic definitions.
In use cases where network configuration changes happen every second or
/r/Python
https://redd.it/1pbkih2
I created a open-source visual editable wiki for your codebase
Repo: https://github.com/davialabs/davia
What My Project Does
Davia is an open-source tool designed for AI coding agents to generate interactive internal documentation for your codebase. When your AI coding agent uses Davia, it writes documentation files locally with interactive visualizations and editable whiteboards that you can edit in a Notion-like platform or locally in your IDE.
Target Audience
Davia is for engineering teams and AI developers working in large or evolving codebases who want documentation that stays accurate over time. It turns AI agent reasoning and code changes into persistent, interactive technical knowledge.
It's still an early project, and I'd love to hear your feedback!
/r/Python
https://redd.it/1pbugj3
I built an open-source AI governance framework for Python — looking for feedback
I've been working on Ranex, a runtime governance framework for Python apps that use AI coding assistants (Copilot, Claude, Cursor, etc).
The problem I'm solving: AI-generated code is fast but often introduces security issues, breaks architecture rules, or skips validation. Ranex adds guardrails at runtime — contract enforcement, state machine validation, security scanning, and architecture checks.
It's built with a Rust core for performance (sub-100ns validation) and integrates with FastAPI.
What it does:
- Runtime contract enforcement via `@Contract` decorator
- Security scanning (SAST, dependency vulnerabilities)
- State machine validation
- Architecture enforcement
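For readers unfamiliar with runtime contracts, here is a rough idea of what a contract decorator can look like. This is a generic sketch, not Ranex's actual `@Contract` API:

```python
import functools

def contract(pre=None, post=None):
    """Hypothetical contract decorator: check a precondition on the
    arguments and a postcondition on the return value at call time."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ValueError(f"precondition failed for {fn.__name__}")
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise ValueError(f"postcondition failed for {fn.__name__}")
            return result
        return wrapper
    return deco

@contract(pre=lambda n: n >= 0, post=lambda r: r >= 1)
def factorial(n: int) -> int:
    return 1 if n == 0 else n * factorial(n - 1)
```

The point of enforcing this at runtime (rather than only at review time) is that AI-generated callers violating the contract fail loudly instead of silently corrupting state.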
GitHub: https://github.com/anthonykewl20/ranex-framework
I'm looking for honest feedback from Python developers. What's missing? What's confusing? Would you actually use this?
/r/Python
https://redd.it/1pc36m8
teams bot integration for user specific notification alerts
Hi everyone,
I’m working on a small POC at my company and could really use some advice from people who’ve worked with Microsoft Teams integrations recently.
Our stack is Java (backend) + React (frontend).
Users on our platform receive alerts/notifications, and I’ve been asked to build a POC that sends each user a daily message through: Email, Microsoft Teams
The message is something simple like:
“Hey {user}, you have X unseen alerts on our platform. Please log in to review them.” No conversations, no replies, no chat logic. just a one-time, user-specific daily notification.
Since this message is per user and not a broadcast, I’m trying to figure out the cleanest and most future-proof approach for Teams.
Looking for suggestions from anyone who’s done this before:
- What approach worked best for user-specific messages?
- Is using the Microsoft Graph API enough for this use case?
- Any issues with permissions, throttling, app-only auth, or Teams quirks?
- Any docs, examples, or blogs you’d recommend?
Basically, the entire job of this integration is to notify the user once per day on Teams that they have X unseen alerts on our platform. The suggestions I've been getting so far are to use Python.
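For reference, the rough shape I'm considering: build the message, then POST it to Graph's `/chats/{id}/messages` endpoint (a real Graph v1.0 route). Token acquisition and chat-ID lookup are omitted here, and note that sending chat messages normally needs delegated permissions like ChatMessage.Send rather than app-only auth. Untested sketch:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_alert_message(user: str, count: int) -> str:
    return (f"Hey {user}, you have {count} unseen alerts on our platform. "
            "Please log in to review them.")

def send_chat_message(token: str, chat_id: str, text: str) -> None:
    """POST the text to an existing 1:1 chat via Microsoft Graph."""
    body = json.dumps({"body": {"content": text}}).encode()
    req = urllib.request.Request(
        f"{GRAPH}/chats/{chat_id}/messages",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```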
Any help or direction would be really appreciated. Thanks!
/r/Python
https://redd.it/1pc3qiu
I spent 2 years building a dead-simple Dependency Injection package for Python
Hello everyone,
I'm making this post to share a package I've been working on for a while: `python-injection`. I already wrote a post about it a few months ago, but since I've made significant improvements, I think it's worth writing a new one with more details and some examples to get you interested in trying it out.
For context, when I truly understood the value of dependency injection a few years ago, I really wanted to use it in almost all of my projects. The problem you encounter pretty quickly is that it's really complicated to know where to instantiate dependencies with the right sub-dependencies, and how to manage their lifecycles. You might also want to vary dependencies based on an execution profile. In short, all these little things may seem trivial, but if you've ever tried to manage them without a package, you've probably realized it was a nightmare.
I started by looking at existing popular packages to handle this problem, but honestly none of them convinced me. Either they weren't simple enough for my taste, or they required way too much configuration. That's why I started writing my own DI package.
I've been developing it alone for about 2 years now, and
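The lifecycle and profile problems described above can be made concrete with a toy hand-rolled container. This is NOT python-injection's API (see the repo for the real interface); it only illustrates what such a package has to manage for you: cached singleton lifecycles, factories that resolve sub-dependencies, and profile-based overrides.

```python
# Toy DI container (hypothetical, for illustration only).

class Container:
    def __init__(self, profile="default"):
        self.profile = profile
        self._factories = {}   # (key, profile) -> factory
        self._singletons = {}  # key -> cached instance

    def register(self, key, factory, profile="default"):
        self._factories[(key, profile)] = factory

    def resolve(self, key):
        if key in self._singletons:
            return self._singletons[key]
        # Prefer the active profile, fall back to the default registration
        factory = (self._factories.get((key, self.profile))
                   or self._factories[(key, "default")])
        instance = factory(self)  # the factory may resolve sub-dependencies
        self._singletons[key] = instance
        return instance

c = Container(profile="test")
c.register("db", lambda c: "postgres://prod")
c.register("db", lambda c: "sqlite://memory", profile="test")
c.register("repo", lambda c: ("repo", c.resolve("db")))
```

Even this tiny version shows why it gets hairy fast: scopes other than singleton, async factories, and teardown are all missing, which is exactly the configuration burden a DI package should absorb.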
/r/Python
https://redd.it/1pc77j2
PyImageCUDA - GPU-accelerated image compositing for Python
## What My Project Does
PyImageCUDA is a lightweight (~1MB) library for GPU-accelerated image composition. Unlike OpenCV (computer vision) or Pillow (CPU-only), it fills the gap for high-performance design workflows.
10-400x speedups for GPU-friendly operations with a Pythonic API.
## Target Audience
- Generative Art - Render thousands of variations in seconds
- Video Processing - Real-time frame manipulation
- Data Augmentation - Batch transformations for ML
- Tool Development - Backend for image editors
- Game Development - Procedural asset generation
## Why I Built This
I wanted to learn CUDA from scratch. This evolved into the core engine for a parametric node-based image editor I'm building (release coming soon!).
The gap: CuPy/OpenCV lack design primitives. Pillow is CPU-only and slow. Existing solutions require CUDA Toolkit or lack composition features.
The solution: "Pillow on steroids" - render drop shadows, gradients, blend modes... without writing raw kernels. Zero heavy dependencies (just pip install), design-first API, smart memory management.
## Key Features
✅ Zero Setup - No CUDA Toolkit/Visual Studio, just standard NVIDIA drivers
✅ 1MB Library - Ultra-lightweight
✅ Float32 Precision - Prevents color banding
✅ Smart Memory - Reuse buffers, resize without reallocation
✅ NumPy Integration - Works with OpenCV, Pillow, Matplotlib
✅ Rich Features - 40+ operations
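The float32 point is worth unpacking: if every blend step rounds to 8-bit, the rounding error accumulates and shows up as visible banding in gradients, so compositing pipelines stay in floating point and quantize once at the end. A pure-Python sketch of that idea (not PyImageCUDA's API) for a simple "over" blend:

```python
# Hypothetical sketch: keep channel values in [0.0, 1.0] floats through the
# whole pipeline, and quantize to 8-bit exactly once at the end.

def over(fg, bg, alpha):
    # fg, bg: flat lists of channel values in [0.0, 1.0]; alpha: fg opacity
    return [f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg)]

def to_u8(pixels):
    # Single, final quantization step
    return [round(max(0.0, min(1.0, p)) * 255) for p in pixels]
```

A GPU library like this applies the same arithmetic per pixel across millions of pixels in parallel, which is where the claimed speedups come from.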
/r/Python
https://redd.it/1pcce1w
I pushed my first Django enterprise boilerplate (Kanban, calendar, 2FA, Celery, Docker, Channels) – would love feedback
Hey everyone,
I’ve just put my first public repo on GitHub and would really appreciate some honest feedback.
It’s a Django Enterprise Boilerplate meant to give a solid starting point for apps that need auth, background tasks, and a usable UI out of the box. I built this using Antigravity.
Key features:
Kanban board – drag-and-drop task management with dynamic updates
Calendar – month and day views with basic event scheduling
Authentication – login, registration, and 2FA (TOTP) wired in
UI – Tailwind CSS + Alpine.js with a Geist-inspired layout
Async-ready – Celery, Redis, and WebSockets (Daphne) pre-configured
Dockerized – comes with Docker Compose for local setup
Tech stack: Django 5, Python 3.11, PostgreSQL, Redis, Tailwind, Alpine.js
Repo:
https://github.com/antonybenhur/Django-Alpine-BoilerPlate
I’ll be adding more features soon, including Google sign-in, Stripe payment integration, and other quality-of-life improvements as I go.
Thanks in advance.
https://preview.redd.it/7ktgs55e0r4g1.png?width=1024&format=png&auto=webp&s=3f65a00d66dcc1f6cc4513545ce3e1e64ce06585
https://preview.redd.it/vh70jwae0r4g1.png?width=2221&format=png&auto=webp&s=9f722a734278b07dad60f8d74f233b4edbde4ade
https://preview.redd.it/5xrhk3he0r4g1.png?width=2377&format=png&auto=webp&s=d7bf8f3f2c58890b8ab513ad7ccdf9791bd369ee
https://preview.redd.it/olbkc55e0r4g1.png?width=2440&format=png&auto=webp&s=08ab209b7b7145ee2a9169f5ff6fa27b6ae8081f
https://preview.redd.it/4vwy665e0r4g1.png?width=2083&format=png&auto=webp&s=6c1da0cdccdcca24fd69ce36c09fed75ec28342c
https://preview.redd.it/b7lnr55e0r4g1.png?width=2089&format=png&auto=webp&s=031f5a3624d62a98bd90978554c7b63abaf0b548
https://preview.redd.it/g1ora95e0r4g1.png?width=2088&format=png&auto=webp&s=df694de063b05183deecb634e5c49ab13e39630c
https://preview.redd.it/2fby465e0r4g1.png?width=2230&format=png&auto=webp&s=bdcb593bf36778c311deee73edf290931032aacb
https://preview.redd.it/ay5sq55e0r4g1.png?width=1915&format=png&auto=webp&s=ac939150f885e118aefa7dfdb185290c2c84f81d
/r/django
https://redd.it/1pc2x47
Trending Django projects in November
https://django.wtf/trending/?trending=30
/r/django
https://redd.it/1pccjkr
Structure Large Python Projects for Maintainability
I'm scaling a Python project from "works for me" to "multiple people need to work on this," and I'm realizing my structure isn't great.
Current situation:
I have one main directory with 50+ modules. No clear separation of concerns. Tests are scattered. Imports are a mess. It works, but it's hard to navigate and modify.
Questions I have:
What's a good folder structure for a medium-sized Python project (5K-20K lines)?
How do you organize code by domain vs by layer (models, services, utils)?
How strict should you be about import rules (no circular imports, etc.)?
When should you split code into separate packages?
What does a good test directory structure look like?
How do you handle configuration and environment-specific settings?
What I'm trying to achieve:
Make it easy for new developers to understand the codebase
Prevent coupling between different parts
Make testing straightforward
Reduce merge conflicts when multiple people work on it
Do you follow a specific pattern, or make your own rules?
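One common answer to the layout questions above is a "src layout" organized by domain rather than by layer, with the test tree mirroring the package tree. A sketch (names are illustrative; conventions vary by team):

```
myproject/
├── pyproject.toml         # packaging + tool config in one place
├── src/
│   └── myproject/
│       ├── __init__.py
│       ├── config.py      # settings loaded from environment variables
│       ├── orders/        # grouped by domain, not by layer
│       │   ├── models.py
│       │   ├── service.py
│       │   └── repository.py
│       └── billing/
│           ├── models.py
│           └── service.py
└── tests/
    ├── orders/
    │   └── test_service.py
    └── billing/
        └── test_service.py
```

The src layout forces tests to run against the installed package rather than the working directory, and the domain grouping keeps unrelated features from touching the same files, which directly reduces merge conflicts.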
/r/Python
https://redd.it/1pccbk4
Django security releases issued: 5.2.9, 5.1.15, and 4.2.27
https://www.djangoproject.com/weblog/2025/dec/02/security-releases/
/r/django
https://redd.it/1pcaamb
Python-native mocking of realistic datasets by defining schemas for prototyping, testing, and demos
https://github.com/DavidTorpey/datamock
What my project does:
This is a piece of work I developed recently that I've found quite useful. I decided to neaten it up and release it in case anyone else finds it useful.
It's useful when trying to mock structured data during development, for things like prototyping or testing. The declarative, schema-based approach feels Pythonic and intuitive (to me at least!).
I may add more features if there's interest.
Target audience: Simple toy project I've decided to release
Comparison:
Hypothesis and Faker are the closest things available in Python. However, Hypothesis is closely coupled with testing rather than generic data generation, and Faker is focused on generating individual instances, whereas datamock allows grouping of fields, making it easier to express and generate data for more complex types. Datamock, in fact, utilises Faker under the hood for some of the field data generation.
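To make the declarative-schema idea concrete without pinning down datamock's real interface (see the linked repo for that), here is a hypothetical sketch of the style: a schema maps field names to generator callables, and datasets are produced reproducibly from a seed.

```python
# Hypothetical illustration of schema-driven mocking (NOT datamock's API).
import random

def make_dataset(schema, n, seed=0):
    rng = random.Random(seed)  # seeded for reproducible datasets
    return [{name: gen(rng) for name, gen in schema.items()} for _ in range(n)]

user_schema = {
    "id": lambda rng: rng.randint(1, 10_000),
    "name": lambda rng: rng.choice(["ada", "grace", "alan"]),
    "active": lambda rng: rng.random() < 0.8,
}
rows = make_dataset(user_schema, n=5, seed=42)
```

The appeal over calling Faker field by field is that the whole record shape lives in one declaration, so grouped or dependent fields can be expressed in a single place.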
/r/Python
https://redd.it/1pcrbn4