Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
AIWAF Flask: Drop in Security Middleware with AI Anomaly Detection

Just launched AIWAF Flask, a lightweight yet powerful Web Application Firewall for Flask apps. It combines classic protections like IP blocking, rate limiting, honeypot timing, header validation, and UUID-tampering checks with an AI-powered anomaly detection system. Instead of relying only on static rules, it can learn suspicious patterns from logs and dynamically adapt to new attack vectors.

The setup is dead simple: pip install aiwaf-flask, wrap your Flask app with AIWAF(app), and all seven protection layers are enabled out of the box. For finer-grained control there are decorators like aiwaf_exempt and aiwaf_only, and you can choose between CSV, database, or in-memory storage depending on your environment. For smarter defenses, installing the [ai] extra enables anomaly detection backed by NumPy and scikit-learn.
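Assuming standard pip extras syntax (the package and extra names come from the post), the two install paths would look like:

```shell
pip install aiwaf-flask          # core rule-based protections
pip install "aiwaf-flask[ai]"    # also pulls in NumPy and scikit-learn for anomaly detection
```

The quotes around the extras spec guard against shell globbing in zsh.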

AIWAF Flask also includes a CLI (aiwaf) for managing IP blacklists/whitelists, blocked keywords, training the AI model from logs, and analyzing traffic patterns. It’s designed for developers who want stronger security in Flask without a steep learning curve or heavy dependencies.

aiwaf-flask · PyPI

/r/flask
https://redd.it/1niw2ii
Update: I got tired of Django project setup, so I built a tool to automate it all

/r/django
https://redd.it/1nir37r
List of 87 Programming Ideas for Beginners (with Python implementations)

https://inventwithpython.com/blog/programming-ideas-beginners-big-book-python.html

I've compiled a list of beginner-friendly programming projects, with example implementations in Python. These projects are drawn from my free Python books, but since they only use stdio text, you can implement them in any language.

I got tired of the copy-paste "1001 projects" posts that were obviously copied from other posts or generated by AI, covering everything from "make a coin flip program" to "make an operating system". I've personally curated this list so each project is small enough for beginners; the implementations are usually under 100–200 lines of code.

/r/Python
https://redd.it/1nitzoz
Do you prefer sticking to the standard library or pulling in external packages?

I’ve been writing Python for a while and I keep running into this situation. Python’s standard library is huge and covers so much, but sometimes it feels easier (or just faster) to grab a popular external package from PyPI.

For example, I’ve seen people write entire data processing scripts with just built-in modules, while others immediately bring in pandas or requests even for simple tasks.

I’m curious how you all approach this. Do you try to keep dependencies minimal and stick to the stdlib as much as possible, or do you reach for external packages early to save development time?
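For the simple end of that spectrum, the stdlib is often genuinely enough; a small illustration of a "pandas-style" task done with builtins only:

```python
import csv
import io
import statistics

# a task people often reach for pandas to do: the mean of one CSV column
data = "name,score\nalice,10\nbob,20\ncarol,30\n"
rows = list(csv.DictReader(io.StringIO(data)))
mean_score = statistics.mean(int(r["score"]) for r in rows)
print(mean_score)  # 20
```

Once the data stops fitting this shape (joins, missing values, typed columns), pulling in pandas usually pays for itself.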

/r/Python
https://redd.it/1nj12yr
[D] How is IEEE TIP viewed in the CV/AI/ML community?

Hi everyone,

I’m a PhD student working on video research, and I recently submitted a paper to IEEE Transactions on Image Processing (TIP). After a very long review process (almost a year), it finally reached the “AQ” stage.

Now I’m curious—how do people in the community actually see TIP these days?
Some of my colleagues say it’s still one of the top journals in vision, basically right after TPAMI. Others think it’s kind of outdated and not really read much anymore.

Also, how would you compare it to the major conferences (CVPR/ICCV/ECCV, NeurIPS, ICLR, AAAI)? Is publishing in TIP seen as on par with those, or is it considered more like the “second-tier” conferences (WACV, BMVC, etc.)?

I’m close to graduation, so maybe I’m overthinking this. I know the contribution and philosophy of the work itself matters more than the venue. But I’d still love to hear how people generally view TIP these days, both in academia and in the field.

Thanks!


/r/MachineLearning
https://redd.it/1nj38ur
Flask + gspread: multiple Google Sheets API calls (20+) per scan instead of 1

I’m building a Flask web app for a Model UN conference with around 350-400 registered delegates.

* OCs (Organizing Committee members) log in.
* They scan delegate IDs (QR codes or manual input).
* The app then fetches delegate info from a Google Sheet and logs attendance in another sheet.

All delegate, OC, and attendance data is currently stored in Google Sheets

Whenever a delegate is scanned, the app seems to make many Google Sheets API calls (sometimes 20–25 for a single scan).

I already tried to:

* Cache delegates (load once from master sheet at startup).
* Cache attendance records.
* Batch writes (`append_rows` in chunks of 50).

But I still see too many API calls, and I’m worried about hitting the Google Sheets API quota limits during the event.
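If batching is working correctly, N scans should produce roughly N/50 write calls; one way to verify is to funnel every write through a single buffered helper and count flushes. A self-contained sketch (the callable here stands in for gspread's worksheet.append_rows):

```python
class AttendanceBatcher:
    """Buffer scan rows and flush them in a single append_rows-style call."""

    def __init__(self, append_rows, batch_size=50):
        self.append_rows = append_rows  # e.g. worksheet.append_rows
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0  # counts real API calls, handy for quota debugging

    def add(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.append_rows(self.buffer)
            self.flushes += 1
            self.buffer = []

# demo with a fake sheet: 120 scans should cost 3 API calls at batch_size=50
calls = []
batcher = AttendanceBatcher(calls.append, batch_size=50)
for i in range(120):
    batcher.add([f"delegate-{i}", "2024-01-01 09:00"])
batcher.flush()  # flush the remaining 20 rows at the end of the session
print(batcher.flushes)  # 3
```

If the counter stays low but the quota dashboard still shows many calls, the extra traffic is coming from reads (or the frontend), not the writes.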

After rewriting the backend I still see around 10 API calls per scan, and I'm not sure whether the backend or the frontend is responsible. I've attached an MRE of my backend and the HTML for the home page.

```python
from flask import Flask, request, render_template, redirect, url_for, session, flash
import gspread, os, json
from google.oauth2.service_account import Credentials
from datetime import datetime, timedelta
```



/r/flask
https://redd.it/1nj6d9n
Let your Python agents play an MMO: Agent-to-Agent protocol + SDK



Repo: https://github.com/Summoner-Network/summoner-agents

TL;DR: We are building Summoner, a Python SDK with a Rust server for agent-to-agent networking across machines. Early beta (v1.0).

What my project does: A protocol for live agent interaction with a desktop app to track network-wide agent state (battles, collaborations, reputation), so you can build MMO-style games, simulations, and tools.

Target audience: Students, indie devs, and small teams who want to build networked multi-agent projects, simulations, or MMO-style experiments in Python.

Comparison:

* LangChain and CrewAI are app frameworks with API specs for serving agents, not on-the-wire interop protocols;
* Google A2A is an HTTP-based spec that uses JSON-RPC by default (with optional gRPC or REST);
* MCP standardizes model-to-tool and data connections;
* Summoner targets live, persistent agent-to-agent networking for MMO-style coordination.

Status

Beta 1.0 works with example agents today. Expect sharp edges.

More

Github page: https://github.com/Summoner-Network

Docs/design notes: https://github.com/Summoner-Network/summoner-docs

Core runtime: https://github.com/Summoner-Network/summoner-core

Site: https://summoner.org

/r/Python
https://redd.it/1niqudg
Python's role in the AI infrastructure stack – sharing lessons from building production AI systems

Python's dominance in AI/ML is undeniable, but after building several production AI systems, I've learned that the language choice is just the beginning. The real challenges are in architecture, deployment, and scaling.

**Current project:** Multi-agent system processing 100k+ documents daily
**Stack:** FastAPI, Celery, Redis, PostgreSQL, Docker
**Scale:** \~50 concurrent AI workflows, 1M+ API calls/month

**What's working well:**

* **FastAPI for API development** – async support handles concurrent AI calls beautifully
* **Celery for background processing** – essential for long-running AI tasks
* **Pydantic for data validation** – catches errors before they hit expensive AI models
* **Rich ecosystem** – libraries like LangChain, Transformers, and OpenAI client make development fast

**Pain points I've encountered:**

* **Memory management** – AI models are memory-hungry, garbage collection becomes critical
* **Dependency hell** – AI libraries have complex requirements that conflict frequently
* **Performance bottlenecks** – Python's GIL becomes apparent under heavy concurrent loads
* **Deployment complexity** – managing GPU dependencies and model weights in containers

**Architecture decisions that paid off:**

1. **Async everywhere** – using asyncio for all I/O operations, including AI model calls
2. **Worker pools** – separate processes for different AI tasks to isolate failures
3. **Caching layer** – Redis for expensive AI results, dramatically improved response times
4. **Health checks** – monitoring AI model availability and fallback mechanisms
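Point 3, the caching layer, can be prototyped without Redis at all; the shape of the idea is a keyed TTL cache sitting in front of the expensive call. A hedged, in-process sketch (swapping in a Redis client is left out):

```python
import time

class TTLCache:
    """In-process stand-in for the Redis caching layer: key -> (expiry, value)."""

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.store = {}

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self.store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]           # cache hit: skip the expensive call
        value = compute()           # the expensive AI call goes here
        self.store[key] = (now + self.ttl, value)
        return value

calls = 0
def expensive():
    global calls
    calls += 1
    return "summary"

cache = TTLCache(ttl_seconds=60)
cache.get_or_compute("doc:42", expensive)
cache.get_or_compute("doc:42", expensive)  # second call served from cache
print(calls)  # 1
```

The same interface carries over to Redis: key the cache on a hash of the prompt plus model name, and pick a TTL that matches how stale an AI result is allowed to get.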

**Code patterns that emerged:**

```python
# Context manager for AI model lifecycle (completed from the truncated snippet;
# load_model / unload_model are hypothetical loaders, swap in your own)
from contextlib import asynccontextmanager

@asynccontextmanager
async def ai_model_context(model_name: str):
    model = await load_model(model_name)
    try:
        yield model
    finally:
        await unload_model(model_name)
```

/r/Python
https://redd.it/1nj7y99
Built a small PyPI Package for explainable preprocessing

I made a Python package that explains preprocessing with reports and plots

Note: This project started as a way for me to learn packaging and publishing on PyPI, but I thought it might also be useful for beginners who want not just preprocessing, but also clear reports and plots of what happened during preprocessing.


What my project does: It’s a simple ML preprocessing helper package called ml-explain-preprocess. Along with handling basic preprocessing tasks (missing values, encoding, scaling, and outliers), it also generates additional outputs to make the process more transparent:

Text reports

JSON reports

(Optional) visual plots of distributions and outliers


The idea was to make it easier for beginners not only to preprocess data but also to understand what happened during preprocessing, since I couldn’t find many libraries that provide clear reports or visualizations alongside transformations.
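To make that concrete, here is a hypothetical stdlib-only sketch of what such a JSON report might contain; this is not ml-explain-preprocess's actual API, just the idea of counting what each step touched:

```python
import json

# illustration only: NOT the package's real API
rows = [
    {"age": 34, "city": "NY"},
    {"age": None, "city": "LA"},
    {"age": 28, "city": None},
]
# per-column report of how many values each imputation step would fill
report = {
    col: {"missing": sum(1 for r in rows if r[col] is None)}
    for col in rows[0]
}
print(json.dumps(report, indent=2))
```

A report like this, written alongside the transformed data, is what lets a beginner see that preprocessing actually changed one value per column rather than silently rewriting the dataset.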

It’s nothing advanced and definitely not optimized for production-level pipelines, but it was a good exercise in learning how packaging works and how to publish to PyPI.

Target audience: beginners in ML who want preprocessing plus some transparency. Experts probably won’t find it very useful, but maybe it can help people starting out.

Comparison: To my knowledge, most existing libraries handle preprocessing well, but they don’t directly give reports/plots. This project tries to fill that gap.

/r/Python
https://redd.it/1njb946
stop wrong ai answers in your django app before they show up: one tiny middleware + grandma clinic (beginner, mit)

hi folks, last time i posted about “semantic firewalls” and it was too abstract. this is the ultra simple django version that you can paste in 5 minutes.

**what this does in one line**
instead of fixing bad llm answers after users see them, we check the payload before returning the response. if there’s no evidence, we block it politely.

**before vs after**

* before: view returns a fluent answer with zero proof, users see it, you fix later
* after: view includes small evidence, middleware checks it, only stable answers go out

below is a minimal copy-paste. it works with any provider or local model because it’s just json discipline.
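that json discipline boils down to one predicate you can unit-test before touching django at all:

```python
def evidence_ok(payload: dict) -> bool:
    # true only if the answer carries refs AND the view confirmed coverage
    return bool(payload.get("refs")) and payload.get("coverage_ok") is True

print(evidence_ok({"answer": "x", "refs": ["doc.md#L3"], "coverage_ok": True}))  # True
print(evidence_ok({"answer": "x", "refs": [], "coverage_ok": True}))             # False
print(evidence_ok({"answer": "x", "refs": ["doc.md#L3"]}))                       # False
```

note that a missing `coverage_ok` fails closed. the middleware is just this predicate wrapped around the response.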

---

### 1) middleware: block ungrounded answers

`core/middleware.py`

```python
# core/middleware.py
import json
from typing import Callable
from django.http import HttpRequest, HttpResponse, JsonResponse

class SemanticFirewall:
    """
    minimal 'evidence-first' guard for AI responses.
    contract we expect from the view:
      { "answer": "...", "refs": [...], "coverage_ok": true }
    if refs is empty or coverage_ok is false or missing, we return 422.
    """

    def __init__(self, get_response: Callable):
        self.get_response = get_response

    def __call__(self, request: HttpRequest) -> HttpResponse:
        response = self.get_response(request)
        # only inspect JSON responses; pass everything else through
        if not response.get("Content-Type", "").startswith("application/json"):
            return response
        try:
            payload = json.loads(response.content)
        except (ValueError, UnicodeDecodeError):
            return response
        # block answers that carry no refs or unconfirmed coverage
        if "answer" in payload and (
            not payload.get("refs") or not payload.get("coverage_ok")
        ):
            return JsonResponse({"blocked": "no evidence"}, status=422)
        return response
```
/r/django
https://redd.it/1nj9mmu
[R] Need model/paper/code suggestion for document template extraction

I am looking to create a document template extraction pipeline for document similarity. One important thing I need to do as part of this is create a template mask. Essentially, say I have a collection of documents which all follow a similar format (imagine a form or a report). I want to

1. extract text from the document in a structured format (OCR but more like VQA type). About this, I have looked at a few VQA models. Some are too big but I think this a straightforward task.
2. (what I need help with) I want a model that can, given a collection of documents or any one document, can generate a layout mask without the text, so a template). I have looked at Document Analysis models, but most are centered around classifying different sections of the document into tables, paragraphs, etc. I have not come across a mask generation pipeline or model.

If anyone has encountered such a pipeline before or worked on document template extraction, I would love some help or links to papers.

/r/MachineLearning
https://redd.it/1njgjdd
BS4 vs xml.etree.ElementTree



Beautiful Soup or standard library (xml.etree.ElementTree)? I am building an ETL process for extracting notes from Evernote ENML. I hear BS4 is easier but standard library performs faster. This alone makes me want to stick with the standard library. Any reason why I should reconsider?
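If speed pushes the decision toward the stdlib, ElementTree copes with ENML-shaped markup fine; a minimal sketch (simplified note body, real ENML wraps content in <en-note> with a DOCTYPE):

```python
import xml.etree.ElementTree as ET

# simplified ENML body; real notes also carry en-media and style attributes
enml = "<en-note><div>First line</div><div>Second <b>bold</b> line</div></en-note>"
root = ET.fromstring(enml)

# one text line per top-level <div>, inline tags like <b> flattened
lines = ["".join(div.itertext()) for div in root.iter("div")]
print(lines)  # ['First line', 'Second bold line']
```

Since ENML is valid XML by spec, the strictness of xml.etree.ElementTree is an advantage here; BS4 mainly earns its keep on malformed real-world HTML, which an ETL pipeline over Evernote exports shouldn't encounter.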

/r/Python
https://redd.it/1njiy79
Where's a good place to find people to talk about projects?

I'm a hobbyist programmer, dabbling in coding for like 20 years now, but never anything professional minus a three month stint. I'm trying to work on a medium sized Python project but honestly, I'm looking to work with someone who's a little bit more experienced so I can properly learn and ask questions instead of being reliant on a hallucinating chat bot.

But where would be the best place to discuss projects and look for like minded folks?

/r/Python
https://redd.it/1njo1k2
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!

# Weekly Thread: Professional Use, Jobs, and Education 🏢

Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.

---

## How it Works:

1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.

---

## Guidelines:

- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.

---

## Example Topics:

1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?

---

Let's help each other grow in our careers and education. Happy discussing! 🌟

/r/Python
https://redd.it/1njtelc