Python Daily
Daily Python News
Question, Tips and Tricks, Best Practices on Python Programming Language
Find more reddit channels over at @r_channels
django and celery logging

Hi All,

I have logging mostly figured out but...

I made an HTTPS logging server that accepts a JSON payload containing the information to be logged. It works well for the most part, but not with one of my Django/Celery apps. I use a custom handler class to log over HTTPS.

I have the Django/Celery app set to log to the console as well as to the HTTPS server. Every log line that appears on the console is making it to the HTTPS server and into the DB.

I am using the standard Python logging module, with the LOGGING dict in settings.py; it is a pretty standard setup.

The problem: incomplete data is logged from background services.

I have a custom formatter; essentially I build JSON, and the message field is the msg that comes from the log record.

```json
{"module": "/usr/local/lib/python3.12/site-packages/celery/app/trace.py", "logName": "celery.app.trace", "message": "Task %(name)s[%(id)s] succeeded in %(runtime)ss: %(return_value)s", "function": "info"}
```

You can see the %(...)s placeholders that never get filled in with data. Not sure how to even google for ideas on this one.
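
What is landing in the `message` field looks like `record.msg`, the unformatted template, before the logging machinery merges in `record.args`; a formatter has to call `record.getMessage()` to get the filled-in text. A minimal sketch of the idea (the formatter class and JSON field names are assumptions, not your actual code):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Sketch only: emit a JSON payload per log record."""
    def format(self, record):
        return json.dumps({
            "module": record.pathname,
            "logName": record.name,
            # getMessage() merges record.args into the %-placeholders;
            # record.msg alone is the raw template you are seeing.
            "message": record.getMessage(),
            "function": record.funcName,
        })

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Task %s[%s] succeeded in %ss: %s", "add", "abc123", 0.5, 8)
```

If your custom HTTPS handler serializes `record.msg` (or `record.__dict__["msg"]`) directly, that would explain why console output (which goes through a normal `Formatter`) looks fine while the HTTPS payload does not.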

/r/djangolearning
https://redd.it/1mgd2o8
D What’s the realistic future of Spiking Neural Networks (SNNs)? Curious to hear your thoughts

I’ve been diving into the world of Spiking Neural Networks (SNNs) lately and I’m both fascinated and a bit puzzled by their current and future potential.

From what I understand, SNNs are biologically inspired, more energy-efficient, and capable of processing information in a temporally dynamic way.

That being said, they seem quite far from being able to compete with traditional ANN-based models (like Transformers) in terms of scalability, training methods, and general-purpose applications.

# So I wanted to ask:

* Do you believe SNNs have a practical future beyond niche applications?
* Can you see them being used in real-world products (outside academia or defense)?
* Is it worth learning and building with them today, if I want to be early in something big?
* Have you seen any recent papers or startups doing something truly promising with SNNs?

Would love to hear your insights, whether you’re deep in neuromorphic computing or just casually watching the space.

Thanks in advance!

/r/MachineLearning
https://redd.it/1mgocly
A lightweight and framework-agnostic Python library to handle social login with OAuth2

Hey everyone! 👋

I just open-sourced a Python package I had been using internally in multiple projects, and I thought it could be useful for others too.

SimpleSocialAuthLib is a small, framework-agnostic library designed to simplify social authentication in Python. It helps you handle the OAuth2 flow and retrieve user data from popular social platforms, without being tied to any specific web framework.

### Why use it?

- Framework-Agnostic: Works with any Python web stack — FastAPI, Django, Flask, etc.
- Simplicity: Clean and intuitive API to deal with social login flows.
- Flexibility: Consistent interface across all providers.
- Type Safety: Uses Python type hints for better dev experience.
- Extensibility: Easily add custom providers by subclassing the base.
- Security: Includes CSRF protection with state parameter verification.
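
The state-parameter CSRF check mentioned in the last point can be sketched in framework-free Python (function names here are made up for illustration, not this library's actual API):

```python
import hmac
import secrets

def new_state() -> str:
    """Generate an unguessable state value; store it in the user's session
    before redirecting to the provider's authorization URL."""
    return secrets.token_urlsafe(32)

def verify_state(session_state: str, callback_state: str) -> bool:
    """Constant-time comparison of the state echoed back on the callback."""
    return hmac.compare_digest(session_state, callback_state)

state = new_state()
print(verify_state(state, state))  # True
```

If the value on the callback does not match the one in the session, the login attempt is rejected, which blocks forged authorization responses.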

### Supported providers:

- Google
- GitHub
- Twitter/X (coming soon)
- LinkedIn (coming soon)

It’s still evolving, but stable enough to use.
I’d love to hear your feedback, ideas, or PRs! 🙌

Repo: https://github.com/Macktireh/SimpleSocialAuthLib

/r/flask
https://redd.it/1mgrq0f
Help with form and values

I am creating a form where the choices have a value (int). At the end, based on the number of "points", you would get an answer.

Is it a good idea to use a nested dictionary in the ChoiceField, so the answers have a value connected to them? Later on I would combine the values for the end result.

Also, I see this as a multi-page form. My plan is to use JS to hide and show parts of the form with a "next" button, and keep it on the same URL. Are there any other ways I'm not familiar with?
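
One alternative to nesting dictionaries inside the field: keep the `choices` flat (as the usual `(value, label)` pairs) and hold the point values in a separate mapping keyed by the choice value. A framework-free sketch of the scoring idea (all names and values made up):

```python
# (value, label) pairs, the shape Django's ChoiceField expects
CHOICES = [("a", "Rarely"), ("b", "Sometimes"), ("c", "Often")]

# point values live in a separate mapping, keyed by the choice value
POINTS = {"a": 1, "b": 2, "c": 3}

def score(answers):
    """Sum the points for a list of submitted choice keys."""
    return sum(POINTS[key] for key in answers)

print(score(["a", "c", "c"]))  # 7
```

On submit you read the cleaned choice keys from the form, sum the points, and map the total onto your result buckets.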

Cheers

/r/django
https://redd.it/1mhd033
PicTex v1.0 is here: a declarative layout engine for creating images in Python

Hey r/Python,

A few weeks ago, I [posted](https://www.reddit.com/r/Python/comments/1lwjsar/pictex_a_python_library_to_easily_create_stylized/) about my personal project, `PicTex`, a library for making stylized text images. I'm really happy for all the feedback and suggestions I received.

It was a huge motivator and inspired me to take the project to the next level. I realized the core idea of a simple, declarative API could be applied to more than just a single block of text. So, `PicTex` has evolved. It's no longer just a "text-styler"; it's now a declarative UI-to-image layout engine.

You can still do simple, beautiful text banners easily:

```python
from pictex import Canvas, Shadow, LinearGradient

# 1. Create a style template using the fluent API
canvas = (
    Canvas()
    .font_family("Poppins-Bold.ttf")
    .font_size(60)
    .color("white")
    .padding(20)
    .background_color(LinearGradient(["#2C3E50", "#FD746C"]))
    .border_radius(10)
    .text_shadows(Shadow(offset=(2, 2), blur_radius=3, color="black"))
)

# 2. Render some text using the template
image = canvas.render("Hello, World! 🎨")

# 3. Save or show the result
image.save("hello.png")
```
Result: [https://imgur.com/a/Wp5TgGt](https://imgur.com/a/Wp5TgGt)

But now you can compose different components together. Instead of just rendering text, you can now build a whole tree of `Row`, `Column`, `Text`, and `Image` nodes.

Here's a card example:

```python
from pictex import *

# 1. Create the individual content builders
avatar

/r/Python
https://redd.it/1mhdbcf
Is mutating the iterable of a list comprehension during comprehension intended?

Sorry in advance if this post is confusing or this is the wrong subreddit to post to

I was playing around with list comprehension and this seems to be valid for Python 3.13.5

```python
(lambda it: [(x, it.append(x+1))[0] for x in it if x <= 10])([0])
```

The line above evaluates to a list containing 0 to 10. The part I'm confused about is why mutating `it` is allowed during a list comprehension that depends on `it` itself, rather than throwing an exception?
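
The comprehension behaves like an explicit loop over the list's iterator, and a list iterator simply re-checks the list's current length on every step, so appends made mid-iteration extend the walk rather than raising. A rough desugaring:

```python
it = [0]
result = []
for x in it:              # the list iterator re-reads len(it) each step
    if x <= 10:           # the filter runs first...
        it.append(x + 1)  # ...then the element expression's side effect
        result.append(x)
print(result)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The loop terminates because once `x == 11` the filter fails, nothing new is appended, and the iterator reaches the (now fixed) end of the list. Only dict and set iteration guard against mutation during iteration; lists intentionally do not.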

/r/Python
https://redd.it/1mhdjdc
A free goldmine of tutorials for the components you need to create production-level agents

I’ve worked really hard and launched a FREE resource with 30+ detailed tutorials for building comprehensive production-level AI agents, as part of my Gen AI educational initiative.

The tutorials cover all the key components you need to create agents that are ready for real-world deployment. I plan to keep adding more tutorials over time and will make sure the content stays up to date.

The response so far has been incredible! (The repo got nearly 10,000 stars in the month since launch, all organic.) This is part of my broader effort to create high-quality open-source educational material. I already have over 130 code tutorials on GitHub with over 50,000 stars.

I hope you find it useful. The tutorials are available here: https://github.com/NirDiamant/agents-towards-production

The content is organized into these categories:

1. Orchestration
2. Tool integration
3. Observability
4. Deployment
5. Memory
6. UI & Frontend
7. Agent Frameworks
8. Model Customization
9. Multi-agent Coordination
10. Security
11. Evaluation
12. Tracing & Debugging
13. Web Scraping

/r/Python
https://redd.it/1mhgs5b
Best Resources to Learn Django Project Structure

Hi,
I’m a bootcamp grad with some self-taught background. I’ve only used Flask so far and now I’m diving into Django. I’ve read blog posts (especially from James Bennett), which helped, but I still feel like I need more direct and practical advice, especially around separation of concerns and structuring a Django project the right way.

Since I’ll be putting this project in my portfolio, I want to avoid bad decisions and show that I understand scalable, maintainable architecture. I know there’s no single “right way,” but I’m looking for solid patterns that reflect professional practice.

What resources (projects, repos, guides, blog posts, etc.) would you recommend to really grasp proper Django structure and best practices?

Thank you in advance.

/r/django
https://redd.it/1mhlucn
Tuesday Daily Thread: Advanced questions

# Weekly Wednesday Thread: Advanced Questions 🐍

Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.

## How it Works:

1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.

## Guidelines:

* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.

## Recommended Resources:

* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.

## Example Questions:

1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the

/r/Python
https://redd.it/1mhu1bt
User defined forms (maybe)

Hi All,

New to django and I'm trying to learn by solving a problem I have.

Context

I'm trying to build an app where one role can define a (partial) JSON structure, e.g.

```
{
    "Weight": int,
    "Statement": str
}
```

there might be another:

```
{
    "Height": int,
    "Cheese eaten": float
}
```

And another role can say "I want to create an instance of this JSON" - and it will fire up a form, so that you might end up with this stored in a column as JSON:

```
{
    "Weight": 10,
    "Statement": "Kittens love No-Ocelot-1179"
}
```

Question

Is there a name for this pattern or approach? I'm trying to find guidance online, but I'm just finding a lot of stuff about defining column types. So either this is mad, or I'm missing some terminology, or both, or neither.

My working theory at the moment is that there is a key column and a type column. The type column, I think, has to contain the text representation of the type, and I need to parse that when I use it. Unless I've missed it, is there a... type type?
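
The "type column holds the text name of the type" theory can be sketched like this (this is often described as a schema-driven or metadata-driven dynamic form; all names below are made up for illustration):

```python
# Map stored text type names back to Python types.
TYPE_MAP = {"int": int, "float": float, "str": str}

def validate(schema: dict, instance: dict) -> bool:
    """schema maps field name -> type name, e.g. {"Weight": "int"}.
    Returns True if every field is present with the declared type."""
    return all(
        key in instance and isinstance(instance[key], TYPE_MAP[type_name])
        for key, type_name in schema.items()
    )

schema = {"Weight": "int", "Statement": "str"}
print(validate(schema, {"Weight": 10, "Statement": "Kittens"}))   # True
print(validate(schema, {"Weight": "heavy", "Statement": "oops"})) # False
```

The same schema dict can then drive form generation: loop over its items and emit one form field per entry, choosing the widget from the type name.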

So that's my question: does anyone have any pointers or reading materials for this situation?


Many thanks,

No-Ocelot-1179

/r/django
https://redd.it/1mhn7jj
Built Coffy: an embedded database engine for Python (Graph + NoSQL)

I got tired of the overhead:

- Setting up full Neo4j instances for tiny graph experiments
- Jumping between libraries for SQL, NoSQL, and graph data
- Wrestling with heavy frameworks just to run a simple script

So, I built Coffy. (https://github.com/nsarathy/coffy)

Coffy is an embedded database engine for Python that supports NoSQL, SQL, and Graph data models. One Python library that comes with:

- NoSQL (`coffy.nosql`) - Store and query JSON documents locally with a chainable API. Filter, aggregate, and join data without setting up MongoDB or any server.
- Graph (`coffy.graph`) - Build and traverse graphs. Query nodes and relationships, and match patterns. No servers, no setup.
- SQL (`coffy.sql`) - Thin SQLite wrapper. Available if you need it.

What Coffy won't do: Run a billion-user app or handle distributed workloads.

What Coffy will do:

- Make local prototyping feel effortless again.
- Eliminate setup friction - no servers, no drivers, no environment juggling.

Coffy is open source, lean, and developer-first.

Curious?

Install Coffy: https://pypi.org/project/coffy/

Or let's make it even better!

https://github.com/nsarathy/coffy


### What My Project Does
Coffy is an embedded Python database engine combining SQL, NoSQL, and Graph in one library for quick local prototyping.

### Target Audience
Developers who want fast, serverless data experiments without production-scale complexity.

### Comparison
Unlike

/r/Python
https://redd.it/1mi0jjw
I built a cloud development platform with Django

/r/django
https://redd.it/1mi2fj0
D Seeking advice on choosing PhD topic/area

Hello everyone,

I'm currently enrolled in a master's program in statistics, and I want to pursue a PhD focusing on the theoretical foundations of machine learning/deep neural networks.

I'm considering statistical learning theory (primary option) or optimization as my PhD research area, but I'm unsure whether statistical learning theory/optimization is the most appropriate area for my doctoral research given my goal.

Further context: I hope to do theoretical/foundational work on neural networks as a researcher at an AI research lab in the future. 

Question:

1)What area(s) of research would you recommend for someone interested in doing fundamental research in machine learning/DNNs?

2)What are the popular/promising techniques and mathematical frameworks used by researchers working on the theoretical foundations of deep learning?

Thanks a lot for your help.

/r/MachineLearning
https://redd.it/1mi0wz8
Started Working on a FOSS Alternative to Tableau and Power BI 45 Days Ago

It might take another 5-10 years to find the right fit for the community's needs; it's not there yet. But we should be able to launch the first alpha version later this year. The initial idea was too broad and ambitious. So: do you have any wild ideas for advanced features that would be worth including?

What My Project Does

In the initial stage of development, I'm trying to mimic the basic functionality of Tableau and Power BI, as well as a subset of Microsoft Excel. In the next stage, expect support for a node editor to manage data pipelines, like Alteryx Designer.

Target Audience

It's for production, yes. The original idea was to enable my co-worker at the office to load more than 1 million rows of a text file (CSV or similar) on a laptop and manually process it using some formulas (think of a spreadsheet app). But the real goal is to provide a new professional alternative for BI, especially in the GNU/Linux ecosystem, since I'm a Linux desktop user and a Pandas user as well.

Comparison

I've conducted research on these apps:

- Microsoft Excel
- Google Sheets
- Power BI
- Tableau
- Alteryx Designer
- SmoothCSV

But I have no intention whatsoever to compete with all

/r/Python
https://redd.it/1mi4l6o
Permissions and avoiding them

Hi fellas,

I've cooked up a "glorified" filter for my Excel job as a tkinter .exe. My boss saw it and thought it would be great for everyone. The issue I have now is with the Excel file: as long as it's local, it's fine, but with either OneDrive or SharePoint I have trouble accessing the data via Python due to restrictions/permissions. If the URL and OneDrive stay locked down, maybe I should use some other type of database, since I know Excel isn't really the best solution here. Anyone got any ideas? :)

/r/Python
https://redd.it/1mi5i7q
D Improving Hybrid KNN + Keyword Matching Retrieval in OpenSearch (Hit-or-Miss Results)

Hey folks,

I’m working on a Retrieval-Augmented Generation (RAG) pipeline using OpenSearch for document retrieval and an LLM-based reranker. The retriever uses a hybrid approach:
• KNN vector search (dense embeddings)
• Multi-match keyword search (BM25) on title, heading, and text fields

Both are combined in a bool query with should clauses so that results can come from either method, and then I rerank them with an LLM.
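
The hybrid bool/should shape described above can be sketched as a query body like this (the field names, boosts, and toy vector are assumptions for illustration; it presumes an OpenSearch index with a k-NN vector field):

```python
def hybrid_query(query_text, query_vector, k=100):
    """Build an OpenSearch request body combining BM25 and k-NN legs."""
    return {
        "size": k,
        "query": {
            "bool": {
                "should": [
                    {   # lexical leg: BM25 over several text fields
                        "multi_match": {
                            "query": query_text,
                            "fields": ["title^2", "heading^1.5", "text"],
                        }
                    },
                    {   # dense leg: k-NN over the embedding field
                        "knn": {
                            "embedding": {"vector": query_vector, "k": k}
                        }
                    },
                ]
            }
        },
    }

body = hybrid_query("reset password", [0.1, 0.2, 0.3])
```

One known pitfall with this shape is that BM25 and k-NN scores live on different scales, so simple `should` addition can let one leg dominate; per-leg score normalization (e.g. via a search pipeline or post-hoc min-max scaling before reranking) often stabilizes which candidates the reranker sees.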

The problem:
Even when I pull hundreds of candidates, the performance is hit or miss — sometimes the right passage comes out on top, other times it’s buried deep or missed entirely. This makes final answers inconsistent.

What I’ve tried so far:
• Increased KNN k and BM25 candidate counts
• Adjusted weights between keyword and vector matches
• Prompt tweaks for the reranker to focus only on relevance
• Query reformulation for keyword search

I’d love advice on:
• Tuning OpenSearch for better recall with hybrid KNN + BM25 retrieval
• Balancing lexical vs. vector scoring in a should query
• Ensuring the reranker consistently sees the correct passages in its candidate set
• Improving reranker performance without full fine-tuning

Has anyone else run into this hit-or-miss issue with hybrid retrieval + reranking? How did you make it more consistent?

Thanks!


/r/MachineLearning
https://redd.it/1mi27ab
Most performant tabular data-storage system that allows retrieval from the disk using random access

So far, in most of my projects, I have been saving tabular data in CSV files as the performance of retrieving data from the disk hasn't been a concern. I'm currently working on a project which involves thousands of tables, and each table contains around a million rows. The application requires frequently accessing specific rows from specific tables. Often times, there may only be a need to access not more than ten rows from a specific table, but given that I have my tables saved as CSV files, I have to read an entire table just to read a handful of rows from it. This is very inefficient.
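
For the access pattern described (a handful of rows by key out of a million), any indexed embedded store avoids reading the whole file. A minimal SQLite sketch of the idea (table and column names made up; in-memory here, but on disk it would be a single .db file instead of thousands of CSVs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    ((i, i * 0.5) for i in range(100_000)),
)
conn.commit()

# Fetch a handful of rows by key: the primary-key B-tree is consulted
# directly, with no full-table scan and no parsing of the other rows.
rows = conn.execute(
    "SELECT id, value FROM t WHERE id IN (?, ?, ?) ORDER BY id",
    (3, 50_000, 99_999),
).fetchall()
print(rows)  # [(3, 1.5), (50000, 25000.0), (99999, 49999.5)]
```

Each table from the original dataset could become one SQL table (or one partition), and point lookups stay fast regardless of table size.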

When starting out, I would use the most popular Python library to work with CSV files: Pandas. Upon learning about Polars, I have switched to it, and haven't had to use Pandas ever since. Polars enables around ten-times faster data retrieval from the disk to a DataFrame than Pandas. This is great, but still inefficient, because it still needs to read the entire file. Parquet enables even faster data retrieval, but is still inefficient, because it still requires reading the entire file to retrieve a specific set of rows. SQLite provides the

/r/Python
https://redd.it/1mhaury
Axiom, a new kind of "truth engine" as a tool to fight my own schizophrenia. Now open-sourcing it.

Hey everyone,

I've built a project in Python that is deeply personal to me, and I've reached the point where I believe it could be valuable to others. I'm excited, and a little nervous, to share it with you all. In keeping with the rules, here's the breakdown:

What My Project Does
Axiom is a decentralized, autonomous P2P network that I'm building to be a "truth engine." It's not a search engine that gives you links; it's a knowledge engine that gives you verified, objective facts.

It works through a network of nodes that:

- Autonomously discover important topics from data streams.
- Investigate these topics across a curated list of high-trust web sources.
- Analyze the text with AI (specifically, an analytical NLP model, not a generative LLM) to surgically extract factual statements while discarding opinions, speculation, and biased language.
- Verify facts through corroboration. A fact is only considered "trusted" after the network finds multiple independent sources making the same claim.
- Store this knowledge in a decentralized, immutable ledger, creating a permanent and community-owned record of truth.

The end goal is a desktop client where anyone can anonymously ask a question and get a clean, direct, and verifiable answer, completely detached from the noise and chaos of the regular internet.

Target Audience
Initially, I

/r/Python
https://redd.it/1miaw6m