Built Coffy: an embedded database engine for Python (Graph + NoSQL)
I got tired of the overhead:
- Setting up full Neo4j instances for tiny graph experiments
- Jumping between libraries for SQL, NoSQL, and graph data
- Wrestling with heavy frameworks just to run a simple script
So, I built Coffy. (https://github.com/nsarathy/coffy)
Coffy is an embedded database engine for Python that supports NoSQL, SQL, and Graph data models. One Python library that comes with:
- NoSQL (coffy.nosql) - Store and query JSON documents locally with a chainable API. Filter, aggregate, and join data without setting up MongoDB or any server. (See the sketch right after this list.)
- Graph (coffy.graph) - Build and traverse graphs. Query nodes and relationships, and match patterns. No servers, no setup.
- SQL (coffy.sql) - Thin SQLite wrapper. Available if you need it.
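To give a feel for the chainable NoSQL flow, here is a usage sketch; the import path and method names are guesses from the description above, not verified against the repo:

    # Hypothetical usage sketch; check the Coffy repo for the real API.
    from coffy.nosql import db  # assumed import path

    users = db("users")                        # local JSON-backed collection
    users.add({"name": "Ada", "age": 36})      # store a document
    adults = users.where("age", gte=18).run()  # assumed chainable filter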
What Coffy won't do: Run a billion-user app or handle distributed workloads.
What Coffy will do:
- Make local prototyping feel effortless again.
- Eliminate setup friction - no servers, no drivers, no environment juggling.
Coffy is open source, lean, and developer-first.
Curious?
Install Coffy: https://pypi.org/project/coffy/
Or let's make it even better!
https://github.com/nsarathy/coffy
### What My Project Does
Coffy is an embedded Python database engine combining SQL, NoSQL, and Graph in one library for quick local prototyping.
### Target Audience
Developers who want fast, serverless data experiments without production-scale complexity.
### Comparison
Unlike
/r/Python
https://redd.it/1mi0jjw
[D] Seeking advice on choosing PhD topic/area
Hello everyone,
I'm currently enrolled in a master's program in statistics, and I want to pursue a PhD focusing on the theoretical foundations of machine learning/deep neural networks.
I'm considering statistical learning theory (primary option) or optimization as my PhD research area, but I'm unsure whether statistical learning theory/optimization is the most appropriate area for my doctoral research given my goal.
Further context: I hope to do theoretical/foundational work on neural networks as a researcher at an AI research lab in the future.
Question:
1) What area(s) of research would you recommend for someone interested in doing fundamental research in machine learning/DNNs?
2) What are the popular/promising techniques and mathematical frameworks used by researchers working on the theoretical foundations of deep learning?
Thanks a lot for your help.
/r/MachineLearning
https://redd.it/1mi0wz8
Started Working on a FOSS Alternative to Tableau and Power BI 45 Days Ago
It might take another 5-10 years to find the right fit to meet the community's needs. It's not a thing today, but we should be able to launch the first alpha version later this year. The initial idea was too broad and ambitious. But do you have any wild ideas as to what advanced features would be worth including?
What My Project Does
In the initial stage of development, I'm trying to mimic the basic functionality of Tableau and Power BI, as well as a subset of Microsoft Excel. In the next stage, we can expect it to support a node editor for managing data pipelines, like Alteryx Designer.
Target Audience
It's for production, yes. The original idea was to enable my co-worker at the office to load more than 1 million rows of a text file (CSV or similar) on a laptop and manually process it using some formulas (think of a spreadsheet app). But the real goal is to provide a new professional alternative for BI, especially in the GNU/Linux ecosystem, since I'm a Linux desktop user and a Pandas user as well.
Comparison
I've conducted research on these apps:
- Microsoft Excel
- Google Sheets
- Power BI
- Tableau
- Alteryx Designer
- SmoothCSV
But I have no intention whatsoever to compete with all
/r/Python
https://redd.it/1mi4l6o
Permissions and avoiding them
Hi fellas,
I've cooked up a "glorified" filter for my Excel job as an .exe in tkinter. My boss saw it and thought it would be great for everyone, but now the issue I have is with Excel: as long as it's local, it's fine, but for either OneDrive or SharePoint I have an issue accessing the data via Python due to restrictions/permissions, and I'm thinking about how to solve this. If the URL and OneDrive are locked, maybe I should use some other type of database, because I know Excel isn't really the best solution here. Anyone got any ideas :)?
/r/Python
https://redd.it/1mi5i7q
[D] Improving Hybrid KNN + Keyword Matching Retrieval in OpenSearch (Hit-or-Miss Results)
Hey folks,
I’m working on a Retrieval-Augmented Generation (RAG) pipeline using OpenSearch for document retrieval and an LLM-based reranker. The retriever uses a hybrid approach:
• KNN vector search (dense embeddings)
• Multi-match keyword search (BM25) on title, heading, and text fields
Both are combined in a bool query with should clauses so that results can come from either method, and then I rerank them with an LLM.
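For concreteness, a minimal sketch of that query shape using the opensearch-py client; the index name, field names, and sizes below are placeholders, not the poster's configuration:

    from opensearchpy import OpenSearch

    client = OpenSearch(hosts=["https://localhost:9200"])

    query_text = "example question"
    query_vector = [0.0] * 768  # stand-in for the query embedding

    body = {
        "size": 100,
        "query": {
            "bool": {
                "should": [
                    # dense leg: OpenSearch k-NN on the embedding field
                    {"knn": {"embedding": {"vector": query_vector, "k": 100}}},
                    # lexical leg: BM25 multi_match over the text fields
                    {"multi_match": {
                        "query": query_text,
                        "fields": ["title^2", "heading", "text"],
                    }},
                ]
            }
        },
    }
    candidates = client.search(index="docs", body=body)["hits"]["hits"]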
The problem:
Even when I pull hundreds of candidates, the performance is hit or miss — sometimes the right passage comes out on top, other times it’s buried deep or missed entirely. This makes final answers inconsistent.
What I’ve tried so far:
• Increased KNN k and BM25 candidate counts
• Adjusted weights between keyword and vector matches
• Prompt tweaks for the reranker to focus only on relevance
• Query reformulation for keyword search
I’d love advice on:
• Tuning OpenSearch for better recall with hybrid KNN + BM25 retrieval
• Balancing lexical vs. vector scoring in a should query
• Ensuring the reranker consistently sees the correct passages in its candidate set
• Improving reranker performance without full fine-tuning
Has anyone else run into this hit-or-miss issue with hybrid retrieval + reranking? How did you make it more consistent?
Thanks!
/r/MachineLearning
https://redd.it/1mi27ab
Most performant tabular data-storage system that allows retrieval from the disk using random access
So far, in most of my projects, I have been saving tabular data in CSV files, as the performance of retrieving data from the disk hasn't been a concern. I'm currently working on a project which involves thousands of tables, and each table contains around a million rows. The application requires frequently accessing specific rows from specific tables. Oftentimes I may need to access no more than ten rows from a specific table, but given that my tables are saved as CSV files, I have to read an entire table just to read a handful of rows from it. This is very inefficient.
When starting out, I used the most popular Python library for working with CSV files: Pandas. Upon learning about Polars, I switched to it and haven't had to use Pandas since. Polars enables around ten-times-faster data retrieval from disk to a DataFrame than Pandas. This is great, but still inefficient, because it still needs to read the entire file. Parquet enables even faster data retrieval, but is still inefficient, because it still requires reading the entire file to retrieve a specific set of rows. SQLite provides the
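For illustration, the indexed-lookup pattern an embedded database gives you, sketched with the stdlib sqlite3 module (the schema and names are made up): a primary-key lookup reads only the relevant B-tree pages instead of scanning the whole file.

    import sqlite3

    conn = sqlite3.connect("tables.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS table_0001 "
        "(row_id INTEGER PRIMARY KEY, col_a REAL, col_b TEXT)"
    )
    # ... bulk-load each CSV into its own table once, up front ...

    # Later: fetch a handful of rows without reading the rest of the table.
    rows = conn.execute(
        "SELECT * FROM table_0001 WHERE row_id IN (?, ?, ?)",
        (7, 42, 99),
    ).fetchall()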
/r/Python
https://redd.it/1mhaury
Axiom, a new kind of "truth engine" as a tool to fight my own schizophrenia. Now open-sourcing it.
Hey everyone,
I've built a project in Python that is deeply personal to me, and I've reached the point where I believe it could be valuable to others. I'm excited, and a little nervous, to share it with you all. In keeping with the rules, here's the breakdown:
What My Project Does
Axiom is a decentralized, autonomous P2P network that I'm building to be a "truth engine." It's not a search engine that gives you links; it's a knowledge engine that gives you verified, objective facts.
It works through a network of nodes that:
- Autonomously discover important topics from data streams.
- Investigate these topics across a curated list of high-trust web sources.
- Analyze the text with AI (specifically, an analytical NLP model, not a generative LLM) to surgically extract factual statements while discarding opinions, speculation, and biased language.
- Verify facts through corroboration. A fact is only considered "trusted" after the network finds multiple independent sources making the same claim.
- Store this knowledge in a decentralized, immutable ledger, creating a permanent and community-owned record of truth.
The end goal is a desktop client where anyone can anonymously ask a question and get a clean, direct, and verifiable answer, completely detached from the noise and chaos of the regular internet.
Target Audience
Initially, I
/r/Python
https://redd.it/1miaw6m
Hello! I created a lightweight Django logging app.
Hello! I would like to introduce the django-logbox app. In the early stages of development or when operating a lightweight app, whenever an error occurred, I had to immediately connect to the container or VPS via SSH to check the logs.
I created django-logbox to resolve this inconvenience, and have been using it in production. I just finished writing the documentation, and I am excited to share this project!
* When solutions like Sentry feel excessive
* When you want to identify errors from the admin page in a small-scale app
* When you want to check Python traceback errors during development
* When you want to see which devices users are accessing the site from via the admin page
* When you want to monitor daily traffic from the admin page
Give my app a try! :)
Github: [https://github.com/TGoddessana/django-logbox](https://github.com/TGoddessana/django-logbox)
Documentation: [https://tgoddessana.github.io/django-logbox/](https://tgoddessana.github.io/django-logbox/)
By the way, it's licensed under MIT, and I was greatly inspired by the `DRF_API_LOGGER` project.
Here are example screenshots!
https://preview.redd.it/s8um6l34z7hf1.png?width=2032&format=png&auto=webp&s=7533b036ae0ea2b94412cb5a1adc4decb212b472
https://preview.redd.it/ysczodb5z7hf1.png?width=2032&format=png&auto=webp&s=ccb3261e1f8a84a79f566918669acb72d2db3914
If you like the project, I'd appreciate a star >_<
/r/django
https://redd.it/1micjf5
DeepMind Genie3 architecture speculation
If you haven't seen Genie 3 yet: https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
It is really mind-blowing, especially when you look at the comparison between 2 and 3. The most striking thing is that 2 has this clear, constant statistical noise in the frame (the walls and such are clearly shifting colours; everything is shifting because it's a statistical model conditioned on the previous frames), whereas in 3 this is completely eliminated. I think we know Genie 2 is a diffusion model outputting one frame at a time, conditioned on the past frames and the keyboard inputs for movement, but Genie 3's perfect persistence of the environment makes me think it is done another way, such as by generating the actual 3D physical world as the model's output, saving it as some kind of 3D meshing plus textures, and then having some rules for what needs to be generated in the world and when (anything the user can see in frame).
What do you think? Let's speculate together!
/r/MachineLearning
https://redd.it/1mic820
django-modelsearch: Index Django Models with Elasticsearch or OpenSearch and query them with the ORM
https://github.com/kaedroho/django-modelsearch
/r/django
https://redd.it/1mi4my9
Neurocipher: Python project combining cryptography and Hopfield networks
What My Project Does
Neurocipher is a Python-based research project that integrates classic cryptography with neural networks. It goes beyond standard encryption examples by implementing both encryption algorithms and associative memory for key recovery using Hopfield networks.
Key Features
- Manual implementation of symmetric (AES/Fernet) and asymmetric (RSA, ECC/ECDSA) encryption.
- Fully documented math foundations and code explanations in LaTeX (PDF included).
- A Hopfield neural network capable of storing and recovering binary keys (e.g., 128-bit) with up to 40–50% noise (see the sketch after this list).
- Recovery experiments automated and visualized in Python (CSV + Matplotlib).
- All tests reproducible, with logging, version control and clean structure.
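The post doesn't include code, but a minimal Hopfield recall loop looks roughly like this (a sketch with Hebbian storage of a single key, not Neurocipher's actual implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 128

    # Store one 128-bit key as a +/-1 pattern via the Hebbian rule.
    key = rng.choice([-1.0, 1.0], size=n)
    W = np.outer(key, key) / n
    np.fill_diagonal(W, 0.0)

    # Corrupt 40% of the bits, then recall by iterating sign(W @ state).
    noisy = key.copy()
    flipped = rng.choice(n, size=int(0.4 * n), replace=False)
    noisy[flipped] *= -1

    state = noisy
    for _ in range(10):
        state = np.sign(W @ state)
        state[state == 0] = 1.0

    print("bits recovered:", int((state == key).sum()), "/", n)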
Target Audience
This project is ideal for:
- Python developers interested in cryptography internals.
- Students or educators looking for educational crypto demos.
- ML researchers exploring neural associative memory.
- Anyone curious about building crypto + memory systems from scratch.
How It Stands Out
While most crypto projects focus only on encryption/decryption, Neurocipher explores how corrupted or noisy keys could be recovered, bridging the gap between cryptography and biologically-inspired computation.
This is not just a toy project — it’s a testbed for secure, noise-resilient memory.
Get Started
    git clone https://github.com/davidgc17/neurocipher
    cd neurocipher
    pip install -r requirements.txt
    python demos/demo_symmetric.py
View full documentation, experiments and diagrams in /docs and /graficos.
🔗 GitHub Repo: github.com/davidgc17/neurocipher
📄 License: Apache 2.0
🚀 Release: v1.0 now
/r/Python
https://redd.it/1mib8l9
Python Code Audit - A modern Python source code analyzer based on distrust.
What My Project Does
Python Codeaudit is a tool to find security issues in Python code. This static application security testing (SAST) tool has great features to simplify the necessary security tasks and make it fun and easy.
Key Features
- Vulnerability Detection: Identifies security vulnerabilities in Python files, essential for package security research.
- Complexity & Statistics: Reports security-relevant complexity using a fast, lightweight cyclomatic complexity count via Python's AST (a rough sketch follows this list).
- Module Usage & External Vulnerabilities: Detects used modules and reports vulnerabilities in external ones.
- Inline Issue Reporting: Shows potential security issues with line numbers and code snippets.
- HTML Reports: All output is saved in simple, static HTML reports viewable in any browser.
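Codeaudit's exact counting rules aren't shown here, but the AST-walking idea is simple; a rough sketch (my simplification, not the tool's code):

    import ast

    # Nodes that introduce a branch; each one adds a path through the code.
    BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                    ast.ExceptHandler, ast.BoolOp, ast.Assert)

    def cyclomatic_complexity(source: str) -> int:
        """Rough cyclomatic complexity: 1 + number of branch points."""
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    print(cyclomatic_complexity("def f(x):\n    return 1 if x else 2\n"))  # 2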
Target Audience
- Anyone who wants or must check security risks in Python programs.
- Anyone who loves to create functionality using Python: not only professional programmers, but also occasional Python programmers or programmers who are used to working with other languages.
- Anyone who wants an easy way to get insight into the possible security risks of Python programs.
Comparison
There are not many good, maintained FOSS SAST tools for Python available. A well-known Python SAST tool is Bandit. However, Bandit is limited in identifying security issues and has constraints that make
/r/Python
https://redd.it/1mid59i
Image processing to extract miles of rail road track
Any way to estimate the number of miles of red line (railroad track) from this image?
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSFQBtk10HP5jT5JSHjloQ4E5KoNAl32SGo3Q&s
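One common starting point, sketched below: threshold the red pixels in HSV, skeletonize the mask down to one-pixel-wide lines, and multiply the pixel count by a miles-per-pixel scale. The thresholds are assumptions, and without a known map scale the result is only relative:

    import cv2
    import numpy as np
    from skimage.morphology import skeletonize

    img = cv2.imread("map.png")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Red wraps around hue 0 in HSV, so combine two ranges (tune as needed).
    lo1, hi1 = np.array([0, 80, 80]), np.array([10, 255, 255])
    lo2, hi2 = np.array([170, 80, 80]), np.array([180, 255, 255])
    mask = cv2.inRange(hsv, lo1, hi1) | cv2.inRange(hsv, lo2, hi2)

    # Thin the line to ~1 px so the pixel count approximates path length.
    skeleton = skeletonize(mask > 0)
    miles_per_pixel = 0.05  # must come from the map's scale bar
    print("estimated miles:", skeleton.sum() * miles_per_pixel)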
/r/Python
https://redd.it/1mieaaz
Noobie Created my first "app" today!
Recently got into coding (around a month or so ago), and Python was something I remembered from a class I took in high school. Through rehashing my memory on YouTube and other forums, today I built my first "app", I guess? It's a checker for Minecraft usernames that connects to the Mojang API and lets you see whether usernames are available or not. Working on adding a text-file import, but for now it's manual typing/pasting with one username per line.
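For anyone curious, the core availability check can be as small as this sketch against the public Mojang profile endpoint (status-code behaviour has changed across API versions, so treat it as approximate):

    import requests

    def is_taken(username: str) -> bool:
        # 200 with a profile body means the name is taken;
        # 404 (204 on older API versions) means it appears available.
        r = requests.get(
            f"https://api.mojang.com/users/profiles/minecraft/{username}",
            timeout=10,
        )
        return r.status_code == 200

    print(is_taken("Notch"))  # True: a famously taken username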
Pretty proud of my work and how far I've come in a short time. Can't add an image (I'm guessing because I just joined the sub), but here's an Imgur link showing how it looks! Basic, I know, but functional! I know some of you guys are probably pros and will slate me for how it looks, but I'm so proud of it lol. Here's to going further!
Image of what I made
/r/Python
https://redd.it/1miuohk
Pybotchi: Lightweight Intent-Based Agent Builder
## Core Architecture:
Nested Intent-Based Supervisor Agent Architecture
## What Core Features Are Currently Supported?
### Lifecycle
- Every agent utilizes pre, core, fallback, and post executions.
### Sequential Combination
- Multiple agent executions can be performed in sequence within a single tool call.
### Concurrent Combination
- Multiple agent executions can be performed concurrently in a single tool call, using either threads or tasks.
### Sequential Iteration
- Multiple agent executions can be performed via iteration.
### MCP Integration
- As Server: Existing agents can be mounted to FastAPI to become an MCP endpoint.
- As Client: Agents can connect to an MCP server and integrate its tools.
- Tools can be overridden.
### Combine/Override/Extend/Nest Everything
- Everything is configurable.
## How to Declare an Agent?
### LLM Declaration

    from pybotchi import LLM
    from langchain_openai import ChatOpenAI

    LLM.add(
        base = ChatOpenAI(.....)
    )

### Imports

    from pybotchi import Action, ActionReturn, Context

### Agent Declaration

    class Translation(Action):
        """Translate to specified language."""

        async def pre(self, context):
            message = await context.llm.ainvoke(context.prompts)
            await context.add_response(self, message.content)
            return ActionReturn.GO

- This can already work as an agent. context.llm will use the base LLM.
- You have complete freedom here: call another agent,
/r/Python
https://redd.it/1miw2jm
Reading older books
I am going through a relatively older book written in 2019. The book is about recommendation engines, which is the topic I really want to learn for a project I am working on, but it uses Django as a web framework to demonstrate the overall system, as opposed to just the modeling part. I haven't used Django in a while and kind of don't want to get back on that train again. Is it worth it to fork and update the book's repo to the newest version of Django (and in the process re-learn the basics of it), to try porting the website to FastAPI (my current preferred tool for web), or should I just use the older versions of the libraries? I know in the long run it all depends on specifics, but what is the consensus on cases like this, when you have to read an older book, few other tutorials/docs exist that go into as much detail, and you know versioning will be a problem? Let me know if this belongs in learnpython instead, but I thought this is more of a discussion than a question.
/r/Python
https://redd.it/1miw1t9
ValueError: compile(): unrecognised flags error no matter what IDE I use.
    $ python manage.py shell
    Python 3.10.0 (default, Jul 20 2022, 12:26:04) [MSC v.1929 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    (InteractiveConsole)
    >>> from landingpage.models import Pages
    ValueError: compile(): unrecognised flags
    >>>

I keep getting this error no matter what IDE I use or what ORM command I use.
It all started when I was going back and forth between PowerShell and Bash inside of Cursor and VS Code. I was doing this because I was learning Flutter. Any help fixing this would greatly help. Thanks!
/r/djangolearning
https://redd.it/1mgg14m
Optional chaining operator in Python
I'm trying to implement the optional chaining operator (?.) from JS in Python. The idea of this implementation is to create an Optional class that wraps a type T and allows getting attributes. When getting an attribute from the wrapped object, the type of the result should be the type of the attribute or None. For example:
## 1. None
    myobj = Optional(None)
    result = (
        myobj   # Optional[None]
        .attr1  # Optional[None]
        .attr2  # Optional[None]
        .attr3  # Optional[None]
        .value  # None
    )  # None
## 2. Nested Objects
    @dataclass
    class A:
        attr3: int

    @dataclass
    class B:
        attr2: A

    @dataclass
    class C:
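The wrapper class itself is cut off above; here is a minimal sketch of one way to implement the described behaviour (my reconstruction, not the poster's actual code):

    class Optional:
        """Wraps a value; attribute access through None yields None."""

        def __init__(self, value):
            self.value = value

        def __getattr__(self, name):
            # Only called for names not found normally, so .value still works.
            if self.value is None:
                return Optional(None)
            return Optional(getattr(self.value, name, None))

    print(Optional(None).attr1.attr2.attr3.value)  # None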
/r/Python
https://redd.it/1mid7mt
Django bugfix release issued: 5.2.5
https://www.djangoproject.com/weblog/2025/aug/06/bugfix-releases/
/r/django
https://redd.it/1mj0bbd