FastAPI vs Django Ninja vs Django for API-only backend
I've been reading posts in this and other Python subs debating these frameworks and why one is better than another. I am tempted to try the new, cool thing, but I use Django with GraphQL at work and it's been stable so far.
I am planning to build an app that will be a CRUD app that needs an ORM, but it will also use LLMs for chatbots on the frontend. I only want Python for an API layer; I will use Next.js on the frontend. I don't think I need an admin panel. I will also be querying data from BigQuery, and likely will be doing this more and more as I keep building out the app and adding users and data.
Here is what I keep mulling over:
- Django Ninja - seems like a good solution for my use cases. The problem with it is that it has a single maintainer, who lives in a war-torn country, and a backlog of GitHub issues. I saw that a fork of this project, called Django Shinobi, has already been created, which makes me more hesitant to use this framework.
- FastAPI - I started with this but then started looking at ORMs I
/r/Python
https://redd.it/1km6goh
Libraries for Flask+htmx?
Hi everyone!
I'm interested in Flask + htmx for hobby projects, and I would like to know, from those with experience with it, whether you use libraries to simplify this kind of work.
Htmx is great, but writing the HTML code in all responses can be annoying. FastHTML introduced an API to generate HTML from pure Python for this reason.
Do you use a library like that, or maybe some other useful tools?
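For reference, the core of the FastHTML idea, building HTML from plain Python calls, can be sketched with the stdlib alone. This is illustrative only, not FastHTML's actual API; `el` and its attribute conventions are made up for this sketch:

```python
from html import escape

def el(tag, *children, **attrs):
    # Build an HTML element; cls -> class, underscores -> dashes (hx_swap -> hx-swap).
    attr_str = "".join(
        f' {"class" if k == "cls" else k.replace("_", "-")}="{escape(str(v), quote=True)}"'
        for k, v in attrs.items()
    )
    # Children that are already-rendered elements pass through; everything else is escaped.
    inner = "".join(
        c if isinstance(c, str) and c.startswith("<") else escape(str(c))
        for c in children
    )
    return f"<{tag}{attr_str}>{inner}</{tag}>"

# An htmx-style fragment a Flask view could return:
fragment = el("div", el("span", "3 items"), cls="badge", hx_swap="outerHTML")
print(fragment)  # <div class="badge" hx-swap="outerHTML"><span>3 items</span></div>
```

Libraries like FastHTML (or dominate, htpy) do essentially this with more care around escaping and attribute edge cases.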
/r/flask
https://redd.it/1klybvh
Small Propositional Logic Proof Assistant
**Hey** r/Python!
I just finished working on **Deducto**, a minimalistic assistant for working with propositional logic in Python. If you're into formal logic, discrete math, or building proof tools, this might be interesting to you!
# What My Project Does
Deducto lets you:
* Parse logical expressions involving `AND`, `OR`, `NOT`, `IMPLIES`, `IFF`, and more.
* Apply formal transformation rules like:
  * Commutativity, Associativity, Distribution
  * De Morgan's Laws, Idempotency, Absorption, etc.
* Justify each step of a transformation to construct equivalence proofs.
* Experiment with rewriting logic expressions step-by-step using a rule engine.
* Extend the system with your own rules or syntax easily.
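As a taste of what one rule in such a rule engine looks like, here is a framework-free sketch of De Morgan's Laws over expressions encoded as nested tuples (illustrative only, not Deducto's actual API):

```python
# Expressions as nested tuples: ("not", x), ("and", a, b), ("or", a, b), or a variable name.
def de_morgan(expr):
    """Rewrite not(a and b) -> (not a) or (not b), and dually, at the top level."""
    if isinstance(expr, tuple) and expr[0] == "not" and isinstance(expr[1], tuple):
        op, a, b = expr[1]
        if op in ("and", "or"):
            dual = "or" if op == "and" else "and"
            return (dual, ("not", a), ("not", b))
    return expr  # rule doesn't apply; leave the expression unchanged

step = de_morgan(("not", ("and", "p", "q")))
print(step)  # ('or', ('not', 'p'), ('not', 'q'))
```

A proof assistant then amounts to a library of such rules plus a record of which rule justified each rewriting step.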
# Target Audience
This was built as part of a **Discrete Mathematics** project. It's intended for:
* Students learning formal logic or equivalence transformations
* Educators wanting an interactive tool for classroom demos
* Anyone curious about symbolic logic or proof automation
While it's not as feature-rich as Lean or Coq, it aims to be lightweight and approachable — perfect for educational or exploratory use.
# Comparison
Compared to theorem provers like Lean or proof tools in Coq, Deducto is:
* Much simpler
* Focused purely on propositional logic and equivalence transformations
* Designed to be easy to read, extend, and play with — especially for beginners
If you've ever wanted
/r/Python
https://redd.it/1kmf7pe
CSV Export Truncates Records with Special Characters
I’m using django-import-export to export CSV/XLSX files. However, when the data contains certain special characters, the CSV output truncates some records.
Here’s my custom response class:
```python
from django.http import HttpResponse
from django.conf import settings
from import_export.formats.base_formats import XLSX, CSV

class CSVorXLSXResponse(HttpResponse):
    '''
    Custom response object that accepts datasets and returns it as csv or excel
    '''
    def __init__(self, dataset, export_format, filename, *args, **kwargs):
        if export_format == 'csv':
            data = CSV().export_data(dataset, escape_formulae=settings.IMPORT_EXPORT_ESCAPE_FORMULAE_ON_EXPORT)
            content_type = 'text/csv; charset=utf-8'
        else:
            data = XLSX().export_data(dataset, escape_formulae=settings.IMPORT_EXPORT_ESCAPE_FORMULAE_ON_EXPORT)
```
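One thing worth ruling out, independent of django-import-export: records that look "truncated" are often fields containing embedded newlines or delimiters, split mid-record by whatever tool opens the file without honoring CSV quoting. A stdlib round trip shows such data surviving intact when quoting is applied:

```python
import csv, io

rows = [
    ["id", "comment"],
    [1, 'contains "quotes" and, commas'],
    [2, "embedded\nnewline"],  # the usual culprit behind "truncated" records
]

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)  # quotes fields with delimiters/newlines
writer.writerows(rows)

# Reading it back with a CSV-aware parser recovers every record whole:
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
print(parsed[2])  # ['2', 'embedded\nnewline']
```

If the same file looks broken in a spreadsheet or a naive line-by-line reader, the export is likely fine and the consumer is at fault.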
/r/djangolearning
https://redd.it/1kmfu2r
synchronous vs asynchronous
Can you recommend a YouTube video that explains synchronous vs. asynchronous programming in depth?
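For a quick taste of the difference any such video will explain, this stdlib-only snippet contrasts the two: synchronous code blocks on each wait in turn, while asyncio overlaps the waits:

```python
import asyncio, time

def fetch_sync(n):
    time.sleep(0.1)  # blocks the whole thread while "waiting"
    return n

async def fetch_async(n):
    await asyncio.sleep(0.1)  # yields control to other tasks while waiting
    return n

async def main():
    # Five 0.1s waits run concurrently: ~0.1s total, not ~0.5s.
    return await asyncio.gather(*(fetch_async(i) for i in range(5)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, round(elapsed, 1))
```

Calling `fetch_sync` five times in a loop would take roughly the sum of the waits instead.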
/r/djangolearning
https://redd.it/1kkxxn0
sqlalchemy-memory: a pure‑Python in‑RAM dialect for SQLAlchemy 2.0
# What My Project Does
sqlalchemy-memory is a fast in‑RAM SQLAlchemy 2.0 dialect designed for prototyping, backtesting engines, simulations, and educational tools. It runs entirely in Python; no database, no serialization, no connection pooling. Just raw Python objects and fast logic.
- SQLAlchemy Core & ORM support
- No I/O or driver overhead (all in-memory)
- Supports `group_by`, aggregations, and `case()` expressions
- Lazy query evaluation (generators, short-circuiting, etc.)
- Index support: SELECT queries use available indexes to speed up equality and range-based lookups
- Commit/rollback simulation
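The index point deserves a quick illustration: a hash index lets equality lookups skip the full-table scan entirely. This is a toy framework-free sketch of the concept, not the library's internals:

```python
from collections import defaultdict

class IndexedTable:
    """Toy in-RAM table: an index turns equality lookups from O(n) scans into O(1)."""
    def __init__(self, index_on):
        self.rows = []
        self.index_on = index_on
        self.index = defaultdict(list)  # value -> rows with that value

    def insert(self, row):
        self.rows.append(row)
        self.index[row[self.index_on]].append(row)  # keep the index current

    def select_eq(self, value):
        return self.index[value]  # direct lookup; no scan of self.rows

t = IndexedTable(index_on="city")
t.insert({"name": "Ada", "city": "London"})
t.insert({"name": "Alan", "city": "London"})
t.insert({"name": "Grace", "city": "NYC"})
print([r["name"] for r in t.select_eq("London")])  # ['Ada', 'Alan']
```

Range lookups work the same way with a sorted structure (e.g. bisect over sorted keys) instead of a dict.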
# Links
[GitHub Project link](https://github.com/rundef/sqlalchemy-memory)
Documentation link
[Benchmarks vs SQLite in-memory](https://sqlalchemy-memory.readthedocs.io/en/latest/benchmarks.html)
Blogpost: Beyond SQLite: Supercharging SQLAlchemy with a Pure In-Memory Dialect
# Why I Built It
I wanted a backend that:
- Behaved like a real SQLAlchemy engine (ORM and Core)
- Avoided SQLite/driver overhead
- Let me prototype quickly with real queries and relationships
# Target audience
- Backtesting engine builders who want a lightweight, in‑RAM store compatible with their ORM models
- Simulation and modeling developers who need high-performance in-memory logic without spinning up a database
- Anyone tired of duplicating business logic between an ORM and a memory data layer
Note: It's not a full SQL engine: don't use it to unit test DB behavior or verify SQL standard conformance. But for in‑RAM logic with SQLAlchemy-style
/r/Python
https://redd.it/1kmg3db
DBOS - Lightweight Durable Python Workflows
Hi r/Python – I’m Peter and I’ve been working on DBOS, an open-source, lightweight durable workflows library for Python apps. We just released our 1.0 version and I wanted to share it with the community!
GitHub link: https://github.com/dbos-inc/dbos-transact-py
What My Project Does
DBOS provides lightweight durable workflows and queues that you can add to Python apps in just a few lines of code. It’s comparable to popular open-source workflow and queue libraries like Airflow and Celery, but with a greater focus on reliability and automatically recovering from failures.
Our core goal in building DBOS is to make it lightweight and flexible so you can add it to your existing apps with minimal work. Everything you need to run durable workflows and queues is contained in this Python library. You don’t need to manage a separate workflow server: just install the library, connect it to a Postgres database (to store workflow/queue state) and you’re good to go.
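The durable-workflow idea, checkpointing each completed step so a restart skips finished work instead of redoing it, can be sketched in plain Python. This is illustrative only; DBOS checkpoints to Postgres, not a JSON file, and its real API differs:

```python
import json, os, tempfile

def run_workflow(steps, state_path):
    """Run steps in order, persisting completed step names; a rerun skips finished work."""
    done = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = set(json.load(f))  # recover the checkpoint from a prior run
    executed = []
    for name, fn in steps:
        if name in done:
            continue  # already checkpointed; skip on recovery
        fn()
        executed.append(name)
        done.add(name)
        with open(state_path, "w") as f:
            json.dump(sorted(done), f)  # checkpoint after each completed step
    return executed

path = os.path.join(tempfile.mkdtemp(), "state.json")
steps = [("charge", lambda: None), ("email", lambda: None)]
print(run_workflow(steps, path))  # ['charge', 'email']
print(run_workflow(steps, path))  # []  -- nothing re-runs after a "crash" and restart
```

The hard parts a real library adds on top: making each step's side effects exactly-once, handling concurrent workers, and surviving crashes mid-step.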
When Should You Use My Project?
You should consider using DBOS if your application needs to reliably handle failures. For example, you might be building a payments service that must reliably process transactions even if servers crash mid-operation, or a long-running data pipeline that needs to resume from checkpoints rather
/r/Python
https://redd.it/1kml2h9
Beam Pod - Run Cloud Containers from Python
Hey all!
Creator of [Beam](https://beam.cloud) here. Beam is a Python-focused cloud for developers—we let you deploy Python functions and scripts without managing any infrastructure, simply by adding decorators to your existing code.
**What My Project Does**
We just launched [Beam Pod](https://docs.beam.cloud/v2/pod/web-service), a Python SDK to instantly deploy containers as HTTPS endpoints on the cloud.
**Comparison**
For years, we searched for a simpler alternative to Docker—something lightweight to run a container behind a TCP port, with built-in load balancing and centralized logging, but without YAML or manual config. Existing solutions like Heroku or Railway felt too heavy for smaller services or quick experiments.
With Beam Pod, everything is Python-native—no YAML, no config files, just code:
```python
from beam import Pod, Image

pod = Pod(
    name="my-server",
    image=Image(python_version="python3.11"),
    gpu="A10G",
    ports=[8000],
    cpu=1,
    memory=1024,
    entrypoint=["python3", "-m", "http.server", "8000"],
)
instance = pod.create()
```
/r/Python
https://redd.it/1kmlmvo
Paid Bug Fix Opportunity for LBRY Project (USD) — Python Developers Wanted
Hi r/Python,
I'm posting to help the LBRY Foundation, a non-profit supporting the decentralized digital content protocol LBRY.
We're currently looking for experienced Python developers to help resolve a specific bug in the LBRY Hub codebase. This is a paid opportunity (USD), and we’re open to discussing future, ongoing development work with contributors who demonstrate quality work and reliability.
Project Overview:
Project Type: Bug fix for LBRY’s open-source Python hub codebase
What the LBRY Project Does: LBRY is a decentralized and user-controlled media platform
Language: Python
Repo: https://github.com/LBRYFoundation/hub
Payment: USD (details negotiated individually)
Target Audience: Current and future users of the LBRY desktop app
Comparison: Unlike traditional media platforms like YouTube or Vimeo, LBRY is a fully decentralized, open-source protocol that gives users and creators full ownership and control over their content. Contributing to LBRY means working on infrastructure that supports freedom of speech, censorship resistance, and user empowerment—values not typically prioritized in centralized alternatives. This opportunity offers developers a chance to impact a real, live network of users while working transparently in the open-source space.
Communication: You can reply here or reach out via LBRY’s ‘Developers’ Channel on Discord
We welcome bids from contributors who are passionate about open-source and decentralization. Please comment below or connect on Discord if you’re interested or have questions!
/r/Python
https://redd.it/1kmrd8o
Seeking Guidance on Enterprise-Level Auth in Flask: Role-Based Access & Best Practices
Hello, I’m building an enterprise application that requires robust authentication/authorization (user roles, permissions, etc.). I’ve used Flask-Login for basic auth, but I’m struggling to implement scalable role-based access control (RBAC) for admins, managers, and end-users.
For the experts:
1. What approach would you recommend for enterprise-grade auth in Flask?
- How do you structure roles/permissions at scale (e.g., database design)?
2. What are critical security practices for production?
3. Resources: Are there tutorials, books, or open-source projects that demonstrate professional Flask auth workflows?
Current Setup:
- Flask-Login (basic sessions)
- SQLAlchemy for user models
Any advice or war stories from real-world projects would be invaluable!
TL;DR: Need advice/resources for enterprise auth in Flask: role-based access, security best practices, and scaling beyond Flask-Login.
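One common framework-agnostic pattern is a role-to-permission mapping enforced by a decorator. A minimal sketch with made-up names (in production the mapping would live in database tables: users, roles, permissions, plus join tables, not in code):

```python
from functools import wraps

# Hypothetical role -> permission mapping for illustration.
ROLE_PERMISSIONS = {
    "admin":   {"view_reports", "edit_users", "delete_data"},
    "manager": {"view_reports", "edit_users"},
    "user":    {"view_reports"},
}

def require_permission(permission):
    """Guard a view: the caller must hold `permission` via at least one of their roles."""
    def decorator(view):
        @wraps(view)
        def wrapped(current_user, *args, **kwargs):
            granted = set().union(
                *(ROLE_PERMISSIONS.get(r, set()) for r in current_user["roles"])
            )
            if permission not in granted:
                raise PermissionError(f"{permission} denied")
            return view(current_user, *args, **kwargs)
        return wrapped
    return decorator

@require_permission("edit_users")
def edit_users(current_user):
    return "ok"

print(edit_users({"roles": ["manager"]}))  # 'ok'
```

In Flask this maps naturally onto `flask_login.current_user` plus extensions like Flask-Principal, but the permission check itself is just this.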
/r/flask
https://redd.it/1kmmfdf
[D] Rejected a Solid Offer Waiting for My 'Dream Job'
I recently earned my PhD in the UK and moved to the US on a talent visa (EB-1). In February, I began actively applying for jobs. After over 100 applications, I finally landed three online interviews. One of those roles was at a well-known company within driving distance of where I currently live, which made it my top choice. I've got a kid who is already settled in school here, and I genuinely like the area.
Around the same time, I received an offer from a company in another state. However, I decided to hold off on accepting it because I was still in the final stages with the local company. I informed them that I had another offer on the table, but they said I was still under serious consideration and invited me for an on-site interview.
The visit went well. I confidently answered all the AI/ML questions they asked. Afterward, the hiring manager gave me a full office tour. I saw all the "green flags" that Chip Huyen mentions in her ML interview book: I was told this would be my desk, shown all the office amenities, etc. I was even the first candidate they brought on site. All of this made me feel optimistic: maybe
/r/MachineLearning
https://redd.it/1kmpzpy
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1kmufcq
Microsoft layoffs hit Faster CPython team - including the Technical Lead, Mark Shannon
From Brett Cannon:
> There were layoffs at MS yesterday and 3 Python core devs from the Faster CPython team were caught in them.
> Eric Snow, Irit Katriel, Mark Shannon
IIRC Mark Shannon started the Faster CPython project, and he was its Technical Lead.
/r/Python
https://redd.it/1kmwdbu
Blame as a Service: Open-source for Blaming Others
Blame-as-a-Service (BaaS) : When your mistakes are too mainstream.
Your open-source API for blaming others.
😀
https://github.com/sbmagar13/blame-as-a-service
/r/Python
https://redd.it/1kmxawf
Query and Eval for Python Polars
I am a longtime pandas user. I hate typing when it comes to slicing and dicing dataframes. Pandas query and eval come to the rescue.
On the other hand, pandas suffers from performance and memory issues, as many people have discussed. Fortunately, Polars comes to the rescue. I really enjoy all the performance improvements, and the lazy frame makes it possible to handle large datasets on a PC with 32G of memory.
However, with all the good things about Polars, I still miss the query and eval functions of pandas, especially when it comes to data exploration. I just don't like typing so many pl.col in chained conditions, or pl.when/otherwise in nested conditions.
Without much luck with existing solutions, I implemented my own version of query and eval, among other things. The idea is to use lark to define a set of grammars so that it can parse any string expression into a Polars expression.
For example,
`1 < a <= 3` is translated to `(pl.col('a') > 1) & (pl.col('a') <= 3)`, `a.sum().over('b')` is translated to `pl.col('a').sum().over('b')`, `a in @A` (where A is a list) is translated to `pl.col('a').is_in(A)`, and `'2010-01-01' <= date < '2019-10-01'` is translated accordingly for datetime columns. For my
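A toy version of the chained-comparison translation shows the idea. This is a regex sketch only, nothing like an actual lark grammar; `translate_chained` is made up for this illustration:

```python
import re

def translate_chained(expr):
    """Turn 'lo <op> col <op> hi' into a polars-style expression string."""
    m = re.fullmatch(r"\s*(\S+)\s*(<=|<)\s*(\w+)\s*(<=|<)\s*(\S+)\s*", expr)
    if not m:
        raise ValueError(f"unsupported expression: {expr!r}")
    lo, op1, col, op2, hi = m.groups()
    # 'lo < col' reads as 'col > lo', so flip the left-hand operator.
    flipped = {"<": ">", "<=": ">="}[op1]
    return f"(pl.col('{col}') {flipped} {lo}) & (pl.col('{col}') {op2} {hi})"

print(translate_chained("1 < a <= 3"))
# (pl.col('a') > 1) & (pl.col('a') <= 3)
```

A real grammar generalizes this to nested boolean operators, function calls, and literals, which is exactly what lark's parse tree plus a transformer buys you.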
/r/Python
https://redd.it/1kmy3xm
I am a longtime pandas user. I hate typing when it comes to slicing and dicing the dataframe. Pandas query and eval come to the rescue.
On the other hand, pandas suffers from the performance and memory issue as many people have discussed. Fortunately, Polars comes to the rescue. I really enjoy all the performance improvements and the lazy frame just makes it possible to handle large dataset with a 32G memory PC.
However, with all the good things about Polars, I still miss the query and eval function of pandas, especially when it comes to data exploration. I just don’t like typing so many pl.col in a chained conditions or pl.when otherwise in nested conditions.
Without much luck with existing solutions, I implemented my own version of query, eval among other things. The idea is using lark to define a set of grammars so that it can parse any string expressions to polars expression.
For example,
“1 < a <= 3” is translated to (pl.col(‘a’)> 1) & (pl.col(‘a’)<=3), “a.sum().over(‘b’)” is translated to pl.col(‘a’).sum().over(‘b’), “ a in @A” where A is a list, is translated to pl.col(‘a’).isin(A), “‘2010-01-01’ <= date < ‘2019-10-01’” is translated accordingly for date time columns. For my
/r/Python
https://redd.it/1kmy3xm
Refinedoc - Little text processing lib
Hello everyone!
I'm here to present my latest little project, which I developed as part of a larger project for my work.
What's more, the lib is written in pure Python and has no dependencies other than the standard lib.
What My Project Does
It's called Refinedoc, and it's a small Python lib that lets you remove headers and footers from poorly structured texts in a fairly robust and normally not very RAM-intensive way (appreciate the scientific precision of that last point). It is based on this paper: https://www.researchgate.net/publication/221253782_Header_and_Footer_Extraction_by_Page-Association
I developed it initially to manage content extracted from PDFs I process as part of a professional project.
When Should You Use My Project?
The idea behind this library is to enable post-extraction processing of unstructured text content, the best-known example being PDF files. The main goal is to robustly and reliably separate the text body from its headers and footers, which is very useful when you collect lots of PDF files and want the body of each.
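To make the page-association idea concrete, here is a deliberately crude sketch (hypothetical code, not Refinedoc's actual API): a line that repeats verbatim at the top of most pages is very likely a header, not body text. Real implementations, including the cited paper, use fuzzy matching so that headers containing page numbers or dates still associate across pages; this version only catches exact repeats.

```python
from collections import Counter

def strip_repeated_headers(pages, max_header_lines=3):
    """Remove lines that repeat verbatim at the top of most pages.

    pages: list of page texts (one string per page).
    Only the first max_header_lines of each page are header candidates.
    """
    # Count how often each candidate header line appears across pages.
    counts = Counter()
    for page in pages:
        for line in page.splitlines()[:max_header_lines]:
            counts[line] += 1

    # A line must appear on a strict majority of pages to count as a header.
    threshold = len(pages) // 2 + 1
    repeated = {line for line, n in counts.items() if n >= threshold}

    cleaned = []
    for page in pages:
        lines = page.splitlines()
        # Drop leading lines that are known repeats.
        while lines and lines[0] in repeated:
            lines.pop(0)
        cleaned.append("\n".join(lines))
    return cleaned
```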
Comparison
I compared it with PyMuPDF4LLM, which is incredible but doesn't allow you to extract headers and footers specifically, and whose license was a problem in my case.
I'd be delighted to hear your feedback on the code or lib as such!
https://github.com/CyberCRI/refinedoc
/r/Python
https://redd.it/1kn4lfx
PyTorch vs. Keras/TensorFlow
Hey guys,
I am aware of the intended use cases, but I am interested to learn what you use more often in your projects. PyTorch or Keras and why?
/r/Python
https://redd.it/1kn4132
[R] AlphaEvolve: A coding agent for scientific and algorithmic discovery
Paper: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
Abstract:
In this white paper, we present AlphaEvolve, an evolutionary coding agent that substantially enhances capabilities of state-of-the-art LLMs on highly challenging tasks such as tackling open scientific problems or optimizing critical pieces of computational infrastructure. AlphaEvolve orchestrates an autonomous pipeline of LLMs, whose task is to improve an algorithm by making direct changes to the code. Using an evolutionary approach, continuously receiving feedback from one or more evaluators, AlphaEvolve iteratively improves the algorithm, potentially leading to new scientific and practical discoveries. We demonstrate the broad applicability of this approach by applying it to a number of important computational problems. When applied to optimizing critical components of large-scale computational stacks at Google, AlphaEvolve developed a more efficient scheduling algorithm for data centers, found a functionally equivalent simplification in the circuit design of hardware accelerators, and accelerated the training of the LLM underpinning AlphaEvolve itself. Furthermore, AlphaEvolve discovered novel, provably correct algorithms that surpass state-of-the-art solutions on a spectrum of problems in mathematics and computer science, significantly expanding the scope of prior automated discovery methods (Romera-Paredes et al., 2023). Notably, AlphaEvolve developed a search algorithm that found a procedure to multiply two 4 × 4 complex-valued matrices using 48 scalar multiplications, offering the first improvement, after 56 years, over Strassen's algorithm in this setting.
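For context on the 48-multiplication claim: Strassen's classical scheme multiplies 2 × 2 matrices with 7 scalar multiplications instead of 8, and applying it recursively on 2 × 2 blocks costs 7 × 7 = 49 scalar multiplications for a 4 × 4 product — that is the 56-year-old baseline AlphaEvolve improves on. The sketch below (illustrative, not from the paper) implements recursive Strassen and counts the scalar multiplications:

```python
def strassen(A, B, count):
    """Multiply square matrices of size 2^k, counting scalar multiplications."""
    n = len(A)
    if n == 1:
        count[0] += 1
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, i, j):  # extract the (i, j) half-size block
        return [row[j * h:(j + 1) * h] for row in M[i * h:(i + 1) * h]]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, 1), quad(A, 1, 0), quad(A, 1, 1)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, 1), quad(B, 1, 0), quad(B, 1, 1)

    # Strassen's seven block products (only 7 recursive multiplications):
    M1 = strassen(add(A11, A22), add(B11, B22), count)
    M2 = strassen(add(A21, A22), B11, count)
    M3 = strassen(A11, sub(B12, B22), count)
    M4 = strassen(A22, sub(B21, B11), count)
    M5 = strassen(add(A11, A12), B22, count)
    M6 = strassen(sub(A21, A11), add(B11, B12), count)
    M7 = strassen(sub(A12, A22), add(B21, B22), count)

    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

count = [0]
A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
I = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
C = strassen(A, I, count)  # multiplying by the identity should return A
```

Running this confirms the 49-multiplication count that AlphaEvolve's 48-multiplication procedure beats.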
/r/MachineLearning
https://redd.it/1kmxi4z