timelength - A flexible duration parser designed for human readable lengths of time.
Hello! I'm here to share timelength, a project I started 3 years ago for personal use in a Discord bot and which I've been sporadically refining since. I would appreciate any feedback!
GitHub: https://github.com/EtorixDev/timelength
## What My Project Does
timelength is a duration parser designed for human-readable lengths of time. Its goal is ultimate flexibility.
Most duration parsers use regex and expect a rather narrow set of input formats, and/or don't allow much deviation by way of mistake, typo, or quirk of whichever method or individual input the duration.
For automated systems, this is fine. But when working with real people and natural input, it can be more useful to have flexibility. That's where timelength comes in.
timelength uses a customizable configuration file of tokens, allowing it to parse a whole plethora of mixed formats, such as: 1m, 1min, 1 Minute, 1m and 2 SECONDS, 3h, 2 min, 3sec, 1.2d, 1,234s, one hour, twenty-two hours and thirty five minutes, half of a day, 1/2 of a day, 1/4 hour, 1 Day, 2:34:12, 1:2:34:12, 1:5:1/3:27:22, and more.
The parsing behavior can also be customized by way of ParserSettings, which will allow or deny certain behaviors, and FailureFlags, which will decide whether certain invalid inputs should wholly invalidate…
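For a quick feel of that flexibility, here is a minimal usage sketch. The `TimeLength` entry point and the result attributes shown are assumptions rather than a verified API, so check the repository README for the exact names.

```python
# Minimal sketch; class and attribute names are assumptions, see the README.
from timelength import TimeLength

for text in ["1m and 2 SECONDS", "twenty-two hours and thirty five minutes", "1:2:34:12"]:
    parsed = TimeLength(text)      # parse one human-readable duration string
    if parsed.result.success:      # assumed result attribute
        print(text, "->", parsed.result.seconds, "seconds")
    else:
        print(text, "-> could not be parsed")
```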
/r/Python
https://redd.it/1kx7x7c
Should I drop pandas and move to polars/duckdb or go?
Good day, everyone!
Recently I built a pandas pipeline that runs every two minutes and does pandas ops like pivot tables, merging, and a lot of vectorized operations.
RAM and speed are tolerable, but the CPU usage is a disaster. For context, my dataset is small, 5-10k rows at most, the final dataframe can have up to 150-170 columns, and the final dataframe is about 100 KB in memory.
It is over geospatial data: it takes data from 4-5 sources, runs pivot table operations first, finds H3 cell IDs, and sums the values on the same cells.
Then it merges those sources into a single dataframe and does the math. All of it is vectorized, so speed is not the problem. It does cumulative sums, NumPy calculations, and other operations.
The app runs alongside FastAPI and shares objects; the calculation happens in another process, is then passed to the main process, and the object in the main process is updated.
The problem is that this runs on a not-so-big server inside a Kubernetes cluster, alongside Go services.
This pod uses a lot of CPU and RAM; it has 1.5-2 CPUs and 1.5-2 GB of RAM to do the job,
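Since the question is whether Polars would cut CPU for this kind of work, below is a rough sketch of the described sum-per-H3-cell plus cumulative-sum step in Polars. The column names are invented for illustration, and the method names follow recent Polars releases (group_by, cum_sum).

```python
import polars as pl

# Toy stand-in for one of the geospatial sources; column names are illustrative.
df = pl.DataFrame({
    "h3_cell": ["8928308280fffff", "8928308280fffff", "8928308283bffff"],
    "value": [1.5, 2.0, 4.0],
})

# Sum values that fall into the same H3 cell, then add a running total.
summed = (
    df.group_by("h3_cell")
      .agg(pl.col("value").sum().alias("value_sum"))
      .sort("h3_cell")
      .with_columns(pl.col("value_sum").cum_sum().alias("value_cumsum"))
)
print(summed)
```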
/r/Python
https://redd.it/1kxd97o
I Built a Python Bot That Automatically Cleans Up Your Apple Music Library
My friend had **3,000+ songs** rotting in her Apple Music library from the past 8 years, and manually deleting them was abysmal. 😩 So I programmed a Python bot that nukes unwanted tracks automatically — *and it worked*. It took about 2 hours to clean up the sucker, but now she's relieved with her fresh start.
**What My Project Does:**
It’s a script that auto-deletes Apple Music tracks based on rules *you* set (like play counts, skips, or date added). No more endless scrolling and tapping.
**Who It’s For:**
Casual users drowning in old music, **not** production environments. This is a scrappy personal tool — use at your own risk!
**Why This Over Alternatives?**
* **Manual deletion:** Apple still won’t let you bulk-select (why??).
* **Paid apps:** Tools like SongShift or Tune Sweeper cost $$$ and lack customization.
* **Mine:** Free, open-source, and tweakable. Want to delete all songs with <5 plays? Change 1 line of code.
Video demo: [https://www.youtube.com/watch?v=7bDLTM5qMOE](https://www.youtube.com/watch?v=7bDLTM5qMOE)
GitHub (star ⭐ if you’re into it): [https://github.com/tycooperaow/apple\_music\_deleter/tree/main](https://github.com/tycooperaow/apple_music_deleter/tree/main)
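To make the "change 1 line of code" point concrete, here is a hypothetical rule filter in the spirit of the post; the track fields and the helper are illustrative and not taken from the repository.

```python
# Hypothetical illustration of a deletion rule; not the repository's actual code.
from datetime import datetime, timedelta

def should_delete(track: dict) -> bool:
    """Return True for tracks matching the cleanup rule."""
    # Rule: fewer than 5 plays and added more than a year ago.
    added_long_ago = track["date_added"] < datetime.now() - timedelta(days=365)
    return track["play_count"] < 5 and added_long_ago

library = [
    {"name": "Old Song", "play_count": 2, "date_added": datetime(2017, 3, 1)},
    {"name": "Favorite", "play_count": 120, "date_added": datetime(2020, 6, 1)},
]
to_delete = [t for t in library if should_delete(t)]
print([t["name"] for t in to_delete])  # ['Old Song']
```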
/r/Python
https://redd.it/1kx426z
🚀 Free for First 50 Django Beginners Ebook – Build Real Projects, 100% Off!
Hi everyone,
I just published an ebook called “Django Unchained for Beginners” – a hands-on guide to learning Django by building two complete projects:
1. ✅ To-Do App – Covers core Django CRUD concepts
2. ✅ Blog App – Includes:
Custom user auth
Newsletter system
Comments
Rich Text Editor
PostgreSQL
Deployed for free on Render
📁 Source code included for both projects.
🎁 I'm giving away the ebook 100% free to the first 50 people.
📝 If you grab a copy, I’d really appreciate an honest review to help others!
📎 Gumroad link and blog demo will be added in the comments below. (if you don't find the link in the comment section then you can manually type the link in your browser)
Thanks and happy coding!
https://preview.redd.it/a8eucmbtak3f1.png?width=1460&format=png&auto=webp&s=649fc9148e4ab15976451dd3513d70dff303a800
/r/django
https://redd.it/1kxn2rv
Repurposed an Old Laptop into a Headless SMS Notification Server — Here's How
What My Project Does
This project listens for desktop notifications (from Gmail, WhatsApp Web, Instagram, etc.) on a Fedora Linux machine and sends them as SMS messages using an old USB GSM modem and Gammu. The whole thing is headless, automated via a systemd user service, and runs persistently even with the laptop lid closed.
I built it out of necessity after switching to a feature phone (yes, really!). Now, my old laptop sits tucked in a drawer, running this service silently and sending me SMS alerts for things I’d normally miss without a smartphone.
GitHub: https://github.com/joshikarthikey/notify-sms
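As a rough illustration of that pipeline (not the project's actual code), the sketch below watches Notify calls on the session bus with dbus-monitor and forwards each notification's summary by shelling out to the gammu CLI; the phone number and the parsing are simplified placeholders.

```python
# Hedged sketch of the notification -> SMS idea; the real project is more robust.
import subprocess

PHONE = "+10000000000"  # placeholder destination number

def send_sms(text: str) -> None:
    # gammu must already be configured (~/.gammurc) for the USB GSM modem.
    subprocess.run(["gammu", "sendsms", "TEXT", PHONE, "-text", text[:160]], check=False)

# Watch Notify calls (Gmail, WhatsApp Web, etc.) on the session bus.
proc = subprocess.Popen(
    ["dbus-monitor", "interface='org.freedesktop.Notifications',member='Notify'"],
    stdout=subprocess.PIPE,
    text=True,
)

strings_seen = -1  # -1 means "not inside a Notify call"
for line in proc.stdout:
    line = line.strip()
    if "member=Notify" in line:
        strings_seen = 0          # start counting this call's string arguments
    elif strings_seen >= 0 and line.startswith('string "'):
        strings_seen += 1
        if strings_seen == 3:     # Notify args: app_name, icon, summary, body, ...
            send_sms(line[len('string "'):-1])
            strings_seen = -1     # wait for the next Notify call
```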
---
Target Audience
Tinkerers who want to repurpose old laptops and modems.
Anyone moving away from smartphones but still wanting critical app notifications.
Hobbyists, sysadmins, and privacy-conscious users.
Great for DIY automation enthusiasts!
This is not a production-grade service, but it’s stable and reliable enough for daily personal use.
---
Comparison to Alternatives
Most alternatives are cloud-based or depend on mobile apps. This project:
Requires no cloud account, no smartphone, and no internet on the phone.
Runs completely offline, powered by Linux, Python, Gammu, and systemd.
Can be installed on any old Linux machine with a USB modem.
Unlike apps like Pushbullet or Twilio-based setups, this is entirely DIY and local.
/r/Python
https://redd.it/1kxs9b0
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1kxwlna
I accidentally built a vector database using video compression
While building a RAG system, I got frustrated watching my 8GB RAM disappear into a vector database just to search my own PDFs. After burning through $150 in cloud costs, I had a weird thought: what if I encoded my documents into video frames?
The idea sounds absurd - why would you store text in video? But modern video codecs have spent decades optimizing for compression. So I tried converting text into QR codes, then encoding those as video frames, letting H.264/H.265 handle the compression magic.
The results surprised me. 10,000 PDFs compressed down to a 1.4GB video file. Search latency came in around 900ms compared to Pinecone’s 820ms, so about 10% slower. But RAM usage dropped from 8GB+ to just 200MB, and it works completely offline with no API keys or monthly bills.
The technical approach is simple: each document chunk gets encoded into QR codes which become video frames. Video compression handles redundancy between similar documents remarkably well. Search works by decoding relevant frame ranges based on a lightweight index.
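Here is a minimal sketch of that chunk-to-QR-to-frame step (not memvid's implementation) using the qrcode and OpenCV packages; the frame size and codec are arbitrary choices.

```python
# Hedged sketch: encode text chunks as QR frames in an MP4. Not memvid's code.
import io
import cv2
import numpy as np
import qrcode
from PIL import Image

chunks = ["first document chunk", "second document chunk", "third document chunk"]

SIZE = 512  # pixels per (square) frame
writer = cv2.VideoWriter("chunks.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 1, (SIZE, SIZE))

for chunk in chunks:
    buf = io.BytesIO()
    qrcode.make(chunk).save(buf, format="PNG")            # text -> QR image
    img = Image.open(buf).convert("RGB").resize((SIZE, SIZE))
    frame = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    writer.write(frame)                                   # QR image -> video frame

writer.release()
# An index mapping chunk ids to frame numbers (plus embeddings for search)
# is what turns this file into something queryable.
```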
You get a vector database that’s just a video file you can copy anywhere.
https://github.com/Olow304/memvid
/r/Python
https://redd.it/1ky24a0
I built a template for FastAPI apps with React frontends using Nginx Unit
Hey guys, this is probably a common experience, but as I built more and more Python apps for actual users, I always found myself eventually having to move away from libraries like Streamlit or Gradio as features and complexity grew.
This meant that I eventually had to reach for React and the disastrous JS ecosystem; it also meant managing two applications (the React frontend and a FastAPI backend), which always made deployment more of a chore. However, having access to building UIs with Tailwind and Shadcn was so good, I preferred to just bite the bullet.
But as I kept working on and polishing this stack, I started to find ways to make it much more manageable. One of the biggest improvements was starting to use Nginx Unit, which is a drop-in replacement for uvicorn in Python terms, but it can also serve SPAs like React incredibly well, while also handling request routing internally.
This setup lets me collapse my two applications into a single runtime, a single container. Which makes it SO much easier to deploy my applications to GCP Cloud Run, Azure Web Apps, Fly Machines, etc.
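To show how a single listener can front both the API and the SPA, here is a sketch of an Nginx Unit configuration expressed as a Python dict, whose JSON you would PUT to Unit's control socket; the ports, paths, and application name are placeholders rather than the template's actual values.

```python
import json

# Hedged sketch of a single-container Unit config; paths and names are placeholders.
unit_config = {
    "listeners": {"*:8080": {"pass": "routes"}},
    "routes": [
        # API traffic goes to the FastAPI (ASGI) application...
        {"match": {"uri": "/api/*"}, "action": {"pass": "applications/fastapi"}},
        # ...everything else is served as the built React SPA, falling back to index.html.
        {"action": {"share": "/srv/www/static$uri",
                    "fallback": {"share": "/srv/www/static/index.html"}}},
    ],
    "applications": {
        "fastapi": {"type": "python", "path": "/srv/app", "module": "main", "callable": "app"},
    },
}

print(json.dumps(unit_config, indent=2))
# e.g. curl -X PUT --data-binary @config.json \
#   --unix-socket /var/run/control.unit.sock http://localhost/config/
```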
Anyways, I created a template repo that I could reuse to skip the boilerplate of this
/r/Python
https://redd.it/1ky1bwq
[R] Can't attend to present at ICML
Due to visa issues, no one on our team can attend to present our poster at ICML.
Does anyone have experience with not physically attending in the past? Is ICML typically flexible with this if we register and don't come to stand by the poster? Or do they check conference check-ins?
/r/MachineLearning
https://redd.it/1kxs67w
We built a Python SDK for our open source auth platform - would love feedback from Flask devs!!
Hey everyone, I’m Megan writing from Tesseral, the YC-backed open source authentication platform built specifically for B2B software (think: SAML, SCIM, RBAC, session management, etc.). We released our Python SDK and I’d love feedback from Flask devs….
If you’re interested in auth or if you have experience building it in Flask, would love to know what’s missing / confusing / would make this easier to use in your stack? Also, if you have general gripes about auth (it is very gripeable) would love to hear them.
Here’s our GitHub: https://github.com/tesseral-labs/tesseral
And our docs: https://tesseral.com/docs/what-is-tesseral
Appreciate the feedback!
/r/flask
https://redd.it/1kxs6ff
Architecture and code for a Python RAG API using LangChain, FastAPI, and pgvector
I’ve been experimenting with building a Retrieval-Augmented Generation (RAG) system entirely in Python, and I just completed a write-up that breaks down the architecture and implementation details.
The stack:
Python + FastAPI
LangChain (for orchestration)
PostgreSQL + pgvector
OpenAI embeddings
I cover the high-level design, vector store integration, async handling, and API deployment — all with code and diagrams.
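As a rough sketch of the retrieval endpoint such a stack typically ends up with (not the write-up's exact code), assuming the langchain-openai and langchain-postgres packages; import paths move between LangChain releases, so treat the names as approximate.

```python
# Hedged sketch of a retrieval endpoint for a FastAPI + LangChain + pgvector stack.
from fastapi import FastAPI
from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector

app = FastAPI()

vector_store = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="documents",
    connection="postgresql+psycopg://user:pass@localhost:5432/rag",  # placeholder DSN
)

@app.get("/search")
async def search(q: str, k: int = 4):
    # Embed the query and return the k nearest chunks from pgvector.
    docs = vector_store.similarity_search(q, k=k)
    return [{"content": d.page_content, "metadata": d.metadata} for d in docs]
```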
I'd love to hear your feedback on the architecture or tradeoffs, especially if you're also working with vector DBs or LangChain.
📄 Architecture + code walkthrough
/r/Python
https://redd.it/1ky5bgs
Looking for advice: Applying for a full-stack role with 5-year experience requirement (React/Django) — Internal referral opportunity
Hi everyone,
I’d really appreciate some advice or insight from folks who’ve been in a similar situation.
I was recently referred internally for a full-stack software engineer role that I’m very excited about. It’s a precious opportunity for me, but I’m feeling unsure because the job requires 5 years of experience in designing, developing, and testing web applications using Python, Django, React, and JavaScript.
Here’s my background:
I graduated in 2020 with a degree in Computer Engineering.
I worked for 2.5 years doing manual QA testing on the Google TV platform.
For the past 5 years, I’ve been teaching Python fundamentals and data structures at a coding bootcamp.
I only started learning React and Django a few months ago, but I’ve gone through the official tutorials on both the React and Django websites and have built a few simple full-stack apps. I feel fairly comfortable with the basics and am continuing to learn every day.
While I don't meet the "5 years of professional experience with this exact stack" requirement, I do have relevant technical exposure, strong Python fundamentals, and hands-on experience through teaching and recent personal projects.
If you've been in similar shoes — applying for a role where you didn’t meet all the
/r/django
https://redd.it/1ky3fqc
Recent Noteworthy Package Releases
In the last 7 days, these were the big upgrades:
**Deltalake 1.0.0**
**DeepEval v3.0**
**pytest-asyncio 1.0.0**
**Curlify v3.0.0**
**cachetools v6.0.0**
**Apache Spark 4.0.0**
/r/Python
https://redd.it/1ky8o84
DTC - CLI tool to dump telegram channels.
🚀 What my project does
Extracts data from a particular Telegram channel.
Target Audience
Anyone who wants to dump a channel.
Comparison
Never thought about alternatives, because I made up this project idea this morning.
Key features:
📋 Lists all channels you're subscribed to in a nice tabular format
💾 Dumps complete message history from any channel
📸 Downloads attached photos automatically
💾 Exports everything to structured JSONL format
🖥️ Interactive CLI with clean, readable output
# 🛠️ Tech Stack
Built with some solid Python libraries:
Telethon - for Telegram API integration
Pandas - for data handling and table formatting
Tabulate - for those beautiful CLI tables
Requires Python 3.8+ and works across platforms.
# 🎯 How it works
The workflow is super simple:
# List your channels
>> list
+----+----------------------------+-------------+
|    | name                       | telegram id |
+====+============================+=============+
|  0 | My Favorite Channel        | 123456789   |
+----+----------------------------+-------------+
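Under the hood, the dump step with Telethon tends to look something like the following rough sketch (not the tool's exact code); the API credentials and channel name are placeholders.

```python
# Hedged sketch of dumping a channel to JSONL with Telethon; not DTC's actual code.
import asyncio
import json
from telethon import TelegramClient

API_ID = 12345            # placeholder, from my.telegram.org
API_HASH = "your-hash"    # placeholder
CHANNEL = "my_favorite_channel"

async def dump() -> None:
    async with TelegramClient("dtc-session", API_ID, API_HASH) as client:
        with open("dump.jsonl", "w", encoding="utf-8") as out:
            async for msg in client.iter_messages(CHANNEL):
                record = {"id": msg.id, "date": msg.date.isoformat(), "text": msg.text}
                out.write(json.dumps(record, ensure_ascii=False) + "\n")
                if msg.photo:
                    await msg.download_media(file="photos/")  # save attached photos

asyncio.run(dump())
```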
/r/Python
https://redd.it/1kya1xc
I don't understand the FlaskSQLalchemy conventions
When using the Flask-SQLAlchemy package, I don't understand the convention of
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

db = SQLAlchemy(model_class=Base)
Why not just pass in `db=SQLAlchemy(model_class=DeclarativeBase)`?
/r/flask
https://redd.it/1kyd5l7
Open-source AI-powered test automation library for mobile and web
Hey [r/Python](/r/Python/),
My name is Alex Rodionov and I'm a tech lead of the Selenium project. For the last 10 months, I’ve been working on **Alumnium**. I've already shared it [2 months ago](https://www.reddit.com/r/Python/comments/1jpo96u/i_built_an_opensource_aipowered_library_for_web/), but since then the project gained a lot of new features, notably:
* mobile applications support via Appium;
* built-in caching for faster test execution;
* fully local model support with Ollama and Mistral Small 3.1.
**What My Project Does**
It's an open-source Python library that automates testing for mobile and web applications by leveraging AI, natural language commands and Appium, Playwright, or Selenium.
**Target Audience**
Test automation engineers or anyone writing tests for web applications. It’s an early-stage project, not ready for production use in complex web applications.
**Comparison**
Unlike other similar projects (Shortest, LaVague, Hercules), Alumnium can be used in existing tests without changes to test runners, reporting tools, or any other test infrastructure. This allows me to gradually migrate my test suites (mostly Selenium) and revert whenever something goes wrong (this happens a lot, to be honest). Other major differences:
* dead cheap (works on low-tier models like gpt-4o-mini, costs $20 per month for 1k+ tests)
* not an AI agent (dumb enough to fail the test rather than working around to
/r/Python
https://redd.it/1kyjmwl
Problems with Django Autocomplete Light
So, I'm stuck. I'm trying to make two selection boxes: one to select the state, the other to select the city. The code and the HTML are not crashing, but nothing is being loaded into the selection boxes.
Any help would be greatly appreciated!
#models.py
class City(models.Model):
    country = models.CharField(max_length=50)
    state = models.CharField(max_length=50)
    city = models.CharField(max_length=50)

    def __str__(self):
        return f"{self.name}, {self.state}"

class City(models.Model):
    country = models.CharField(max_length=50)
    state = models.CharField(max_length=50)
    city = models.CharField(max_length=50)

    def __str__(self):
        return f"{self.name}, {self.state}"

#forms.py
class CreateUserForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Ensure city field has proper empty queryset initially
/r/djangolearning
https://redd.it/1kxt7bn
[R] How to add confidence intervals to your LLM-as-a-judge
Hi all – I recently built a system that automatically determines how many LLM-as-a-judge runs you need for statistically reliable scores. Key insight: treat each LLM evaluation as a noisy sample, then use confidence intervals to decide when to stop sampling.
The math shows reliability is surprisingly cheap (95% → 99% confidence only costs 1.7x more), but precision is expensive (doubling scale granularity costs 4x more). Also implemented "mixed-expert sampling" - rotating through multiple models (GPT-4, Claude, etc.) in the same batch for better robustness.
I also analyzed how latency, cost and reliability scale in this approach. Typical result: need 5-20 samples instead of guessing. Especially useful for AI safety evals and model comparisons where reliability matters.
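A minimal sketch of that stopping rule, treating each judge run as a noisy sample and stopping once the 95% confidence interval is tight enough; judge_once is a placeholder for whatever LLM call is used, not code from the linked repo.

```python
# Hedged sketch of confidence-interval-based stopping for an LLM judge.
import random
import statistics

def judge_once(prompt: str) -> float:
    """Placeholder for one LLM-as-a-judge call returning a score in [0, 1]."""
    return random.gauss(0.7, 0.1)  # stand-in for a real model call

def judge_with_ci(prompt: str, half_width: float = 0.05, z: float = 1.96,
                  min_samples: int = 5, max_samples: int = 50):
    scores = [judge_once(prompt) for _ in range(min_samples)]
    while len(scores) < max_samples:
        mean = statistics.fmean(scores)
        sem = statistics.stdev(scores) / len(scores) ** 0.5  # standard error of the mean
        if z * sem <= half_width:          # CI is tight enough -> stop sampling
            return mean, z * sem, len(scores)
        scores.append(judge_once(prompt))
    sem = statistics.stdev(scores) / len(scores) ** 0.5
    return statistics.fmean(scores), z * sem, len(scores)

mean, margin, n = judge_with_ci("Rate the answer's helpfulness from 0 to 1.")
print(f"score = {mean:.2f} ± {margin:.2f} after {n} runs")
```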
Blog: https://www.sunnybak.net/blog/precision-based-sampling
GitHub: https://github.com/sunnybak/precision-based-sampling/blob/main/mixed_expert.py
I’d love feedback or pointers to related work.
Thanks!
/r/MachineLearning
https://redd.it/1kyl04x
Friday Daily Thread: r/Python Meta and Free-Talk Fridays
# Weekly Thread: Meta Discussions and Free Talk Friday 🎙️
Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!
## How it Works:
1. Open Mic: Share your thoughts, questions, or anything you'd like related to Python or the community.
2. Community Pulse: Discuss what you feel is working well or what could be improved in the /r/python community.
3. News & Updates: Keep up-to-date with the latest in Python and share any news you find interesting.
## Guidelines:
All topics should be related to Python or the /r/python community.
Be respectful and follow Reddit's Code of Conduct.
## Example Topics:
1. New Python Release: What do you think about the new features in Python 3.11?
2. Community Events: Any Python meetups or webinars coming up?
3. Learning Resources: Found a great Python tutorial? Share it here!
4. Job Market: How has Python impacted your career?
5. Hot Takes: Got a controversial Python opinion? Let's hear it!
6. Community Ideas: Something you'd like to see us do? tell us.
Let's keep the conversation going. Happy discussing! 🌟
/r/Python
https://redd.it/1kyq5i2
bulletchess, A high performance chess library
# What My Project Does
`bulletchess` is a high-performance chess library that implements the following and more:
* A complete game model with intuitive representations for pieces, moves, and positions.
* Extensively tested legal move generation, application, and undoing.
* Parsing and writing of positions specified in [Forsyth-Edwards Notation](https://www.chessprogramming.org/Forsyth-Edwards_Notation) (FEN), and moves specified in both [Long Algebraic Notation](https://www.chessprogramming.org/Algebraic_Chess_Notation#Long_Algebraic_Notation_.28LAN.29) and [Standard Algebraic Notation](https://www.chessprogramming.org/Algebraic_Chess_Notation#Standard_Algebraic_Notation_.28SAN.29).
* Methods to determine if a position is check, checkmate, stalemate, and each specific type of draw.
* Efficient hashing of positions using [Zobrist Keys](https://en.wikipedia.org/wiki/Zobrist_hashing).
* A [Portable Game Notation](https://thechessworld.com/articles/general-information/portable-chess-game-notation-pgn-complete-guide/) (PGN) file reader
* Utility functions for writing engines.
`bulletchess` is implemented as a C extension, similar to NumPy.
# Target Audience
I made this library after being frustrated with how slow `python-chess` was at large dataset analysis for machine learning and engine building. I hope it can be useful to anyone else looking for a fast interface to do any kind of chess ML in python.
# Comparison:
`bulletchess` has many of the same features as `python-chess`, but [is much faster](https://zedeckj.github.io/bulletchess/auto-examples/performance.html). I think the syntax of `bulletchess` is also a lot nicer to use. For example, instead of `python-chess`'s
`board.piece_at(E1)`
`bulletchess` uses:
`board[E1]`
You can install wheels with,
pip
/r/Python
https://redd.it/1kyoyds