I created this polygon screenshot tool for myself, and I figured it may be useful to others!
* **What My Project Does -** Take a screenshot by drawing a precise polygon instead of being limited to a rectangle or a manual free-form shape.
* **Target Audience -** Meant for *production (in my case, my professor hands out notes PDFs with everything jumbled together, so I take screenshots of the relevant parts to keep my own notes organized).*
* **Comparison -** I am a Windows user; Windows doesn't provide a default polygon screenshot tool, and I couldn't find one anywhere else on the internet.
* You can check it out on GitHub: [https://github.com/sultanate-sultan/polygon-screenshot-tool](https://github.com/sultanate-sultan/polygon-screenshot-tool)
* You can find the demo video on the GitHub repo page.
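For anyone curious how this kind of polygon clipping can be done in Python, here is a rough sketch of the core idea (not the tool's actual code; it assumes Pillow, grabs the full screen, and masks everything outside the polygon):

```python
# Rough sketch of the idea, not the tool's implementation: grab the screen,
# then keep only the pixels inside a user-supplied polygon.
from PIL import Image, ImageDraw, ImageGrab

def polygon_screenshot(points, out_path="clip.png"):
    """points: list of (x, y) screen coordinates forming the polygon."""
    screen = ImageGrab.grab()                        # full-screen capture
    mask = Image.new("L", screen.size, 0)            # 0 = discard
    ImageDraw.Draw(mask).polygon(points, fill=255)   # 255 = keep
    result = Image.new("RGBA", screen.size, (0, 0, 0, 0))
    result.paste(screen, (0, 0), mask)               # punch the polygon out of the screenshot
    result.crop(mask.getbbox()).save(out_path)       # trim to the polygon's bounding box

polygon_screenshot([(100, 100), (400, 120), (350, 380), (120, 300)])
```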
/r/Python
https://redd.it/1mzxbia
Learning hosting solutions through books or articles?
good evening fellas!
Basically, I am pretty new to Flask but really like it so far. For a couple of years I have trained myself to learn from books, for the guarantee of high-quality, complete content. I really like that approach, but it takes a lot of time and effort. I only know the basics of networking, and I am interested in hosting my new project on my own hardware, so I need some HTTP server software like Apache or nginx.
Assuming you are already pretty familiar with hosting on your own hardware, would you recommend learning Apache or nginx through books, or through articles or videos? I really have no idea how long it will take to learn how to install and configure them, and to really get comfortable with the process of hosting.
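For what it's worth, the common self-hosted pattern is nginx or Apache in front as a reverse proxy, with a WSGI server (gunicorn, waitress, uWSGI) actually running the Flask app. On the Flask side, the only code change that setup usually needs is trusting the forwarded headers; a minimal sketch using werkzeug's ProxyFix middleware:

```python
# Minimal sketch: a Flask app that trusts one layer of reverse-proxy headers
# (X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host) so url_for() and
# redirects use the public scheme and hostname instead of the backend's.
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

@app.route("/")
def index():
    return "Hello from behind the reverse proxy!"
```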
I would love to hear what you guys have to say.
Have a great night and take care,
peace
/r/flask
https://redd.it/1n02cj5
Building a competitive local LLM server in Python
My team at AMD is working on an open, universal way to run speedy LLMs locally on PCs, and we're building it in Python. I'm curious what the community here would think of the work, so here's a showcase post!
**What My Project Does**
Lemonade runs LLMs on PCs by loading them into a server process with an inference engine. Then, users can:
* Load up the web UI to get a GUI for chatting with the LLM and managing models.
* Connect to other applications over the OpenAI API (chat, coding assistants, document/RAG search, etc.); a minimal client sketch follows this list.
* Try out optimized backends, such as ROCm 7 betas for Radeon GPUs or OnnxRuntime-GenAI for Ryzen AI NPUs.
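To give a feel for the second bullet, connecting over an OpenAI-compatible endpoint from Python usually looks like the sketch below; the base URL, port, and model name are placeholders, not Lemonade's documented defaults.

```python
# Hedged sketch: talking to a local OpenAI-compatible server with the official
# openai client. The base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="some-local-model",
    messages=[{"role": "user", "content": "Summarize what a local LLM server does."}],
)
print(response.choices[0].message.content)
```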
**Target Audience**
* Users who want a dead-simple way to get started with LLMs, especially if their PC has hardware like a Ryzen AI NPU or a Radeon GPU that benefits from specialized optimization.
* Developers who are building cross-platform LLM apps and don't want to worry about the details of setting up or optimizing LLMs for a wide range of PC hardware.
**Comparison**
Lemonade is designed with the following three ideas in mind, which I think are essential for local LLMs. Each of the major alternatives has an inherent blocker that prevents it from doing
/r/Python
https://redd.it/1n027ew
Frontend for my Django App
So I have been building a shop management tool for my shop that includes billing, challans, etc. Now I want a frontend for it. Please suggest a frontend technology/framework that is easy to build with and works well with Django.
/r/django
https://redd.it/1n08yu3
Need someone for Python practice
I am a relative beginner in Python.
I have started doing LeetCode and HackerRank problems in Python.
It would be really great to have some company,
because that way we can exchange thoughts, see different dimensions of the same problem, and learn more.
Plus, it will make it more fun.
So DM me if you are interested.
/r/Python
https://redd.it/1n0dpnm
66k Python Jobs - You can immediately apply!
Many US job openings never show up on job boards; they’re only on company career pages.
I built an AI tool that checks 70,000+ company sites and cleans the listings automatically; here’s what I found (US only).
|Function|Open Roles|
|:-|:-|
|Software Development|171,789|
|Data & AI|68,239|
|Marketing & Sales|183,143|
|Health & Pharma|192,426|
|Retail & Consumer Goods|127,782|
|Engineering, Manufacturing & Environment|134,912|
|Operations, Logistics, Procurement|98,370|
|Finance & Accounting|101,166|
|Business & Strategy|47,076|
|Hardware, Systems & Electronics|30,112|
|Legal, HR & Administration|42,845|
You can explore and apply to all these jobs for free here: *laboro.co*
/r/IPython
https://redd.it/1n0eap3
LLM-powered Django translations: Yesglot
https://github.com/efe/yesglot
/r/django
https://redd.it/1n04l7r
I built an open-source learning platform for ethical hacking, programming, and related tools
I’ve been working on a project called RareCodeBase.
What My Project Does: It’s a free, open-source platform that brings together tutorials and resources on programming, ethical hacking, and related tools. The idea is to have one place to learn without ads or paywalls.
Target Audience: The platform is mainly aimed at students, beginners, and self-learners who want to get started with coding or security. Developers and security folks are also welcome to contribute tutorials or improvements.
Comparison: A lot of tutorial sites are paid, not open-source, or focused on just one area. RareCodeBase is MIT-licensed and open to contributions, so anyone can add tutorials, suggest features, or even host their own version. The goal is to keep it community-driven and free.
Right now, it’s pretty minimal, but I’m planning to grow it over time, possibly adding video tutorials and more structured content in the future.
The source code is available on GitHub: github.com/RareCodeBase/Rare-Code-Base
Any feedback would be really helpful as I keep improving it.
Contributions are also welcome if you’d like to add tutorials, improve design, or suggest features.
And if you find it useful, leaving a star on GitHub would mean a lot.
/r/Python
https://redd.it/1n0hpjs
Unexpected login to different instances of flask app with flask_login in docker container
Hello, folks!
I have maybe a bit of a silly question. Please keep in mind that I'm not a pro dev; it's my hobby to make this kind of thing to ease my life and my family's life with the tools available.
I have a little project made with `flask` and `flask-login`. I run two instances of it in two different Docker containers (port 51000 for the first one and port 52000 for the second one). The secret keys and the admin passwords are completely different in the two instances. However, I see a strange (at least to me) behavior: in the same browser, if I log in with my admin credentials to the first instance, I somehow find myself logged in as admin on the second instance as well, and vice versa. Same thing with logout.
The `flask-login` setup is pretty much the default, except that I added a redirect to the login page in the `unauthorized_callback` function.
I've read about how `flask-login` does its job (sessions and client-side cookies), but I still don't understand what I should do on the server side to prevent this from happening. Since the secret keys differ and, as far as I understand, they are used to sign the cookies, I don't get how the browser treats them as cookies of the same instance of web
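One thing worth checking in a setup like this: browsers scope cookies by hostname, not by port, so two instances on the same host both set a cookie literally named `session` (and flask-login's `remember_token`) and can overwrite each other's. A hedged sketch of per-instance cookie names using Flask's and flask-login's documented config keys (values are illustrative):

```python
# Illustrative config only: give each instance its own cookie names so the
# two apps on the same hostname stop clobbering each other's cookies.
from flask import Flask

app = Flask(__name__)
app.config.update(
    SECRET_KEY="instance-one-secret-key",     # must genuinely differ per instance
    SESSION_COOKIE_NAME="app_one_session",    # default is "session" for every Flask app
    REMEMBER_COOKIE_NAME="app_one_remember",  # flask-login's default is "remember_token"
)
```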
/r/flask
https://redd.it/1mzl6gi
Should I ban robot scripts?
Well, the question is more of a general query about good practices than something directly related to Flask, but I'll try.
I have a Flask app running in production, facing the Internet. So I also see a bunch of scanning attempts looking for typical weaknesses, like:
2025-08-25 10:46:36,791 - ERROR: 47.130.152.98 anonymous_user 404 error: https://my.great.app/site/wp-includes/wlwmanifest.xml
2025-08-25 13:32:50,656 - ERROR: 3.83.226.115 anonymous_user 404 error: https://my.great.app/web/wp-includes/wlwmanifest.xml
2025-08-25 07:13:03,168 - ERROR: 4.223.168.126 anonymous_user 404 error: https://my.great.app/wp-includes/js/tinymce/plugins/compat3x/css.php
So the question is really whether I should do anything about it, like banning the IP addresses at the app level, or just ignore it.
There is a WAF in front of the VPS (public hosting), and the attempts above are not really harmful beyond flooding the logs. The app has no typical .php or .xml components or anything similar for them to hit.
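If you did decide to handle it at the app level rather than in the WAF, a small before_request hook is usually enough to drop the obvious probes quietly; a sketch (the path prefixes are illustrative):

```python
# Illustrative sketch: short-circuit well-known scanner paths before they
# reach any view or error handler, keeping them out of the app's logs.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_PREFIXES = ("/wp-admin", "/wp-includes", "/site/wp-includes", "/web/wp-includes")

@app.before_request
def drop_scanner_probes():
    if request.path.startswith(BLOCKED_PREFIXES):
        abort(404)
```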
/r/flask
https://redd.it/1n0oij2
Confused about all the authentication methods with DRF
I am currently developing a web application with React and Django REST Framework. I knew django-allauth was a good package, so I went with it for authentication. I saw that there is a headless mode specifically for REST and started implementing it. I had to decide what kind of authentication to use and went with the default (sessions). I am now quite confused, because almost every tutorial uses JWT with CORS. From the allauth React example I can see that React and Django are served through a proxy, and that way sessions should be handled by Django securely using cookies. But at the same time there is an implementation that sends CSRF and X-Session-Token headers in every request. I don't get the X-Session-Token; shouldn't this be handled by Django?
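For reference, plain session-cookie authentication in DRF (without allauth's headless token) is configured with a single settings entry, and the browser then sends the session cookie plus an X-CSRFToken header on unsafe methods; a minimal sketch:

```python
# settings.py sketch: DRF session authentication. With this, the React app
# authenticates via Django's session cookie and only needs to echo the CSRF
# token (X-CSRFToken header) on POST/PUT/PATCH/DELETE requests.
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework.authentication.SessionAuthentication",
    ],
    "DEFAULT_PERMISSION_CLASSES": [
        "rest_framework.permissions.IsAuthenticated",
    ],
}
```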
/r/django
https://redd.it/1n0mgce
What's your favorite Python trick or lesser-known feature?
I'm always amazed at the hidden gems in Python that can make code cleaner or more efficient. Whether it's clever use of comprehensions or underrated standard library modules - what's a Python trick you've discovered that really saved you some time or made your projects easier?
/r/Python
https://redd.it/1n0ng7f
Django Shinobi 1.4.0 has been released!
For those who don't know, Django Ninja is a fantastic modern library for writing APIs with Django. It's inspired by FastAPI, has great support for OpenAPI, and integrates nicely with Django.
Django Shinobi is a fork of Django Ninja, meant to be focused on community-desired features and bug fixes. I originally forked it and announced it here 6 months ago. It's taken quite a while to get everything into a state I feel confident in, but I'm happy to finally announce its first proper release! This release is based on Django Ninja 1.4.3 and includes additional features and bug fixes.
https://github.com/pmdevita/django-shinobi/releases/tag/v1.4.0
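For anyone who hasn't tried the Ninja style of API, a minimal endpoint looks roughly like this (these are django-ninja's documented basics; Shinobi aims to be a drop-in fork, so the import path may differ):

```python
# Minimal django-ninja-style API: type-hinted view, Pydantic-backed Schema,
# automatic OpenAPI docs. Wire it up in urls.py with path("api/", api.urls).
from ninja import NinjaAPI, Schema

api = NinjaAPI()

class GreetingOut(Schema):
    message: str

@api.get("/hello", response=GreetingOut)
def hello(request, name: str = "world"):
    return {"message": f"Hello, {name}!"}
```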
## Highlights
### Schema performance improvements
It's taken a lot of work over the past few months but the performance improvements for Schema are now complete! When enabled, you should see a significant speed up in Schema validation. One particularly heavy response that was tested saw a 15x improvement in speed.
Before (~512ms):
https://i.imgur.com/eFKq5pf.png
After (~34ms):
https://i.imgur.com/xbS5oZn.png
EDIT: Did another test using u/airoscar's https://github.com/oscarychen/building-efficient-api benchmark project. Schema size is smaller here but we see about a 20% reduction in median time compared to Ninja. Shinobi is still almost twice as slow as FastAPI, so there remains much more to do in optimization.
https://i.imgur.com/1DlxcIm.png
### Choices support
ModelSchema now supports including
/r/django
https://redd.it/1n0yrgi
I built a tool to benchmark tokenizers across 100+ languages and found some wild disparities [R]
**TL;DR:** Created [tokka-bench](https://tokka-bench.streamlit.app/) to compare tokenizers across languages. Turns out your fine-tune's multilingual performance might suck because of tokenization, not architecture. Also explains why proprietary models (Claude, GPT, Gemini) are so much better at non-English tasks.
**Links:**
* [Live dashboard](https://tokka-bench.streamlit.app/)
* [Full blog post](https://www.bengubler.com/posts/2025-08-25-tokka-bench-evaluate-tokenizers-multilingual)
* [GitHub repo](https://github.com/bgub/tokka-bench)
https://preview.redd.it/7i03jela9elf1.png?width=1724&format=png&auto=webp&s=95378457970e6337b147e71d7a8f0ab2dd67cb91
# The Problem Nobody Talks About
I started this as a side quest while pretraining a multilingual model, but tokenization turned out to be way more important than expected. There are two hidden layers creating massive efficiency gaps:
**UTF-8 encoding differences:**
* English: ~1 byte per character
* Arabic: 2+ bytes per character
* Chinese: 3+ bytes per character
**Tokenization bias:** Most tokenizers are trained on English-heavy data, so they allocate way more vocabulary to English patterns. These compound into serious problems.
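The UTF-8 point is easy to verify directly:

```python
# Bytes per character vary sharply by script under UTF-8.
for text in ["hello", "مرحبا", "你好"]:
    raw = text.encode("utf-8")
    print(f"{text!r}: {len(text)} chars, {len(raw)} bytes, "
          f"{len(raw) / len(text):.1f} bytes/char")
# 'hello': 5 chars, 5 bytes, 1.0 bytes/char
# 'مرحبا': 5 chars, 10 bytes, 2.0 bytes/char
# '你好': 2 chars, 6 bytes, 3.0 bytes/char
```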
# Why This Affects Performance
**During training:** If you allocate tokens proportionally (10M English, 1M Khmer), the Khmer text has WAY less semantic content because it needs more tokens per word. Plus Khmer tokens end up being character-level instead of semantic units, making concept storage much harder.
**During inference:** Low-resource languages need 2-3x more tokens per sentence:
* Slower throughput (costs more to serve)
* Context windows fill up faster
* More chances to mess up during generation
# What I Built
tokka-bench measures four key things:
1. **Efficiency** - bytes
/r/MachineLearning
https://redd.it/1n0r8b7
I Just released Sagebox - a procedural GUI library for Python (Initial Beta)
What My Project Does:
Sagebox is a comprehensive GUI library providing GUI-based controls and graphics that can be used in a simple, procedural manner.
Target Audience:
Anyone, really: hobbyists, researchers, professionals. I have used it in industry quite a lot, but I also use it for quick prototyping and just playing around with graphics. The GitHub page has examples of many different types.
Comparison:
Sagebox is meant to provide easy-to-use, accessible controls that also scale into more complex controls as you go. That is the main emphasis: easy to use but scalable, as a procedural GUI with a lot of controls, widgets, and graphics functions.
One of the main differences, besides being procedural (which some GUIs are, too), is having controls and graphics as specialized areas that can work independently or together, to create personalized control-based windows as well as quick developer-oriented controls that are easily created and automatically placed.
It's also purposely designed to work with other GUIs and libraries, so you can use it, for example, to provide controls while using Matplotlib (see examples on the GitHub page), and it can work alongside PySimpleGUI or Pygame, since every GUI has its
/r/Python
https://redd.it/1n0wemp
Python package for NCAA Baseball & MLB Draft stats
What My Project Does:
ncaa_bbStats is an open-source Python package for retrieving, parsing, and analyzing Division I, II, and III college baseball team statistics (2002–2025), player statistics (2021-2025), and MLB Draft data (1965-2025).
Target Audience:
Researchers, analysts, or general fans looking to see how teams perform from 2002-2025 and players from 2021-2025.
Comparison:
It was hard finding any resources for college baseball, and among the ones I did find, I couldn't find direct statistical retrieval functions suited to research, especially for player and team statistics. I hope this project is able to fill that gap.
Main Text:
Hey everyone,
I built a Python package called ncaa_bbStats that lets you pull and analyze NCAA Division I, II, and III baseball stats (2002–2025), player stats (2021–2025), and MLB Draft data (1965–2025).
Some things you can do with it:
* Get team stats like BA, ERA, OBP, SLG, FPCT
* Compute Pythagorean expectation & compare to actual records (the formula is sketched below)
* Build player leaderboards (HR leaders, K/9 leaders, etc.)
* Retrieve MLB Draft picks for any NCAA team since 1965
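As background for the Pythagorean-expectation item, the formula itself is simple; a standalone sketch in plain Python (not the package's API; exponent 2 is the textbook value, while sabermetric work often uses roughly 1.83):

```python
# Standalone Pythagorean expectation: projected winning percentage from runs
# scored and runs allowed. Not part of ncaa_bbStats' API.
def pythagorean_win_pct(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# A team scoring 350 and allowing 300 runs projects to roughly a .576 win percentage.
print(round(pythagorean_win_pct(350, 300), 3))
```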
Docs: https://collegebaseballstatspackage.readthedocs.io/
PyPI: https://pypi.org/project/ncaa-bbStats/
GitHub: https://github.com/CodeMateo15/CollegeBaseballStatsPackage
It’s still under development, so I’d love feedback, collaborators, or even just a GitHub ⭐ if you think it’s cool.
If you’re into college baseball, MLB draft history, or sports analytics with Python, check it out and let
/r/Python
https://redd.it/1n16al4
Python DX for data & analytics infrastructure
Hey everyone - I've been thinking a lot about the Python developer experience for data infrastructure, and why it matters almost as much as performance. We're not just building data warehouses for BI dashboards and data science anymore; OLAP and real-time analytics are powering massively scaled software development efforts. But the DX is still pretty outdated relative to modern software dev: things like schemas in YAML configs, manual SQL workflows, and brittle migrations.
I'd like to propose eight core principles to bring analytics developer tooling in line with modern software engineering: **git-native workflows, local-first environments, schemas as Python code, modularity, open-source tooling, AI/copilot-friendliness, and transparent CI/CD + migrations.**
We’ve started implementing these ideas in[ MooseStack](https://github.com/514-labs/moosestack) (open source, MIT licensed):
* **Migrations** → before deploying, your code is diffed against the live schema and a migration plan is generated. If drift has crept in, it fails fast instead of corrupting data.
* **Local development** → your entire data infra stack materialized locally with one command. Branch off main, and all production models are instantly available to dev against.
* **Type safety** → rename a column in your code, and every SQL fragment, stream, pipeline, or API depending on it gets flagged immediately in your IDE.
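To make the "schemas as Python code" principle concrete in a generic way (this is a plain pydantic illustration, not MooseStack's API): the schema lives next to the code, is type-checked, and diffs in git like any other module instead of living in a YAML config.

```python
# Generic illustration of "schemas as code" with pydantic; not MooseStack's API.
from datetime import datetime
from typing import Optional
from pydantic import BaseModel

class PageViewEvent(BaseModel):
    user_id: str
    url: str
    viewed_at: datetime
    duration_ms: Optional[int] = None  # renaming this field shows up in code review and type checks
```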
I’d love to spark a
/r/Python
https://redd.it/1n0yv7u
I built prompttest - a testing framework for LLMs. It's like pytest, but for prompts.
What My Project Does
prompttest is a command-line tool that brings automated testing to your LLM prompts. Instead of manually checking whether prompt changes break behavior, you can write tests in simple YAML files and run them directly from your terminal.
It works by running your prompt with different inputs, then using another LLM to evaluate whether the output meets the criteria you define in plain English.
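As a rough illustration of that evaluate-with-another-LLM pattern in general terms (this is not prompttest's implementation; the client and model name are placeholders):

```python
# Generic LLM-as-judge sketch, not prompttest's code: ask a second model
# whether an output satisfies a plain-English criterion.
from openai import OpenAI

client = OpenAI()  # placeholder judge backend

def judge(output: str, criterion: str) -> bool:
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Does the output satisfy the criterion? Answer PASS or FAIL only.\n"
                f"Criterion: {criterion}\nOutput: {output}"
            ),
        }],
    )
    return verdict.choices[0].message.content.strip().upper().startswith("PASS")
```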
Here’s a quick look at how it works:
1. Create a `.txt` file for your prompt, with placeholders like `{variable}`.
2. Write a corresponding `.yml` test file where you define test cases, provide inputs for the placeholders, and specify the success criteria.
3. Run `prompttest` in your terminal to execute all your tests.
4. Get a summary in the console and detailed Markdown reports for each test run.
You can see a demo of it in the project’s README on GitHub.
Target Audience
This tool is for developers and teams building applications with LLMs who want to bring more rigor to their prompt engineering process. If you find it difficult to track how prompt modifications affect your outputs, prompttest helps catch regressions and ensures consistent quality.
It’s designed to fit naturally into a CI/CD pipeline, just like you would use `pytest` for code.
Comparison
The main
/r/Python
https://redd.it/1n1dsqh
Moving to production - and making running changes
I have spent quite a while developing my project doing my best to ensure everything is working.
My next concern, once I feel everything is working, is deploying it and how I would go about making improvements and changes without impacting the running production environment or completely wrecking the PostgreSQL database.
Can I run
`docker compose -f docker-compose.local.yml up`
alongside
`docker compose -f docker-compose.production.yml up`
on the same server, or do these need to be kept completely separate?
Is there perhaps a good guide on this aspect somewhere?
/r/django
https://redd.it/1n1bsop
I bundled my common Python utilities into a library (alx-common) – feedback welcome
Over the years I found developers rewriting the same helper functions across multiple projects — things like:
* Sending text + HTML emails easily (a standard-library comparison is sketched below)
* Normalizing strings and filenames
* Simple database utilities (SQLite, MariaDB, PostgreSQL, with parameter support)
* Config handling + paths setup
So I wrapped them up into a reusable package called [**alx-common**](https://pypi.org/project/alx-common/)
I use it daily for automation, SRE, and DevOps work, and figured it might save others the “copy-paste from old projects” routine.
It’s under GPLv3, so free to use and adapt. Docs + examples are in the repo, and I’m adding more over time.
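For a sense of the boilerplate the first bullet wraps, here is what a text + HTML email looks like with the standard library alone (for comparison only; this is not alx-common's API):

```python
# Standard-library comparison, not alx-common's API: a multipart email with a
# plain-text body and an HTML alternative.
import smtplib
from email.message import EmailMessage

def send_text_and_html(subject, sender, recipient, text_body, html_body, host="localhost"):
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, sender, recipient
    msg.set_content(text_body)                      # plain-text part
    msg.add_alternative(html_body, subtype="html")  # HTML alternative
    with smtplib.SMTP(host) as smtp:                # assumes a local/relay SMTP server
        smtp.send_message(msg)
```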
Would love any feedback:
* Anything that feels missing from a “common utils” package?
* Is the API style clean enough, or too opinionated?
* Anyone else packaging up their “utility functions” into something similar?
Appreciate any thoughts, and happy to answer questions.
/r/Python
https://redd.it/1n1hkls
Session management across domains
I had a Quart application, and I implemented a session-based version of it in Flask to try to identify an error. Below is my Flask implementation. I have tested it with the front-end application running on a different system, and the login was successful; however, upon changing the window location to dashboard.html, it redirects to the login page once again and the session is lost. What could the issue be?
import os
import uuid
from datetime import timedelta
from http import HTTPStatus
from functools import wraps
import redis
from flask import Flask, render_template_string, request, session, redirect, url_for, jsonify
from flask_session import Session
from flask_cors import CORS
# Create the Flask application
app = Flask(__name__)
# Details on the Secret Key: https://flask.palletsprojects.com/en/3.0.x/config/#SECRET_KEY
# NOTE: The secret key is used to cryptographically sign the cookies used for storing
# the
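Since the front end runs on a different origin, the settings that usually decide whether the session cookie survives the redirect are the cookie attributes and CORS credentials; a hedged sketch of the relevant configuration, continuing from the imports and `app` above (the origin is a placeholder):

```python
# Illustrative settings for cross-origin session cookies: the browser only
# sends cookies cross-site when they are SameSite=None and Secure, CORS allows
# credentials, and the front end requests with credentials included
# (e.g. fetch(url, {credentials: "include"})).
app.config.update(
    SESSION_COOKIE_SAMESITE="None",
    SESSION_COOKIE_SECURE=True,   # SameSite=None is only honored over HTTPS
)
CORS(app, supports_credentials=True, origins=["https://frontend.example.com"])  # placeholder origin
```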
/r/flask
https://redd.it/1n1exhd