I made a job board aggregator that uses LLMs to find Python jobs with your exact stack
Hey r/Python,
I built a desktop app called First 2 Apply that I wanted to share.
What My Project Does: It's a job board aggregator that uses LLMs to filter jobs based on specific tech stack requirements. The app analyzes both job titles and full descriptions through an LLM to determine if a position truly matches your criteria, rather than just using keyword matching.
Target Audience: This is meant for developers who are job hunting and want to filter opportunities by very specific technical requirements. It's a production-ready desktop application that I built for my own job search and thought others might benefit from too.
Comparison: Unlike traditional job boards where filtering is limited to keywords (which often miss context or return false positives), First 2 Apply uses AI to understand the actual requirements. For example, when searching for Python jobs, most aggregators would return results where Python is mentioned anywhere - even if it's just "nice to have" or the job actually requires 5 years of Django when you're a Flask developer. This tool can specifically find Python jobs that use Flask, PostgreSQL, and React, while excluding ones that require Django or MongoDB.
I use it to search for Python positions that match my exact stack.
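For illustration, a minimal sketch of the kind of LLM-based screening described above (this is not the app's actual code; the model name and prompt wording are assumptions):

# Illustrative only, not First 2 Apply's actual code: the general pattern of
# asking an LLM whether a posting genuinely matches a required stack.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def matches_stack(title: str, description: str, required: list[str], excluded: list[str]) -> bool:
    prompt = (
        "You screen job postings. Answer only YES or NO.\n"
        f"Required stack: {', '.join(required)}\n"
        f"Disqualifying stack: {', '.join(excluded)}\n\n"
        f"Title: {title}\n\nDescription:\n{description}\n\n"
        "Does this job genuinely require the required stack and avoid the disqualifying one?"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")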
/r/Python
https://redd.it/1jae3rp
Multithreaded zip compression
I am writing a Python tool that compresses folders, with some filtering applied. I use the zipfile module with the DEFLATED algorithm.
Looking for a way to multithread the compression loop in such a way that the end result is highly compatible (so zip with DEFLATED or bz2), not some fancy newer algorithm; it is for archiving stuff.
What are the best options? Use a thread pool (but I guess the CPU won't be fully used)? A third-party open-source library? Any Rust wrapper available?
Thanks for your answers.
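One simple way to get real parallelism while keeping plain, widely compatible DEFLATE output is to compress independent subfolders into separate archives, one per worker; a rough sketch (zlib releases the GIL in CPython, so a thread pool does help here):

# Rough sketch: one standard DEFLATE zip per top-level subfolder, built in parallel.
import zipfile
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

def zip_folder(folder: Path, out_dir: Path) -> Path:
    """Compress one folder into <out_dir>/<folder name>.zip using DEFLATE."""
    archive = out_dir / f"{folder.name}.zip"
    with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for path in folder.rglob("*"):
            if path.is_file():  # any filtering logic would go here
                zf.write(path, arcname=path.relative_to(folder))
    return archive

def zip_subfolders(root: Path, out_dir: Path, workers: int = 4) -> list[Path]:
    out_dir.mkdir(parents=True, exist_ok=True)
    subfolders = [p for p in root.iterdir() if p.is_dir()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: zip_folder(f, out_dir), subfolders))

Writing a single ZipFile from several threads isn't safe, so splitting the work per archive is the least fiddly way to keep the output fully standard.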
/r/Python
https://redd.it/1jakf5c
Does Django work on IntelliJ Community Edition?
Can't seem to install it. I'm somewhat new to coding and I've been learning Django this week for a personal project. I can't seem to install it on my computer. I've tried every method and searched online but can't find an answer that meets my needs.
/r/django
https://redd.it/1jagzwp
Feedback on my Flask AuthService project for job applications
Hey everyone!
I’m currently job hunting and built this AuthService project to showcase my skills. It’s a Flask-based authentication system featuring user login, MFA (pyotp), and password reset functionality.
Additionally, I incorporated some basic DevOps concepts like Docker Compose and followed a repository architecture for better maintainability.
I’d love some constructive feedback—especially on code quality, security, and best practices—before adding it to my portfolio.
Any thoughts or suggestions would be greatly appreciated!
GitHub Repo: https://github.com/LeonR92/AuthService
Thanks a lot for your time! 🚀
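For context, the MFA piece mentioned above typically follows the standard pyotp TOTP flow; a small illustration (not code from the repo):

# Illustrative only, not code from the AuthService repo.
import pyotp

# Enrollment: generate and store a per-user secret (e.g. encrypted in the DB).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="AuthService"))

# Login: verify the 6-digit code the user typed; valid_window tolerates clock skew.
assert totp.verify(totp.now(), valid_window=1)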
/r/flask
https://redd.it/1jamvqi
Python Steering Council rejects PEP 736 – Shorthand syntax for keyword arguments at invocation
The Steering Council has rejected PEP 736, which proposed syntactic sugar for function calls with keyword arguments: f(x=) as shorthand for f(x=x).
Here's the rejection notice and here's some previous discussion of the PEP on this subreddit.
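For reference, here is the proposed shorthand next to what you write today (the second call is commented out because the syntax was rejected and is not valid Python):

def render(page, *, user, theme):
    ...

user, theme = "alice", "dark"

render(page=1, user=user, theme=theme)   # status quo: repeat the name on both sides
# render(page=1, user=, theme=)          # PEP 736 shorthand (rejected)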
/r/Python
https://redd.it/1jaorm1
Is it safe to put a CSRF_TOKEN inside the URL of a websocket-consumer connection?
In my app I have a WebSocket connection with a consumer that handles a live chat and similar features. Because this consumer has to generate an HTML form with a CSRF token in it, I'm currently passing the CSRF token from the client to the consumer via the WebSocket *URL*, if that's the correct word.
**Is this a safe thing to do?**
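For concreteness, the setup described above usually looks something like this in a Channels consumer (names are illustrative; this shows the pattern, not a judgement on its safety):

from urllib.parse import parse_qs

from channels.generic.websocket import AsyncWebsocketConsumer

class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # e.g. ws://example.com/ws/chat/?csrf_token=...
        query = parse_qs(self.scope["query_string"].decode())
        self.csrf_token = query.get("csrf_token", [""])[0]
        await self.accept()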
/r/django
https://redd.it/1jarlsx
Friday Daily Thread: r/Python Meta and Free-Talk Fridays
# Weekly Thread: Meta Discussions and Free Talk Friday 🎙️
Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!
## How it Works:
1. Open Mic: Share your thoughts, questions, or anything you'd like related to Python or the community.
2. Community Pulse: Discuss what you feel is working well or what could be improved in the /r/python community.
3. News & Updates: Keep up-to-date with the latest in Python and share any news you find interesting.
## Guidelines:
All topics should be related to Python or the /r/python community.
Be respectful and follow Reddit's Code of Conduct.
## Example Topics:
1. New Python Release: What do you think about the new features in Python 3.11?
2. Community Events: Any Python meetups or webinars coming up?
3. Learning Resources: Found a great Python tutorial? Share it here!
4. Job Market: How has Python impacted your career?
5. Hot Takes: Got a controversial Python opinion? Let's hear it!
6. Community Ideas: Something you'd like to see us do? Tell us!
Let's keep the conversation going. Happy discussing! 🌟
/r/Python
https://redd.it/1jaqpdq
Project Rusty Graph: Python Library for Knowledge Graphs from SQL Data
# What my project does
Rusty Graph is a high-performance graph database library with Python bindings written in Rust. It transforms SQL data into knowledge graphs, making it easy to discover relationships and patterns hidden in relational databases.
# Target Audience
Data scientists working with complex relational datasets
Developers building applications that need to traverse relationships
Anyone who's found SQL joins and subqueries limiting when trying to extract insights from connected data
# Implementation
The library bridges the gap between tabular data and graph-based analysis:
# Transform SQL data into a knowledge graph with minimal code
graph = rusty_graph.KnowledgeGraph()
graph.add_nodes(data=users_df, node_type='User', unique_id_field='user_id')
graph.add_connections(
    data=purchases_df,
    connection_type='PURCHASED',
    source_type='User',
    source_id_field='user_id',
    target_type='Product',
    target_id_field='product_id',
)

# Calculate insights directly on the graph
user_spending = graph.type_filter('User').traverse('PURCHASED').calculate(
    expression='sum(price * quantity)',
/r/Python
https://redd.it/1jamelh
I am building a technical debt quantification tool for Python frameworks -- looking for feedback
Hey everyone,
I’m working on a tool that **automates technical debt analysis** for Python teams. One of the biggest frustrations I’ve seen is that **SonarQube applies generic rules but doesn’t detect which framework you’re using (Django, Flask, FastAPI, etc.)**.
🔹 **What it does:**
✅ **Auto-detects the framework** in your repo (**no manual setup needed**).
✅ **Applies custom SonarQube rules** tailored to that framework.
✅ Generates a **framework-aware technical debt report** so teams can prioritize fixes.
💡 The idea is to save teams from **writing custom rules manually** and provide **more meaningful insights on tech debt.**
🚀 **Looking for feedback!**
* Would this be useful for your team?
* What are your biggest frustrations with SonarQube & technical debt tracking?
* Any must-have features you’d like in something like this?
I’d love to hear your thoughts! If you’re interested in testing it, I can share early access. 😊
Thanks in advance! 🙌
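A rough idea of what the framework auto-detection step mentioned above could look like (purely illustrative, not the tool's actual code):

# Purely illustrative: guess the web framework from a repo's dependency files.
from pathlib import Path

FRAMEWORKS = ("django", "flask", "fastapi")

def detect_framework(repo: Path) -> str | None:
    """Return the first known framework named in common dependency files."""
    text = ""
    for name in ("requirements.txt", "pyproject.toml", "Pipfile", "setup.cfg"):
        dep_file = repo / name
        if dep_file.is_file():
            text += dep_file.read_text(encoding="utf-8", errors="ignore").lower()
    for framework in FRAMEWORKS:
        if framework in text:
            return framework
    return None

print(detect_framework(Path(".")))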
/r/Python
https://redd.it/1jamcog
Convert Voice to Text
Hi, I hope everything's going well. I need to convert audio files to text. These would be recordings of my voice, and sometimes conversations with a group of people. Can you recommend any software or offer any advice? I use Manjaro as my operating system. Thanks.
/r/Python
https://redd.it/1jappop
django-pghistory vs django-simple-history?
I am using Django + Postgres, and the goal here is just tracing events and building a timeline (X was added/removed from Y, value Z changed from 1 to 2, etc.), not necessarily recovering any state at a given time.
Any recommendations which library to use? Any remarks about either of them, what to consider, pitfalls, etc.?
Thanks!
/r/django
https://redd.it/1jabg92
Matlab's variable explorer is amazing. What's Python's closest?
Hi all,
Long-time Python user. Recently needed to use Matlab for a customer. They had a large data set saved in their native .mat file structure.
It was so simple and easy to explore the data within the structure without needing any code itself. It made extracting the data I needed super quick and simple. Made me wonder if anything similar exists in Python?
I know Spyder has a variable explorer (which is good) but it dies as soon as the data structure is remotely complex.
I will likely need to do this often with different data sets.
Background: I'm converting a lot of the code from an academic research group to run in Python.
/r/Python
https://redd.it/1jb1gzp
TIL: You can actually debug the Django shell in VS Code and it's changed everything
After years of sprinkling print() statements and logs throughout my Django codebase when debugging, I've discovered a much better way that's been here all along: using a VS Code launch config for the debugger. I always used it for running the application, but I was testing it out and discovered you can do the same with the shell command:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Django Shell",
            "type": "debugpy",
            "request": "launch",
            "program": "${workspaceFolder}/manage.py",
            "args": ["shell"],
            "django": true
        }
    ]
}
Just drop this in your .vscode/launch.json file, select "Django Shell" from the debug dropdown, and use it as you would when running the server.
/r/django
https://redd.it/1jb3s85
Rate my program
It's just a simple math program. Nothing special! (except that it's my first actual project that I feel like publishing)
Link: https://github.com/ger3tto/first_math_program
/r/Python
https://redd.it/1jb4255
CocoIndex: Open source ETL to index fresh data for AI, like LEGO
📌 Repo: GitHub - cocoindex-io/cocoindex (Apache License 2.0)
# 📌 What My Project Does
It is an ETL framework to index data for AI, such as semantic search and retrieval-augmented generation (RAG), with realtime incremental updates. It is featured on console[.]dev this week, with 5k downloads last week.
It is the first engine that supports both custom transformation logic (like building with Lego) and incremental updates (out of the box, to handle source data updates) for indexing data.
CocoIndex offers a data-driven programming model that simplifies the creation and maintenance of data indexing pipelines, ensuring data freshness and consistency.
# 🎯 Target Audience
\- Developers building data pipelines for RAG or semantic search.
# 🔥 Key Features
Data Flow Programming: Build indexing pipelines by composing transformations like Lego blocks, with built-in state management and observability.
Support Custom Logic: Plug in your choice of chunking, embedding, and vector stores. Extend with custom transformations like deduplication and reconciliation.
Incremental Updates: Smart state management minimizes re-computation by tracking changes at the file level, with future support for chunk-level granularity.
Python SDK: Built with a Rust core 🦀, exposed through an intuitive Python binding 🐍 for ease of use. All of our examples are currently in Python 🐍.
# 🐳 How it works
You can think of
/r/Python
https://redd.it/1jb7oya
[R] How Pickle Files Backdoor AI Models—And What You Can Do About It
This article deep-dives into Python serialisation and how it is being used to exploit ML models.
Do let me know if you have any feedback. Thanks.
Blog - https://jchandra.com/posts/python-pickle/
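For anyone skimming, the core risk the article covers is that unpickling runs arbitrary code; a minimal, self-contained illustration (harmless command here, but it could be anything):

# Why unpickling untrusted data is dangerous: pickle calls whatever the payload tells it to.
import pickle

class Payload:
    def __reduce__(self):
        import os
        # A real backdoor could run any command or import and execute any code here.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))

malicious_bytes = pickle.dumps(Payload())
pickle.loads(malicious_bytes)  # executes the command above; never load untrusted pickles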
/r/MachineLearning
https://redd.it/1jb4vbn
Server-side rendering: FastAPI, HTMX, no Jinja
Hi,
I recently created a simple FastAPI project to showcase what Python server-side rendered apps with an htmx frontend can look like, using a React-like, async, type-checked rendering engine.
The app does not use Jinja/Chameleon, any similar templating engine, or ugly custom syntax in HTML- or markdown-like files; instead it can (and does) use valid HTML and even customized, TailwindCSS-styled markdown for some pages.
Admittedly, this is a demo for the htmy and FastHX libraries.
Interestingly, even AI coding assistants pick up the patterns and offer decent completions.
If interested, you can check out the project here (link to deployed version in the repo): https://github.com/volfpeter/lipsum-chat
For comparison, you can find a somewhat older, but fairly similar project of mine that uses Jinja: https://github.com/volfpeter/fastapi-htmx-tailwind-example
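To give a flavour of the template-free approach (plain FastAPI returning HTML built by Python functions; this is not the htmy/FastHX API, just the general idea):

# Not the htmy/FastHX API, just plain FastAPI to show HTML rendered from Python functions.
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

def message_list(messages: list[str]) -> str:
    items = "".join(f"<li>{m}</li>" for m in messages)
    return f"<ul id='messages'>{items}</ul>"

@app.get("/messages", response_class=HTMLResponse)
async def get_messages() -> str:
    # An htmx client could swap this fragment into the page (hx-get="/messages").
    return message_list(["hello", "world"])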
/r/Python
https://redd.it/1jbagf6
Project architecure for streamlit/Data Apps
Hi there, I'm a data scientist working mostly with Streamlit to build data apps.
Recently, requests for solutions that require a more user-friendly interface for data/AI visualization have grown significantly at my job.
Enough to make my manager realize that deploying such applications requires a robust standard process.
As someone with a degree in computer science, I find that the most common project architectures don't seem to fit our use cases, which makes me curious: what project architectures are most commonly used for these kinds of solutions?
/r/Python
https://redd.it/1jb880y