Webcam Rubik's Cube Solver GUI App (PySide6 / OpenGL / OpenCV)
# Background
This toy project started as a self-challenge to see if I could build an application that uses the webcam and some foundational computer vision techniques to detect the state of a scrambled Rubik's cube and then show the solution steps to the user.
# Target Audience
As it is a toy project, it is mainly meant for casual use by those who are curious, or as an example project for students learning computer vision and/or graphics programming.
# Comparison
I have seen a few projects on GitHub that implement a Rubik's cube facelet detection pipeline, but they seem to fall short of actually solving the cube and showing the solution to the user. I have also seen a few Android solver apps, but those don't seem to auto-detect the state of the cube with the phone camera; you have to set the state manually.
# Installation and Usage
git clone https://github.com/pdadhikary/rubiksolver.git
cd rubiksolver
uv sync
uv run rubiksolver
When scanning their Rubik's cube, the user should hold up each face of the cube
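For the curious, a sketch of the kind of facelet color classification such a pipeline typically relies on (illustrative only, not this project's actual code; note OpenCV's hue channel runs 0-179):

import cv2
import numpy as np

HUES = {"red": 0, "orange": 15, "yellow": 30, "green": 60, "blue": 120}

def classify_facelet(bgr_patch: np.ndarray) -> str:
    # Average hue/saturation/value over the facelet patch.
    h, s, v = cv2.mean(cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV))[:3]
    if s < 60 and v > 150:   # washed out and bright -> white sticker
        return "white"
    # Nearest hue center, accounting for the circular hue wrap-around.
    return min(HUES, key=lambda c: min(abs(h - HUES[c]), 180 - abs(h - HUES[c])))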
/r/Python
https://redd.it/1ouwa42
Can I create PDF infographics/reports using Python?
I have a Python script that does data scraping and whatnot to output data into a CSV file. I'd love to know which packages I can use to produce professional graphics and charts, lay the data out nicely, and export it as a PDF on my computer. Any suggestions? I used ChatGPT and it used basic Matplotlib, but I am wondering what the best way is to go about creating something like this:
https://cdn.venngage.com/template/thumbnail/small/f7c94e39-a01c-4bba-934c-52bd9330525a.webp
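As a starting point, a minimal sketch of one common route, Matplotlib's PdfPages (the file names and layout here are purely illustrative):

import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

df = pd.read_csv("scraped.csv")  # hypothetical output of the scraping script

with PdfPages("report.pdf") as pdf:
    fig, (ax_chart, ax_table) = plt.subplots(2, 1, figsize=(8.27, 11.69))  # A4
    # Chart panel: bar chart of the numeric columns.
    df.select_dtypes("number").head(10).plot.bar(ax=ax_chart, title="Key metrics")
    # Table panel: first rows rendered as a table.
    ax_table.axis("off")
    ax_table.table(cellText=df.head(5).astype(str).values,
                   colLabels=df.columns, loc="center")
    fig.suptitle("Scraped Data Report")
    pdf.savefig(fig)
    plt.close(fig)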
/r/Python
https://redd.it/1our6fw
Question: I'm new
I am doing a project that uses Django REST for the back end and Vite for the front end. I was making a request that had to send credentials (cookies / session ID), and the issue persisted despite configuring CORS.
Front at localhost:8000
Back at 172.0.10... the usual
It didn't work for me; a 400 error, I think.
I fixed it by making the Django back end serve on the same localhost but on a different port.
Is it normal to do this in development, or did I break something? From what I read, the AI didn't help me and neither did anything else.
I must have explained myself poorly; sorry.
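For reference, a minimal settings sketch of what usually makes credentialed cross-origin requests work in development, assuming the django-cors-headers package (the origin is illustrative):

# settings.py sketch (assumes django-cors-headers is installed)
INSTALLED_APPS += ["corsheaders"]
MIDDLEWARE.insert(0, "corsheaders.middleware.CorsMiddleware")  # near the top

CORS_ALLOWED_ORIGINS = ["http://localhost:8000"]  # the front end's origin
CORS_ALLOW_CREDENTIALS = True                     # needed for cookies / session ID
CSRF_TRUSTED_ORIGINS = ["http://localhost:8000"]

The front end must also opt in to sending cookies, e.g. fetch(url, { credentials: "include" }).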
/r/djangolearning
https://redd.it/1ouud8w
autopyperf — A tiny Python profiler that gives instant optimization tips
Hey everyone,
I made **autopyperf**, a lightweight Python module that automatically profiles your code and suggests quick optimizations — no setup, no dependencies, just pure Python.
# 🧩 What It Does
autopyperf helps you understand where your code slows down and gives small static suggestions to make it faster.
* ⏱️ Profile functions with a decorator
* 🧮 Analyze whole scripts easily
* 💡 Get simple optimization hints
Example:
from autopyperf import profile_function, profile_script, suggest_optimizations

@profile_function
def slow_func():
    return [i**2 for i in range(100000)]

slow_func()

profile_script("test.py")
suggest_optimizations("test.py")
# 🎯 Target Audience
Made for developers, students, and hobbyists who want quick feedback on code performance without using heavy profilers like `cProfile` or `pyinstrument`.
It’s simple, educational, and perfect for small projects or quick checks.
# ⚖️ How It’s Different
* ✅ No dependencies
* ✅ Dead-simple setup
* 💡 Adds optimization suggestions (something most profilers don’t)
* ❌ No complex graphs or visualizations — intentionally minimal
# ⚙️ Install
pip install autopyperf
or directly:
pip install git+https://github.com/Ithihasmadhu/autopyperf
🔗 **GitHub:** [https://github.com/Ithihasmadhu/autopyperf](https://github.com/Ithihasmadhu/autopyperf)
/r/Python
https://redd.it/1ovghpg
Can't use socketIO with a reverse proxy
Hi, has anyone worked with socketio using a reverse proxy? I can't find the correct configuration to do it, this is how I'm using it
main.py:
socketio = SocketIO(app, cors_allowed_origins="*")
web.config:
<rule name="ChatBot Port 5001">
<match url="\^example/(.*)" />
<action type="Rewrite" url="http://localhost:5001/{R:1}" />
</rule>
<rule name="ChatBot WebSocket" stopProcessing="true">
<match url="\^example/socket.io/(.*)" />
<action type="Rewrite" url="http://localhost:5001/example/socket.io/{R:1}" />
</rule>
JS:
<script>var socket = io();</script>
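One likely culprit when proxying under a path prefix (a guess, not something confirmed in the post): the server and client must agree on the Socket.IO path. A minimal sketch on the Flask side:

socketio = SocketIO(app, path="/example/socket.io", cors_allowed_origins="*")

with the client passing the same prefix, e.g. var socket = io({ path: "/example/socket.io" });, so the handshake actually hits the rewritten route.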
/r/flask
https://redd.it/1ovc82j
[R] LeJEPA: New Yann LeCun paper
Abstract: Learning manipulable representations of the world and its dynamics is central to AI. Joint-Embedding Predictive Architectures (JEPAs) offer a promising blueprint, but lack of practical guidance and theory has led to ad-hoc R&D. We present a comprehensive theory of JEPAs and instantiate it in LeJEPA, a lean, scalable, and theoretically grounded training objective. First, we identify the isotropic Gaussian as the optimal distribution that JEPAs' embeddings should follow to minimize downstream prediction risk. Second, we introduce a novel objective, Sketched Isotropic Gaussian Regularization (SIGReg), to constrain embeddings to reach that ideal distribution. Combining the JEPA predictive loss with SIGReg yields LeJEPA with numerous theoretical and practical benefits: (i) a single trade-off hyperparameter, (ii) linear time and memory complexity, (iii) stability across hyperparameters, architectures (ResNets, ViTs, ConvNets) and domains, (iv) heuristics-free, e.g., no stop-gradient, no teacher-student, no hyperparameter schedulers, and (v) a distributed-training-friendly implementation requiring only ≈50 lines of code. Our empirical validation covers 10+ datasets, 60+ architectures, all with varying scales and domains. As an example, using ImageNet-1k for pretraining and linear evaluation with a frozen backbone, LeJEPA reaches 79% with a ViT-H/14. We hope that the simplicity and theory-friendly ecosystem offered by LeJEPA will reestablish self-supervised pre-training
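As a rough illustration of the isotropy idea only (this is not the paper's SIGReg, just a crude stand-in): a regularizer can push random 1-D projections of the embeddings toward the zero-mean, unit-variance marginals an isotropic Gaussian would have.

import torch

def isotropy_penalty(z: torch.Tensor, num_dirs: int = 64) -> torch.Tensor:
    # z: (batch, dim) embeddings. Penalize deviation of random 1-D
    # projections from N(0, 1) moments.
    dirs = torch.randn(z.shape[1], num_dirs, device=z.device)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)   # unit directions
    proj = z @ dirs                                # (batch, num_dirs)
    mean_term = (proj.mean(dim=0) ** 2).mean()
    var_term = ((proj.var(dim=0, unbiased=False) - 1) ** 2).mean()
    return mean_term + var_term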
/r/MachineLearning
https://redd.it/1ovm4fd
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1ovlxtw
MyPy vs Pyright
What's the preferred tool in industry?
For the whole workflow: IDE, precommit, CI/CD.
I searched and cannot find what's standard. I'm also working with unannotated libraries.
/r/Python
https://redd.it/1ovivvs
How do I stop and start ./manage.py runserver automatically?
I have a Docker container running this command. I need to automatically make it stop and start so that it can pick up new changes in the environment. How can I do it? Optimally I would need something that checks a condition every minute and then restarts the command.
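One way is a small supervisor as the container's command instead of runserver directly; a sketch (should_restart is a placeholder for your actual condition):

import subprocess
import time

CMD = ["python", "manage.py", "runserver", "0.0.0.0:8000"]

def should_restart() -> bool:
    # Placeholder: replace with the real check (env file mtime, flag, ...).
    return False

proc = subprocess.Popen(CMD)
try:
    while True:
        time.sleep(60)                    # check once a minute
        if should_restart():
            proc.terminate()              # stop the old server...
            proc.wait()
            proc = subprocess.Popen(CMD)  # ...and start a fresh one
finally:
    proc.terminate()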
/r/djangolearning
https://redd.it/1ou632x
DjangoCon 2025 talk recordings just went live!
https://www.techtalksweekly.io/i/178572323/djangocon-us
/r/django
https://redd.it/1ovea53
[D] CVPR submission number almost at 30k
Made my CVPR submission and got assigned almost a 30k submission number. Does this mean there are ~30k submissions to CVPR this year? That is more than double last year's...
/r/MachineLearning
https://redd.it/1ovqvdr
[D] How to sound more like a researcher
I have been working in Applied ML for the last 10 years, but in the last 2 I have had a much stronger research focus and have published a few papers. Through that, I have had a few people from some frontier labs reach out about research positions (my 10 years have been in FAANG). This would be a career jump that I would love, but I find that in my interviews I sound too applied and not researchy enough. This makes me feel very unconfident discussing what I have done. Applied interviews are more like exams; these are more like defending a thesis.
Any suggestions for improvement? (I do stay up to date with current papers but honestly there are so many that I may not be in full depth about everything)
/r/MachineLearning
https://redd.it/1ovtrn4
[D] <ICLR review comment> Is this real?
https://preview.redd.it/s49lfluvdu0g1.png?width=1179&format=png&auto=webp&s=ee4a90975f2accef9c884bdea8900214a039483a
/r/MachineLearning
https://redd.it/1ov7qs2
Keecas: Dict-based symbolic math for Jupyter with units support and automatic LaTeX rendering
As a structural engineer I have always aimed to reduce the friction between doing the calculation and writing the report. I was taught symbolic math with units, but the field is dominated by Word and Excel, neither of which is a good fit. Thanks to Quarto I've been able to break the shackles of Office and write reproducible documents (BONUS: plain text is bliss).
# What My Project Does
Keecas is a Python package for symbolic and units-aware calculations in Jupyter notebooks, specifically designed for Quarto-rendered documents (PDF/HTML). It minimizes boilerplate by using Python dicts and dict comprehension as main equations containers: keys represent left-hand side symbols, values represent right-hand side expressions.
The package combines SymPy (symbolic math), Pint (units), and functional programming patterns to provide automatic LaTeX rendering with equation numbering, unit conversion, and cross-referencing.
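To make the dict idea concrete, here is a hypothetical illustration in plain SymPy + Pint (this is not keecas's actual API):

import sympy as sp
from pint import UnitRegistry

ureg = UnitRegistry()
Q_ = ureg.Quantity

# Keys are left-hand-side symbols, values are right-hand-side expressions.
b, h, A = sp.symbols("b h A")
eqs = {A: b * h}

# Evaluate numerically with units attached via Pint.
area = sp.lambdify((b, h), eqs[A])(Q_(300, "mm"), Q_(500, "mm")).to("m**2")
print(sp.latex(sp.Eq(A, eqs[A])), "->", area)   # A = b h -> 0.15 meter ** 2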
# Target Audience
* Engineers writing calculation reports and technical documentation
* Scientists creating reproducible notebooks with units
* Academics preparing papers with mathematical content (likely not mathematicians though, those pesky folk have no use for units; or numbers)
* Anyone using Jupyter + Quarto for technical documents requiring LaTeX output
>NOTE: while keecas includes features aimed at Quarto, it can be used just as easily with Jupyter notebooks alone.
keecas is available on
/r/Python
https://redd.it/1ow4shz
is there any way to detect and filter out bot traffic?
Hi, I am a Django lover. Lately I feel there is a lot of bot traffic on my website. Are there any ways to detect and block bots? Is there a Python package or something else? Not CAPTCHA or Cloudflare.
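Short of a package recommendation, a minimal Django middleware sketch that screens on the User-Agent header (illustrative only; determined bots fake this header, so treat it as a first filter):

import re
from django.http import HttpResponseForbidden

BOT_UA = re.compile(r"bot|crawler|spider|scrapy|curl|wget", re.IGNORECASE)

class BlockBotsMiddleware:
    # Register by adding this class's dotted path to MIDDLEWARE in settings.py.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        ua = request.META.get("HTTP_USER_AGENT", "")
        if not ua or BOT_UA.search(ua):
            return HttpResponseForbidden("bots not allowed")
        return self.get_response(request)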
/r/django
https://redd.it/1ow7s2k
From 59 lines of tutorial code to 260,000 lines powering a production SaaS
Ever wondered what Flask looks like in production? Here are some insights into a Flask app with over 150 thousand users. Enjoy!
## How it started
In 2016, I started a Flask tutorial because I had an idea for a simple app. I knew a little bit about HTML and CSS but almost nothing about database-driven apps. I continued building on this codebase for nine years. Now, that same app has hundreds of thousands of registered users, earns thousands in revenue per month, and has changed my life forever.
Despite its unglamorous beginnings I never rewrote the app from scratch; I just kept on adding to it (and sometimes taking away). Whenever I faced a problem or a challenging requirement, I churned, ground, and didn't give up until it was fixed. Then I moved on to the next task.
## Some stats
Some usage stats:
* 400k visitors per month
* 1.5 million page views per month
* 8k signups per month with 180k signed-up users overall
* 80 requests per second
Some code stats:
- Python: **51,537 lines**
- Vue/JavaScript: **193,355 lines**
- HTML: **16,414 lines**
- **Total: ~261,000 lines of code**
## The architecture and customizations
OK, onto the code! Here is a top-level overview:
* The main database is Postgres and I use Peewee as
/r/flask
https://redd.it/1owbrvx
Friday Daily Thread: r/Python Meta and Free-Talk Fridays
# Weekly Thread: Meta Discussions and Free Talk Friday 🎙️
Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!
## How it Works:
1. Open Mic: Share your thoughts, questions, or anything you'd like related to Python or the community.
2. Community Pulse: Discuss what you feel is working well or what could be improved in the /r/python community.
3. News & Updates: Keep up-to-date with the latest in Python and share any news you find interesting.
## Guidelines:
All topics should be related to Python or the /r/python community.
Be respectful and follow Reddit's Code of Conduct.
## Example Topics:
1. New Python Release: What do you think about the new features in Python 3.11?
2. Community Events: Any Python meetups or webinars coming up?
3. Learning Resources: Found a great Python tutorial? Share it here!
4. Job Market: How has Python impacted your career?
5. Hot Takes: Got a controversial Python opinion? Let's hear it!
6. Community Ideas: Something you'd like to see us do? Tell us.
Let's keep the conversation going. Happy discussing! 🌟
/r/Python
https://redd.it/1owhaba
[R] Is Top-K edge selection preserving task-relevant info, or am I reasoning in circles?
I have m modalities with embeddings H_i. I learn edge weights Φ_ij(c, e_t) for all pairs (just a learned feedforward function of the two embeddings plus context), then select the Top-K edges by weight and discard the rest.
My thought: since Φ_ij is learned via gradient descent to maximize task performance, high-weight edges should indicate that modalities i and j are relevant together. So by selecting Top-K, I'm keeping the most useful pairs and discarding irrelevant ones.
Problem: this feels circular: "Φ is good because we trained it to be good."
Is there a formal way to argue that Top-K selection preserves task-relevant information, one that doesn't just assume this?
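For concreteness, a small sketch of the mechanism being described (all names and sizes are mine):

import torch
import torch.nn as nn

m, d, K = 5, 64, 4
H = torch.randn(m, d)        # one embedding H_i per modality
ctx = torch.randn(d)         # context vector c (same dim, for simplicity)

# Learned feedforward Φ scoring a pair of embeddings plus context.
phi = nn.Sequential(nn.Linear(3 * d, 32), nn.ReLU(), nn.Linear(32, 1))

pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
feats = torch.stack([torch.cat([H[i], H[j], ctx]) for i, j in pairs])
weights = phi(feats).squeeze(-1)          # Φ_ij for every pair

top = torch.topk(weights, K)              # only the K strongest edges survive
kept_edges = [pairs[i] for i in top.indices.tolist()]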
/r/MachineLearning
https://redd.it/1ow7g2u
[D] Question about self-referential novelty gating
I’ve been wondering about continual learning and noticed that most setups treat “novelty” as a single scalar, usually tied to prediction error or surprise. But in humans, a surprise that feels self-relevant (“this is about me / my situation”) clearly lands differently from a random trivia fact. So I’m wondering if it makes sense to give agents a simple “self-score” for each event and let that bias what gets written into long-term memory.
For example, here is a promotion gate I imagined for an episodic memory buffer:
effective_score = score + alpha * self_score
if effective_score >= SCORE_THRESH and dist_to_neighbors <= RADIUS_THRESH:
    promote_to_long_term(memory)
Intuitively, this would mean self-relevant surprises are slightly more likely to be preserved and influence future behavior, without just globally increasing the learning rate. Has anyone tried something like this in practice (RL agents, LLM agents with memory, etc.) or seen papers where self-relevance is treated as an explicit signal in the learning rule, rather than just a psychological observation?
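A runnable toy version of that gate (thresholds, dimensions, and the distance rule are all made up) could look like:

import numpy as np

SCORE_THRESH, RADIUS_THRESH, alpha = 0.8, 2.0, 0.5
long_term = []  # promoted memories (embedding vectors)

def maybe_promote(emb, score, self_score):
    effective_score = score + alpha * self_score
    dist_to_neighbors = (min(np.linalg.norm(emb - m) for m in long_term)
                         if long_term else 0.0)
    if effective_score >= SCORE_THRESH and dist_to_neighbors <= RADIUS_THRESH:
        long_term.append(emb)

maybe_promote(np.random.randn(8), score=0.6, self_score=0.7)  # 0.95 >= 0.8 -> promoted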
/r/MachineLearning
https://redd.it/1ow8587
I’ve been wondering about continual learning and noticed that most setups treat “novelty” as a single scalar, usually tied to prediction error or surprise. But in humans, a surprise that feels self-relevant (“this is about me / my situation”) clearly lands differently from a random trivia fact. So I’m wondering if it makes sense to give agents a simple “self-score” for each event and let that bias what gets written into long-term memory.
For example like this a promotion gate I imagined for an episodic memory buffer
effective\\_score = score + alpha \\* self\\_score
if effective\\_score >= SCORE\\_THRESH and dist\\_to\\_neighbors <= RADIUS\\_THRESH:
promote\\_to\\_long\\_term(memory)
Intuitively, this would mean self-relevant surprises are slightly more likely to be preserved and influence future behavior, without just globally increasing the learning rate. Has anyone tried something like this in practice (RL agents, LLM agents with memory, etc.) or seen papers where self-relevance is treated as an explicit signal in the learning rule, rather than just a psychological observation?
/r/MachineLearning
https://redd.it/1ow8587
Reddit
From the MachineLearning community on Reddit
Explore this post and more from the MachineLearning community
I want to build and use custom MCP in my Django project. Have any suggestion on this?
I'm working on a project where users can explore the entire database and create dashboards using simple natural language queries. I've already implemented the system for connecting different types of databases like PostgreSQL, MongoDB, SQLite, CSV, Excel, etc., and created a chat model and views for that. It currently makes simple OpenAI calls for the query responses.
Now, I want to connect the databases to the chat so that when the user writes a query, it talks to the connected databases and provides responses based on that.
For this, I want to use MCP in my project, as MCP works well with AI tooling.
Does anyone have any experience with a similar situation and can guide me in this?
Thanks in advance to everyone!
/r/django
https://redd.it/1ow0kfw