Python 1.0.0, released 31 years ago today
Python 1.0.0 is out!
https://groups.google.com/g/comp.lang.misc/c/_QUzdEGFwCo/m/KIFdu0-Dv7sJ?pli=1
> --> Tired of decyphering the Perl code you wrote last week?
> --> Frustrated with Bourne shell syntax?
> --> Spent too much time staring at core dumps lately?
> Maybe you should try Python...
~ Guido van Rossum
/r/Python
https://redd.it/1ibxols
Django Channels: Asynchronous Magic for Real-Time Applications 🚀
Ever wondered how to build real-time features like chat applications, notifications, or live updates in your Django project? Django Channels makes it possible by bringing asynchronous capabilities to Django.
I’ve been working on this Django-Channels repository to make it easier for developers to understand and implement these real-time features. 🎯
🔗 Check it out here:
GitHub Repository
✨ If you find it helpful, please show your support by giving the repo a ⭐ and following me on GitHub. Every star and follow encourages me to create more helpful resources for the community! 🙌
Let’s keep building awesome projects and pushing boundaries together. 💻💡
/r/django
https://redd.it/1ibx9qg
GitHub: Matu-sunuwawa/Django-Channels
Terminal based Python debuggers
Are there any TUI Python debuggers? I found a couple of hobby projects but they seem WIP/unmaintained.
Ideally, it should replicate VS Code debugger features, like being able to watch expressions etc.
EDIT: I'm aware of pdb/pdb++/ipython. I'm specifically asking for alternatives to these that are more graphical.
/r/Python
https://redd.it/1iby62a
Day 2: Building Expense Tracker App
Hey everyone 👋
I'm currently working on an Expense Tracker App to learn how to display data on graphs using Chart.js. It's been a fun project so far, and I've made a few updates:
1. User-friendly interface: I focused on creating a more intuitive experience to keep users engaged.
2. Dismissible messages: Users are now better informed about their post progress with helpful notifications.
3. Robust error handling: Errors are now handled gracefully, preventing any app crashes.
4. Data persistence: Users won’t have to re-enter data when they encounter an error — it's saved for them!
This project has been a great opportunity to focus more on UI/UX instead of my usual backend-heavy approach, and I’ve learned a lot in the process.
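As a sketch of the Chart.js wiring such an app needs (all names here are hypothetical, not from the repo): a helper that aggregates expense rows into the {labels, data} shape a Chart.js dataset consumes, which a Django view would then return via JsonResponse.

```python
from collections import defaultdict

def chart_payload(expenses):
    # Sum amounts per category into the {labels, data} shape Chart.js expects.
    totals = defaultdict(float)
    for exp in expenses:
        totals[exp["category"]] += exp["amount"]
    return {"labels": list(totals), "data": list(totals.values())}

payload = chart_payload([
    {"category": "Food", "amount": 12.5},
    {"category": "Rent", "amount": 800.0},
    {"category": "Food", "amount": 7.5},
])
print(payload)  # {'labels': ['Food', 'Rent'], 'data': [20.0, 800.0]}
```

In a Django view this would be `return JsonResponse(chart_payload(qs.values("category", "amount")))`, with the template passing the payload to `new Chart(...)`.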
View project on GitHub
If you're new to Django or looking for a fun project, why not give this a try? You’ll find a full description in the repo.
For my previous post **click here**
https://preview.redd.it/3w6oysi6dqfe1.png?width=2496&format=png&auto=webp&s=93deeb48f3fe9ca4fef43a9d12ed1b6141a2484c
https://preview.redd.it/arsi2859dqfe1.jpg?width=751&format=pjpg&auto=webp&s=20bc5fae30bc8cbbed3d91d4820840086f319d59
https://preview.redd.it/enjc8759dqfe1.jpg?width=747&format=pjpg&auto=webp&s=71685691d986318001575b11a3f97cb46410ad77
https://preview.redd.it/iapym759dqfe1.jpg?width=709&format=pjpg&auto=webp&s=2e1ca5f1879f1a3053e5a29db037f386195e02cb
/r/django
https://redd.it/1ic0pwv
GitHub: keith244/Expense-Tracker — an expense tracking app that displays expenses on a graph.
Windows Hotspot
I am trying to run my web app via the Windows hotspot on my laptop, but the application seems unable to listen on the hotspot. I have tried listening on my laptop's hotspot interface (192.168.137.1) and on all interfaces (0.0.0.0); when listening on all interfaces, my hotspot interface does not appear in the list. Is there a way to resolve this? Would this application work on the hotspot from a Raspberry Pi? Happy to provide selected code snippets as required, but much of the code is sensitive, so it won't be uploaded in an uncensored form.
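One way to narrow this down, independent of Flask, is to check whether the address is bindable at the socket level (standard library only; 192.168.137.1 is the usual Windows hotspot/ICS gateway address, and binding to it will only succeed while the hotspot is actually up):

```python
import socket

# Try binding each candidate address; if the bind fails here, Flask's
# app.run(host=...) will fail on the same address for the same reason.
for addr in ("0.0.0.0", "127.0.0.1"):  # add "192.168.137.1" on the laptop
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, 0))  # port 0 = let the OS pick any free port
        print(addr, "bindable, got port", s.getsockname()[1])
    except OSError as e:
        print(addr, "not bindable:", e)
    finally:
        s.close()
```

If 192.168.137.1 is not bindable while the hotspot is running, the problem is at the OS/firewall level (e.g. Windows Defender blocking the port), not in the app.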
/r/flask
https://redd.it/1ibylju
Can't make Nginx see Gunicorn socket. Please help.
Hi all.
I'm trying to install an Nginx/Gunicorn/Flask app (protocardtools is its name) on a local server following this tutorial.
Everything seems to work fine down to the last moment: when I run sudo nginx -t I get the error:
"/etc/nginx/proxy_params" failed (2: No such file or directory) in /etc/nginx/conf.d/protocardtools.conf:22
Gunicorn seems to be running fine when I do sudo systemctl status protocardtools.
Contents of my /etc/nginx/conf.d/protocardtools.conf:
server {
    listen 80;
    server_name cards.proto.server;

    location / {
        include proxy_params;
        proxy_pass http://unix:/media/media/www/www-protocardtools/protocardtools.sock;
    }
}
Contents of my /etc/systemd/system/protocardtools.service:
[Unit]
Description=Gunicorn instance to serve ProtoCardTools
After=network.target

[Service]
User=proto
Group=www-data
WorkingDirectory=/media/media/www/www-protocardtools
Environment="PATH=/media/media/www/www-protocardtools/venv/bin"
ExecStart=/media/media/www/www-protocardtools/venv/bin/gunicorn --workers 3 --bind unix:protocardtools.sock -m 007 wsgi:app

[Install]
WantedBy=multi-user.target
Can anyone please help me shed some light on this? Thank you so much in advance.
/r/flask
https://redd.it/1ic372l
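For context, that nginx error means /etc/nginx/proxy_params does not exist on this system. On Debian/Ubuntu the nginx package normally ships that file; if it is missing, creating it with the stock contents may make nginx -t pass (these are the standard Debian/Ubuntu contents, worth verifying against your distribution before use):

```nginx
# /etc/nginx/proxy_params (stock Debian/Ubuntu contents)
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

Alternatively, inline these four proxy_set_header lines directly in the location block instead of using include.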
Tutorial: How To Serve Flask Applications with Gunicorn and Nginx on Ubuntu 22.04 (DigitalOcean)
Created a cool python pattern generator parser
Hey everyone!
Like many learning programmers, I cut my teeth on printing star patterns. It's a classic way to get comfortable with a new language's syntax. This got me thinking: what if I could create an engine to generate these patterns automatically? So, I did! I'd love for you to check it out and give me your feedback and suggestions for improvement.
**What My Project Does:**
This project, PatternGenerator, takes a simple input defined by my language and generates various patterns. It's designed to be easily extensible, allowing for the addition of more pattern types and customization options in the future. The current version focuses on core pattern generation logic. You can find the code on GitHub: [https://github.com/ajratnam/PatternGenerator](https://github.com/ajratnam/PatternGenerator)
**Target Audience:**
This is currently a toy project, primarily for learning and exploring different programming concepts. I'm aiming to improve it and potentially turn it into a more robust tool. I think it could be useful for:
* **Anyone wanting to quickly generate patterns:** Maybe you need a specific pattern for a project or just for fun.
* **Developers interested in contributing:** I welcome pull requests and contributions to expand the pattern library and features.
**Comparison:**
While there are many online pattern generators, this project differs in a few key ways:
*
/r/Python
https://redd.it/1ic4j5b
[D] Ever feel like you're reinventing the wheel with every scikit-learn project? Let's talk about making ML recommended practices less painful. 🤔
Hey fellow data scientists,
While scikit-learn is powerful, we often find ourselves:
* Manually checking for cross-validation errors
* Bouncing between Copilot, StackOverflow, and docs just to follow recommended practices
* Reinventing validation processes that need to work for DS teams and stakeholders
* Notebooks that become a graveyard of model iterations
I'm curious how you handle these challenges in your workflow:
* What's your approach to validation across different projects? Is there any unified method, or does each project end up with its own validation style?
* How do you track experiments without overcomplicating things?
* What tricks have you found to maintain consistency?
We (at probabl) have built an open-source library ([skore](https://github.com/probabl-ai/skore)) to tackle these issues, but I'd love to hear your solutions first. What workflows have worked for you? What's still frustrating?
* GitHub: [github.com/probabl-ai/skore](http://github.com/probabl-ai/skore)
* Docs: [skore.probabl.ai](http://skore.probabl.ai/)
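For contrast, this is the kind of per-project validation boilerplate being described, hand-rolled in pure Python (in practice one would reach for sklearn's KFold; this sketch just shows what keeps getting reinvented):

```python
# Hand-rolled k-fold split: each sample appears in exactly one test fold.
def k_fold_indices(n_samples, k=5):
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples) if i < start or i >= start + size]
        folds.append((train_idx, test_idx))
        start += size
    return folds

splits = k_fold_indices(10, k=3)  # 3 (train, test) index pairs
```

Every project ends up with a slightly different version of this plus its own metric logging, which is the consistency problem the post is asking about.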
/r/MachineLearning
https://redd.it/1ic5e7f
Django security best practices for software engineers.
Hi all,
I'm Ahmad, founder of Corgea. We've built a scanner that can find vulnerabilities in Django applications, so we decided to write a guide for software engineers on Django security best practices: https://corgea.com/Learn/django-security-best-practices-a-comprehensive-guid-for-software-engineers
We wanted to cover Django's security features, things we've seen developers do that they shouldn't, and all-around best practices. While we can't go into every detail, we've tried to cover a wide range of topics and gotcha's that are typically missed.
I'd love to get feedback from the community. Is there something else you'd include in the article? What's best practice that you've followed?
Thanks!
PS: we're using Django too for some of our services ❤️
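For readers who want a concrete starting point, here are a few of the hardening flags such guides typically recommend for a production settings.py (an illustrative subset chosen by the editor, not taken from the linked article; all are documented Django settings):

```python
# Production-hardening flags for a Django settings.py (illustrative subset).
DEBUG = False                        # never ship with DEBUG on
SECURE_SSL_REDIRECT = True           # redirect all HTTP to HTTPS
SESSION_COOKIE_SECURE = True         # send session cookie over HTTPS only
CSRF_COOKIE_SECURE = True            # same for the CSRF cookie
SECURE_HSTS_SECONDS = 31536000       # 1 year of HSTS
SECURE_CONTENT_TYPE_NOSNIFF = True   # X-Content-Type-Options: nosniff
X_FRAME_OPTIONS = "DENY"             # disallow framing (clickjacking)
```

Django's own `manage.py check --deploy` command flags most of these when they are missing.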
/r/django
https://redd.it/1ic9c1w
etl4py - Beautiful, whiteboard-style, typesafe dataflows for Python
https://github.com/mattlianje/etl4py
What my project does
etl4py is a simple DSL for pretty, whiteboard-style, typesafe dataflows that run anywhere - from laptop, to massive PySpark clusters to CUDA cores.
Target audience
Anyone who finds themselves writing dataflows or sequencing tasks, be it for local scripts or multi-node big data workflows. Like it? Star it ... but issues help more 🙇♂️
Comparison
As far as I know, there aren't any libraries offering this type of DSL (but lmk!) ... although I think overloading >> is not uncommon.
Quickstart:
from etl4py import *

# Define your building blocks
five_extract: Extract[None, int] = Extract(lambda _: 5)
double: Transform[int, int] = Transform(lambda x: x * 2)
add10: Transform[int, int] = Transform(lambda x: x + 10)

attempts = 0
def risky_transform(x: int) -> int:
    global attempts; attempts += 1
    if attempts <= 2: raise
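For readers curious how the `>>` overloading mentioned above works in principle, here is a minimal pipeline-DSL sketch (an editor's illustration of the general technique, not etl4py's actual implementation):

```python
class Step:
    """Wraps a function so steps compose left-to-right with >>."""
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # (a >> b)(x) == b(a(x)): feed this step's output into the next.
        return Step(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)

double = Step(lambda x: x * 2)
add10 = Step(lambda x: x + 10)
pipeline = double >> add10
print(pipeline(5))  # 20
```

Python evaluates `a >> b` by calling `a.__rshift__(b)`, which is what lets a library give `>>` whiteboard-arrow semantics.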
/r/Python
https://redd.it/1ic9b2m
PyPI security funding in limbo as Trump executive order pauses NSF grant reviews
Seth Larson, PSF Security-Developer-in-Residence, posts on LinkedIn:
> The threat of Trump EOs has caused the National Science Foundation to pause grant review panels. Critically for Python and PyPI security I spent most of December authoring and submitting a proposal to the "Safety, Security, and Privacy of Open Source Ecosystems" program. What happens now is uncertain to me.
> Shuttering R&D only leaves open source software users more vulnerable, this is nonsensical in my mind given America's dependence on software manufacturing.
> https://www.npr.org/sections/shots-health-news/2025/01/27/nx-s1-5276342/nsf-freezes-grant-review-trump-executive-orders-dei-science
This doesn't have immediate effects on PyPI, but the NSF grant money was going to help secure the Python ecosystem and supply chain.
/r/Python
https://redd.it/1iccu2q
Wednesday Daily Thread: Beginner questions
# Weekly Thread: Beginner Questions 🐍
Welcome to our Beginner Questions thread! Whether you're new to Python or just looking to clarify some basics, this is the thread for you.
## How it Works:
1. Ask Anything: Feel free to ask any Python-related question. There are no bad questions here!
2. Community Support: Get answers and advice from the community.
3. Resource Sharing: Discover tutorials, articles, and beginner-friendly resources.
## Guidelines:
This thread is specifically for beginner questions. For more advanced queries, check out our [Advanced Questions Thread](#advanced-questions-thread-link).
## Recommended Resources:
If you don't receive a response, consider exploring r/LearnPython or join the Python Discord Server for quicker assistance.
## Example Questions:
1. What is the difference between a list and a tuple?
2. How do I read a CSV file in Python?
3. What are Python decorators and how do I use them?
4. How do I install a Python package using pip?
5. What is a virtual environment and why should I use one?
Let's help each other learn Python! 🌟
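Taking example question 1 as a worked illustration:

```python
nums_list = [1, 2, 3]    # mutable
nums_tuple = (1, 2, 3)   # immutable

nums_list.append(4)      # in-place change works: [1, 2, 3, 4]
try:
    nums_tuple[0] = 99   # tuples reject item assignment
except TypeError as e:
    error = str(e)       # "'tuple' object does not support item assignment"

# Because tuples are immutable they are hashable, so they can be dict keys;
# lists cannot.
lookup = {nums_tuple: "tuples can be dict keys"}
```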
/r/Python
https://redd.it/1icge4n
Problem with env variables
I'm trying to set up an email sending system. The problem is that if I set MAIL_SERVER and MAIL_PORT their values always remain None. How can I solve it?
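A common cause of this symptom is reading the variables before they exist in the process environment (e.g. a .env file that was never loaded, or variables set in one shell but not exported). A minimal stdlib sketch of the pattern (server name and port here are placeholders, not the poster's values):

```python
import os

# Simulate exporting the variables; in real use they must be set in the
# shell (or loaded from a .env via python-dotenv) *before* the app starts.
os.environ.setdefault("MAIL_SERVER", "smtp.example.com")  # placeholder
os.environ.setdefault("MAIL_PORT", "587")                 # placeholder

class Config:
    # If the variable is missing, os.environ.get returns None, which is
    # exactly the symptom described above.
    MAIL_SERVER = os.environ.get("MAIL_SERVER")
    MAIL_PORT = int(os.environ.get("MAIL_PORT", "25"))  # env values are strings

print(Config.MAIL_SERVER, Config.MAIL_PORT)
```

Printing `os.environ.get("MAIL_SERVER")` right before Flask reads the config is the quickest way to see whether the variable ever made it into the process.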
/r/flask
https://redd.it/1ic7n1s
Alternatives to session and global variables in flask
I am currently making an app that will query weather data from an AWS bucket and display it on a map. Right now I am using global variables to store progress data (a small dictionary that records the number of files read, whether the program is running, etc.) and the names of files that match certain criteria. However, I understand this is bad practice for a web app. When looking for alternatives, I discovered Flask's session, but my "results" variable will need to store anywhere from 50-100 filenames, with the possibility of up to 2700. That list of files seems like far too much data for a session variable: when I tested the code, 5 filenames took 120 bytes, so I think it's pretty much impossible to stay under 4 KB. Does anyone have any ideas instead? Once a user closes the tab the data is not important (there are download functions for maps and files). I would prefer not to use a DB, but will if that is outright the best option.
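One standard alternative is to keep only a small id in the cookie-based session and hold the bulky file list server-side, keyed by that id. A sketch of the pattern (all names and filenames hypothetical; in production the dict would be Redis, Flask-Session's server-side backend, or a disk cache rather than process memory):

```python
import uuid

# Server-side store: job_id -> progress + matching filenames.
JOBS = {}

def start_job(filenames):
    job_id = uuid.uuid4().hex  # 32 chars: small enough for the session cookie
    JOBS[job_id] = {"progress": 0, "files": filenames}
    return job_id  # in Flask: session["job_id"] = start_job(...)

job_id = start_job(["gfs_2024_01.grib2", "gfs_2024_02.grib2"])  # made-up names
```

Since the data is disposable once the tab closes, expiring entries after a timeout (or on a new request from the same session) keeps memory bounded without needing a real database.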
/r/flask
https://redd.it/1icknxu
The scale vs. intelligence trade-off in retrieval augmented generation Discussion
Retrieval Augmented Generation (RAG) has been huge in the past year or two as a way to supplement LLMs with knowledge of a particular set of documents or the world in general. I've personally worked with most flavors of RAG quite extensively and there are some fundamental limitations with the two fundamental algorithms (long-context, and embedding) which almost all flavors of RAG are built on. I am planning on writing a longer and more comprehensive piece on this, but I wanted to put some of my thoughts here first to get some feedback and see if there are any perspectives I might be missing.
Long-context models (e.g. Gemini), designed to process extensive amounts of text within a single context window, face a critical bottleneck in the form of training data scarcity. As context lengths increase, the availability of high-quality training data diminishes rapidly. This is important because of the neural scaling laws, which have been remarkably robust for LLMs so far. There is a great video explaining them here. One important implication is that if you run out of human-generated training data, the reasoning capabilities of your model are bottle-necked no matter how many other resources or tricks you throw at
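For context, the neural scaling laws referenced here take a simple power-law form (Kaplan et al.'s notation, added by the editor for the reader, not drawn from the post):

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```

where $L$ is loss, $N$ is parameter count, and $D$ is dataset size. The implication the post draws follows directly: once usable training data $D$ stops growing, the floor set by $L(D)$ cannot be pushed down by adding parameters or compute alone.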
/r/MachineLearning
https://redd.it/1ick63j
Video: "AI can't cross this line and we don't know why." (Welch Labs, YouTube)
Guidance for junior backend developer
I am pursuing a BCA (Bachelor of Computer Application) from IGNOU (Indira Gandhi National Open University) and I am in my last semester. I have completed an internship as a backend developer and then gained experience as a junior Django backend developer. But I realize I haven't learned enough, and I'm not confident I could work on any project on my own. I can't quit my job, but no one else will hire me either. What should I do now 🫠
/r/django
https://redd.it/1icm5ww
DeepSeek Infinite Context Window
What my project does?
Input text of arbitrary length into an LLM. With models being so cheap and strong, I came up with an idea: a simple "Agent" that refines the infinite context down to something manageable for the LLM to answer from, instead of using RAG. For very large contexts you could still combine RAG with "infinite context" to keep the price at bay.
How it works?
1. We take a long text and split it into chunks (like with any RAG solution)
2. Until we have reduced text to model's context we repeat
1. We classify each chunk as either relevant or irrelevant with the model
2. We take only relevant chunks
3. We feed the high-quality context to the final model for answering (like with any RAG solution)
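The loop above can be sketched like this, with a stub standing in for the LLM relevance call (the real project classifies chunks with DeepSeek; everything here is the editor's illustration):

```python
def split_chunks(text, size=500):
    # Step 1: split the long text into fixed-size chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def is_relevant(piece, question):
    # Step 2.1 stand-in: the real version asks the model "is this chunk
    # relevant to the question?"; here we just do keyword matching.
    return any(word in piece for word in question.split())

def reduce_context(text, question, max_chars=2000):
    chunks = split_chunks(text)
    # Step 2: repeat until the surviving chunks fit the model's context.
    while sum(map(len, chunks)) > max_chars:
        kept = [c for c in chunks if is_relevant(c, question)]
        if not kept or len(kept) == len(chunks):
            break  # cannot shrink further
        chunks = kept  # step 2.2: keep only relevant chunks
    # Step 3: this reduced context goes to the answering model.
    return "".join(chunks)
```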
Target audience
For anyone needing high-quality answers, speed and price are not priorities.
Comparison
Usually context reduction is done via RAG - embeddings, but with the rise of reasoning models, we can perform a lot better and more detailed search by directly using models capabilities.
Full code Github link: Click
/r/Python
https://redd.it/1icpk3z
GitHub: Pravko-Solutions/FlashLearn — examples/deepseek_inifinite_context.py
How to implement protected routes with allauth dj-rest?
I have been stuck for days with OAuth. I managed to log in with OAuth using allauth, then I was looking for a way to add token-based authentication for my DRF REST API endpoints. That is why I implemented dj-rest-auth.
http://localhost:8000/accounts/github/login/callback/
re_path('dj-rest-auth/', include('dj_rest_auth.urls')),
re_path('dj-rest-auth/github/', GitHubLogin.as_view(), name='github_login'),
Then I have a social provider with a client id and client secret.
When I add this URL, the "GitHub Login – Django REST framework" page shows me the DRF form where I need to add an access token, code, and token id to make a request. I have missed something here. Can someone help me?
/r/django
https://redd.it/1icqlfw
Any good Flask study resource or playlist?
All the YouTube videos I can find are already old. Which resources do you recommend?
/r/flask
https://redd.it/1icv06w
Deployed my Flask app, but the APIs do not work — help
index.html works well, but the APIs return a 404 error on Vercel. Can anyone help me?
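A common cause of this symptom on Vercel is that only the root route is wired to the Python entry point, so every other path 404s. A frequently cited fix is a catch-all rewrite in vercel.json routing all paths to the Flask app (this assumes the app lives at api/index.py, which is a guess at the poster's layout; adjust to match):

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/api/index" }
  ]
}
```

Without the rewrite, Vercel serves static files directly and only invokes the Python function for requests that hit its own path.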
/r/flask
https://redd.it/1ictqry