db.init_app(app) Error
Hi, I am a complete noob (in Flask). I have an error in my program that says: TypeError: SQLAlchemy.init_app() missing 1 required positional argument: 'app' and I don't know what is wrong :(
This is the code, please help me:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from os import path

db = SQLAlchemy
DB_NAME = "database.db"

def create_app():
    app = Flask(__name__)
    app.config['SECRET_KEY'] = 'hjshjhdjah kjshkjdhjs'
    app.config['SQLALCHEMY_DATABASE_URI'] = f'sqlite:///{DB_NAME}'
    db.init_app(app)  # this thing makes the problem
    from .views import views  # these are just website things
    from .auth import auth
    app.register_blueprint(views, url_prefix='/')
    app.register_blueprint(auth, url_prefix='/')
    from .models import User, Note  # these are modules for the database
    with app.app_context():
/r/flask
https://redd.it/1kmg69c
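The likely culprit is `db = SQLAlchemy`, which binds the name to the class rather than an instance, so `db.init_app(app)` passes `app` into the `self` slot and the real `app` parameter goes missing. A dependency-free sketch of the mechanism, with a stub class standing in for `flask_sqlalchemy.SQLAlchemy`:

```python
class FakeSQLAlchemy:
    """Stand-in for flask_sqlalchemy.SQLAlchemy, just to show the class-vs-instance mistake."""
    def init_app(self, app):
        return f"initialised for {app}"

db = FakeSQLAlchemy          # bug: name bound to the class itself (no parentheses)
try:
    db.init_app("app")       # "app" fills the self slot, so the 'app' parameter is missing
except TypeError as exc:
    print(exc)               # ...missing 1 required positional argument: 'app'

db = FakeSQLAlchemy()        # fix: call the class to create an instance
print(db.init_app("app"))    # bound method: "app" is now the only argument needed
```

In the posted code, changing the one line to `db = SQLAlchemy()` should make `db.init_app(app)` work as intended.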
Is free threading ready to be used in production in 3.14?
I am currently using multiprocessing and having to handle the problem of copying data to processes and the overheads involved is something I would like to avoid. Will 3.14 have official support for free threading or should I put off using it in production until 3.15?
/r/Python
https://redd.it/1ko5f3k
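Not an answer on official support, but the copy-avoidance point can be sketched: on a free-threaded build, swapping `ProcessPoolExecutor` for `ThreadPoolExecutor` shares memory by reference instead of pickling it into each worker. The snippet runs correctly on any build; only the parallel speedup depends on the GIL being off:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))  # shared by reference between threads; processes would each get a pickled copy

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(data[lo:hi])    # reads the shared list directly, no serialization

bounds = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(chunk_sum, bounds))

print(total)
```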
How to filter related objects by attribute and pass to Django template?
I'm working on a Django app where I have a group model with multiple sections, and each section has multiple items. Each item has a category (using TextChoices). I want to display items that belong to a certain category, grouped by section, in a view/template.
In other words, I want to control where an item is displayed based mainly on its category. I want to display it in this kind of way (example):
Section 1 :
Items (that belong to section 1) Category 1
Section 2:
Items (that belong to section 2) from Category 1
Section 1:
Items from Category 3
Section 2:
Items from Category 4
etc..
I tried looking at Django's documentation, as well as asking AI, but I still struggle to understand how to structure this. Assuming I have many categories, I don't know how to assign them to the context.
Here's example code I generated (and, of course, checked) to explain my problem.
# MODELS
from django.db import models

class Item(models.Model):
    class ItemCategory(models.TextChoices):
        TYPE_A = "A", "Alpha"
/r/djangolearning
https://redd.it/1ko396g
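Framework details aside, the context-building problem is plain grouping. A dependency-free sketch (the field names `section` and `category` are assumptions taken from the post):

```python
from collections import defaultdict

# Plain dicts standing in for Item instances.
items = [
    {"name": "i1", "section": "Section 1", "category": "A"},
    {"name": "i2", "section": "Section 2", "category": "A"},
    {"name": "i3", "section": "Section 1", "category": "C"},
]

def items_by_section(items, category):
    """Return {section: [items]} for one category -- this dict drops straight into a template context."""
    grouped = defaultdict(list)
    for item in items:
        if item["category"] == category:
            grouped[item["section"]].append(item)
    return dict(grouped)

print(items_by_section(items, "A"))
```

In a real view you would replace the list with something like `Item.objects.filter(category=Item.ItemCategory.TYPE_A)` and iterate `grouped.items()` in the template.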
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
# Weekly Thread: Resource Request and Sharing 📚
Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!
## How it Works:
1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.
## Guidelines:
Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.
## Example Shares:
1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators - A deep dive into decorators.
## Example Requests:
1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.
Share the knowledge, enrich the community. Happy learning! 🌟
/r/Python
https://redd.it/1kofmtf
[D] Who do you all follow for genuinely substantial ML/AI content?
I've been looking for people to follow to keep up with the latest in ML and AI research/releases, but have noticed there are a lot of low-quality content creators crowding this space.
Who are some people you follow that you genuinely get substantial info from?
/r/MachineLearning
https://redd.it/1ko64s6
Microsoft Fired Faster CPython Team
https://www.linkedin.com/posts/mdboom_its-been-a-tough-couple-of-days-microsofts-activity-7328583333536268289-p4Lp
This is quite a big disappointment, really. But can anyone say how the overall project is going, whether other companies are also financing it, etc.? Does this end the project, or is it no huge deal?
/r/Python
https://redd.it/1koev5c
What CPython Layoffs Taught Me About the Real Value of Expertise
The layoffs of the CPython and TypeScript compiler teams have been bothering me—not because those people weren’t brilliant, but because their roles didn’t translate into enough real-world value for the businesses that employed them.
That’s the hard truth: Even deep expertise in widely-used technologies won’t protect you if your work doesn’t drive clear, measurable business outcomes.
The tools may be critical to the ecosystem, but the companies decided that further optimizations or refinements didn’t materially affect their goals. In other words, "good enough" was good enough. This is a shift in how I think about technical depth. I used to believe that mastering internals made you indispensable. Now I see that: You’re not measured on what you understand. You’re measured on what you produce—and whether it moves the needle.
The takeaway? Build enough expertise to be productive. Go deeper only when it’s necessary for the problem at hand. Focus on outcomes over architecture, and impact over elegance. CPython is essential. But understanding CPython internals isn’t essential unless it solves a problem that matters right now.
/r/Python
https://redd.it/1kok2e1
Skylos: Another dead code finder, but it's better and faster. Source: Trust me bro.
# Skylos: The Python Dead Code Finder Written in Rust
Yo peeps
Been working on a static analysis tool for Python for a while. It's designed to detect unreachable functions and unused imports in your Python codebases. I know there's already Vulture, Flake8, etc., but hear me out: this is more accurate and faster, and because I'm slightly OCD, I like to keep my codebase a bit cleaner. I'll elaborate more down below.
# What Makes Skylos Special?
* **High Performance**: Built with Rust, making it fast
* **Better Detection**: Finds more dead code than alternatives in our benchmarks
* **Interactive Mode**: Select and remove specific items interactively
* **Dry Run Support**: Preview changes before applying them
* **Cross-module Analysis**: Tracks imports and calls across your entire project
# Benchmark Results
|Tool|Time (s)|Functions|Imports|Total|
|:-|:-|:-|:-|:-|
|Skylos|0.039|48|8|56|
|Vulture (100%)|0.040|0|3|3|
|Vulture (60%)|0.041|28|3|31|
|Vulture (0%)|0.041|28|3|31|
|Flake8|0.274|0|8|8|
|Pylint|0.285|0|6|6|
|Dead|0.035|0|0|0|
This is the benchmark shown in the table above.
# How It Works
Skylos uses tree-sitter to parse Python code and employs a hybrid architecture: a Rust core for analysis and a Python CLI for the user interface. It handles Python features like decorators, chained method calls, and cross-module references.
# Target Audience
Anyone with a **.py** file and a huge codebase that needs to kill off dead code? This ONLY
/r/Python
https://redd.it/1koi4fo
[pyfuze] Make your Python project truly cross-platform with Cosmopolitan and uv
## What My Project Does
I recently came across an interesting project called [Cosmopolitan](https://github.com/jart/cosmopolitan). In short, it can compile a C program into an [Actually Portable Executable (APE)](https://justine.lol/ape.html) which is capable of running natively on **Linux**, **macOS**, **Windows**, **FreeBSD**, **OpenBSD**, **NetBSD**, and even **BIOS**, across both **AMD64** and **ARM64** architectures.
The Cosmopolitan project already provides a Python APE (available in [cosmos.zip](https://github.com/jart/cosmopolitan/releases)), but it doesn't support running your own Python project with multiple dependencies.
Recently, I switched from Miniconda to [uv](https://github.com/astral-sh/uv), an extremely fast Python package and project manager. It occurred to me that I could bootstrap **any** Python project using uv!
That led me to create a new project called [pyfuze](https://github.com/TanixLu/pyfuze). It packages your Python project into a single zip file containing:
* `pyfuze.com` — an APE binary that prepares and runs your Python project
* `.python-version` — tells uv which Python version to install
* `requirements.txt` — lists your dependencies
* `src/` — contains all your source code
* `config.txt` — specifies the Python entry point and whether to enable Windows GUI mode (which hides console)
When you execute `pyfuze.com`, it performs the following steps:
* Installs `uv` into the `./uv` folder
* Installs Python into the `./python` folder (version taken from `.python-version`)
* Installs dependencies listed in `requirements.txt`
* Runs your Python
/r/Python
https://redd.it/1koos2n
Why does my Flask /health endpoint show nothing at http://localhost:5000/health?
Hey folks, I’m working on a Flask backend and I’m running into a weird issue.
I’ve set up a simple /health endpoint to check if the server is up. Here’s the code I’m using:
@app.route('/health', methods=['GET'])
def health_check():
    return 'OK', 200
The server runs without errors, and I can confirm that it’s listening on port 5000. But when I open http://localhost:5000/health in the browser, I get a blank page or sometimes nothing at all — no “OK” message shows up on Safari while Chrome says “access to localhost was denied”.
What I expected:
A plain "OK" message in the browser or in the response body.
What I get:
Blank screen/access to localhost was denied (but status code is still 200).
Has anyone seen this before? Could it be something to do with the way Flask handles plain text responses in browsers? Or is there something else I’m missing?
Thanks in advance for any help!
/r/flask
https://redd.it/1kolnus
Should I learn FastAPI? Why? Doesn’t Django or Flask do the trick?
I’ve been building Python web apps and always used Django or Flask because they felt reliable and well-established. Recently, I stumbled on davia ai — a tool built on FastAPI that I really wanted to try. But to get the most out of it, I realized I needed to learn FastAPI first. Now I’m wondering if it’s worth the switch. If so, what teaching materials do you recommend?
/r/Python
https://redd.it/1kou6lc
Is there a module that can dynamically change all div ids and CSS ids on each request?
As the title says.
I need that without changing any other functions in my Flask application.
if it doesn't exist and you just wanna talk bullshit then just don't reply
/r/flask
https://redd.it/1kou68z
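I'm not aware of a drop-in module for this, but the core transform can be sketched with the standard library (the regex-based approach here is a simplification; a real implementation would also have to rewrite any CSS selectors that reference those ids with the same mapping):

```python
import re
import secrets

def randomize_ids(html: str) -> str:
    """Rewrite every id="..." value to a random token, keeping repeats consistent within one response."""
    mapping = {}
    def swap(match):
        original = match.group(1)
        if original not in mapping:
            mapping[original] = "x-" + secrets.token_hex(4)
        return f'id="{mapping[original]}"'
    return re.sub(r'id="([^"]*)"', swap, html)

page = '<div id="menu"><span id="menu"></span><p id="body"></p></div>'
print(randomize_ids(page))
```

In Flask this could run from an `@app.after_request` hook over the response body, leaving the rest of the application untouched.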
Senior Django Developers: Do You Stick with Django for High-Concurrency Async Applications or Transition to Other Frameworks?
Hi everyone, I hope you're all doing well!
I'm exploring the feasibility of using Django for applications that need to handle a massive number of asynchronous operations—things like real-time chat systems, live dashboards, or streaming services. With Django's support for ASGI and asynchronous views, it's now possible to implement async features, but I'm wondering how well it holds up in real-world, high-concurrency environments compared to frameworks that are natively asynchronous.
Given that, I'm curious:
1️⃣ Have you successfully deployed Django in high-concurrency, async-heavy environments?
2️⃣ Did you encounter limitations that led you to consider or switch to frameworks like Node.js, ASP.NET Core, or others?
3️⃣ What strategies or tools did you use to scale Django in such scenarios?
I’m especially interested in hearing about real-world experiences, the challenges you faced, and how you decided on the best framework for your needs.
Thanks in advance for sharing your insights—looking forward to learning from you all!
Warm regards!
/r/django
https://redd.it/1koyugq
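For readers new to the topic, the concurrency pattern that ASGI async views enable can be shown framework-free: several slow I/O waits overlap instead of running back-to-back (the "backends" here are placeholders):

```python
import asyncio

async def fetch(source: str) -> str:
    await asyncio.sleep(0.01)   # stands in for an awaited DB or HTTP call
    return f"data from {source}"

async def dashboard_view():
    """Await several slow backends concurrently -- the pattern an ASGI async view enables."""
    sources = ("db", "cache", "feed")
    results = await asyncio.gather(*(fetch(s) for s in sources))
    return dict(zip(sources, results))

print(asyncio.run(dashboard_view()))
```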
Should I take a government Data Science job that only uses SAS?
Hey all,
I’ve just been offered a Data Science position at a national finance ministry (public sector). The role sounds meaningful, and I’ve already verbally accepted, but haven’t signed the contract yet.
Here’s the thing:
I currently work in a tech-oriented role where I get to experiment with modern ML/AI tools — Python, transformers, SHAP, even LLM prototyping. In contrast, the ministry role would rely almost entirely on SAS. Python might be introduced at some point, but currently isn’t part of the tech stack.
I’m 35 now, and if I stay for 5 years, I’m worried I’ll lose touch with modern tools and limit my career flexibility. The role would be focused on structured data, traditional scoring models, and heavy audit/governance use cases.
Pros:
• Societal impact
• Work-life balance + flexibility for parental leave
• Stable government job with long-term security
• Exposure to public policy and regulated environments
Cons:
• No Python or open-source stack
• No access to cutting-edge AI tools or innovation
• Potential tech stagnation if I stay long
• May hurt my profile if I return to the private sector at 40
I’m torn between meaning and innovation.
Would love to hear from anyone who’s made a similar move or faced this kind of tradeoff.
Would you take the role and just “keep Python alive” on the side?
/r/Python
https://redd.it/1koy4vw
Frontend framework with DRF
Hello, I'm writing a DRF project and haven't decided on a frontend. I've previously written traditional MVT apps, but this is my first time pairing a frontend with DRF. I'm thinking of using React, but learning that framework feels stressful and may take a lot of time. Since I'm comfortable with Django templates and CSS, it feels like a waste of time; is it worth it?
/r/django
https://redd.it/1kp7yto
Hiding API key
Hi there, I am currently building a Python application where one of the pages is an HTML/CSS/JavaScript chatbot.
This chatbot relies on an OpenAI API key. I want to hide this key as an environment variable so I can use it from JavaScript, and add it as a config var in Heroku. Is it possible to do this?
Thank you.
/r/django
https://redd.it/1kozxzr
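Worth noting: environment variables live on the server, and browser JavaScript can never read them, so the usual pattern is a backend endpoint that holds the key and proxies the chat request while the frontend only ever calls your own URL. A minimal sketch of the server-side half (`OPENAI_API_KEY` as the config-var name is an assumption):

```python
import os

def require_api_key() -> str:
    """Read the key server-side from the environment (e.g. a Heroku config var)."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key

os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")  # placeholder for the demo only
print(require_api_key())
```

A Django view would call `require_api_key()` when forwarding the chat message to OpenAI, so the key never appears in any page source.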
P I built a transformer that skips layers per token based on semantic importance
I’m a high school student who’s been exploring how to make transformers/AI models more efficient, and I recently built something I’m really excited about: a transformer that routes each token through a different number of layers depending on how "important" it is.
The idea came from noticing how every token, even simple ones like “the” or “of”, gets pushed through every layer in standard transformers. But not every token needs the same amount of reasoning. So I created a lightweight scoring mechanism that estimates how semantically dense a token is, and based on that, decides how many layers it should go through.
It’s called SparseDepthTransformer, and here’s what it does:
Scores each token for semantic importance
Skips deeper layers for less important tokens using hard gating
Tracks how many layers each token actually uses
Benchmarks against a baseline transformer
In my tests, this reduced memory usage by about 15% and cut the average number of layers per token by ~40%, while keeping output quality the same. Right now it runs a bit slower because the skipping is done token-by-token, but batching optimization is next on my list.
Here’s the GitHub repo if you’re curious or want to give feedback:
https://github.com/Quinnybob/sparse-depth-transformer
Would love if you
/r/MachineLearning
https://redd.it/1kpalhd
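The routing idea can be sketched independently of the repo: score each token, then map the score to a layer budget with hard gating. This toy version (scores and layer counts are illustrative, not the author's actual mechanism):

```python
def layer_budget(score: float, min_layers: int = 2, max_layers: int = 12) -> int:
    """Map a semantic-importance score in [0, 1] to a per-token layer count (hard gating)."""
    return min_layers + round(score * (max_layers - min_layers))

# Toy scores: function words low, content words high.
tokens = {"the": 0.05, "of": 0.1, "transformer": 0.9, "semantic": 1.0}
depths = {tok: layer_budget(s) for tok, s in tokens.items()}
print(depths)  # "the" stays near the minimum; "semantic" gets the full stack
```

Averaging these budgets over a corpus is what produces the reported drop in layers per token.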