CORS Error in my Flask | React web app
Hey, everyone, how's it going?
I'm getting a CORS error in a web application I'm developing for my portfolio.
I'm trying to enable CORS in my `app.py` and on my endpoint, but the error persists.
I think it is a simple mistake, but I have tried several ways to solve it, without success!
Right now, I have this in my `app.py`, above my blueprint registration:
# imports
from flask import Flask
from flask_cors import CORS

def create_app():
    app = Flask(__name__)
    ...
    CORS(app)
    ...

if __name__ == '__main__':
    ...
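If the error persists with a plain `CORS(app)`, a common next step is to restrict it explicitly to the React dev server's origin; a minimal sketch, assuming the dev server runs on `http://localhost:3000` and your API routes live under `/api/`:

```python
from flask import Flask
from flask_cors import CORS

def create_app():
    app = Flask(__name__)
    # allow only the React dev server on the API routes,
    # before any blueprints are registered
    CORS(app, resources={r"/api/*": {"origins": "http://localhost:3000"}})
    # app.register_blueprint(api_bp)  # your blueprint here
    return app
```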
PS: I read something about the possibility of it being a Swagger-related error, but I don't know if that makes sense.
/r/flask
https://redd.it/1p5j53r
MovieLite: A MoviePy alternative for video editing that is up to 4x faster
Hi r/Python,
I love the simplicity of MoviePy, but it often becomes very slow when doing complex things like resizing or mixing multiple videos. So, I built MovieLite.
This started as a module inside a personal project where I had to migrate away from MoviePy due to performance bottlenecks. I decided to extract the code into its own library to help others with similar issues. It is currently in early alpha, but stable enough for my internal use cases.
Repo: https://github.com/francozanardi/movielite
### What My Project Does
MovieLite is a library for programmatic video editing (cutting, concatenating, text overlays, effects). It delegates I/O to FFmpeg but handles pixel processing in Python.
It is designed to be CPU-optimized, using Numba to speed up pixel-heavy operations. Note that it is not GPU-accelerated and currently only supports exporting to MP4 (H.264).
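As an illustration of the Numba approach (a sketch of the technique, not MovieLite's actual code), a per-pixel operation can be JIT-compiled and parallelized like this:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def adjust_brightness(frame, gain):
    """Multiply every channel of an RGB frame by `gain`, clamped to 255."""
    out = np.empty_like(frame)
    h, w, c = frame.shape
    for y in prange(h):  # rows processed in parallel
        for x in range(w):
            for ch in range(c):
                out[y, x, ch] = np.uint8(min(255.0, frame[y, x, ch] * gain))
    return out

# usage: a single 720p RGB frame
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
bright = adjust_brightness(frame, 1.2)
```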
### Target Audience
This is for Python Developers doing backend video automation who find MoviePy too slow for production. It is not a full-featured video editor replacement yet, but a faster tool for the most common automation tasks.
### Comparison & Benchmarks
The main difference is performance. Here are real benchmarks comparing MovieLite vs. MoviePy (v2.x) on a 1280x720 video at 30fps.
These tests were run using a single process, and the
/r/Python
https://redd.it/1p5vkia
How to do resource provisioning
I have developed a study platform in Django, and this is the first time I'm hosting it.
I know how much storage I will need,
but I don't know how many CPU cores, how much RAM, and how much bandwidth are needed.
/r/django
https://redd.it/1p673l9
API tracing with Django and Nginx
Hi everyone,
I’m trying to measure the exact time spent in each stage of my API request flow — starting from the browser, through Nginx, into Django, then the database, and back out through Django and Nginx to the client.
Essentially, I want to capture timestamps and time intervals for:
* When the browser sends the request
* When Nginx receives it
* When Django starts processing it
* Time spent in the database
* Django response time
* Nginx response time
* When the browser receives the response
Is there any Django package or best practice that can help log these timing metrics end-to-end? Currently I have to manually add timestamps in the nginx config file, in Django middleware, and before and after the fetch call in the frontend.
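For the Django part, a minimal middleware sketch (not a specific package; the logger name and header are assumptions) that logs total view time and DB time per request; nginx's own `$request_time`/`$upstream_response_time` log variables and the browser's `performance` timings would cover the other stages:

```python
# myapp/middleware.py -- a sketch; connection.queries is only populated
# when DEBUG=True (or with a query wrapper in production)
import logging
import time

from django.db import connection

logger = logging.getLogger("request_timing")

class RequestTimingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.monotonic()
        response = self.get_response(request)
        total = time.monotonic() - start
        db_time = sum(float(q["time"]) for q in connection.queries)
        logger.info(
            "path=%s total=%.4fs db=%.4fs python=%.4fs",
            request.path, total, db_time, total - db_time,
        )
        response["X-Django-Time"] = f"{total:.4f}"  # visible from the browser
        return response
```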
Thanks!
/r/django
https://redd.it/1p64pzt
modeltranslation
As the title states: how do you guys handle `modeltranslation` as of 2025?
/r/django
https://redd.it/1p6d4wq
Django project flow for understanding
I am developing a project and learning Django REST Framework in parallel.
Currently, I have comfortably created my models, a custom user (with AbstractBaseUser), and a corresponding custom user manager, which will work with JWT auth. I have also implemented djangorestframework-simplejwt for obtaining a token pair. At this point I am at a standstill as to how I should proceed. I also have some confusion regarding the custom user and custom user manager, and while studying I stumbled upon some extra info, such as that there are forms and admin classes to customize for a custom user as well.
I'm also wondering how I will verify the user with the obtained JWT token for other functionality.
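For the verification part, once simplejwt issues the token pair, any view behind DRF's authentication will resolve `request.user` from the `Authorization: Bearer <access>` header. A minimal sketch (the `email` field is an assumption about your custom user):

```python
# settings.py
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ),
}

# views.py
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

class MeView(APIView):
    permission_classes = [IsAuthenticated]

    def get(self, request):
        # request.user is your custom user, looked up from the token's user id
        return Response({"id": request.user.pk, "email": request.user.email})
```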
I need help understanding the general flow for DRF + JWT; detailed answers to the confusions above are appreciated.
Thanks in advance.
/r/djangolearning
https://redd.it/1p6f4bj
uvlink – A CLI to keep .venv in a centralized cache for uv
# GitHub Repo
* [https://github.com/c0rychu/uvlink](https://github.com/c0rychu/uvlink)
# What My Project Does
This tiny Python CLI tool `uvlink` keeps `.venv` out of cloud-synced project directories by storing the real env in a centralized cache and symlinking it from the project.
Basically, I'm trying to solve this `uv` issue: [https://github.com/astral-sh/uv/issues/1495](https://github.com/astral-sh/uv/issues/1495)
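The underlying mechanism is roughly this (a sketch of the idea, not uvlink's actual code; the cache path and naming scheme are assumptions):

```python
import hashlib
from pathlib import Path

CACHE = Path.home() / ".cache" / "central-venvs"  # assumed cache location

def link_venv(project_dir: Path) -> Path:
    """Create <project>/.venv as a symlink to a per-project cached directory."""
    project_dir = project_dir.resolve()
    # derive a stable per-project directory name from the project path
    key = hashlib.sha256(str(project_dir).encode()).hexdigest()[:12]
    real_env = CACHE / f"{project_dir.name}-{key}"
    real_env.mkdir(parents=True, exist_ok=True)

    link = project_dir / ".venv"
    if link.is_symlink():
        link.unlink()
    link.symlink_to(real_env, target_is_directory=True)
    return real_env

# usage: link_venv(Path(".")), then `uv sync` populates the cached env
```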
# Target Audience (e.g., Is it meant for production, just a toy project, etc.)
It is perfect for `uv` users who sync code to Dropbox, Google Drive, or iCloud. Only your source code syncs, not gigabytes of .venv dependencies.
# Comparison (A brief comparison explaining how it differs from existing alternatives.)
* venvlink: It claims that it only supports Windows.
* uv-workon: It basically does the opposite; it creates symlinks in a centralized location that point back to the project's virtual environment.
Unless `uv` supports this natively in the future, I'm not aware of a good publicly available solution (except for switching to `poetry`).
Any feedback is welcome :)
/r/Python
https://redd.it/1p662t0
How good can NumPy get?
I was reading a Medium article ("Stop Using Lambda for Conditional Column Creation in Pandas! Use this instead.") while doing some research on optimizing my code and came across something I found interesting (I am a beginner lol).
For creating a simple binary column (like an IF/ELSE) in a 1-million-row Pandas DataFrame, the common `df.apply(lambda ...)` method was apparently 49.2 times slower than using `np.where()`.
I always treated `df.apply()` as the standard, efficient way to run element-wise operations.
Is this massive speed difference common knowledge?
Why is the gap so huge? Is it purely due to Python's row-wise iteration vs. NumPy's C-compiled vectorization, or are there other factors at play (like memory management or overhead)?
Have any of you hit this bottleneck?
I'm trying to understand the underlying mechanics better.
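For anyone who wants to reproduce the comparison, a rough sketch (timings will vary by machine; the 49.2x figure is the article's, not mine):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value": np.random.rand(1_000_000)})

# row-wise: calls a Python lambda once per element
df["flag_apply"] = df["value"].apply(lambda v: 1 if v > 0.5 else 0)

# vectorized: a single C-level pass over the whole column
df["flag_where"] = np.where(df["value"] > 0.5, 1, 0)

# time each with %timeit (IPython) or time.perf_counter() to see the gap
```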
/r/Python
https://redd.it/1p65vcm
Breaking Django convention? Using a variable key in a template to access a dict value
I have an application that tracks working hours. Users make entries for a work day. Internally, an entry is made up of UserEntry and UserEntryItem. UserEntry has the date, amongst other things. UserEntryItems consist of a ForeignKey to a WorkType and a field for the actual hours.
The data from these entries is displayed in a table. This table is dynamic, since different workers have different WorkTypeProfiles, and a WorkTypeProfile can also change: a worker might do general services plus driving services but eventually go back to just general services.
So tables will have different columns depending on who and when. The way I want to solve this is: build up an index of columns which is just a list of column handles. The table has a header row and a footer row with special content. The body rows are all the same in structure, just with different values.
For top and bottom row, I want to build a dictionary with key = one of the column handles, and value = what goes into the table cell. For the body, I want to build a list of dictionaries with each dictionary representing one row.
In order to build the
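For the variable-key lookup the title refers to, the usual workaround is a tiny custom template filter; a sketch, not the OP's code (the module name is an assumption):

```python
# myapp/templatetags/table_extras.py
from django import template

register = template.Library()

@register.filter
def get_item(mapping, key):
    """Return mapping[key], or '' if the key is missing."""
    return mapping.get(key, "")

# in the template, after {% load table_extras %}:
#   {{ row|get_item:handle }}
```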
/r/django
https://redd.it/1p6knne
DAG-style sync engine in Django
Project backstory: I had an existing WooCommerce website. Then I opened a retail store, added a Clover POS system, and needed to sync data between them. There weren't any commercial off-the-shelf syncing options that I could find that fit my specific use case, so I created a simple Python script that connects to both APIs and syncs data between them. Problem solved! But then I wanted to turn my long single script into some kind of auditable task log.
So I created a DAG-style sync engine which runs in Django. It is a database-driven task routing system controlled by a Django front end. It consists of an orchestrator, which determines the sequence of tasks, and a dispatcher for task routing. Each sync job is initiated by essentially writing a row with queued status to the sync command table, with the DAG name and initial payload. Django signals are used to fire the orchestrator and dispatcher, and the task steps are run in Celery. It also features a built-in idempotency guard so each step can be fully replayed/restarted.
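A minimal sketch of that queued-row-plus-signal pattern (assumed model and task names, not the actual project code):

```python
from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver

class SyncCommand(models.Model):
    dag_name = models.CharField(max_length=100)
    payload = models.JSONField(default=dict)
    status = models.CharField(max_length=20, default="queued")
    created_at = models.DateTimeField(auto_now_add=True)

@receiver(post_save, sender=SyncCommand)
def dispatch_sync_command(sender, instance, created, **kwargs):
    # fire the orchestrator only for freshly queued rows
    if created and instance.status == "queued":
        from .tasks import run_dag  # a Celery task (assumed name)
        run_dag.delay(instance.pk)
```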
I have deployed this
/r/django
https://redd.it/1p6hgid
Spent a bunch of time choosing between Loguru, Structlog and native logging
Python's native logging module is just fine, but modern options like Loguru and Structlog are eye-catching. As someone who wants to use the best tooling to make my life easy, I agonized over choosing one, perhaps a little too much (I'd rather expend calories now than be in production hell trying to wrangle logs).
I've boiled down what I've learnt to the following:
- Read some good advice here on r/Python to switch to a third-party library only when you find/need something that the native libraries can't do; this basically holds true.
- Loguru's (the most popular third-party library) value prop (zero config, dev experience prioritized) is much less appealing in the age of AI coding. AI can handle writing config boilerplate with the native logging module.
- What kills Loguru is that it isn't OpenTelemetry compatible. Meaning if you are using it for a production or production-intent codebase, Loguru really shouldn't be an option.
- Structlog feels like a more powerful, fuller-featured option, but this brings with it the need to learn and understand a new system. Plus it still needs a custom "processor" to integrate with OTEL.
- Structlog's biggest value prop -
/r/Python
https://redd.it/1p6qy1e
Naming Things in really complex situations and as codebase size increases.
Naming has become a real challenge for me. It’s easy when I’m following a YouTube tutorial and building mock projects, but in real production projects it gets difficult. In the beginning it’s manageable, but as the project grows, naming things becomes harder.
For example, I have various formatters. A formatter takes a database object—basically a Django model instance—and formats it. It’s similar to a serializer, though I have specific reasons to create my own instead of using the built-in Python or Django REST Framework serializers. The language or framework isn’t the main point here; I’m mentioning them only for clarity.
So I create one formatter that returns some structured data. Then I need another formatter that returns about 80% of the same data, but with slight additions or removals. There might be an order formatter, then another order formatter with user data, another one without the “order received” date, and so on. None of this reflects my actual project—it’s not e-commerce but an internal tool I can’t discuss in detail—but it does involve many formatters for different use cases. Depending on the role, I may need to send different versions of order data with certain fields blank. This is only the formatter
/r/django
https://redd.it/1p7ayhs
i18n with AI?
i18n for Django apps is a lot of tough work. I am wondering if anyone here knows any good AI tools to speed this process up? I am talking about automatically generating the translations when making messages.
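One approach I've seen is to run `makemessages` as usual and then fill the untranslated entries programmatically; a sketch using `polib` (a real library) with a hypothetical `ai_translate` helper you'd back with your LLM of choice:

```python
import polib

def ai_translate(text: str, target_language: str) -> str:
    # placeholder: call your LLM / translation API here
    raise NotImplementedError

po = polib.pofile("locale/de/LC_MESSAGES/django.po")
for entry in po.untranslated_entries():
    entry.msgstr = ai_translate(entry.msgid, "de")
    entry.flags.append("fuzzy")  # mark for human review
po.save()
```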
/r/django
https://redd.it/1p7crih
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1p7nn45
complexipy 5.0.0, cognitive complexity tool
Hi r/Python! I've released version [5.0.0](https://github.com/rohaquinlop/complexipy/releases/tag/5.0.0). This version introduces changes that improve the tool's adoption in existing projects and the cognitive complexity algorithm itself.
**What My Project Does**
`complexipy` is a command-line tool and library that calculates the cognitive complexity of Python code. Unlike cyclomatic complexity, which measures how complex code is to test, cognitive complexity measures how difficult code is for humans to read and understand.
**Target audience**
`complexipy` is built for:
* Python developers who care about readable, maintainable code.
* Teams who want to enforce quality standards in CI/CD pipelines.
* Open-source maintainers looking for automated complexity checks.
* Developers who want real-time feedback in their editors or pre-commit hooks.
* Research scientists: this year I noticed that many researchers used `complexipy` in their investigations of LLM-generated code.
Whether you're working solo or in a team, `complexipy` helps you keep complexity under control.
**Comparison to Alternatives**
`Sonar` has the original version, which runs online only on GitHub repos, and it's a slower workflow because you need to push your changes, wait until their scanner finishes the analysis, and check the results. They inspired me to create this tool; that's why it runs locally without having to publish anything, and the analysis is really fast.
**Highlights of
/r/Python
https://redd.it/1p7fqbo
(From the GitHub release notes: Snapshots: `--snapshot-create` writes `complexipy-snapshot.json` and comparisons block regressions; auto-refresh on improvements, bypass with `--snapshot-ignore`. Change tracking: per-target cache ...)
Thinking about a Python-native frontend - feedback?
Hey everyone, I'm experimenting with a personal project called Evolve.
The idea is to run Python directly in the browser via WebAssembly and use it to build reactive, component-based UIs - without writing JavaScript, without a virtual DOM, and without transpiling Python to JS.
# Current high-level architecture (text version):
User Python Code
↓
Python → WebAssembly toolchain
↓
WebAssembly Runtime (in browser)
↓
Evolve Core
┌───────────────┐
│ Component Sys │
│ Reactive Core │
└───────┬───────┘
↓
Tiny DOM Kernel
↓
/r/Python
https://redd.it/1p7ec8z
15 most-watched Python conference talks of 2025 (so far)
Hi again r/python,
Below, you'll find the 15 most-watched Python conference talks of 2025 so far.
These come with short summaries, so you can quickly decide whether a talk is worth watching. I put them together with a little help from AI. Hope you like it!
1. **“Escape from Tutorial Hell” - Sarah Reichelt (PyCon AU 2025)** Conference ⸱ +55k views ⸱ Sep 21, 2025 ⸱ 00h 25m 55s tldw: This talk shows how to break free from the cycle of endless tutorials and actually start developing your own projects, with helpful tips on design, structure, and using AI tools, applicable to any programming language.
2. **“Keynote Speaker - Cory Doctorow”** Conference ⸱ +33k views ⸱ May 22, 2025 ⸱ 00h 43m 49s tldw: How Big Tech rigs the internet and what developers can actually do to take back control.
3. **“How to build a cross-platform graphical user interface with Python - Russell Keith-Magee”** Conference ⸱ +23k views ⸱ Jun 02, 2025 ⸱ 00h 28m 23s tldw: Learn how to create a cross-platform GUI for your Python projects, and discover how to deploy your app seamlessly across desktops and mobile devices without changing any code.
4. **“Mentoring Both Ways: Helping Others While Leveling Up Yourself! — Manivannan
/r/Python
https://redd.it/1p7xod0
Upload 4 web apps online
Hey,
I have developed 4 small Flask websites for my personal use.
They require a very small database (right now they run with SQLite).
I want to put them online but keep the code and access private to me for now.
I'm looking for a hosting service or a solution I can upload them to,
hopefully without a cold-start server.
My budget is up to $7 a month.
Any recommendations or advice?
Thanks!
/r/flask
https://redd.it/1p7y4q2
Need a suggestion
I'm a B.Pharm 3rd-year student, but I actually got into coding back in my 1st year (2023). At first Python felt amazing; I loved learning new concepts. But when topics like OOP and dictionaries came in, I suddenly felt like maybe I wasn't good enough. Still, I pushed through and finished the course.
Later we shifted to a new place, far from the institute. My teacher there was great; he even asked why I chose pharmacy over programming. I told him the truth: I tried for NEET and didn't clear it, due to lack of interest and my own fault for avoiding studies during that time, so I chose B.Pharm while doing Python on the side. He appreciated that.
But now the problem is whenever college exams come, I have to stop coding. And every time I return, my concepts feel weak again, so I end up relearning things. This keeps repeating.
Honestly, throughout my life, I've never really started something purely out of interest or finished it properly, except programming. Python is the only thing I genuinely enjoy.
Now I'm continuing programming as a hobby, growing bit by bit, and even getting better in my studies. But sometimes I still think
/r/Python
https://redd.it/1p8155t