Which countries have higher demand for Django developers?
I personally enjoy working with Django — it's clean, powerful, and helps me build web applications quickly.
However, in my country, technologies like .NET and PHP tend to dominate the job market, and Django isn’t as commonly used in production environments.
That got me thinking: Which countries or regions have a stronger demand for Django developers?
Are there places where Django is more widely adopted, both in startups and established companies?
I’d love to hear from fellow developers around the world. What’s the tech stack landscape like in your country? Is Django commonly used there?
Thanks in advance for your insights! 🙏
/r/django
https://redd.it/1mc9e4b
UV is helping me slowly get rid of bad practices and improve company’s internal tooling.
I work at a large conglomerate that has been around for a long time. One of the most annoying things I’ve seen is that certain engineers will put their Python scripts into Box or into Artifactory as a way of deploying or sharing their code as internal tooling. One example might be: “here’s this Python script that acts as an AI agent, and you can use it in your local setup. Download the script from Box and set it up where needed.”
I’m sick of this. First of all, no one just uses .netrc files to grant access to their actual GitLab repository code. Also, everyone sets their GitLab projects to private.
Well, I’ve finally been on a tech crusade to say: 1) just use GitLab, 2) use well-known authentication methods like .netrc with a GitLab personal access token, and 3) use UV! Stop with the random requirements.txt files scattered about.
I now have a few well-used internal CLI tools that are as simple as installing UV, setting up the .netrc file on the machine, then running `uvx git+https://gitlab.com/acme/my-tool some args -v`.
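For anyone wanting to replicate this workflow, the setup boils down to two pieces: a `~/.netrc` entry holding a GitLab personal access token, and a single `uvx` invocation. A minimal sketch (the hostname, repo path, and token value here are placeholders, not real values):

```shell
# One-time setup: store a GitLab personal access token in ~/.netrc
# ("oauth2" is the conventional login for GitLab PATs; token is a placeholder)
cat > ~/.netrc <<'EOF'
machine gitlab.com
login oauth2
password glpat-XXXXXXXXXXXXXXXXXXXX
EOF
chmod 600 ~/.netrc

# Then run the tool straight from the private repo, as described above
uvx git+https://gitlab.com/acme/my-tool some args -v
```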
It has saved so much headache. We tried Poetry, but now I’m all in on getting UV spread across the
/r/Python
https://redd.it/1mcgsxr
notata: Simple structured logging for scientific simulations
**What My Project Does:**
`notata` is a small Python library for logging simulation runs in a consistent, structured way. It creates a new folder for each run, where it saves parameters, arrays, plots, logs, and metadata as plain files.
The idea is to stop rewriting the same I/O code in every project and to bring some consistency to file management, without adding any complexity. No config files, no database, no hidden state. Everything is just saved where you can see it.
**Target Audience:**
This is for scientists and engineers who run simulations, parameter sweeps, or numerical experiments. If you’ve ever manually saved arrays to .npy, dumped params to a JSON file, and ended up with a folder full of half-labeled outputs, this could be useful to you.
**Comparison:**
Unlike tools like MLflow or W&B, `notata` doesn’t assume you’re doing machine learning. There’s no dashboard, no backend server, and nothing to configure. It just writes structured outputs to disk. You can grep it, copy it, or archive it.
More importantly, it’s a way to standardize simulation logging without changing how you work or adding too much overhead.
**Source Code:**
[https://github.com/alonfnt/notata](https://github.com/alonfnt/notata)
**Example**: Damped Oscillator Simulation
This logs a full run of a basic physics simulation, saving the trajectory and final state:
```python
from notata import
```
/r/Python
https://redd.it/1mc3co4
tinyio: A tiny (~200 lines) event loop for Python
Ever used asyncio and wished you hadn't? tinyio is a dead-simple event loop for Python, born out of my frustration with trying to get robust error handling with asyncio. (I'm not the only one running into its sharp corners: link1, link2.)
This is an alternative for the simple use-cases, where you just need an event loop, and want to crash the whole thing if anything goes wrong. (Raising an exception in every coroutine so it can clean up its resources.)
https://github.com/patrick-kidger/tinyio
/r/Python
https://redd.it/1mck8h3
React + Django html templates
Hi, I inherited a Django project and am currently making small incremental changes. For context, I'm a DevOps engineer and Next/React developer. Django is not my strongest suit, but I'm comfortable with vanilla Python. The thing that frustrates me the most is the JavaScript in the HTML templates. Previous devs used both jQuery and pure JS to manipulate the DOM and handle interactive forms. I did this very exact thing many eons ago and hated it, because it's so hard to understand and maintain.
How would you incorporate React with html templates?
/r/django
https://redd.it/1mchead
I built a self-hostable Django OIDC provider — pre-release now available
Hey r/django, I wanted to share a project I’ve been working on. A Django-based implementation of an OAuth2 + OpenID Connect provider, built from scratch and designed to be easily self-hosted.
This started partly as a learning project and partly as preparation for a suite of web tools I plan to build in the future. I wanted a central authentication system so users wouldn’t need to sign up separately for each app - something similar to how Google handles auth across products.
# What it does so far:
* Implements OAuth2 and OIDC specs
* Handles registration, email verification, login, and password reset
* Uses Django, PostgreSQL, Redis, Celery, and Nginx
* Fully dockerized and self-hostable
* Includes CLI-style commands to initialize, configure SSL, deploy, and apply migrations
The goal was to make deployment straightforward yet flexible. You can get it running with just a few make commands:
```shell
make init
make init-ssl
make deploy
make migrate
```
Still a lot of polish left (e.g., consent screens, improved token handling, test coverage), but I think it’s a good base if you want a private identity provider setup for your apps or projects.
GitHub: [https://github.com/dakshesh14/django-oidc-provider](https://github.com/dakshesh14/django-oidc-provider)
Write-up and details: [https://www.dakshesh.me/projects/django-oidc-provider](https://www.dakshesh.me/projects/django-oidc-provider)
Would appreciate
/r/django
https://redd.it/1mcf570
Django startup for people struggling to land a job
Hey everyone!
I'm based in London and as a recent graduate, I am finding it tough to land even a junior role or internship in software, especially with Django as my main framework.
Instead of wasting time waiting, I think it would be more productive if a few of us team up and build a real startup-style project together. It’ll help us gain real-world experience, improve our CVs, and who knows — maybe it turns into something serious.
If you’re in or around London (or open to remote work), and you're interested in learning, collaborating, and growing together, please message me or comment below. Let’s build something and help each other break into the industry.
/r/django
https://redd.it/1mcqnqj
Is Flask still one of the best options for integrating APIs for AI models?
Hi everyone,
I'm working on some AI and machine learning projects and need to make my models available through an API.
I know Flask is still commonly used for this, but I'm wondering if it's still the best choice these days.
Is Flask still the go-to option for serving AI models via an API, or are there better alternatives in 2025, like FastAPI, Django, or something else?
My main priorities are:
- Easy to use
- Good performance
- Simple deployment (like using Docker)
- Scalability if needed
I'd really appreciate hearing about your experiences or any recommendations for modern tools or stacks that work well for this kind of project.
Thanks, I appreciate it!
/r/Python
https://redd.it/1mct7ds
Archivey - unified interface for ZIP, TAR, RAR, 7z and more
Hi! I've been working on [this project](https://github.com/davitf/archivey) ([PyPI](https://pypi.org/project/archivey)) for the past couple of months, and I feel it's time to share and get some feedback.
# Motivation
While building a tool to organize my backups, I noticed I had to write separate code for each archive type, as each of the format-specific libraries (`zipfile`, `tarfile`, `rarfile`, `py7zr`, etc) has slightly different APIs and quirks.
I couldn’t find a unified, Pythonic library that handled all common formats with the features I needed, so I decided to build one. I figured others might find it useful too.
# What my project does
It provides a simple interface for reading and extracting many archive formats with consistent behavior:
```python
from archivey import open_archive

with open_archive("example.zip") as archive:
    archive.extractall("output_dir/")

    # Or process each file in the archive without extracting to disk
    for member, stream in archive.iter_members_with_streams():
        print(member.filename, member.type, member.file_size)
        if stream is not
```
/r/Python
https://redd.it/1mcrerl
Training a "Tab Tab" Code Completion Model for Marimo Notebooks
In the spirit of building in public, we're collaborating with Marimo to build a "tab completion" model for their notebook cells, and we wanted to share our progress as we go in tutorial form.
The goal is to create a local, open-source model that provides a Cursor-like code-completion experience directly in notebook cells. You'll be able to download the weights and run it locally with Ollama or access it through a free API we provide.
We’re already seeing promising results by fine-tuning the Qwen and Llama models, but there’s still more work to do.
👉 Here’s the first post in what will be a series:
https://www.oxen.ai/blog/building-a-tab-tab-code-completion-model
If you’re interested in contributing to data collection or the project in general, let us know! We already have a working CodeMirror plugin and are focused on improving the model’s accuracy over the coming weeks.
/r/Python
https://redd.it/1mcrj71
I coded a prototype last night to solve API problems.
Five days ago, I posted here about the difficulty of finding a product on the market that would help my client manage interactions with my API.
I wanted something like a "Shopify" for my API, not an "Amazon" like RapidAPI.
Last night, during one of those sleepless late nights, I decided to finally bring the idea to life and code the prototype of a little product I had in mind.
The concept is simple: give API creators a quick and easy way for their customers to:
- Generate and manage API keys
- Track usage and set limits
- Manage members
- Set up payments
For now, it’s just a skeleton, but in the next few late nights, I’ll keep building it out.
The goal is to make life a lot easier for those selling APIs.
What do you think?
https://youtu.be/mlKegPNRSw4
/r/django
https://redd.it/1mcl88d
Azure interactions
Hi,
Anyone got any experience with implementing azure into an app with python?
Are there any good libraries for such things :)?
I'm asking because I need to figure out an app/platform that actively cooperates with a database; Azure is kind of my first guess for a thing like that.
Any tips welcome :D
/r/Python
https://redd.it/1mczicz
Python Data Engineers: Meet Elusion v3.12.5 - Rust DataFrame Library with Familiar Syntax
Hey Python Data engineers! 👋
I know what you're thinking: "Another post trying to convince me to learn Rust?" But hear me out - Elusion v3.12.5 might be the easiest way for Python, Scala and SQL developers to dip their toes into Rust for data engineering, and here's why it's worth your time.
# 🤔 "I'm comfortable with Python/PySpark why switch?"
Because the syntax is almost identical to what you already know!
Target audience:
If you can write PySpark or SQL, you can write Elusion. Check this out:
PySpark style you know:
```python
result = (sales_df
    .join(customers_df, sales_df.CustomerKey == customers_df.CustomerKey, "inner")
    .select("c.FirstName", "c.LastName", "s.OrderQuantity")
    .groupBy("c.FirstName", "c.LastName")
    .agg(sum("s.OrderQuantity").alias("total_quantity"))
    .filter(col("total_quantity") > 100)
    .orderBy(desc("total_quantity"))
    .limit(10))
```
**Elusion in Rust (almost the same!):**
```rust
let result = sales_df
    .join(customers_df, ["s.CustomerKey = c.CustomerKey"], "INNER")
    .select(["c.FirstName", "c.LastName", "s.OrderQuantity"])
    .agg(["SUM(s.OrderQuantity) AS total_quantity"])
```
/r/Python
https://redd.it/1md030c
If you want to use vibe coding, make sure you fully understand the whole project
I am using Python and the finite element library FEniCSx 0.9 to write a project about compressible flow. A few weeks ago I thought AI could let me code the project in one night, as long as I gave it the whole project's algorithms and formulas. Now I realize that if you don't fully understand the library you are using and the details of the project you are coding, relying on AI too much becomes a disaster. Vibe coding is hyped too much.
/r/Python
https://redd.it/1md01j6
Flask + PostgreSQL + Flask-Migrate works locally but not on Render (no tables created)
I'm deploying a Flask app to Render using PostgreSQL and Flask-Migrate. Everything works fine on localhost — tables get created, data stores properly, no issues at all.
But after deploying to Render:
* The app runs, but any DB-related operation causes a 500 Internal Server Error.
* I’ve added the `DATABASE_URL` in the Render environment.
* My app uses Flask-Migrate. I’ve run `flask db init`, `migrate`, and `upgrade` locally.
* On Render, I don’t see any tables created in the database (even after deployment).
* How do I solve this? Can anybody give the full steps? I asked Claude, GPT, Grok, etc., but no luck; I am missing something.
/r/flask
https://redd.it/1md6i0f
I coded a prototype last night to solve API problems.
Five days ago, I posted here about the difficulty of finding a product on the market that would help my client manage interactions with my API.
I wanted something like a "Shopify" for my API, not an "Amazon" like RapidAPI.
Last night, during one of those sleepless late nights, I decided to finally bring the idea to life and code the prototype of a little product I had in mind.
The concept is simple: give API creators a quick and easy way for their customers to:
- Generate and manage API keys
- Track usage and set limits
- Manage members
- Set up payments
For now, it’s just a skeleton, but in the next few late nights, I’ll keep building it out.
The goal is to make life a lot easier for those selling APIs.
What do you think?
https://reddit.com/link/1mclbrj/video/8nakl4hj9vff1/player
/r/flask
https://redd.it/1mclbrj
Lessons Learned While Trying to Scrape Google Search Results With Python
I’ve been experimenting with Python web scraping recently, and one of the toughest challenges so far has been scraping Google search results reliably. I expected it to be as simple as `requests` + BeautifulSoup, but it wasn’t.
Here’s what I’ve learned from trial and error:
🔹 Google is quick to detect scraping attempts.
Even with random headers, delays, and proxies, it doesn’t take long before CAPTCHAs or temporary blocks pop up. At one point, I could barely scrape 2–3 pages before getting cut off.
🔹 Pagination isn’t consistent.
It’s not just `&start=10` for every page—sometimes the results shift or display fewer items than what’s in the browser. You need to account for unexpected page behavior.
🔹 JavaScript rendering is almost a requirement now.
Some SERPs don’t fully load without JS enabled. Static requests often return stripped-down or incomplete results, which makes BeautifulSoup parsing unreliable unless you use something like Playwright or an API that supports rendering.
🔹 Data cleaning matters.
Google adds a ton of extra formatting, tracking parameters, and “People Also Ask” blocks. I ended up writing extra functions just to extract clean titles, links, and snippets.
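As an illustration of the kind of link cleanup meant here, this minimal sketch unwraps Google's `/url?q=...` redirect wrapper and discards the tracking parameters; the wrapper format is an assumption based on commonly scraped result HTML, not something from the post:

```python
from urllib.parse import urlparse, parse_qs

def clean_google_link(href: str) -> str:
    """Unwrap /url?q=<real-url>&sa=...&ved=... redirects into the bare target URL."""
    parsed = urlparse(href)
    if parsed.path == "/url":
        qs = parse_qs(parsed.query)
        if "q" in qs:
            return qs["q"][0]  # the actual destination; sa/ved tracking params are dropped
    return href  # already a plain link; leave it alone

print(clean_google_link("/url?q=https://example.com/page&sa=U&ved=2ahUKE"))
# → https://example.com/page
```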
I know scraping Google is a grey area, but it’s a common data engineering
/r/Python
https://redd.it/1md4zmu
large django project experiencing 502
My project has been experiencing 502s recently. I am running on Gunicorn with nginx. I don't really want to increase the timeout unless I have to. I have several models with object counts into the 400k range, and another with 2 million objects. The 502 only occurs on PATCH requests. I suspect that the number of objects is causing the issue. What are some possible solutions I should look into?
/r/django
https://redd.it/1mddk6d
`tokenize`: a tip and a trap
[`tokenize`](https://docs.python.org/3/library/tokenize.html) from the standard library is not often useful, but I had the pleasure of using it in a recent project.
Try `python -m tokenize <some-short-program>`, or `python -m tokenize` to experiment at the command line.
-----
The tip is this: `tokenize.generate_tokens` expects [a readline function that spits out lines as strings when called repeatedly](https://docs.python.org/3/library/tokenize.html#tokenize.generate_tokens), so if you want to mock calls to it, you need something like this:
```python
lines = s.splitlines()
return tokenize.generate_tokens(iter(lines).__next__)
```
(Use `tokenize.tokenize` if you always have strings.)
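For reference, a self-contained version of that mock (using `keepends=True` so each fake line keeps its trailing newline, which the tokenizer relies on):

```python
import tokenize

def tokens_from_string(s):
    # generate_tokens wants a readline-style callable: one line (with newline) per call
    lines = s.splitlines(keepends=True)
    return tokenize.generate_tokens(iter(lines).__next__)

names = [tokenize.tok_name[tok.type] for tok in tokens_from_string("a = 1\n")]
print(names)
# → ['NAME', 'OP', 'NUMBER', 'NEWLINE', 'ENDMARKER']
```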
----
The trap: there was a breaking change in the tokenizer between Python 3.11 and Python 3.12 because of the formalization of the grammar for f-strings from [PEP 701](https://docs.python.org/3/whatsnew/3.12.html#pep-701-syntactic-formalization-of-f-strings).
```
$ echo 'a = f" {h:{w}} "' | python3.11 -m tokenize
1,0-1,1:    NAME    'a'
1,2-1,3:    OP      '='
```
/r/Python
https://redd.it/1mdag10