The HTTP caching Python deserves
# What My Project Does
[Hishel](https://hishel.com/1.0/) is an HTTP caching toolkit for Python. It includes a **sans-io** caching implementation, **storages** for efficiently persisting requests/responses for later use, and integrations with your favorite HTTP tools in Python, such as HTTPX, requests, FastAPI, ASGI (for any ASGI-based library), GraphQL, and more!
Hishel uses **persistent storage** by default, so your cached responses survive program restarts.
After **2 years** and over **63 MILLION pip installs**, I released the first major version with tons of new features to simplify caching.
✨ Help Hishel grow! Give us a [star on GitHub](https://github.com/karpetrosyan/hishel) if you found it useful. ✨
# Use Cases:
HTTP response caching is something you can use **almost everywhere** to:
* Improve the performance of your program
* Work without an internet connection (offline mode)
* Save money and stop wasting API calls—make a single request and reuse it many times!
* Work even when your upstream server goes down
* Avoid unnecessary downloads when content hasn't changed (what I call "free caching": it costs nothing and can be configured to always serve the freshest data, skipping the re-download when nothing has changed, much like the browser's 304 Not Modified behavior)
# QuickStart
First, download and install Hishel using pip:
`pip install "hishel[httpx, requests, fastapi, async]"==1.0.0`
We've installed several integrations just for demonstration—you
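Once installed, basic use with HTTPX looks roughly like this (a minimal sketch based on the pre-1.0 `CacheClient` API; the 1.0 interface may differ, so check the docs):

```python
# Minimal sketch: a drop-in replacement for httpx.Client backed by persistent storage.
# The CacheClient name comes from the pre-1.0 API and may have changed in 1.0.
import hishel

with hishel.CacheClient() as client:
    client.get("https://hishel.com")  # fetched from the network and stored
    client.get("https://hishel.com")  # may be served from the on-disk cache if cacheable
```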
/r/Python
https://redd.it/1oilkc1
django-modern-csrf: CSRF protection without tokens
I made a package that replaces Django's default CSRF middleware with one based on modern browser features (Fetch metadata request headers and Origin).
The main benefit: no more `{% csrf_token %}` in templates or `csrfmiddlewaretoken` on forms, and no `X-CSRFToken` headers to configure in your frontend. It's a drop-in replacement: just swap the middleware and you're done.
It works by checking the `Sec-Fetch-Site` header that modern browsers send automatically. According to caniuse, it's supported by 97%+ of browsers. For older browsers, it falls back to `Origin` header validation.
The implementation is based on Go's standard library approach (there's a great article by Filippo Valsorda about it).
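To make the mechanism concrete, here is a simplified sketch of the kind of check involved (the general Fetch-metadata approach, not the package's actual code):

```python
# Simplified sketch of Fetch-metadata CSRF protection (not the package's code).
SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

def request_allowed(method: str, sec_fetch_site: str | None,
                    origin: str | None, host: str) -> bool:
    if method in SAFE_METHODS:
        return True
    if sec_fetch_site is not None:
        # Modern browsers: allow same-origin requests and direct navigations, block cross-site.
        return sec_fetch_site in {"same-origin", "none"}
    if origin is not None:
        # Older browsers: fall back to comparing the Origin header with our host.
        return origin.split("://", 1)[-1] == host
    # No browser signal at all (e.g. curl, native clients): let other auth handle it.
    return True
```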
PyPI: https://pypi.org/project/django-modern-csrf/
GitHub: https://github.com/feliperalmeida/django-modern-csrf
Let me know if you have questions or run into issues.
/r/django
https://redd.it/1oihb4l
PyCharm: Hide library stack frames
Hey,
I made a PyCharm plugin called StackSnack that hides library stack frames.
Not everyone knows that other IDEs have this built in, so I carefully crafted this one, and I'm really proud to share it with the community.
# What my project does
It filters out library stack frames (i.e., frames that don't belong to your project code, including those from imported libraries), so you only see frames from your own code. An extremely useful tool when you're debugging.
# Preview
https://imgur.com/a/v7h3ZZu
# GitHub
https://github.com/heisen273/stacksnack
# JetBrains marketplace
https://plugins.jetbrains.com/plugin/28597-stacksnack--library-stack-frame-hider
/r/Python
https://redd.it/1oicb3y
Why doesn't the for-loop have its own scope?
For the longest time I didn't know this, but I finally decided to ask. I get that this is a thing and has probably been asked a lot, but I genuinely want to know... why? What is gained other than convenience in certain situations? I feel like this could cause more issues than anything, even though I can't name them all right now.
I am also designing a language that works very similarly to how Python works, so maybe I'll get to learn something here.
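For anyone unfamiliar with the behaviour being asked about, a quick illustration:

```python
# The for-loop target is just a name bound in the enclosing scope,
# so it is still defined after the loop finishes.
for i in range(3):
    pass

print(i)  # prints 2 -- the loop variable "leaks" out of the loop body
```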
/r/Python
https://redd.it/1oiwxt5
Pyfory: Drop‑in replacement serialization for pickle/cloudpickle — faster, smaller, safer
**Pyfory** is the Python implementation of [Apache Fory™](https://github.com/apache/fory/blob/main/python/README.md) — a versatile serialization framework.
It works as a **drop-in replacement for `pickle`/`cloudpickle`**, but with major upgrades:
* **Features**: Circular/shared reference support, protocol‑5 zero‑copy buffers for huge NumPy arrays and Pandas DataFrames.
* **Advanced hooks**: Full support for custom class serialization via `__reduce__`, `__reduce_ex__`, and `__getstate__`.
* **Data size**: ~25% smaller than pickle, and 2–4× smaller than cloudpickle when serializing local functions/classes.
* **Compatibility**: Pure Python mode for dynamic objects (functions, lambdas, local classes), or cross‑language mode to share data with Java, Go, Rust, C++, JS.
* **Security**: Strict mode to block untrusted types, or fine‑grained `DeserializationPolicy` for controlled loading.
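Basic usage is meant to mirror pickle; a minimal sketch (the `Fory` constructor and the `serialize`/`deserialize` method names are assumptions on my part, so check the README for the exact entry points):

```python
# Minimal sketch of pickle-style round-tripping with Pyfory.
# Entry-point names here (Fory, serialize, deserialize) are assumptions; see the README.
import pyfory

fory = pyfory.Fory()  # constructor options (ref tracking, strict mode, ...) not shown

data = {"values": [1, 2, 3], "name": "example"}
payload = fory.serialize(data)        # bytes
restored = fory.deserialize(payload)
assert restored == data
```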
/r/Python
https://redd.it/1oj0ogq
A new easy way on Windows to pip install GDAL and other tricky geospatial Python packages
# What My Project Does
geospatial-wheels-index is a pip-compatible simple index for the cgohlke/geospatial-wheels repository. It's just a few static HTML files served on GitHub Pages, and all the .whl files are pulled directly from cgohlke/geospatial-wheels. All you need to do is add an index flag: `pip install --index https://gisidx.github.io/gwi gdal`
In addition to GDAL, this index points to the other prebuilt packages in geospatial-wheels: cartopy, cftime, fiona, h5py, netcdf4, pygeos, pyogrio, pyproj, rasterio, rtree, and shapely.
Contributions are welcome!
# Target Audience
Mostly folks who straddle the traditional GIS and the developer/data science worlds, the people who would love to run Linux but are stuck on Windows for one reason or another.
For myself, I'm tired of dealing with the lack of an easy way to install the GDAL binaries on Windows so that I can `pip install gdal`, especially in a uv virtual environment or a CI/CD context where using conda can be a headache.
# Comparison
Often you'll have to build these packages from source or rely on conda or another add-on package manager. For example, the official GDAL docs suggest various ways to install the binaries; this is often not possible or requires extra work.
The esteemed Christoph Gohlke has been providing prebuilt wheels for
/r/Python
https://redd.it/1oiufp2
Best courses Python and Django X
Hi all,
I have a new role at work, which is kind of a link between IT and the technical role (I am coming from the technical side).
I enjoy coding and have basic Python and JavaScript skills, which I get by with for personal projects and AI.
For this role, my work has agreed to fund some development, and I am looking for the best Python and (mainly) Django X framework courses/plans to gain better knowledge and best practices so I can be more of an aid to the IT department.
Wondered if anyone knew the best plan of action? I would likely need further Python training, and then I am new to Django and official IT workflows and whatnot.
Tia
/r/Python
https://redd.it/1oj294x
Curious if someone has a better answer.
I'm designing an app where the user will be entering health information. The day/time the data is entered needs to be recorded, which will be easy when I go live. As I'm developing and testing it, I want to be able to store the data using different dates to simulate the way it will actually be used. I don't want to actually change the front end for this purpose, because there are a lot of pages making POST requests to the app.
The way I thought of doing it is to have a conditional dialog box that pops up from each view asking for a day. The default will be the last date entered, which I can save in a text file each time so I don't have to re-enter it, and the dialog box will allow me to just increase it by one day. This seems like the simplest way, but it also seems like something people have to deal with a lot as they're developing, so I thought I would post here to see if someone has a better solution, or if there is something built into Django that does this for me.
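For what it's worth, a rough sketch of the approach described above (the helper name, file location, and use of `settings.DEBUG` are placeholders/assumptions, not details from the post):

```python
# Sketch of a DEBUG-only override for the entry date, remembering the last value used.
from datetime import date
from pathlib import Path

from django.conf import settings
from django.utils import timezone

LAST_DATE_FILE = Path(settings.BASE_DIR) / ".last_test_date"  # placeholder location


def resolve_entry_date(posted_date: str | None) -> date:
    """Use the override sent by the dev-only dialog; fall back to today in production."""
    if settings.DEBUG and posted_date:
        LAST_DATE_FILE.write_text(posted_date)  # remember it as the next default
        return date.fromisoformat(posted_date)
    return timezone.localdate()
```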
/r/django
https://redd.it/1oj586h
Pylint 4 changes what's considered a constant. Does a use case exist?
Pylint 4 changed their definition of constants. Previously, all variables at the root of a module were considered constants and expected to be in all caps. With Pylint 4, they are now checking to see if a variable is reassigned non-exclusively. If it is, then it's treated as a "module-level variable" and expected to be in snake case.
So this pattern, which used to be valid, now raises an invalid-name warning.
SERIES_STD = ' ▌█' if platform.system() == 'Windows' else ' ▏▎▍▌▋▊▉█'
try:
    SERIES_STD.encode(sys.__stdout__.encoding)
except UnicodeEncodeError:
    SERIES_STD = ' |'
except (AttributeError, TypeError):
    pass
This could be re-written to match the new definition of a constant, but doing so reduces readability.
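For context, one way to satisfy the new check is to compute the value in a helper and assign the name exactly once; a sketch of such a rewrite (arguably less readable, which is the point):

```python
import platform
import sys


def _detect_series_chars() -> str:
    """Pick the bar characters the current stdout encoding can actually render."""
    chars = ' ▌█' if platform.system() == 'Windows' else ' ▏▎▍▌▋▊▉█'
    try:
        chars.encode(sys.__stdout__.encoding)
    except UnicodeEncodeError:
        return ' |'
    except (AttributeError, TypeError):
        pass
    return chars


# Assigned exactly once, so Pylint 4 still treats it as a constant.
SERIES_STD = _detect_series_chars()
```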
In my mind, any runtime code is placed in classes or functions, or guarded with an `if __name__ == "__main__":` clause. This only leaves code needed for module initialization. Within that, I see two categories of variables at the module root: constants and globals.
* Constants
  * After its value is determined (like the above example), it never changes
  * All
/r/Python
https://redd.it/1oj4mcr
NLP conferences look like a scam...
Not trying to punch down on other smart folks, but honestly, I feel like most NLP conference papers are kinda scams. Out of 10 papers I read, 9 have zero theoretical justification, and the 1 that does usually calls something a theorem when it's basically just a lemma with ridiculous assumptions.
And then they all claim something like a 1% benchmark improvement using methods that are impossible to reproduce because of the insane resource constraints in the LLM world. Even funnier, most of the benchmarks are made by the authors themselves.
/r/MachineLearning
https://redd.it/1ojeldl
Flask-RQ Tutorial: A Simple Task Queue for Flask
https://www.youtube.com/watch?v=oDKuXmbWqX8
/r/flask
https://redd.it/1oj0012
PathQL: A Declarative SQL Like Layer For Pathlib
🐍 What PathQL Does
PathQL allows you to easily walk file systems and perform actions on the files that match "simple" query parameters, without requiring you to go into the depths of `os.stat_result` and the `datetime` module to find file ages, sizes, and attributes.
The tool supports query functions that are common when crawling folders, tools to aggregate information about those files, and finally actions to perform on those files. Out of the box it supports copy, move, delete, fastcopy, and zip actions.
It is also very (well, sort of) easy to sub-class filters that can look into the contents of files to add data about the file itself (rather than the metadata), perhaps looking for ERROR lines in today's logs, or image files that have 24-bit color. For these types of filters it can be important to use the built-in multithreading to share the load of reading all of those files.
```python
from pathql import AgeDays, Size, Suffix, Query, ResultField

# Count, largest file size, and oldest file from the last 24 hours in the result set
query = Query(
    whereexpr=(AgeDays() == 0) & (Size() > "10 mb") & Suffix("log"),
    frompaths="C:/logs",
    threaded=True,
)
resultset =
```
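As for the content-inspecting filters mentioned above, here is a plain-pathlib sketch of the kind of check such a filter would perform (illustration only, not PathQL's filter API):

```python
from pathlib import Path


def has_error_lines(path: Path) -> bool:
    """Content-based check: does this log file contain any ERROR lines?"""
    try:
        with path.open("r", errors="ignore") as fh:
            return any("ERROR" in line for line in fh)
    except OSError:
        return False
```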
/r/Python
https://redd.it/1ojgqmr
What's this sub's opinion on panda3d/interrogate?
https://github.com/panda3d/interrogate
I'm just curious how many people have even heard of it, and what people think of it.
Interrogate is a tool used by Panda3D to generate Python bindings for its C++ code. It was spun out into its own repo a while back in the hope that people outside the Panda3D community might use it.
/r/Python
https://redd.it/1ojgkmz
Is Django a good fit for a multithreaded application?
Hi everyone,
I need to develop an application with the following requirements:
* Multithreaded handling of remote I/O
* State machine management, each running in its own thread
* A REST web API for communication with other devices
* A graphical UI for a touch panel PC
I was considering Django, since it would conveniently handle the database part (migrations included) and, with django-ninja, allow me to easily expose REST APIs.
However, I’m unsure about the best way to handle database access from other threads or processes.
Ideally, I’d like to separate the web part (Django + API) from the rest of the logic.
My current idea is to have two processes:
1. Uvicorn or whatever running Django with the web APIs
2. A separate Python process running the UI, state machines, and remote I/O polling, but somehow using the Django ORM (a minimal sketch of that pattern follows)
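For the second process, using the ORM outside the web server is a common pattern; a minimal sketch (the project, app, and model names are placeholders):

```python
# standalone_worker.py -- minimal sketch of using the Django ORM outside the web process.
# "myproject" / "myapp" / Measurement are placeholders, not names from the post.
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()  # must run before any model imports

from myapp.models import Measurement  # noqa: E402  (imported after setup on purpose)


def poll_remote_io() -> None:
    # ... read the remote I/O here, then persist through the shared database ...
    Measurement.objects.create(value=42)


if __name__ == "__main__":
    poll_remote_io()
```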
Would this kind of architecture be feasible?
And if so, what would be the recommended way for these two processes to communicate (e.g., shared database, message queue, websockets, etc.)?
Side note: I asked ChatGPT to help me with the translation while writing this explanation; I can assure you I'm not a bot 🤣
/r/django
https://redd.it/1oj4bz3
[R] Researchers from the Center for AI Safety and Scale AI have released the Remote Labor Index (RLI), a benchmark testing AI agents on 240 real-world freelance jobs across 23 domains.
https://redd.it/1ojinwl
@pythondaily