Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
Mastering Modern Time Series Forecasting: The Complete Guide to Statistical, Machine Learning & Deep Learning

I’ve been working on a Python-focused guide called Mastering Modern Time Series Forecasting — aimed at bridging the gap between theory and practice for time series modeling.

It covers a wide range of methods, from traditional models like ARIMA and SARIMA to deep learning approaches like Transformers, N-BEATS, and TFT. The focus is on practical implementation, using libraries like statsmodels, scikit-learn, PyTorch, and Darts. I also dive into real-world topics like handling messy time series data, feature engineering, and model evaluation.
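For a sense of the statsmodels end of that spectrum, here is a minimal, hedged sketch of fitting an ARIMA model; the synthetic data and the (1, 1, 1) order are placeholders for illustration, not recommendations from the guide:

```python
# Minimal ARIMA example with statsmodels; data and order are illustrative only.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2023-01-01", periods=200, freq="D")
y = pd.Series(np.cumsum(np.random.randn(200)) + 50.0, index=idx)  # random-walk-like series

result = ARIMA(y, order=(1, 1, 1)).fit()
print(result.summary())
print(result.forecast(steps=14))  # 14-step-ahead point forecast
```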

I’m publishing the guide on Gumroad and LeanPub. I’ll drop a link in the comments in case anyone’s interested.

Always open to feedback from the community — thanks!

/r/Python
https://redd.it/1kz1tkt
🎉 Introducing TurboDRF - Auto-generate CRUD APIs from your Django models

# What My Project Does:
🚀 [TurboDRF](https://github.com/alexandercollins/turbodrf) is a new DRF module that auto-generates endpoints by adding one class mixin to your Django models:
- Auto-generate CRUD API endpoints with docs 🎉
- No more writing basic URLs, views, viewsets or serializers
- Supports filtering, text search and granular permissions

After many years with DRF and spinning up new projects, I've really gotten tired of writing basic views, URLs and serializers, so I've built TurboDRF, which will do all that for you.


🔗 You can access it here on my github: [https://github.com/alexandercollins/turbodrf](https://github.com/alexandercollins/turbodrf)


Basically just **add 1 mixin to the model you want to expose as an endpoint** and then 1 method in that model which specifies the fields (could probably move this to Meta tbh) and boom 💥 your API is ready.

📜 It also generates swagger docs, integrates with django's default user permissions (and has its own static role based permission system with field level permissions too), plus you get advanced filtering, full-text search, automatic pagination, nested relationships with double underscore notation, and automatic query optimization with select_related/prefetch_related.

💻 Here's a quick example:

```
class Book(models.Model, TurboDRFMixin):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    price = models.DecimalField(max_digits=10,
```

/r/Python
https://redd.it/1kyywn0
MigrateIt, A database migration tool

# What My Project Does

[MigrateIt](https://github.com/iagocanalejas/MigrateIt) lets you manage your database changes with simple migration files in plain SQL, and lets you run or roll them back as you wish.

It avoids the need to learn a different syntax for configuring database changes, letting you write them in the same SQL dialect your database uses.

# Target Audience

Developers tired of having to synchronize databases between different environments or using tools that need to be configured in JSON or native ASTs instead of plain SQL.

# Comparison

Instead of:

```json
{
  "databaseChangeLog": [
    {
      "changeSet": {
        "changes": [
          {
            "createTable": {
              "columns": [
                {
                  "column": {
                    "name": "CREATED_BY",
                    "type": "VARCHAR2(255 CHAR)"
```

/r/Python
https://redd.it/1kz30mk
Functional programming concepts that actually work in Python

Been incorporating more functional programming ideas into my Python/R workflow lately - immutability, composition, higher-order functions. Makes debugging way easier when data doesn't change unexpectedly.
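A small, hedged illustration of those three ideas in plain Python; the names and the unit-conversion example are invented for this sketch:

```python
from dataclasses import dataclass
from functools import reduce

# Immutability: frozen dataclasses raise FrozenInstanceError on reassignment.
@dataclass(frozen=True)
class Reading:
    sensor: str
    value: float

# Composition via a higher-order function: build a pipeline from small pure functions.
def compose(*fns):
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

def to_celsius(r: Reading) -> Reading:
    return Reading(r.sensor, (r.value - 32) * 5 / 9)

def rounded(r: Reading) -> Reading:
    return Reading(r.sensor, round(r.value, 2))

pipeline = compose(to_celsius, rounded)
print(pipeline(Reading("t1", 98.6)))  # Reading(sensor='t1', value=37.0)
```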

Wrote about some practical FP concepts that work well even in non-functional languages: https://borkar.substack.com/p/why-care-about-functional-programming?r=2qg9ny&utm_medium=reddit

Anyone else finding FP useful for data work?

/r/Python
https://redd.it/1kz6kx3
HELP-Struggling to Scale Django App for High Concurrency

Hi everyone,

I'm working on scaling my Django app and facing performance issues under load. I have 5-6 APIs that are hit concurrently by 300 users, making almost 1,800 requests at once. I've gone through a bunch of optimizations but am still seeing odd behavior.

# Tech Stack

- Django backend
- PostgreSQL (AWS RDS)
- Gunicorn with `gthread` worker class
- Nginx as reverse proxy
- Load testing with `k6` (to simulate 500 to 5,000 concurrent requests)
- Also tested with JMeter — it handles 2,000 requests without crashing

# Server Setup

Setup 1 (Current):

- 10 EC2 servers
- 9 Gunicorn `gthread` workers per server
- 30 threads per worker
- 4-core CPU per server

Setup 2 (Tested):

- 2 EC2 servers
- 21 Gunicorn `gthread` workers per server
- 30 threads per worker
- 10-core CPU per server

Note: No PgBouncer or DB connection pooling in use yet. RDS `max_connections` = 3476.
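Since connection pooling isn't set up yet, one low-effort first step (a hedged suggestion, not part of the original setup) is Django's built-in persistent connections via `CONN_MAX_AGE`:

```python
# settings.py - persistent DB connections; all credentials below are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "HOST": "my-rds-endpoint",
        "USER": "app",
        "PASSWORD": "change-me",
        # Keep each worker's connection open for 60s instead of reconnecting per request.
        "CONN_MAX_AGE": 60,
    }
}
```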

# Load Test Scenario

- 5–6 APIs are hit concurrently by around 300 users, totaling approximately 1,800 simultaneous requests.
- Each API is I/O-bound, with 8–9 DB queries using annotate, aggregate, filter, and other Django ORM queries, plus some CPU-bound logic.
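For context, a hedged sketch of the general query shape described above; the `Order` model and `items` relation are hypothetical, not from the actual app:

```python
from django.db.models import Avg, Count

# One of several I/O-bound queries per request: annotate per row, then aggregate.
# Order is a hypothetical model used only for illustration.
stats = (
    Order.objects
    .filter(status="paid")
    .annotate(item_count=Count("items"))
    .aggregate(avg_items=Avg("item_count"), orders=Count("id"))
)
```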

/r/django
https://redd.it/1kzb8h0
[R] The Resurrection of the ReLU

Hello everyone, I’d like to share our new preprint on bringing ReLU back into the spotlight.

Over the years, activation functions such as GELU and SiLU have become the default choices in many modern architectures. Yet ReLU has remained popular for its simplicity and sparse activations despite the long-standing “dying ReLU” problem, where inactive neurons stop learning altogether.

Our paper introduces **SUGAR** (Surrogate Gradient Learning for ReLU), a straightforward fix:

* Forward pass: keep the standard ReLU.
* Backward pass: replace its derivative with a smooth surrogate gradient.

This simple swap can be dropped into almost any network—including convolutional nets, transformers, and other modern architectures—without code-level surgery. With it, previously “dead” neurons receive meaningful gradients, improving convergence and generalization while preserving the familiar forward behaviour of ReLU networks.
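As a rough illustration of the mechanism (not the paper's exact surrogate; the sigmoid-derivative-style function below is an assumption made for this sketch), the swap can be written as a custom autograd function in PyTorch:

```python
import torch

class SurrogateReLU(torch.autograd.Function):
    """Forward: standard ReLU. Backward: smooth surrogate gradient (assumed form)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.relu(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Assumed surrogate: a scaled sigmoid derivative, smooth and nonzero for
        # negative inputs, so "dead" units still receive gradient signal.
        s = torch.sigmoid(4.0 * x)
        surrogate = 4.0 * s * (1.0 - s)
        return grad_output * surrogate

x = torch.randn(8, requires_grad=True)
SurrogateReLU.apply(x).sum().backward()
print(x.grad)  # nonzero even where x < 0
```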

**Key results**

* Consistent accuracy gains in convolutional networks by stabilising gradient flow—even for inactive neurons.
* Competitive (and sometimes superior) performance compared with GELU-based models, while retaining the efficiency and sparsity of ReLU.
* Smoother loss landscapes and faster, more stable training—all without architectural changes.

We believe this reframes ReLU not as a legacy choice but as a revitalised classic made relevant through careful gradient handling. I’d be happy to hear any feedback or questions you have.

**Paper:** [https://arxiv.org/pdf/2505.22074](https://arxiv.org/pdf/2505.22074)

[Throwaway because I do

/r/MachineLearning
https://redd.it/1kz5t16
Windows Task Scheduler & Simple Python Scripts

Putting this out there, for others to find, as other posts on this topic are "closed and archived", so I can't add to them.

Recurring issues with strange errors and 0x1 results when trying to automate simple Python scripts (to accomplish simple tasks!).
Scripts work flawlessly in a command window, but the moment you try and automate... well... fail.
Lost a number of hours.

Anyhow - simple solution in the end: the extra "pip install" commands I had used in the command prompt are "temporary" and disappear with the command prompt.

So - when scheduling these scripts (my first time doing this), the solution in the end was a batch file that FIRST runs `py -m pip install "requests"` to pull in what my script needs, and then runs the actual script.

my batch:
py.exe -m pip install "requests"
py.exe fixip3.py

Working perfectly every time, I'm not even logged in... running in the background, just the way I need it to.

Hope that helps someone else!

Andrew

/r/Python
https://redd.it/1kzcj8w
What is the best way to parse log files?

Hi,

I usually have to extract specific data from logs and display it in a certain way, or do other things.

The thing is, those logs are sometimes tens of thousands of lines, so I have to use a very specific regex for each entry.

It is not just straight up "if a line starts with X, take it"; sometimes I have to extract lists that are nested really deep.

Another problem is that the log format sometimes changes, and I have to adjust the regex to the new format, which takes time.
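To make the problem concrete, here is a hedged sketch of the kind of extraction being described, using only the standard library; the log layout and field names are invented for illustration:

```python
import re

# Hypothetical log layout: "2024-01-05 12:00:01 ERROR worker=3 items=[a, b, c]"
LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>[A-Z]+)\s+worker=(?P<worker>\d+)\s+items=\[(?P<items>[^\]]*)\]"
)

def parse(path):
    """Yield one dict per matching line; the nested list is split out afterwards."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE.match(line)
            if m:
                record = m.groupdict()
                record["items"] = [s.strip() for s in record["items"].split(",") if s.strip()]
                yield record
```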

What would you recommend for analysing these logs? I can't use any external software since the data I work with is extremely confidential.

Thanks!

/r/Python
https://redd.it/1kzhq0i
After 3 Years and 130k LOC, My Django + Rust Financial Planning App is Live

Hey all,

After about three years of development and ~130k lines of Rust and Python, I’ve just deployed the beta version of my self-directed financial planning web app:

https://finstant.com.au

It’s built with Django (using templates and CBVs) and HTMX for interactivity. The core modelling logic is written in Rust, exposed to Python using pyo3/maturin. This is my first proper web dev project, so I kept the frontend stack deliberately simple.

The app automates financial modelling for many of the most common strategies used in Australian financial advice — things like debt recycling, contribution strategy optimisation, investment structuring comparisons, and more. It also allows users to build custom goal-based scenarios.

It’s still in beta, so there might be a few rough edges — but I’d really appreciate any feedback, especially from Australians who can put the modelling through its paces.

Happy to answer any questions about the stack, modelling approach, or lessons learned along the way. Thanks!

/r/django
https://redd.it/1kypyhs
What is the best free way to host my Python Flask app online 24/7?

I recently built a notification application using React and Flask. The Python script responsible for sending reminders has to be online 24/7 since it needs to fetch data from Firebase at regular intervals and notify users.

Right now, I’m looking for a **free solution** to host this script so that it can run continuously in the background.

I've researched a few options:

* **Render Background Worker** – looks good but not free.
* **GitHub Actions** – possible but feels hacky and might not be reliable long-term.
* **PythonAnywhere** – seems promising, but wondering if there are better alternatives.

Has anyone found a reliable **free** way to keep such a Python script running continuously? Open to cloud functions, cron-like services, or anything else that works.

/r/flask
https://redd.it/1kz5wle
Saturday Daily Thread: Resource Request and Sharing! Daily Thread

# Weekly Thread: Resource Request and Sharing 📚

Stumbled upon a useful Python resource? Or are you looking for a guide on a specific topic? Welcome to the Resource Request and Sharing thread!

## How it Works:

1. Request: Can't find a resource on a particular topic? Ask here!
2. Share: Found something useful? Share it with the community.
3. Review: Give or get opinions on Python resources you've used.

## Guidelines:

Please include the type of resource (e.g., book, video, article) and the topic.
Always be respectful when reviewing someone else's shared resource.

## Example Shares:

1. Book: "Fluent Python" - Great for understanding Pythonic idioms.
2. Video: Python Data Structures - Excellent overview of Python's built-in data structures.
3. Article: Understanding Python Decorators - A deep dive into decorators.

## Example Requests:

1. Looking for: Video tutorials on web scraping with Python.
2. Need: Book recommendations for Python machine learning.

Share the knowledge, enrich the community. Happy learning! 🌟

/r/Python
https://redd.it/1kzjfrf
[D] Monthly Who's Hiring and Who Wants to be Hired?

For Job Postings please use this template

>Hiring: [Location], Salary: [], [Remote | Relocation], [Full Time | Contract | Part Time] and [Brief overview, what you're looking for]

For Those looking for jobs please use this template

>Want to be Hired: [Location], Salary Expectation: [], [Remote | Relocation], [Full Time | Contract | Part Time], Resume: [Link to resume] and [Brief overview, what you're looking for]


Please remember that this community is geared towards those with experience.

/r/MachineLearning
https://redd.it/1kzmd2e
Learning CBVs and guidance

Hi all, I'm currently learning class-based views, and I just want to make sure I'm doing everything as "standard" and actually doing this correctly, thanks in advance! This is all just a test project for learning purposes.

To start off, I've got my own package which I created that essentially works as a mini 'git'; I recreated the normal fundamentals (repo, commit, etc.).

I wanted "users" to be able to create a Repo, view the repo, and add files/documents.


To start with, I created an app called minigit_viewer. Inside I have a few URLs:

    urlpatterns = [
        path("repo/new/", RepoCreateForm.as_view(), name="repo_new"),
        path("repo/<int:pk>/", RepoViewForm.as_view(), name="repo_view"),
    ]

And in my views I have:

    class RepoViewForm(DetailView):
        model = RepositoryModel
        template_name = "minigit_viewer/repo_form_view.html"

        def get_context_data(self, **kwargs):
            context = super().get_context_data(**kwargs)
            return context

    class RepoCreateForm(SuccessMessageMixin, CreateView):
        model = RepositoryModel
        fields =

/r/djangolearning
https://redd.it/1kz0zqi
I'm trying to run this app outside of localhost and I keep getting this error

Access to fetch at 'http://rnkfa-2804-14c-b521-813c-f99d-84fb-1d69-bffd.a.free.pinggy.link/books' from origin 'http://rnjez-2804-14c-b521-813c-f99d-84fb-1d69-bffd.a.free.pinggy.link' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

script.js:65



GET [http://rnkfa-2804-14c-b521-813c-f99d-84fb-1d69-bffd.a.free.pinggy.link/books](http://rnkfa-2804-14c-b521-813c-f99d-84fb-1d69-bffd.a.free.pinggy.link/books) net::ERR_FAILED 200 (OK)

loadAndDisplayBooks @ script.js:65

(anonymous) @ script.js:231


app.py:

# Import the classes and functions needed from the Flask, Flask-CORS, Flask-SQLAlchemy and Flask-Migrate libraries.
from flask import Flask, request, jsonify
from flask_cors import CORS
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
import os  # Module for interacting with the operating system, used here to read environment variables.

# Create a Flask application instance.
# __name__ is a special Python variable that holds the name of the current module.
app = Flask(__name__)
# Enable CORS (Cross-Origin Resource Sharing) for the application.
# This lets the frontend (running on a different domain/port) make requests to this backend.
CORS(app, origins="http://rnjez-2804-14c-b521-813c-f99d-84fb-1d69-bffd.a.free.pinggy.link")


# Database configuration


/r/flask
https://redd.it/1kzpix5
Do you use django's caching framework?

Just got to know about this one: https://docs.djangoproject.com/en/5.2/topics/cache/ (good docs!)

It says that for small to medium sites it isn't as important. Do you use it, e.g. with Redis, to cache your pages?
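For reference, a hedged sketch of how Redis-backed caching is typically wired up (the backend path is the one built into Django 4.0+; the location, timeout, and view are placeholders):

```python
# settings.py
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# views.py - cache an individual view's response for 5 minutes
from django.views.decorators.cache import cache_page

@cache_page(60 * 5)
def article_list(request):
    ...
```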

Oh, and I don't know if it's just me, but whenever I deploy changes to my templates, I have to restart Django's gunicorn process in order to "update" the live site.

/r/django
https://redd.it/1kzm97i
Django’s URL philosophy seems to contradict Domain Driven Design

Hey everyone, I've been using Django for many years now, but just recently started learning about DDD. There's one part in Django's design philosophies doc that I'm trying to reconcile with it (https://docs.djangoproject.com/en/5.2/misc/design-philosophies/#id8):

> URLs in a Django app should not be coupled to the underlying Python code. Tying URLs to Python function names is a Bad And Ugly Thing.

At the heart of DDD is a ubiquitous language, which, to my understanding so far, means preferring the same language across the business logic, the URLs, the Python function names, and perhaps even the form class names and template file names. Ideally, at least. Needless to say, I know that's not a rule cast in stone; there'll always be exceptions and considerations.

BUT the way Django's docs portray it seems to suggest that that's not the way to think about it AT ALL.

What do you think?

/r/django
https://redd.it/1kzzxg9
Deploying on LAN

Hi, it's my first time deploying a web app and I'd like to know if what I'm going to do is right.
I have a Django application that I need to deploy on a Windows machine and make usable over the LAN.
The steps that I did were:
- set DEBUG = False, ALLOWED_HOSTS = ['*'] and CSRF_TRUSTED_ORIGINS = ['http://<PC IP IN LAN>']
- installed waitress and set up a serve.py script using address 0.0.0.0 and port 8000 (a minimal sketch of this script appears after these steps)
- set up Nginx as a reverse proxy this way:

    location / {
        proxy_pass http://localhost:8000;
    }
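Here is the minimal serve.py sketch referenced above, assuming a project package named `myproject` (the name is a placeholder, not from the original post):

```python
# serve.py - run the Django WSGI app with waitress on all interfaces, port 8000.
from waitress import serve
from myproject.wsgi import application

if __name__ == "__main__":
    serve(application, host="0.0.0.0", port=8000)
```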
This setup works and I can use the application from other devices on the same LAN, but I'd like to know if I missed something or did something unsafe.

Thanks for reading and for the help.

/r/djangolearning
https://redd.it/1kxdbdn
[D] Views on recent acceptance of an LLM-written paper at ACL main

Hi folks, just came across this blog
https://www.intology.ai/blog/zochi-acl

It started with an ICLR workshop and now ACL main; I was just wondering where we are heading. Is this all an effect of the noisy review process, or are the works indeed worth publishing?

PS: Not an NLP guy, so I couldn't really comment on the novelty/technical correctness of the work.

Edit: Just found a GitHub repo, corresponding to the agent
https://github.com/IntologyAI/Zochi?tab=readme-ov-file

/r/MachineLearning
https://redd.it/1l074er