Python Daily
2.57K subscribers
1.48K photos
53 videos
2 files
38.9K links
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
I built an appointment scheduler for my school using Django and Bootstrap

/r/django
https://redd.it/uthw9v
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!

Discussion of using Python in a professional environment, getting jobs in Python, as well as questions about courses to further your Python education!

This thread is not for recruitment; please see r/PythonJobs or the thread in the sidebar for that.

/r/Python
https://redd.it/usqdq0
Using a value from one route to another

Hi everyone, I’m pretty new to flask.
In this route I have a user variable, but I want to use its value in another route. How can I do that?

@app.route('/reset_password', methods=['POST', 'GET'])
def reset_password():
    form = request.form
    user = Users.query.filter_by(email=form['email-address']).first()
    if not user:
        flash('User does not exist!!')
        return redirect(url_for('forgot_password'))


Edit:
I added:
session['user'] = user.id in the reset_password route

And when I try doing:
user_id = session['user'] in my other route, it gives me a KeyError.

What can be the problem?
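For reference, here is a minimal sketch of passing a value between routes via Flask's session. It assumes a configured secret key, which signed session cookies require; the route names and the hard-coded user id are illustrative, not the poster's actual code:

```python
from flask import Flask, session, redirect, url_for

app = Flask(__name__)
app.secret_key = 'dev-only-secret'  # sessions cannot work without a secret key

@app.route('/reset_password')
def reset_password():
    # store the value in the signed session cookie
    session['user'] = 42
    return redirect(url_for('new_password'))

@app.route('/new_password')
def new_password():
    # session.get avoids a KeyError when the key was never set
    user_id = session.get('user')
    if user_id is None:
        return 'no user in session', 400
    return f'resetting password for user {user_id}'
```

A KeyError on session['user'] usually means the key was never stored in that client's session (for example, the secret key is missing or the storing route was never hit in that browser); session.get makes the missing case explicit instead of raising.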

/r/flask
https://redd.it/utvz3w
Saturday Daily Thread: Resource Request and Sharing! Daily Thread

Found a neat resource related to Python over the past week? Looking for a resource to explain a certain topic?

Use this thread to chat about and share Python resources!

/r/Python
https://redd.it/uub4o0
All Python data engineering project (Twitter Monitor)

Hi there,

I'd like to show you a project I made to demonstrate my data engineering skills to potential employers (I'm looking for an internship or a first tech job).

This is the repository: https://github.com/jmcmt87/spark_app_twitter

Any feedback, suggestions or recommendations, especially code-wise, are more than welcome.

The app streams information from Twitter about the war in Ukraine across four topics: Biden, Zelensky, Putin and NATO. It then processes the tweets in one-hour batches and shows the results (sentiment analysis and emotion classification) at three aggregation levels: hour, day and total.

You can check the app here: https://share.streamlit.io/jmcmt87/streamlit_share/main/streamlit_share.py

This is the architecture scheme:

https://preview.redd.it/se4waupjbv091.png?width=1280&format=png&auto=webp&s=82b6210b45e50c48b56e7c9233decd4f2b8aa812

Thanks for checking it out!

/r/Python
https://redd.it/uusegj
Sunday Daily Thread: What's everyone working on this week?

Tell /r/python what you're working on this week! You can be bragging, grousing, sharing your passion, or explaining your pain. Talk about your current project or your pet project; whatever you want to share.

/r/Python
https://redd.it/uuz7id
Cannot get path /track/<str:pv>/ to work

Basically, the project is a CRM with orders whose delivery status can be updated. I am trying to build a track page that shows a log of each time the status was updated and when. In views.py I have written the update view below:
def patient_update(request, pv):
    order = Order.objects.get(id=pv)
    form = OrderUpdateForm(instance=order)
    if request.method == "POST":
        form = OrderUpdateForm(request.POST, instance=order)
        if form.is_valid():
            if request.user:
                form.save()
                Track.objects.create(
                    user=request.user,
                    patientname=order.patientname,
                    product=order.product,

/r/djangolearning
https://redd.it/uv9asa
Extra config with relating CORS when using docker compose

I have a Django backend app using Django-rest-framework. Currently, I'm developing the frontend with React using Axios to make HTTP requests.

Whenever I run my app using python manage.py runserver, and make a request from the React client, I get the expected response.

Nevertheless, if I start my app using docker-compose, I get a No 'Access-Control-Allow-Origin' header is present on the requested resource error.

This is my docker-compose.yaml

services:
  db:
    image: postgres
    volumes:
      - /var/lib/docker/volumes/mobilequoter/db/data:/var/lib/postgresql/data
    ports:
      - ${DBPORT}:5432
    environment:
      - POSTGRES_USER=${DBUSER}
      - POSTGRES_PASSWORD=${DBPASSWORD}
      - POSTGRES_DB=${DBNAME}
  web:
    build: .
    command: gunicorn mobilequoter.wsgi --bind 0.0.0.0:${PORT}
    env_file: .env
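A common cause of this symptom is that the containerized app serves from a different origin (host or port) than runserver did, so the origins allowed in development no longer match. A hedged sketch of the relevant django-cors-headers settings, assuming that package is what the project uses (the origin URL is an example):

```python
# settings.py fragment (assumes django-cors-headers is installed)
INSTALLED_APPS = [
    # ...
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # should sit as high as possible
    # ...
]

# the React client's origin, including the port it actually runs on
CORS_ALLOWED_ORIGINS = [
    'http://localhost:3000',
]
```

If the gunicorn container listens on a different port than runserver, the client's requests may also be hitting an origin that the settings file, or the .env file the container loads, never allowed.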


/r/django
https://redd.it/uvd1eu
Automating manual jobs with Python

I recently came across this post, which presents a method to automate a rather mundane job that would otherwise need human intervention. It seems to work on Windows and uses what I would call screen scraping and clipboard access.

Link to the post:

https://medium.com/towards-data-science/how-to-automate-monkey-jobs-with-python-1910a3219fd2

/r/Python
https://redd.it/uv833z
[D] Simple Questions Thread

Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead!

The thread will stay alive until the next one, so keep posting even after the date in the title.

Thanks to everyone for answering questions in the previous thread!

/r/MachineLearning
https://redd.it/uvcp51
I can't seem to migrate and I don't know why

So I've been trying to migrate, but it doesn't seem to be working. If anyone has any ideas, please let me know; I've been searching for two days now.

the error (in console):

django.db.utils.OperationalError: near "[]": syntax error


This is the migration file:

import django.contrib.postgres.fields
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('website', '0002_rename_members_member'),
    ]

    operations = [
        migrations.CreateModel(
            name='Gerecht',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('stappenplan', models.TextField()),
                ('duur', models.IntegerField()),
                ('rating', models.IntegerField()),
                ('hoofdIngridiënten', django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=200), default=list, size=None)),
                ('bijIngridiënten', django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=200), default=list, size=None)),
                ('aantalPersonen', models.IntegerField()),
                ('schrijver', models.CharField(max_length=200)),
                ('naam', models.CharField(max_length=200)),
                ('afbeelding', models.ImageField(upload_to='website/static/img/')),
            ],
        ),
    ]

This is the models file:

from django.db import models
from django.core.serializers.json import DjangoJSONEncoder
from django.core import serializers
from django.http import HttpResponse, JsonResponse
from django.contrib.postgres.fields import ArrayField
import django.contrib.postgres.fields


class Member(models.Model):
    fName =

/r/djangolearning
https://redd.it/uvadwj
[D] Machine Learning - WAYR (What Are You Reading) - Week 138

This is a place to share machine learning research papers, journals, and articles that you're reading this week. If it relates to what you're researching, by all means elaborate and give us your insight, otherwise it could just be an interesting paper you've read.

Please try to provide some insight from your understanding, and please don't post things that are already present in the wiki.

Preferably you should link the arxiv page (not the PDF, you can easily access the PDF from the summary page but not the other way around) or any other pertinent links.

Previous weeks :

|1-10|11-20|21-30|31-40|41-50|51-60|61-70|71-80|81-90|91-100|101-110|111-120|121-130|131-140|
|----|-----|-----|-----|-----|-----|-----|-----|-----|------|-------|-------|-------|-------|
|[Week 1](https://www.reddit.com/4qyjiq)|[Week 11](https://www.reddit.com/57xw56)|[Week 21](https://www.reddit.com/60ildf)|[Week 31](https://www.reddit.com/6s0k1u)|[Week 41](https://www.reddit.com/7tn2ax)|[Week 51](https://reddit.com/9s9el5)|[Week 61](https://reddit.com/bfsx4z)|[Week 71](https://reddit.com/d7vno3)|[Week 81](https://reddit.com/f1f0iq)|[Week 91](https://reddit.com/hlt38o)|[Week 101](https://reddit.com/k81ywb)|[Week 111](https://reddit.com/myg8sm)|[Week 121](https://reddit.com/pmzx3g)|[Week 131](https://reddit.com/srsu2n)|
|[Week 2](https://www.reddit.com/4s2xqm)|[Week 12](https://www.reddit.com/5acb1t)|[Week 22](https://www.reddit.com/64jwde)|[Week 32](https://www.reddit.com/72ab5y)|[Week 42](https://www.reddit.com/7wvjfk)|[Week 52](https://reddit.com/a4opot)|[Week 62](https://reddit.com/bl29ov)|[Week 72](https://reddit.com/de8h48)|[Week 82](https://reddit.com/f8fs6z)|[Week 92](https://reddit.com/hu6zq9)|[Week 102](https://reddit.com/kh27nx)|[Week 112](https://reddit.com/n8m6ds)|[Week 122](https://reddit.com/pw14z5)|[Week 132](https://reddit.com/t2xpfe)||
|[Week 3](https://www.reddit.com/4t7mqm)|[Week 13](https://www.reddit.com/5cwfb6)|[Week 23](https://www.reddit.com/674331)|[Week 33](https://www.reddit.com/75405d)|[Week 43](https://www.reddit.com/807ex4)|[Week 53](https://reddit.com/a8yaro)|[Week 63](https://reddit.com/bqlb3v)|[Week 73](https://reddit.com/dkox1s)|[Week 83](https://reddit.com/ffi41b)|[Week 93](https://reddit.com/iaz892)|[Week 103](https://reddit.com/kpsxtc)|[Week 113](https://reddit.com/njfsc6)|[Week 123](https://reddit.com/q5fi12)|[Week 133](https://reddit.com/tdf2gt)||
|[Week 4](https://www.reddit.com/4ub2kw)|[Week 14](https://www.reddit.com/5fc5mh)|[Week 24](https://www.reddit.com/68hhhb)|[Week 34](https://www.reddit.com/782js9)|[Week 44](https://reddit.com/8aluhs)|[Week 54](https://reddit.com/ad9ssz)|[Week 64](https://reddit.com/bw1jm7)|[Week 74](https://reddit.com/dr6nca)|[Week 84](https://reddit.com/fn62r1)|[Week 94](https://reddit.com/ijjcep)|[Week 104](https://reddit.com/kzevku)|[Week 114](https://reddit.com/ntu6lq)|[Week 124](https://reddit.com/qjxfu9)|[Week 134](https://reddit.com/tpruqj)||
|[Week 5](https://www.reddit.com/4xomf7)|[Week 15](https://www.reddit.com/5hy4ur)|[Week 25](https://www.reddit.com/69teiz)|[Week 35](https://www.reddit.com/7b0av0)|[Week 45](https://reddit.com/8tnnez)|[Week 55](https://reddit.com/ai29gi)|[Week 65](https://reddit.com/c7itkk)|[Week 75](https://reddit.com/dxshkg)|[Week 85](https://reddit.com/fvk7j6)|[Week 95](https://reddit.com/is5hj9)|[Week 105](https://reddit.com/l9lvgs)|[Week 115](https://reddit.com/o4dph1)|[Week 125](https://reddit.com/qtzbu1)|[Week 135](https://reddit.com/u0pnhf)||
|[Week 6](https://www.reddit.com/4zcyvk)|[Week 16](https://www.reddit.com/5kd6vd)|[Week 26](https://www.reddit.com/6d7nb1)|[Week 36](https://www.reddit.com/7e3fx6)|[Week 46](https://reddit.com/8x48oj)|[Week 56](https://reddit.com/ap8ctk)|[Week 66](https://reddit.com/cd7gko)|[Week 76](https://reddit.com/e4nmyk)|[Week 86](https://reddit.com/g4eavg)|[Week 96](https://reddit.com/j0xr24)|[Week 106](https://reddit.com/ljx92n)|[Week 116](https://reddit.com/odrudt)|[Week 126](https://reddit.com/r4e8he)|[Week 136](https://reddit.com/ub2xlz)||
Why Does the VS Code Jupyter Extension Keep Timing-out Trying to Find a Kernel That Exists?

I need to set up virtual environments for each language that I use. To do this, I'm running the Ubuntu 20.04 LTS Windows Subsystem for Linux (WSL) on Windows 10. Within WSL, I'm using Anaconda, installed in /usr/local/Anaconda, to create conda virtual environments for each language (i.e. one environment contains all my Python stuff, another contains my R stuff, etc.).

Since WSL doesn't come with a GUI, I'm using Visual Studio Code's (VSCode) Jupyter Notebook Extension to run Jupyter Notebooks to see plots/graphics. So far, I managed to easily create conda environments for Python (with ipython and ipykernel) and R (with IRkernel) and run their code in a notebook via the extension. Each time I set up an environment, the extension is easily able to find the kernel, connect to it and run the code.

However, I've not been able to set up an environment for Julia. I followed the documentation on the Julia website for installing the kernel, which is successfully found by the extension. But, when I try running a cell, the extension says it is trying to connect to the kernel, only for it to timeout and fail.

Here are the steps I have taken so far:

1. Create

/r/JupyterNotebooks
https://redd.it/uupwd4
How to remove double links

/r/flask
https://redd.it/uw28fy
Is Django-Celery appropriate here?

I come from a mainly MS background working with data (ETL), and I've recently migrated a lot of our company's processes to use a Django front end for simple configurations. Nice simple CRUD stuff for tables that feed into the ETL processes.

I also wrote a simple task-processing package in Python. Tasks are all written as Python functions and include things like: run a SQL procedure, download data from SQL into Excel, FTP files to a third party, etc. At present a task is requested by inserting a row into a table; the processor picks it up and saves away the results, so we have full visibility of when the task was requested, when it completed, and the results/download of the task.

I also want it to handle more tasks, such as pushing new meta data to a production environment

Is this use case appropriate for Celery? Or do we need something like Apache Airflow? (I don't really understand the difference, but I was hoping for some input before I deep-dive on one over the other!)

/r/django
https://redd.it/uw2sk3
Tuesday Daily Thread: Advanced questions

Have some burning questions on advanced Python topics? Use this thread to ask more advanced questions related to Python.

If your question is a beginner question we hold a beginner Daily Thread tomorrow (Wednesday) where you can ask any question! We may remove questions here and ask you to resubmit tomorrow.

This thread may be fairly low-volume in replies; if you don't receive a response, we recommend looking at r/LearnPython or joining the Python Discord server at https://discord.gg/python, where you stand a better chance of receiving a response.

/r/Python
https://redd.it/uwdxeu