Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python programming language
Find more reddit channels over at @r_channels
Plotly's Jupyterlab Chart Editor for editing charts through a user-friendly point-and-click interface
https://kyso.io/KyleOS/plotly-chart-editor

/r/IPython
https://redd.it/aqn1if
Initializing a Flask app

[https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world](https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world)


In the tutorial above, an `app` package is created, and `Flask` is instantiated inside its `__init__.py` file:

app/__init__.py: Flask application instance

```python
from flask import Flask

app = Flask(__name__)

from app import routes
```

Why is that written in `__init__.py`, and what is the benefit of that? As opposed to an empty `__init__.py` and then instantiating the `Flask` class in another module, such as...


app/main_app.py: Flask instantiated here instead of `__init__.py`

```python
from flask import Flask

app = Flask(__name__)
```
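For what it's worth, a common reason is the circular-import dance: `routes.py` needs the `app` object, and the package needs the routes registered. A hedged sketch of how the tutorial's layout handles this (not runnable on its own; it assumes the `app` package with a `routes.py` next to it):

```python
# app/__init__.py -- sketch of the tutorial's layout.
from flask import Flask

# Because this lives in __init__.py, the package itself "is" the app:
# any module in the project can do `from app import app`.
app = Flask(__name__)

# Deliberately imported at the bottom: routes.py itself does
# `from app import app`, so `app` must already exist before this line
# runs, otherwise you'd hit a circular-import error.
from app import routes  # noqa: E402,F401
```

With an empty `__init__.py` and the instance in `app/main_app.py`, everything still works; you just write `from app.main_app import app` instead. The `__init__.py` placement mainly buys the shorter import path and one obvious home for the application instance.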



/r/flask
https://redd.it/aqoq8m
How to put CSRF tokens on the page without a flask form?

Since learning AJAX and RESTful APIs, I've been replacing all my Flask-WTF forms with regular ol' HTML inputs and submit buttons. I haven't really seen the point of WTForms, but they did have the `.hidden_token` attribute, which was nice.

So now that I'm doing away with flask forms, how do I still put a hidden token on my pages?
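If Flask-WTF is still installed, its `CSRFProtect` extension works without any form classes; a sketch under that assumption (the route and template below are made up):

```python
from flask import Flask, render_template_string
from flask_wtf.csrf import CSRFProtect

app = Flask(__name__)
app.config["SECRET_KEY"] = "change-me"
CSRFProtect(app)  # rejects POST/PUT/DELETE requests lacking a valid token

PAGE = """
<meta name="csrf-token" content="{{ csrf_token() }}">
<script>
  // AJAX requests send the token back in the X-CSRFToken header,
  // which CSRFProtect checks by default.
  fetch("/api/thing", {
    method: "POST",
    headers: {"X-CSRFToken":
        document.querySelector("meta[name=csrf-token]").content},
  });
</script>
"""

@app.route("/")
def index():
    # csrf_token() is injected into the Jinja context by CSRFProtect.
    return render_template_string(PAGE)
```

So you keep the token without keeping the form classes: render it into a meta tag (or a plain hidden input) and echo it back in the request header.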

/r/flask
https://redd.it/aqr5cy
[R] OpenAI: Better Language Models and Their Implications

https://blog.openai.com/better-language-models/

"We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training."

Interestingly,

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

/r/MachineLearning
https://redd.it/aqlzde
For those of you who got hired to work with Python, what coding challenges did they have you do during the interviewing process?

I'd also appreciate knowing whether you thought those challenges accurately assessed the coding skills you needed to do your job.

/r/Python
https://redd.it/aqrz1a
Clustering Pollock

I applied k-means clustering to some of Pollock's paintings. The idea was to track the artist's usage of colors through the years. Here's the outcome!

I had really good fun mixing computer science and art. I used Python with the standard data science stack (pandas, NumPy, scikit-learn) plus OpenCV, and ECharts for the visualizations at the end of the article.

Let me know what you think!

https://medium.com/@andrea.ialenti/clustering-pollock-1ec24c9cf447
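The core step can be sketched without the full stack: k-means over an image's RGB pixels to find its dominant colors. A toy NumPy-only version (the article presumably uses scikit-learn's `KMeans` and OpenCV for image loading; the initialization below is deliberately naive):

```python
import numpy as np

def dominant_colors(pixels, k=3, iters=20):
    """pixels: (N, 3) array of RGB values -> (k, 3) cluster centroids."""
    # Naive init: k pixels spread evenly through the array.
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centroids = pixels[idx].astype(float)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid ...
        dists = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # ... then move each centroid to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = pixels[labels == j].mean(axis=0)
    return centroids

# Toy "painting": 100 black pixels and 100 white ones.
img = np.vstack([np.zeros((100, 3)), np.full((100, 3), 255.0)])
palette = np.sort(dominant_colors(img, k=2), axis=0)
```

Tracking a painter's palette over the years is then just running this per painting and plotting the resulting centroids against each painting's date.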

/r/Python
https://redd.it/aqvtkp
Django high level architecture diagram

Hi,

Was just wondering if anyone has done a graphical architectural view of a typical Django project. I think this would be enormously valuable, particularly for beginners, for grasping how a project should be put together. I'm happy to try to start one, but my knowledge is quite lacking, so I'd only be able to go as far as some of the tutorials out there and would need help with it.

/r/django
https://redd.it/aqxf4i
Storing a class containing datasets

So I'm stuck with a particularly difficult problem. Before I explain, here's an outline of the application I'm building:

1. User clicks on a map
2. Point gets sent to server
3. Fancy math functions take that point and interpolate data as an array of points
4. Send array back to front end and display as a line

I have a 2D dictionary with all the datasets in it. Here's the problem: It takes ~7 seconds to create the datasets when I want to use them for my other functions. These datasets reside in an instance of scipy.interpolate.RectBivariateSpline.

With Django, I need to find a way to only do this once so that I'm not computing the datasets every time. I need to keep this object "alive" so that I can use it repeatedly and not have horrific wait times for users.

For example, this RectBivariateSpline class/2D dictionary gets used in a view that processes JSON input:

```python
myDatasetDict = createInitialDatasets()
xyLine = LineClass(point_stuff, myDatasetDict, distance)
linePoints = TranslateToLatLong(xyLine)
return JsonResponse(linePoints)
```


Does anyone have any suggestions on how I can store/maintain this object instance on my server? I'm just trying to avoid the long compute times.
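One common trick, sketched here with a stand-in for the real loader (and assuming the usual deployment of long-lived worker processes): a module-level cache, so each process builds the dict once and every later request reuses it.

```python
from functools import lru_cache

BUILDS = {"count": 0}  # only here to demonstrate the loader runs once

@lru_cache(maxsize=1)
def get_datasets():
    # Stand-in for createInitialDatasets(); imagine ~7 s of
    # RectBivariateSpline construction here.
    BUILDS["count"] += 1
    return {"splines": "expensive-objects"}

first = get_datasets()   # pays the full cost
second = get_datasets()  # returns the same cached dict instantly
```

Each worker process still pays the cost once (at first request, or eagerly in an `AppConfig.ready()` hook); if you run many workers, pickling the precomputed splines to disk and loading them at startup is another option.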

Thanks




/r/djangolearning
https://redd.it/ar04go
Making my first program ever! Recently finished tutorials on python and spent forever attempting to brainstorm ideas about where to go next. I decided to do something useful and automate some of the things I do at work every day! It’s incomplete but I am wayyyy too excited to keep it to myself 😅.

/r/Python
https://redd.it/ar0fcf
What do I need to look for in a hosting service if I want to deploy Django?

I recently went through the django girls tutorial and in it they go over how to deploy the example site on PythonAnywhere. This site, of course, is designed specifically for this purpose and has nice "hooks" into github.


However, how would I go about deploying a real site to my own hosting service? What sorts of add-ons do I need to look for to make sure my host would even support the Django framework, and how would I tie this into GitHub?


tl;dr how do I translate the nicely setup django girls deployment strategy to a more real life scenario?

/r/djangolearning
https://redd.it/ar2jyn
Here is my PyTorch implementation of the model described in the paper "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs" (https://arxiv.org/pdf/1606.00915.pdf). Source code: https://github.com/vietnguyen91/Deeplab-pytorch

/r/IPython
https://redd.it/ar39hh
[Discussion] OpenAI should now change their name to ClosedAI

It's the only way to complete the hype wave.

/r/MachineLearning
https://redd.it/aqwcyx
Boilerplate code for setting up Nginx + Gunicorn + Flask + Letsencrypt SSL certs using docker-compose

Thought it might be of interest.

I wrote a boilerplate code template to easily stand up a web app on the Flask framework.

Includes Nginx / Gunicorn / Flask and Let's Encrypt SSL certificate configs.

Tested on Ubuntu 16.04 and 18.04. Steps detailed in README.

[https://github.com/smallwat3r/docker-nginx-gunicorn-flask-letsencrypt](https://github.com/smallwat3r/docker-nginx-gunicorn-flask-letsencrypt)

/r/flask
https://redd.it/ar2v2h
Model and zero or many ForeignKey Models - CRUD

I've got basic CRUD working with the GCBVs for a single model.

Now I want to add a ForeignKey model (0 or many "children").

I can see how to do this "manually" by extending TemplateView and overriding `get_queryset` and `get_context_data`.

I'm wondering if there is a way to do this more "auto-magically" with some existing GCBVs? My google searches for things like "django display model and foreignkey model on same view" don't turn up useful results. Is there a way of combining the two models/modelForms?
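For the display side at least, a plain `DetailView` can carry the children without dropping down to `TemplateView`. A sketch with made-up model names (a `Parent` model with a `Child` FK pointing at it):

```python
from django.views.generic import DetailView
from .models import Parent  # hypothetical: Child has a FK to Parent

class ParentDetailView(DetailView):
    model = Parent

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        # child_set is Django's default reverse accessor for the FK.
        context["children"] = self.object.child_set.all()
        return context
```

For *editing* parent plus children on one page, `django.forms.models.inlineformset_factory(Parent, Child, fields=...)` is the usual building block, though wiring it into the generic edit views still takes some manual `form_valid` work.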


Thanks

/r/django
https://redd.it/ar63xh
[D] How do we avoid SOTA-hacking?

The OpenAI blog post reports that they achieve "state-of-the-art performance on many language modeling benchmarks", but they compare a model pretrained on additional data to models trained only on the provided dataset. Reporting this as beating the SOTA on these datasets doesn't make sense. As mentioned on Twitter, the results they state as the existing SOTA can be beaten trivially:
[https://twitter.com/BFelbo/status/1096310277312634882](https://twitter.com/BFelbo/status/1096310277312634882)
[https://twitter.com/seb_ruder/status/1096335334969933829](https://twitter.com/seb_ruder/status/1096335334969933829)

Claiming SOTA results in this way makes it harder to publish papers focused on sample-efficiency and to compare results across papers. The OpenAI blog post / paper is just the latest of many papers claiming SOTA results with a comparison that doesn't make sense. How do we as a community avoid this kind of SOTA-hacking?

/r/MachineLearning
https://redd.it/ar3tc2
Flask/uwsgi app and Docker

Recently decided to get into Docker and see what it was all about. For anyone who has done this, do you recommend separate containers for the nginx proxy and the uwsgi/Flask app, or not?

Or if anyone has an example of their setup, that'd be great. Just trying to see how I'd structure an API with db, flask/uwsgi/nginx, and nginx on the frontend for whatever js I'm running.
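FWIW, the separate-containers layout is the common one; a hedged compose sketch (all names and paths below are made up):

```yaml
version: "3"
services:
  app:
    build: ./app                 # image with Flask + uwsgi installed
    command: uwsgi --socket 0.0.0.0:5000 --module myapp:app
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # nginx.conf proxies to the app with `uwsgi_pass app:5000;`
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
```

Keeping nginx in its own container lets it serve the frontend JS directly and reach the app container over the compose network by service name.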

/r/flask
https://redd.it/ar4dsi