Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
Import and Run an Application Package?

I am extremely new to Flask and would be very grateful for any help and relevant docs. I was not able to track down any documentation about this exact case with a Google search.

I am familiar with importing a package to use its functions and classes, but what about the case where we want to plug a package in as the application that Flask will run?

Example: say I have a simple web app created with Flask, which was then configured and published as a package on PyPI. If someone else wants to import my package and run the app, how would they go about doing so? Are there any examples of this on GitHub?

Looking mostly for docs and resources; no expectation of anyone sitting here typing out an entire tutorial. :)
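A common pattern for this is the application factory: the package exposes a `create_app()` function, and whoever installs the package imports it and runs the returned app. A minimal sketch — the package and function names here are hypothetical, not from any real PyPI package, and the factory is defined inline so the sketch is self-contained:

```python
# In the real case a consumer would do:  from mywebapp import create_app
# Here we define a stand-in factory so the sketch runs on its own.
from flask import Flask

def create_app():
    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from the packaged app!"

    return app

app = create_app()

if __name__ == "__main__":
    # For production, point a WSGI server at the factory instead, e.g.:
    #   gunicorn "mywebapp:create_app()"
    app.run(host="127.0.0.1", port=5000)
```

Packages that ship runnable Flask apps often also register a console-script entry point in their packaging metadata, so `pip install` gives users a command that calls `create_app().run()` directly.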

/r/flask
https://redd.it/8necpm
How to write this queryset has me stumped!

Hi

Would really appreciate some help with writing this queryset.

https://stackoverflow.com/questions/50740061/how-do-i-write-a-django-query-to-join-on-a-field-and-a-condition

Thanks!

/r/djangolearning
https://redd.it/8padu3
To Python Developers: If someone could make any one thing that would make coding in python easier for developers, what would it be?



/r/Python
https://redd.it/8ppvaf
Recommended Flask video tutorials?

Are there any good, up-to-date Flask video tutorials? I can find many books and blog posts, but I learn better from videos. Any recommendations?

/r/flask
https://redd.it/8pra0x
Recommendations for Plotting Large Datasets

I'm working on a project where a user uploads a CSV or Excel file and, on the backend, I produce scatter plots, heatmaps, etc. for the user to interact with. However, the datasets could be very large (say, millions of rows) with numerous columns. What would be the best way to approach this? Would a front-end library like Highcharts or D3 cut it? (I don't want performance bottlenecks on the client's end for larger datasets.) I was thinking about rendering with matplotlib, but wouldn't that block other users from accessing the website while it generates the images?
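One common middle ground, regardless of charting library, is to aggregate on the server so the client only ever receives a few hundred points. A minimal sketch with pandas (column names and bin count are illustrative):

```python
# Server-side downsampling: bin the x-axis and aggregate y per bin,
# so a million-row frame becomes a few hundred points for the browser.
import numpy as np
import pandas as pd

def downsample(df, x, y, bins=500):
    """Mean of y over equal-width bins of x."""
    groups = pd.cut(df[x], bins=bins)
    out = df.groupby(groups, observed=True)[y].mean().reset_index()
    out[x] = out[x].apply(lambda iv: iv.mid)  # replace intervals with midpoints
    return out

rng = np.random.default_rng(0)
big = pd.DataFrame({"t": rng.uniform(0, 100, 1_000_000),
                    "v": rng.normal(size=1_000_000)})

small = downsample(big, "t", "v", bins=200)
print(len(small))  # at most 200 rows to ship to the front end
```

For genuinely interactive exploration of millions of points, libraries built for server-side rasterization (datashader is one example) take this idea further; either way, the heavy lifting should happen once per request on the backend, not in the browser.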

/r/flask
https://redd.it/8pq3tt
Am I the only one struggling with registration, or is registration just hard to do?

I am trying to build a way for new users to register on my website, and I am having a lot of trouble making registration and login work. I was wondering if it is just me or if this is a hard topic for most developers. I was also thinking of using a package for it, like django-registration-redux, but I have never used any of those packages and am not sure if I should. I'd rather build it myself and not use a package, but so far I have not had any luck.

After creating regular registration, I was also going to add registration with Facebook and Google, but I'm not sure whether that is going to be a pain. I looked at the code for Facebook and Google and it seemed easy to include; most of it was just JavaScript.

So is it just me struggling with registration, or is it typically a struggle? And how hard would it be to get Facebook and Google registration working too? Thank you!

/r/django
https://redd.it/8pomxs
Creating User Registration help

I am struggling with just making a simple registration for new users. I have been going through Google and many other websites for hours and have not found anything. It initially worked, but then my login wouldn't work; I realized the problem came from the registration, so I changed it, and now the registration doesn't work either. I was wondering if anyone can tell what is wrong with my code and how to fix it. All the code I have seen looks very similar to mine, and the server doesn't show any errors when it starts. Does anyone see what is wrong with this?

from django.shortcuts import render, redirect
from Users.forms import RegistrationForm
from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User
from django.http import HttpResponse

def register(request):
    if request.method == 'POST':
        form = RegistrationForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('/map_page/')
    else:
        form = RegistrationForm()
    args = {'form': form}
    return render(request, 'Users/index.html', args)

/r/django
https://redd.it/8ps7zu
[D] Question about Frequentist and Bayesian interpretation of Probability and MLE and MAP.

So recently I have been reading about the difference between the frequentist and Bayesian interpretations of probability. I noticed that, in the frequentist setting, when we maximize the likelihood function we take the argmax over parameters of the probability of the data given those parameters, and use that as our estimate of the distribution's actual parameters. However, the parameters found may not be the actual parameters.

For example, a certain set of parameters may maximize the likelihood on a given sample, but the sample may be too small, so the estimate may deviate considerably from the actual parameters.

Or a random set of parameters may maximize the likelihood because it fits the noise really well, while the region around it has very low likelihood; there may be another region of parameter space with much higher overall likelihood, and the actual parameters may lie in that region.

Are there any confidence estimates for the parameters (maybe using bootstrapping or something similar)? For the example above, if we compared the mean likelihood over each region, the actual parameters might be more likely to reside in the second region than in the first. Is there analysis that produces confidence estimates for specific parameters while taking all of this into account? I know that for linear regression the coefficients have associated p-values, which suggest which coefficients are more or less likely to be relevant to the regression, and I believe there are confidence intervals for the coefficients as well. Does this carry over to other ML models (both convex and non-convex)? For linear regression, I believe this gives the parameters a Bayesian interpretation, since we then assume the coefficients are normally distributed, though I'm not really sure.

Similarly, under the Bayesian interpretation, we maximize the posterior probability of the parameters given the evidence. Again, we may find the single most likely set of parameters while the actual parameters, or a more probable region of them, lie elsewhere. Are there confidence estimates here? In this case it seems we could measure the probability of different regions much more easily, since the parameters are treated as random variables (also, because the parameters are continuous, the probability of any specific set of parameters under the posterior should be zero, right?), and thus we could form estimates with an attached degree of confidence and proceed much more smoothly. I don't know whether analysis along these lines is used in practice.
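On the bootstrap idea: resampling the data with replacement and re-fitting the MLE each time gives exactly such a confidence estimate, for essentially any estimator. A minimal sketch for the Gaussian location parameter, whose MLE is the sample mean:

```python
# Bootstrap percentile confidence interval for an MLE.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # true mu = 5.0

mle = data.mean()  # MLE of mu for a Gaussian

# Resample with replacement and re-estimate many times.
boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile interval
print(f"MLE = {mle:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The same loop works for any model you can re-fit on a resampled dataset, convex or not, which is why the bootstrap is a standard frequentist answer to "how uncertain is this point estimate?"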

Thanks for your help.

/r/MachineLearning
https://redd.it/8pr064
TIL that seaborn refuses to use the jet color scheme in a most wonderful fashion

/r/Python
https://redd.it/8psk37
How to create SCROLLBARS in Python using tkinter!!
https://youtu.be/Y0pLRbX4FaE

/r/Python
https://redd.it/8pv2qv
Help with running a Flask Restful API with Gunicorn and Docker

Hi all,

Over the past few weeks, I've been learning Python. As part of a side project, I've created a Flask Restful API that uses Numpy and Pandas for some analytics.

The past couple of days, I've been attempting to Dockerise the project with little success and I'm hoping someone can point me in the right direction.

From my reading, Flask's built-in server is for development only, and it seemed like a good idea to run Python/Flask on a Linux image with Gunicorn as the HTTP server.

My project layout currently looks like this:

Project/
├── app/
│   ├── package_one/
│   ├── __init__.py
│   ├── application.py
├── env/
├── test/
├── Dockerfile
├── manage.py

manage.py looks like this:

from flask import Flask
from app.application import create_app

manager = create_app()

if __name__ == '__main__':
    manager.run(debug=True, host='0.0.0.0', port=5000)

application.py looks like this:

from flask import Flask
from flask_restful import Api

def create_app(app_name='API'):
    app = Flask(app_name)
    from .package_one import package_one as package_one_blueprint
    app.register_blueprint(package_one_blueprint, url_prefix='/api')
    return app

Dockerfile currently looks like this:

FROM python:3.6

WORKDIR /usr/src/app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000
ENTRYPOINT ["gunicorn"]
CMD ["-w", "4", "-b", "0.0.0.0:5000","manage:app"]

I have also tried an ubuntu image as well:

FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python-pip python-distribute build-essential
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
ENTRYPOINT ["gunicorn"]
CMD ["-w", "4", "-b", "0.0.0.0:5000","manage:app"]

I'm using a Windows machine with Docker set to use Windows Containers and as a result the Docker commands I'm running are:

docker build -t flask-api:latest . --platform linux

docker build -t flask-api:latest .

I did originally start out with an Alpine image, but when building it I hit some roadblocks with Numpy and Pandas that I couldn't resolve.

The error the python:3.6 docker file is producing is the following:

Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 11, in <module>
sys.exit(run())
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 61, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 223, in run
super(Application, self).run()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 232, in run
self.halt(reason=inst.reason, exit_status=inst.exit_status)
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 345, in halt
self.stop()
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 393, in stop
time.sleep(0.1)
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 245, in handle_chld
self.reap_workers()
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 528, in reap_workers
raise HaltServer(reason, self.APP_LOAD_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'App failed to load.' 4>

I've tried various solutions and images, but I feel something must be fundamentally wrong with my project; I'm just not quite sure what.

Any advice or help to point me in the right direction would be greatly appreciated!
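One guess from the code shown: the Gunicorn target `manage:app` tells Gunicorn to look for a module-level variable named `app` in `manage.py`, but the code binds the application to `manager` — a mismatch that would produce exactly an "App failed to load" error. A sketch of a `manage.py` that matches the Docker `CMD`, with `create_app` stubbed inline so it is self-contained (in the real project it comes from `app.application`):

```python
# Sketch: the module-level name must match the Gunicorn target "manage:app".
from flask import Flask

def create_app(app_name='API'):
    app = Flask(app_name)
    return app

app = create_app()  # Gunicorn resolves "manage:app" to this variable

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)
```

Alternatively, keep the variable name and change the Dockerfile to `CMD ["-w", "4", "-b", "0.0.0.0:5000", "manage:manager"]` — either way the name after the colon must exist in the module.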

/r/flask
https://redd.it/8pvayo
How to serve user uploaded imgs with nginx?

I'm running my website with nginx and Gunicorn on Ubuntu 16.04.
I'm at the point where the user is already able to upload an image to the correct location (/home/UserImgs/); however, even though I've tried many different approaches, I can't figure out how to serve this file to a template.
If you look at the HTML when the page loads, the src attribute even has the 100% correct path, and I don't get an error or anything, so I guess it's an issue with my nginx config:

server {
    listen 80;
    server_name server_domain_or_IP 'not gonna show this';

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/hint.sock;
    }

    location /media {
        autoindex on;
        alias /home/UserImgs/;
    }
}
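A guess from the config shown: the `location /media` prefix has no trailing slash while the `alias` value does, so nginx builds paths like `/home/UserImgs//pic.jpg`; keeping the slashes consistent on both sides is the usual fix. A sketch, with paths as in the post and the site's media URL assumed to be `/media/`:

```nginx
location /media/ {
    alias /home/UserImgs/;
}
```

Two other things worth checking: the nginx worker user (often www-data) must have permission to traverse and read /home/UserImgs/ — home directories frequently block this — and Django's MEDIA_URL must match the location prefix exactly.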



/r/django
https://redd.it/8pvdrw
Matplotlib: Hangs on plt in Linux

I'm stuck on an issue where, when I debug code using `import matplotlib.pyplot as plt`, the process just hangs when it reaches the `plt` lines and nothing happens. I end up terminating the instance.

I'm trying to test plot this simple code:

import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
plt.show()

This works fine on the Mac and the plot window shows, but not under Linux. I ensured packages are set up and installed correctly. Scratching my head....
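On a headless Linux box, or over SSH without X forwarding, `plt.show()` can block indefinitely because there is no display for the GUI backend to open a window on. A common workaround is to check which backend is active and render to a file with the non-interactive Agg backend instead — a minimal sketch, assuming matplotlib is installed:

```python
# Force the non-interactive Agg backend so plotting works without a display.
import matplotlib
matplotlib.use("Agg")            # must run before importing pyplot

import matplotlib.pyplot as plt

print(matplotlib.get_backend())  # confirm which backend is active

plt.plot([1, 2, 3, 4])
plt.savefig("plot.png")          # write to a file instead of plt.show()
```

If an interactive window is actually needed on Linux, the fix is usually installing a GUI toolkit backend (e.g. a Tk or Qt binding) and making sure DISPLAY is set.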

/r/Python
https://redd.it/8pvwma
Extracting JSON data from source code of website

How do I use Python modules like bs4 to extract the specific lines of a website's source code that contain JSON data, and how do I make it do this continuously?
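A minimal sketch of the extraction step: many sites embed JSON in a `<script>` tag, which BeautifulSoup can locate before `json.loads` parses it. The HTML here is inline so the example is self-contained; in practice you would fetch it (e.g. with `requests.get`) inside a loop or a scheduled job:

```python
# Pull JSON out of a <script> tag with BeautifulSoup + json.
import json
from bs4 import BeautifulSoup

html = """
<html><body>
  <script id="data" type="application/json">
    {"price": 19.99, "in_stock": true}
  </script>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
payload = json.loads(soup.find("script", id="data").string)
print(payload["price"])
```

To run it "constantly", wrap the fetch-and-parse in a `while` loop with `time.sleep()` between iterations, or schedule it with cron — and be polite about request frequency.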


/r/Python
https://redd.it/8pxloc
A way to have PyCharm automatically install imported modules?

Is there a way to have PyCharm automatically install imported modules that you don't already have in your Python env? A plugin, or anything like that. I am aware that a package's name is not always the same as its imported module name, yet I think this could still be possible somehow.

Yes, today is that type of lazy day, when typing this post on reddit seems more attractive than going into the terminal and typing `pip install somepackage`.

/r/Python
https://redd.it/8pwcrd
Missing rows in Pandas

Hi all,
I used Pandas to split a dataset into data frames for various age ranges; the ages run from 0 to 95 in total.

Using df.loc I removed any rows where the age was over 95, which gave a new total of 110,456 rows. However, the age-range frames below only account for 106,917 rows between them, meaning some rows are going uncounted:

zeroTo14 = hosp_df.loc[(hosp_df['Age'] > 0) & (hosp_df['Age'] <= 14)]
fifteenTo29 = hosp_df.loc[(hosp_df['Age'] >= 15) & (hosp_df['Age'] <= 29)]
thirtyTo44 = hosp_df.loc[(hosp_df['Age'] >= 30) & (hosp_df['Age'] <= 44)]
fortyfiveTo59 = hosp_df.loc[(hosp_df['Age'] >= 45) & (hosp_df['Age'] <= 59)]
sixtyTo64 = hosp_df.loc[(hosp_df['Age'] >= 60) & (hosp_df['Age'] <= 64)]
sixtyfiveTo74 = hosp_df.loc[(hosp_df['Age'] >= 65) & (hosp_df['Age'] <= 74)]
seventyfiveTo89 = hosp_df.loc[(hosp_df['Age'] >= 75) & (hosp_df['Age'] <= 89)]
nintetyTo89 = hosp_df.loc[(hosp_df['Age'] >= 90)]

I think I may have screwed up the greater-than and less-than symbols, as I need to count every single age between 0 and 95.

I'd be very grateful for any help here; the more eyes the better. Thanks.
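For what it's worth, the first filter uses `> 0`, which excludes age 0 — that alone drops rows. An alternative that makes gaps like this impossible is `pd.cut` with explicit bin edges, so every age falls into exactly one bucket. A sketch with made-up data (the real frame presumably has more columns):

```python
# Bucket ages 0-95 into the post's ranges with pd.cut; include_lowest
# makes the first interval closed on the left so age 0 is counted.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hosp_df = pd.DataFrame({"Age": rng.integers(0, 96, size=10_000)})

bins = [0, 14, 29, 44, 59, 64, 74, 89, 95]
labels = ["0-14", "15-29", "30-44", "45-59",
          "60-64", "65-74", "75-89", "90-95"]
hosp_df["AgeRange"] = pd.cut(hosp_df["Age"], bins=bins,
                             labels=labels, include_lowest=True)

counts = hosp_df["AgeRange"].value_counts()
print(counts.sum())  # 10000: every row lands in exactly one bucket
```

Comparing `counts` per bucket against the per-frame row counts from the df.loc approach should show exactly where the 3,539 missing rows went.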

/r/pystats
https://redd.it/8pxqat