Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
class Employee(models.Model):
    emp_id = models.IntegerField(db_column='EMP_ID', primary_key=True)
    first_name = models.CharField(db_column='FIRST_NAME', max_length=100)
    last_name = models.CharField(db_column='LAST_NAME', max_length=100)
    mail = models.CharField(db_column='MAIL', max_length=100)

    class Meta:
        managed = False
        db_table = 'EMPLOYEE'


[1]: https://docs.djangoproject.com/en/1.11/topics/db/multi-db/#no-cross-database-relations
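As a rough illustration of the multi-database API the linked section belongs to (not the poster's own code), an unmanaged model like the one above would be queried against a specific database alias with .using(); the alias 'legacy' here is hypothetical:

# 'legacy' is a hypothetical alias that would have to exist in settings.DATABASES
employees = Employee.objects.using('legacy').all()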


/r/django
https://redd.it/6r2kwt
Just wanna say thanks to Flask, SQLAlchemy, Postgres and Redis. I managed to create a server app that handles 7 million requests per day (probably more).

Proof:
http://imgur.com/a/n3mfG

I think it's one of my greatest achievements ever, since we were only a 2-man team on the server dev side.



/r/flask
https://redd.it/6r2gir
A Julia set fractal with Python
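As a generic sketch of the escape-time rendering usually behind such images (the constant c, resolution, and colormap below are arbitrary choices, not taken from the linked post):

# Generic Julia set sketch with numpy + matplotlib; all parameters are illustrative.
import numpy as np
import matplotlib.pyplot as plt

width, height, max_iter = 800, 800, 200
c = complex(-0.7, 0.27015)               # constant that selects this particular Julia set

x = np.linspace(-1.5, 1.5, width)
y = np.linspace(-1.5, 1.5, height)
z = x[np.newaxis, :] + 1j * y[:, np.newaxis]

escape = np.zeros(z.shape, dtype=int)
mask = np.ones(z.shape, dtype=bool)      # points that have not escaped yet
for i in range(max_iter):
    z[mask] = z[mask] ** 2 + c           # iterate z -> z^2 + c on surviving points
    mask &= np.abs(z) < 2                # drop points that left the radius-2 disk
    escape[mask] = i                     # record how long each point survived

plt.imshow(escape, cmap='viridis', extent=(-1.5, 1.5, -1.5, 1.5))
plt.axis('off')
plt.show()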

/r/Python
https://redd.it/6r4rb4
Clustering multiple GPUs from different machines and remotely run Jupyter Notebook

Hello everyone, I am a newcomer to this community.
I only have a 2 GB GPU, but my friend has a 4 GB one, so I generally train my model on his machine. I normally use Jupyter Notebook and I code in Python. Recently I came across "Running a notebook server" and set it up. Now I can remotely run a Jupyter notebook from my machine (client) while the resources are used on my friend's machine (server).

4 GB of GPU is also not sufficient for me. I am curious whether I could remotely use GPUs from many of my friends' machines, cluster them, and then remotely run the Jupyter notebook. It's similar to the server-client model that we previously created, but I wish to extend it to multiple "shared servers" so that I can use all of their GPUs in a collaborative, distributed fashion. It is a kind of 'many-to-one' server (many) and client (one) model.

Can anybody tell me how I can achieve that with the Jupyter Notebook server?
Or is there any option to remotely use GPUs from different machines and run my Python code remotely?

Thanks

LINK - jupyter-notebook.readthedocs.io/en/latest/public_server.html
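For reference, the remote setup described in that link boils down to a few lines of jupyter_notebook_config.py on the server machine; a minimal sketch (the port and password handling are illustrative):

# ~/.jupyter/jupyter_notebook_config.py -- minimal remote-access sketch,
# roughly what the linked "Running a notebook server" page describes.
c = get_config()
c.NotebookApp.ip = '0.0.0.0'          # listen on all interfaces, not just localhost
c.NotebookApp.port = 8888             # illustrative port
c.NotebookApp.open_browser = False
c.NotebookApp.password = 'sha1:...'   # hash generated with `jupyter notebook password`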

/r/JupyterNotebooks
https://redd.it/6r76md
My first program for today, feeling creative.

>>> true=False
>>> true!=False
False
>>> True!=False
True

/r/Python
https://redd.it/6raocg
I have set up my React frontend. Need a backend to manage user accounts and store data - is Django a good progression?

As mentioned in the title, I have set up the foundation of my webapp using React/Redux for my front end.

I would like to take it a step further by adding a backend so that users can create accounts, log in, and save data.

I have been looking into Firebase, as it's often recommended, but it feels like I won't learn as much about backends / databases since Firebase does a lot of it behind the scenes.

I am considering Django and was wondering whether this would be a good progression for me. Where do Django and Firebase fit into the picture?

I have zero experience with anything backend related but want to learn more.

/r/django
https://redd.it/6r9t4a
Are Django session variables secure?

I am creating an app for use in our organization that will log in users based on their Office 365 credentials using OAuth 2.0. I am looking at fetching an access token that I will store in a session variable. Right now, though, I am just saving 'oauth_state' into a session variable and setting that variable to 'authorized'. On load of each (non-login) view, I check whether 'oauth_state' is set to 'authorized'. I will probably change this to store a token in the variable and use it to query the MS Graph API for a 200 response on each page load. Anyway, here is an example of what I am doing:

from django.http import HttpResponseRedirect
from django.shortcuts import redirect
from django.views.decorators.cache import never_cache
from requests_oauthlib import OAuth2Session
import requests

@never_cache
def authorization(request):
    microsoft = OAuth2Session(client_id, scope=scope, redirect_uri=redirect_uri)
    token = ""
    try:
        # MS Graph query URL -- this query is purely used to authenticate the user!
        users = 'https://graph.microsoft.com/v1.0/me'
        # 'code' is the authorization code present in the request URL
        token = microsoft.fetch_token(token_url, client_secret=client_secret,
                                      code=request.GET.get('code', ''))
        header = {'Authorization': 'Bearer ' + token['access_token']}
        response = requests.get(url=users, headers=header)

        # If the status code is not 200, authentication failed. Redirect to login.
        if int(response.status_code) != 200:
            print('Not validated. Return to login.')
            request.session.flush()
            return redirect('http://localhost:8000/login')
    except Exception as e:
        print('User does not have authentication rights')
        request.session.flush()
        return redirect('http://localhost:8000/login')

    request.session['oauth_state'] = 'authorized'
    response = HttpResponseRedirect('http://localhost:8000/search')
    return response

I am then using this to check if 'oauth_state' is set to 'authorized'. However, I may change this so that the token is used to query the MS Graph API in each function in order to check if the user has proper permissions or not. Here's an example of what I am doing:

def search(request):
    try:
        if str(request.session['oauth_state']) != 'authorized':
            print('Not authorized')
            request.session.flush()
            return redirect('http://localhost:8000/login')
    except Exception as e:
        print('Not authorized')
        request.session.flush()
        return redirect('http://localhost:8000/login')
    <rest of code>

How insecure is this? Should I be passing the token in the response header instead? Or should I get rid of this approach and use Django's standard auth and login system? I really appreciate the benefits of OAuth 2.0, but if this method compromises our security, I might scrap it.

/r/django
https://redd.it/6r8hg8
Cannot put templatetags at the root of the project

I'm trying to put my *templatetags* folder, which contains a *menu* common to all the applications, at the root of the project.

But I'm still getting a: **TemplateSyntaxError**: 'menu' is not a registered tag library

Here is the relevant part of my *settings.py*:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # Look for base templates at the root of the project
        'DIRS': [os.path.join(BASE_DIR, 'templates'), ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
            # Look for base templatetags at the root of the project
            'libraries': {
                'project_tags': 'templatetags.menu',
            }
        },
    },
]

Is something wrong with my configuration?
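For reference, the module that the 'libraries' entry points at must expose a template.Library instance named register; a minimal sketch of templatetags/menu.py (the tag body here is hypothetical):

# templatetags/menu.py -- minimal sketch; the real menu rendering is hypothetical.
from django import template

register = template.Library()

@register.simple_tag
def menu():
    return 'menu goes here'

With the settings above, templates load the library under the key it is registered as, i.e. {% load project_tags %}.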


/r/django
https://redd.it/6rf6nf
PSA: Don't use the unofficial Windows binaries in an application handling sensitive data.

I've noticed, more and more often, that people are using the [~gohlke binaries](http://www.lfd.uci.edu/~gohlke/pythonlibs/) in serious production applications. I've seen them mentioned or even recommended by a lot of people who ought to know better.

That's a terrible idea for a lot of reasons -- they're built by a third party and are explicitly marked for testing purposes only -- but there's a huge problem that a lot of people ignore, and that I can't seem to find any conversations about: they're served over HTTP, and only HTTP. HTTPS connections get redirected to HTTP.

This means that a man-in-the-middle attack could trivially redirect them to a booby-trapped version of the binary. Whenever you import the binary, it might open a TTY on your machine and let an attacker connect to it at will. Or it could traverse your hard drive and `open(file, 'wb')` every file to wipe them. Those are just two really uncreative examples -- you can imagine what arbitrary code execution on your production systems could do.

If your application handles anyone's personal/sensitive data, or if it's in production anywhere, please take the extra time to compile your binaries from scratch, even though it can be a huge pain. MITM attacks are very easy for unskilled attackers to pull off, and the results could be disastrous.

/r/Python
https://redd.it/6rdd21
Python 101 on the Web

Last year I gave my book, **Python 101**, away for free on Reddit and then made it free on [Leanpub](https://leanpub.com/python_101/) permanently. I've been meaning to make Python 101 available as a website too but only recently have I gotten that accomplished. You can check it out here: http://python101.pythonlibrary.org/

For those of you who like the nitty gritty details, I ended up using Sphinx because I write all my books in ReStructuredText. Sphinx uses the Alabaster theme by default, which I didn't care for. I wanted a theme that has previous/next buttons on by default and that was mobile friendly. The only theme I found for that was the [Read the Docs](http://docs.readthedocs.io/en/latest/theme.html) theme. I will probably end up tweaking it a bit to make it look a little more unique. If you have any tips for that, feel free to let me know.
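For anyone wanting to try the same theme switch, it amounts to a couple of lines in Sphinx's conf.py, assuming the sphinx_rtd_theme package is installed (an illustrative snippet, not the book's actual configuration):

# conf.py
import sphinx_rtd_theme

html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]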

/r/Python
https://redd.it/6rcbxw
How can I save and load the state of the kernel?

In the course of normal use of a notebook, the output of each cell is saved when the notebook is saved. However, if I close the notebook and open it again, the variables in the namespace all go away (presumably because the kernel was restarted). Is there a way to save the variables in a kernel's namespace and load them again, so I don't have to go through and run every cell?
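This isn't built into Jupyter itself, but one commonly suggested workaround is serializing the whole interpreter session with the third-party dill package; a rough sketch:

# Requires `pip install dill`; illustrative sketch, not a built-in notebook feature.
import dill

# In the running notebook, before the kernel goes away:
dill.dump_session('notebook_state.db')

# Later, in a fresh kernel:
dill.load_session('notebook_state.db')

For individual variables, IPython's %store magic is another option.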

/r/IPython
https://redd.it/6reiqp
urllib.request denied with HTTP 429 while using headers and a proxy (no spam)

Hi,
I am trying to get information from a webpage while learning Python 3.6 (my second comprehension test after working through a few online tutorials). The website is for some reason still able to identify me as a bot and gives me HTTP Error 429, even after I added headers and a proxy to my request, which I only did because of the 429. I am able to get around most pages' bot detection, but for some reason not this one.
I am only able to fetch the main domain. If I use the link that appears after executing a search on the page, I get this error. However, my main purpose is to search for something and retrieve that information.

There are several Google reCAPTCHAs on the page; even when using it as a human it often asks, "Oops, something seems wrong, are you human?" But as a bot I am not able to load the search result page even once.

I would like to ask if I am missing something, or whether I should try something else, since I started learning Python only a few weeks ago and may not have your insight.

import bs4 as bs
import urllib.request
import ssl
import re
import gzip

#########################################################
## Training parsing with Beautiful Soup
#########################################################

ssl._create_default_https_context = ssl._create_unverified_context

hdr = [('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
       ('Accept-Encoding', 'gzip, deflate, br'),
       ('Accept-Language', 'en-US,en;q=0.5'),
       ('Connection', 'close'),
       ('Cookie', '_gauges_unique_hour=1; _gauges_unique_day=1; _gauges_unique_month=1; _gauges_unique_year=1; _gauges_unique=1'),
       ('Dnt', '1'),
       ('Upgrade-Insecure-Requests', '1'),
       ('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0')]

site = 'https://www.examplepage.de'
site2 = 'https://www.wikipedia.de/'

proxy = {'http': 'http://62.210.71.225:8080'}


def openPage(site, hdr, proxy):
    ## IP check
    ## print('Actual IP', urllib.request.urlopen('http://httpbin.org/ip').read())
    ## print('Actual headers', urllib.request.urlopen('http://httpbin.org/headers').read())

    ## Create opener
    proxy_support = urllib.request.ProxyHandler(proxy)
    opener = urllib.request.build_opener(proxy_support)  ## proxy_support
    urllib.request.install_opener(opener)
    opener.addheaders = hdr
    ## opener.add_cookie_header

    ## IP check
    ## print('Fake IP', opener.open('http://httpbin.org/ip').read())
    ## print('Fake headers', opener.open('http://httpbin.org/headers').read())

    rep = gzip.decompress(opener.open(site).read()).decode('utf-8')
    ## charset = rep.info().get_content_charset()
    ## respdec = rep.read().decode(charset)
    ## respe = urllib.parse.urlencode(resp)
    soup = bs.BeautifulSoup(rep, 'lxml')
    print('Saved page')
    return soup


soup = openPage(site, hdr, proxy)

saveFile = open('withHEaders.txt', 'w', encoding='utf-8')
saveFile.write(str(soup))
saveFile.close()

Error:

Traceback (most recent call last):
  File "C:\Projects\Python\tutorials\WebScraper\webpage_scraper.py", line 99, in <module>
    mainNav()
  File "C:\Projects\Python\tutorials\WebScraper\webpage_scraper.py", line 66, in mainNav
    soup = openPage(site,hdr,proxy)
  File "C:\Projects\Python\tutorials\WebScraper\webpage_scraper.py", line 53, in openPage
    rep = gzip.decompress(opener.open(site).read()).decode('utf-8')
  File "C:\Program Files\Python36\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 570, in error
    return self._call_chain(*args)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Program Files\Python36\lib\urllib\request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 429:


/r/Python
https://redd.it/6rj32w
How to post data with graphene_django?

I'm completely new to GraphQL and graphene, and I just finished the [graphene_django tutorial](http://docs.graphene-python.org/projects/django/en/latest/)

I understand how to get data from the server, which is pretty easy, but I don't know how to do a create or an update.

Do I need to use Django REST Framework for POSTs, or is it possible to use just graphene to get and put data?
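For what it's worth, graphene handles writes through mutations rather than separate REST endpoints; a rough sketch (CreateBook and its fields are made up, not from the tutorial):

# Illustrative sketch only; CreateBook/Book are hypothetical, not from the tutorial.
import graphene

class CreateBook(graphene.Mutation):
    class Arguments:
        title = graphene.String(required=True)

    ok = graphene.Boolean()
    title = graphene.String()

    def mutate(self, info, title):
        # Normally you would save a Django model here, e.g. Book.objects.create(title=title)
        return CreateBook(ok=True, title=title)

class Query(graphene.ObjectType):
    ping = graphene.String()

    def resolve_ping(self, info):
        return 'pong'

class Mutation(graphene.ObjectType):
    create_book = CreateBook.Field()

schema = graphene.Schema(query=Query, mutation=Mutation)

The mutation document is then sent as a POST to the same GraphQL endpoint, so DRF isn't strictly needed just for writes.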

/r/django
https://redd.it/6rk2w4