[Project] Realtime Interactive Visualization of Convolutional Neural Networks in Unity (feedback strongly welcomed)
https://vimeo.com/274236414
/r/MachineLearning
https://redd.it/8psghc
Creating User Registration help
I am struggling with making a simple registration flow for new users. I have been going through Google and many other websites for hours and have not found anything. I got it to work initially, but then my login wouldn't work; I realized the problem came from the registration code, so I changed it, and now the registration doesn't work either. Can anyone tell me what is wrong with my code and how I can fix it? All the code I've seen looks very similar to mine, and when I start the server it doesn't show any errors either. Does anyone see what is wrong here?
```python
from django.shortcuts import render, redirect
from Users.forms import RegistrationForm
from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User
from django.http import HttpResponse

def register(request):
    if request.method == 'POST':
        form = RegistrationForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('/map_page/')
    else:
        form = RegistrationForm()
    args = {'form': form}
    return render(request, 'Users/index.html', args)
```
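For reference, a commonly used shape for a registration view looks like the sketch below. This is not necessarily the poster's intended fix; it assumes `RegistrationForm` subclasses Django's `UserCreationForm` and that `Users/index.html` renders the form.

```python
# A minimal sketch of a register view, assuming RegistrationForm subclasses
# django.contrib.auth.forms.UserCreationForm and 'Users/index.html' renders {{ form }}.
from django.shortcuts import render, redirect
from django.contrib.auth import login
from Users.forms import RegistrationForm  # as in the post

def register(request):
    if request.method == 'POST':
        form = RegistrationForm(request.POST)
        if form.is_valid():
            user = form.save()          # creates the new user
            login(request, user)        # log the new user in right away
            return redirect('/map_page/')
    else:
        form = RegistrationForm()       # unbound form for GET requests
    # On an invalid POST the bound form falls through and re-renders with errors.
    return render(request, 'Users/index.html', {'form': form})
```

If the login breakage was because the new user was never authenticated after registering, logging them in right after `form.save()` may help.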
/r/django
https://redd.it/8ps7zu
[D] Question about Frequentist and Bayesian interpretation of Probability and MLE and MAP.
So recently I have been reading about the difference between frequentist and Bayesian interpretations of probability. I noticed that, generally, when we maximize the likelihood function (in the frequentist interpretation), we find the parameters that maximize the likelihood of the data given the parameters, and take those as estimates of the actual parameters of the data's distribution. However, the parameters found may not be the actual parameters.

For example, a certain set of parameters may maximize your likelihood function for a given sample, but the sample may be too small, so the estimates may deviate considerably from the actual parameters.

Or a set of parameters may maximize the likelihood because it fits the noise really well, while the region around it has very low likelihood; there may be another region of parameter space with much higher likelihood overall, and the actual parameters may lie in that other region.

Are there any confidence estimates for the parameters (maybe using bootstrapping or something similar)? I mean confidence estimates for all of the parameters found in the model. In the example above, if we considered the mean of the likelihood over each region, it might be more likely that the actual parameters reside in the second region than in the first. Is there analysis that gives confidence estimates for specific parameters and takes all of this into account? I know that for linear regression you have p-values associated with the coefficients, which suggest whether certain coefficients are likely to be relevant to the regression, and I believe there are confidence intervals for the coefficients as well. Is this applicable to other ML models (both convex and non-convex)? For linear regression, I believe this gives the parameters something of a Bayesian interpretation, because you then assume the coefficients are normally distributed, though I'm not really sure.

Similarly, for the Bayesian interpretation, we maximize the posterior probability of the parameters given the evidence. Again, we may find the single most probable set of parameters while the actual parameters are a different set, or another region is more likely to contain them. Are there confidence estimates for the parameters here? In this case it seems we could measure the probability of different regions much more easily, since we treat the parameters as random variables (also, because the parameters are continuous, the probability of any specific set of parameters under the posterior should be zero, right?), so we could form estimates with a stated degree of confidence and potentially proceed much more smoothly. I don't know whether analysis along these lines is used in practice.
Thanks for your help.
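On the bootstrapping idea mentioned above: a minimal sketch of percentile bootstrap confidence intervals for MLE parameters, using a toy normal model (assumptions: the data are i.i.d. and `scipy.stats.norm.fit` is the MLE we care about), might look like this:

```python
# A minimal bootstrap sketch: resample the data, re-fit the MLE each time, and
# read off percentile intervals for the fitted parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=200)      # toy sample

boot_params = []
for _ in range(1000):
    resample = rng.choice(data, size=len(data), replace=True)
    boot_params.append(stats.norm.fit(resample))     # (mu_hat, sigma_hat) per resample
boot_params = np.array(boot_params)

# 95% percentile confidence intervals for each parameter
lo, hi = np.percentile(boot_params, [2.5, 97.5], axis=0)
print("mu 95% CI:   ", (lo[0], hi[0]))
print("sigma 95% CI:", (lo[1], hi[1]))
```

The same recipe (refit on resamples, take percentiles) applies to other estimators, though for expensive or non-convex models each refit may land in a different local optimum, which is exactly the multi-region concern raised above.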
/r/MachineLearning
https://redd.it/8pr064
TIL that seaborn refuses to use the jet color scheme in a most wonderful fashion
/r/Python
https://redd.it/8psk37
How to create SCROLLBARS in Python using tkinter!!
https://youtu.be/Y0pLRbX4FaE
/r/Python
https://redd.it/8pv2qv
[P] Primer to Attention (Explaining 'Attention is All You Need') still WIP, feedback appreciated!
https://github.com/greentfrapp/attention-primer
/r/MachineLearning
https://redd.it/8ptral
Help with running a Flask Restful API with Gunicorn and Docker
Hi all,
Over the past few weeks, I've been learning Python. As part of a side project, I've created a Flask Restful API that uses Numpy and Pandas for some analytics.
The past couple of days, I've been attempting to Dockerise the project with little success and I'm hoping someone can point me in the right direction.
From what I've read, the built-in Flask server is for development only, so from googling around it seemed like a good idea to run Python/Flask on a Linux image and use Gunicorn as the HTTP server.
My project layout currently looks like this:
```
Project/
├── app/
│   ├── package_one/
│   ├── __init__.py
│   ├── application.py
├── env/
├── test/
├── Dockerfile
├── manage.py
```
manage.py looks like this:

```python
from flask import Flask
from app.application import create_app

manager = create_app()

if __name__ == '__main__':
    manager.run(debug=True, host='0.0.0.0', port=5000)
```

application.py looks like this:

```python
from flask import Flask
from flask_restful import Api

def create_app(app_name='API'):
    app = Flask(app_name)

    from .package_one import package_one as package_one_blueprint
    app.register_blueprint(package_one_blueprint, url_prefix='/api')

    return app
```
Dockerfile currently looks like this:

```dockerfile
FROM python:3.6
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
ENTRYPOINT ["gunicorn"]
CMD ["-w", "4", "-b", "0.0.0.0:5000", "manage:app"]
```
I have also tried an ubuntu image as well:

```dockerfile
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python-pip python-distribute build-essential
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
ENTRYPOINT ["gunicorn"]
CMD ["-w", "4", "-b", "0.0.0.0:5000", "manage:app"]
```
I'm using a Windows machine with Docker set to use Windows Containers and as a result the Docker commands I'm running are:
```
docker build -t flask-api:latest . --platform linux
docker build -t flask-api:latest .
```
I originally started out trying to use an Alpine image, but hit some roadblocks with Numpy and Pandas when building the image that I couldn't work out or find a solution for.
The error the python:3.6 Dockerfile produces is the following:

```
Traceback (most recent call last):
  File "/usr/local/bin/gunicorn", line 11, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 61, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 223, in run
    super(Application, self).run()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 72, in run
    Arbiter(self).run()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 232, in run
    self.halt(reason=inst.reason, exit_status=inst.exit_status)
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 345, in halt
    self.stop()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 393, in stop
    time.sleep(0.1)
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 245, in handle_chld
    self.reap_workers()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 528, in reap_workers
    raise HaltServer(reason, self.APP_LOAD_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'App failed to load.' 4>
```
I've tried various solutions and images, but I feel there must be something fundamentally wrong with my project; I'm just not quite sure what.
Any advice or help to point me in the right direction would be greatly appreciated!
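One thing that may be worth checking (a guess, not a confirmed diagnosis): the Dockerfile tells gunicorn to load `manage:app`, but manage.py above binds the application to the name `manager`, so gunicorn would find no `app` attribute in that module and abort with "App failed to load". A minimal manage.py matching the `manage:app` target would be:

```python
# manage.py -- a sketch matching the Dockerfile's "manage:app" target.
# Assumes create_app() in app/application.py returns the Flask instance, as above.
from app.application import create_app

app = create_app()  # gunicorn resolves "manage:app" to this module-level name

if __name__ == '__main__':
    # Local development only; in the container gunicorn serves the app.
    app.run(debug=True, host='0.0.0.0', port=5000)
```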
/r/flask
https://redd.it/8pvayo
How to serve user uploaded imgs with nginx?
I'm running my website using nginx and gunicorn on ubuntu 16.04.
So I'm at a point where the user is already able to upload their image to the correct location (/home/UserImgs/), but even though I've tried many different approaches, I can't figure out how to serve this file to a template.

If you look at the HTML when the page loads, the src attribute even has the 100% correct path, and I don't get an error or anything, so I guess it's an issue with my nginx config:
```nginx
server {
    listen 80;
    server_name server_domain_or_IP 'not gonna show this';

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/hint.sock;
    }

    location /media {
        autoindex on;
        alias /home/UserImgs/;
    }
}
```
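On the Django side, the settings need to line up with that nginx block; here is a sketch under the assumption that the uploads go through an `ImageField` (the model and field names are placeholders, not the poster's code):

```python
# settings.py -- sketch of the Django side matching the nginx block above.
MEDIA_URL = '/media/'                 # must match the nginx "location /media" prefix
MEDIA_ROOT = '/home/UserImgs/'        # must match the nginx "alias" directory

# models.py -- files uploaded through this field land under MEDIA_ROOT
from django.db import models

class UserImage(models.Model):        # hypothetical model name for illustration
    image = models.ImageField(upload_to='')   # stored directly in MEDIA_ROOT
```

In templates, `{{ obj.image.url }}` then produces `/media/<filename>`, which the `/media` location serves. It may also be worth making the location prefix and the alias both end in a slash (`location /media/ { alias /home/UserImgs/; }`) so nginx maps the request path onto the directory exactly.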
/r/django
https://redd.it/8pvdrw
Matplotlib: Hangs on plt in Linux
I'm stuck on an issue where, when I debug code that uses `import matplotlib.pyplot as plt`, the process just hangs when it reaches the plt line and nothing happens. I end up terminating the instance.
I'm trying to test plot this simple code:
```python
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
plt.show()
```
This works fine on the Mac and the plot window shows, but not under Linux. I ensured packages are set up and installed correctly. Scratching my head....
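One common culprit on Linux (a guess, since the post doesn't say whether the box has a display) is that no GUI backend is available, so `plt.show()` has nothing to open a window with. Forcing the non-interactive Agg backend and writing the figure to a file sidesteps that:

```python
# A minimal sketch: use the non-interactive Agg backend and write the plot to a
# file instead of opening a window. Useful on headless Linux boxes or over SSH.
import matplotlib
matplotlib.use("Agg")          # must be set before importing pyplot
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
plt.savefig("plot.png")        # inspect the file instead of calling plt.show()
```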
/r/Python
https://redd.it/8pvwma
Extracting JSON data from source code of website
How do I use Python modules like bs4 to extract specific parts of a website's source code that contain JSON data, and how do I make it do this continuously?
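A rough sketch of one way to do this with requests and BeautifulSoup, polling on a timer. The URL and the script-tag selector are placeholders; the real selector depends entirely on the target page's markup:

```python
# Fetch a page, pull JSON out of a <script> tag, and repeat on a timer.
import json
import time

import requests
from bs4 import BeautifulSoup

def fetch_json(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    script = soup.find("script", id="data")        # placeholder selector
    return json.loads(script.string)

while True:
    data = fetch_json("https://example.com/page")  # placeholder URL
    print(data)
    time.sleep(60)                                  # poll once a minute
```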
/r/Python
https://redd.it/8pxloc
A way to have PyCharm automatically install imported modules?
Is there a way to have PyCharm automatically install imported modules that you don't already have in your Python env? A plugin, or anything like that. I am aware that the package name is not always the same as the imported module name, yet I think this could still be possible somehow.

Yes, today is that type of lazy day, when typing this post on reddit seems more attractive than going into the terminal and typing `pip install somepackage`.
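For what it's worth, the idea can be roughed out in a few lines outside PyCharm: scan a file's imports with `ast`, see which aren't importable, and pip-install the missing names. As noted above, the PyPI distribution name doesn't always match the module name (bs4 vs beautifulsoup4, for example), so a sketch like this only covers the easy cases:

```python
# Rough sketch: find a script's top-level imports and pip-install the missing ones.
import ast
import importlib.util
import subprocess
import sys

def missing_imports(path):
    tree = ast.parse(open(path).read())
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    # keep only modules that can't currently be imported
    return [n for n in names if importlib.util.find_spec(n) is None]

for name in missing_imports("script.py"):            # placeholder filename
    subprocess.run([sys.executable, "-m", "pip", "install", name])
```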
/r/Python
https://redd.it/8pwcrd
Missing rows in Pandas
Hi all,
I used Pandas to create data frames that split a dataset into various age ranges; the ages run from 0 to 95 in total.

I removed any rows with an age over 95, which left a new total of 110,456 rows. Using df.loc as below, the age-range frames only account for 106,917 rows in total, meaning some rows have gone uncounted:
```python
zeroTo14 = hosp_df.loc[(hosp_df['Age'] > 0) & (hosp_df['Age'] <= 14)]
fifteenTo29 = hosp_df.loc[(hosp_df['Age'] >= 15) & (hosp_df['Age'] <= 29)]
thirtyTo44 = hosp_df.loc[(hosp_df['Age'] >= 30) & (hosp_df['Age'] <= 44)]
fortyfiveTo59 = hosp_df.loc[(hosp_df['Age'] >= 45) & (hosp_df['Age'] <= 59)]
sixtyTo64 = hosp_df.loc[(hosp_df['Age'] >= 60) & (hosp_df['Age'] <= 64)]
sixtyfiveTo74 = hosp_df.loc[(hosp_df['Age'] >= 65) & (hosp_df['Age'] <= 74)]
seventyfiveTo89 = hosp_df.loc[(hosp_df['Age'] >= 75) & (hosp_df['Age'] <= 89)]
nintetyTo89 = hosp_df.loc[(hosp_df['Age'] >= 90)]
```
I think I may have mixed up the greater-than and less-than symbols, as I need to count every single age between 0 and 95.

I'd be very grateful for any help here; the more eyes the better. Thanks.
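One way to make the gaps visible (a sketch, assuming `hosp_df` is the frame above) is to let pandas do the binning with `pd.cut`. Note that the first condition above uses `Age > 0`, which drops rows where Age is exactly 0, and any rows with missing ages fall outside every band:

```python
# Bin the ages with pd.cut so nothing can fall through a gap, then compare counts.
import pandas as pd

bins = [0, 14, 29, 44, 59, 64, 74, 89, 95]
labels = ['0-14', '15-29', '30-44', '45-59', '60-64', '65-74', '75-89', '90-95']
hosp_df['AgeBand'] = pd.cut(hosp_df['Age'], bins=bins, labels=labels,
                            include_lowest=True)   # include_lowest keeps Age == 0

# Per-band counts, plus any rows with NaN ages, versus the overall row count
print(hosp_df['AgeBand'].value_counts(dropna=False))
print(len(hosp_df))
```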
/r/pystats
https://redd.it/8pxqat
Class based view: Best generic view
Hi guys, I'm new to the class-based view concept. I'm looking for the best approach to accomplish the following task:

- A page containing a form and two inline forms.
- The form should contain the first item from a query.
- There should be three buttons:
  - Delete the current item.
  - Update the current item.
  - Autofill values and update the current object.
- After any of these actions, it should load the page with the next item in the query.

I already have a function-based view for this, so I know how to accomplish most of the task. What I now need to know is whether I can use generic views (I would need multiple, am I correct?), or whether it would be better to use a FormView, override the post method, and put the logic for all three buttons into it. The latter would be almost identical to my current function-based view; I'm just not a fan of putting everything into one huge function.
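A rough sketch of the single-FormView route, dispatching in `post()` on which submit button was pressed. `Item`, `ItemForm`, and the button names are placeholders, not the poster's actual code, and the inline formsets are left out for brevity:

```python
from django.shortcuts import redirect
from django.views.generic.edit import FormView

from .forms import ItemForm    # hypothetical ModelForm for Item
from .models import Item       # hypothetical model

class ItemReviewView(FormView):
    template_name = 'items/review.html'   # placeholder template
    form_class = ItemForm
    success_url = '/items/review/'        # reloading shows the next item

    def get_item(self):
        # "The first item from a query", as described above (placeholder queryset).
        return Item.objects.order_by('pk').first()

    def get_form_kwargs(self):
        kwargs = super().get_form_kwargs()
        kwargs['instance'] = self.get_item()   # bind the ModelForm to the current item
        return kwargs

    def post(self, request, *args, **kwargs):
        item = self.get_item()
        if 'delete' in request.POST:           # <button name="delete">
            item.delete()
            return redirect(self.success_url)
        if 'autofill' in request.POST:         # <button name="autofill">
            item.apply_defaults()              # hypothetical autofill helper
            item.save()
            return redirect(self.success_url)
        return super().post(request, *args, **kwargs)  # 'update' -> normal validation

    def form_valid(self, form):
        form.save()                            # save the edited item
        return super().form_valid(form)
```

Combining generic UpdateView/DeleteView classes is also workable, but it means wiring several views and URLs to what is conceptually one page.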
/r/djangolearning
https://redd.it/8ps1dh
[R] A simple example for data augmentation of time-series data
https://github.com/terryum/Data-Augmentation-For-Wearable-Sensor-Data
/r/MachineLearning
https://redd.it/8px7r2
What to test in Django model? (apart from custom/overridden attributes)
What do you usually test or recommend to test regarding Django models? **This question relates specifically to testing anything on top of your custom and overridden attributes of the model.**
For instance, there is a common piece of advice:
> If the code in question is built into Django, don't test it.
However, I think it is reasonable to write a test for each model that:
1. Creates an instance of the model, including saving it to the test database.
2. Checks that the count of objects in the test database increased by 1.

This approach helps you ensure that your models validate successfully and are, well... *creatable*!
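A minimal sketch of that test, with a placeholder app and model name:

```python
# Sketch of the "creatable" test described above; Thing and myapp are placeholders.
from django.test import TestCase
from myapp.models import Thing          # hypothetical app/model

class ThingModelTests(TestCase):
    def test_create_increments_count(self):
        before = Thing.objects.count()
        Thing.objects.create(name="example")   # save an instance to the test database
        self.assertEqual(Thing.objects.count(), before + 1)
```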
/r/django
https://redd.it/8pzo41
Any good tutorial on how to deploy a local Django app to Heroku? Following the documentation keeps throwing errors
I tried following the documentation but when I do it with my app I get errors on
git push heroku master
I am not too experienced with GitHub either. Any good start-to-finish guides for beginners?
/r/django
https://redd.it/8py2cr
Confused on forms (ModelForms) and accessing related objects (foreign keys).
I am developing a form for use both as an HTML form and as part of an offline step run on site (i.e. through a command).

The form is for creating objects (based on ModelForm). The associated model has a foreign key to another model. An instance of this other model needs to be used to validate the object before creation/save.

A field on the model is (more or less) a unique identifier that can either be supplied or generated from the other fields in the model, including the FK object. There is a method on the FK object to check that the uid is proper (i.e. `fk_obj.valid_name(name)`).

Is it safe/proper to load the FK object from the form to run this check? This adds a database query to load the object.

WHERE should this be done? Right now I'm doing it in the form's `__init__`, i.e. `fkobj = Fk.objects.get(pk=data.get('fk'))`. This doesn't feel right to me. I tried doing it in `clean()`, but I still get errors in the form even though I'm updating `cleaned_data`. Should this be done in `full_clean()`?

Also, given that I will often need to load this FK object, is it reasonable to pass the object to the form rather than the pk? This would potentially save a lookup, especially when creating many objects at once, which is the most common use case.

Does anyone have a good flow diagram of the lifecycle of a Form object (specifically ModelForm)? It's unclear to me what the flow of a form is supposed to be and what each method is actually responsible for.
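One possible shape for this (placeholder names, not the poster's models): accept an optionally preloaded FK object in `__init__` and do the cross-field check in `clean()`. Note that for a ModelForm the FK field is a `ModelChoiceField`, so by the time `clean()` runs, `cleaned_data['fk']` is already the related instance, which may make the explicit `Fk.objects.get` in `__init__` unnecessary:

```python
from django import forms
from django.core.exceptions import ValidationError

from .models import Item   # hypothetical model with fields: fk (ForeignKey) and name

class ItemForm(forms.ModelForm):
    class Meta:
        model = Item
        fields = ['fk', 'name']

    def __init__(self, *args, fk_obj=None, **kwargs):
        super().__init__(*args, **kwargs)
        # Optionally pass the already-loaded FK object (useful for the offline/bulk
        # path, where many forms share the same FK instance).
        self._fk_obj = fk_obj

    def clean(self):
        cleaned = super().clean()
        # cleaned_data['fk'] is already the related instance (ModelChoiceField).
        fk_obj = self._fk_obj or cleaned.get('fk')
        name = cleaned.get('name')
        if fk_obj and name and not fk_obj.valid_name(name):
            raise ValidationError({'name': 'This name is not valid for the selected FK.'})
        return cleaned
```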
/r/django
https://redd.it/8pueex
[Project] Does popularity of technology on Stack Overflow influence popularity of post about this technology on Hacker News?
[Link to the project](https://github.com/dgwozdz/HN_SO_analysis).
I tried to answer the question of whether the popularity of a given technology (programming language/framework/library) on Stack Overflow causes the popularity of posts about that technology on Hacker News. The project included an analysis of plots of the number of questions/points on Stack Overflow and Hacker News (a.k.a. some exploratory data analysis) as well as a Granger causality test. It was conducted in Python (+ a bit of Google BigQuery to get the Hacker News data).
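For anyone wanting to reproduce the causality part, a minimal Granger-test sketch with statsmodels looks roughly like this (file and column names are placeholders; the test asks whether lags of the second column help predict the first):

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("hn_so_daily.csv", parse_dates=["date"])   # placeholder file
data = df[["hn_points", "so_questions"]].dropna()           # placeholder columns

# Tests whether lags of so_questions help predict hn_points, up to 7 lags.
results = grangercausalitytests(data, maxlag=7)
```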
/r/pystats
https://redd.it/8q00qz
Is it a bad idea to use .gz files?
Hello everyone. I'm working on my first Python project/Flask web app, which I'll deploy to the web some time later, and I was thinking about using a .gz file of a pandas data frame as a very small database containing resource links/types, etc., rather than a SQLite db. Is this a bad idea? Should I be sticking to a SQLite database? Any advice would be great.
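For a sense of what the two options look like in code, here is a small sketch of both (file names and columns are made up):

```python
import pandas as pd

df = pd.DataFrame({"title": ["Docs"], "url": ["https://example.com"], "type": ["ref"]})

# Option 1: gzipped CSV -- simplest, loaded fully into memory on each use
df.to_csv("resources.csv.gz", index=False, compression="gzip")
links = pd.read_csv("resources.csv.gz", compression="gzip")

# Option 2: SQLite -- queryable without loading everything, easier to grow later
import sqlite3
with sqlite3.connect("resources.db") as conn:
    df.to_sql("resources", conn, if_exists="replace", index=False)
    refs = pd.read_sql("SELECT * FROM resources WHERE type = 'ref'", conn)
```

For a tiny, read-mostly table either works; SQLite mainly buys you queries and safer concurrent access without reloading the whole file each time.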
/r/flask
https://redd.it/8mxqy2
I made a non-motivational poster generator
I created a non-motivational (and non-cool) poster generator. Check it out:
[https://github.com/rubennp91/InstaQuotes](https://github.com/rubennp91/InstaQuotes)
The script takes an Instagram profile and a random quote from a Wikiquote page, and puts them together into what can only be called a non-motivational poster. Here are some examples:

[https://i.imgur.com/HpN7kgc.png](https://i.imgur.com/HpN7kgc.png) (natgeo + Trump)

[https://i.imgur.com/COhxzb9.jpg](https://i.imgur.com/COhxzb9.jpg) (selenagomez + Karl Marx)

[https://i.imgur.com/9418KPb.jpg](https://i.imgur.com/9418KPb.jpg) (taylorswift + Churchill)
Some can be pretty hilarious.
So this script is not ideal, as it depends on a ton of libraries and even a tool for downloading Instagram media, but even so, I had a ton of fun developing it. It uses instagram-scraper for media downloading (pip install instagram-scraper), wikiquotes (pip install wikiquotes) to retrieve the quotes, and PIL (pip install pillow) to add the text to the image. I've uploaded a sample font ([https://fonts.google.com/specimen/Indie+Flower](https://fonts.google.com/specimen/Indie+Flower)) with it on GitHub, but more can be added to the fonts folder.
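The PIL step described there boils down to something like this sketch (file and font paths are placeholders):

```python
# Draw a quote onto an image with Pillow.
from PIL import Image, ImageDraw, ImageFont

img = Image.open("photo.jpg")                                   # downloaded media
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("fonts/IndieFlower.ttf", size=48)     # bundled font file
draw.text((40, img.height - 120), '"Some quote here."',
          font=font, fill="white")
img.save("poster.jpg")
```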
I started programming in Python some months ago at my job and I'm loving it so far. I only had some experience programming microcontrollers in C, and learning Python has opened up a world of opportunities for me.
The code is thoroughly commented but if you have any questions, please ask ahead.
/r/Python
https://redd.it/8q0802