What is the ForeignObject field type?
So I'm working on some models for a new Django project (I'm pretty new to Django myself) and I'm wondering what the difference is between using
some_field = models.ForeignObject(MyClass)
and this
some_field = models.ForeignKey(MyClass, on_delete=models.CASCADE)
**What are the pros and cons of one over the other?** I Ctrl+F'd for ForeignObject in Django's model documentation on their site and can't seem to find an answer to the above question, and some Google searching doesn't turn up any results for the specific term either.
Thoughts?
/r/django
https://redd.it/7nsnzn
Help with the Django Cache Framework
Hey everyone,
I'm wondering if someone could help me get to know the Django cache framework for a (hopefully) simple task. I'm developing a [financial analysis application](https://join.lazyfa.com) and I would like to cache some portions of the first page load to be reused in other sections of the app. Here's the example I'm trying to figure out right now:
When a user sends a ticker through (e.g. AAPL) this is [the dashboard that pops up](https://imgur.com/GOWqPig). Some parts of that dashboard are done in the dashboard view. Others (for example company SIC, short %, float, profile, etc) are pulled from external sources via ajax calls and a few others (e.g. calculations like burn rate, averages, etc) are populated dynamically by making ajax calls to other views.
That part all works fine. The dashboard loads in a couple seconds and whatever doesn't load gets pulled in via AJAX as the user uses what's available.
Now when a user switches to the red flags > income statement section, I'd like to keep the entire company summary section (Overview, Company Information, Recent News) in cache so it doesn't have to be retrieved again. So in the dashboard view, after doing all the biz logic to return the context used in the ajax calls, I just set a cache key ('context') to the context and store it for some amount of time. For example, just storing it for 60 seconds:
    # Do stuff to set all these dicts and stuff that's used on the dashboard, then...
    template = 'dashboard.html'
    context = {
        'ticker': ticker,
        'companyInfo': companyInfo,
        'database': database,
        'annual_dict_most_recent': annual_dict_most_recent,
        'quarterly_dict_most_recent': quarterly_dict_most_recent,
        'ttm_dict_most_recent': ttm_dict_most_recent,
        'INCOME_COLLECTION': INCOME_COLLECTION,
        'BALANCE_COLLECTION': BALANCE_COLLECTION,
        'CASHFLOW_COLLECTION': CASHFLOW_COLLECTION,
        'METRICS_COLLECTION': METRICS_COLLECTION,
        'INCOME_Q_COLLECTION': INCOME_Q_COLLECTION,
        'BALANCE_Q_COLLECTION': BALANCE_Q_COLLECTION,
        'CASHFLOW_Q_COLLECTION': CASHFLOW_Q_COLLECTION,
        'METRICS_Q_COLLECTION': METRICS_Q_COLLECTION,
        'jsonTickerList': jsonTickerList,
    }
    cache.set('context', context, 60)
    return render(request, template, context)
Then when a user clicks the income statement link, I can do this in the relevant view:
    context = cache.get('context')
    template = 'income.html'
    return render(request, template, context)
That works perfectly and loads the whole company summary section from cache, but there's a slight problem: if a user searches for a ticker, then opens a new tab and searches for a new ticker (e.g. TSLA), the cache is overwritten by the new dashboard load. So if they go back to their previous tab with the AAPL dashboard and click the income statement link, it will load the company summary for TSLA.
I understand why this is happening, but I'm just not sure how to handle it, because I don't fully understand the capabilities of the caching framework or what the best way to handle this would be.
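A common fix for this kind of collision is to make the cache key depend on the ticker, so each dashboard gets its own entry instead of all of them fighting over the single key `'context'`. Here is a minimal sketch of the keying idea; a plain dict stands in for Django's cache backend (with Django you would call `cache.set(f'context:{ticker}', context, 60)` and `cache.get(f'context:{ticker}')` from `django.core.cache` in exactly the same way):

```python
import time

# A plain dict stands in for Django's cache here; with Django you would
# use cache.set/cache.get from django.core.cache with the same keys.
_cache = {}

def cache_set(key, value, timeout):
    # Store the value together with its expiry time.
    _cache[key] = (value, time.time() + timeout)

def cache_get(key):
    entry = _cache.get(key)
    if entry is None:
        return None
    value, expires = entry
    if time.time() > expires:  # expired: behave like a cache miss
        del _cache[key]
        return None
    return value

# Dashboard view: key the context by ticker instead of a global 'context'.
cache_set('context:AAPL', {'ticker': 'AAPL'}, 60)
cache_set('context:TSLA', {'ticker': 'TSLA'}, 60)  # no longer clobbers AAPL

# Income statement view: read back the context for the ticker in the URL.
context = cache_get('context:AAPL')
print(context['ticker'])  # AAPL
```

The income-statement view just needs to know which ticker its page is for (e.g. from the URL) to rebuild the same key, so two tabs showing different tickers no longer overwrite each other.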
Could someone more familiar shed a bit of light on this for me? Thanks so much!
/r/django
https://redd.it/7nrt3y
[D] Results from Best of Machine Learning 2017 Survey
Results from
https://www.reddit.com/r/MachineLearning/comments/7mjxl4/d_vote_for_best_of_machine_learning_for_2017/
If you missed that thread and there's something you want to mention, post it and I'll put it up. Lots of categories didn't have an entry. You can also make a category yourself.
######**Best Video:**
Aurélien Géron's capsule networks explanation.
https://youtu.be/pPN8d0E3900
######**Best Blog post:**
Luke Oakden-Rayner's criticism of the ChestXray14 dataset.
https://lukeoakdenrayner.wordpress.com/2017/12/18/the-chestxray14-dataset-problems/amp/#click=https://t.co/52PSslbAh8
######**Best New Tool:**
PyTorch
"and we all realized what a pain in the ass Tensorflow was and how it didn't need to be that way. In the academic community, it certainly to me feels like pytorch has become the dominant framework (probably not backed up by actual stats... But my school's CV research lab has certainly switched over)"
######**Best Blog Overall:**
Ferenc Huszar's inference.vc
>
> I might have nominated distill.pub instead, but they (and I) consider themselves a journal, which puts them out of the running.
>
> Writing good blog posts is hard, and as such, I feel like it's fair to weight consistent posting. Signal to noise is the primary problem with following blogs (I'd much rather follow a blog that posts one great article a year than a blog that posts 12 articles a year of which 2 are great).
>
> Thus, I've found all of the articles I've read from Ferenc to be insightful.
>
> One other thing: I don't really expect "novel" insights out of blog posts. I think blog posts are best served as a distillation of the current state of research, and sometimes an explanation of ideas. If they have new insights, they'd prolly be writing a paper :)
>
> Some highlights (apologies if there are any highlights from the posts I haven't read):
>
> http://www.inference.vc/my-notes-on-the-numerics-of-gans/
>
> An insightful post into one of the problems GANs face in optimization, framed in the form of vector fields.
>
> http://www.inference.vc/design-patterns/
>
> Really unified and explained to me how all these different machine learning tasks are just optimizing over a loss surface and approximating gradients.
>
> There are a couple of other blogs I think deserve honorable mentions, including Sebastian Ruder's. I think I might write a meta blog post about these other blogs one day.
Runner up:
Berkeley AI Research blog.
http://bair.berkeley.edu/blog/
######**Best Papers:**
Tied
Deep Image Prior
https://dmitryulyanov.github.io/deep_image_prior
Quantile Regression for Distributional RL
https://arxiv.org/abs/1710.10044
######**Best Reddit Post:**
https://www.reddit.com/r/MachineLearning/comments/6l2esd/d_why_cant_you_guys_comment_your_fucking_code/
######**Best Reddit Project:**
https://www.reddit.com/r/MachineLearning/comments/72l4oi/pfinally_managed_to_paint_on_anime_sketch_with/?st=jbqezuqt&sh=a8aa336f
######**Best Course:**
New Andrew Ng deep learning coursera course
######**Best Youtube channel:**
3 way tie
3blue1brown
DanDoesData
2 minute papers
/r/MachineLearning
https://redd.it/7nrzhn
Europilot: Create self-driving trucks inside Euro Truck Simulator 2
https://github.com/marsauto/europilot/
/r/Python
https://redd.it/7nu5kg
Temporarily work on media file.
Hi everyone. At the moment I'm working on a web app that is a collection of simple tools. One of them is a meme generator. I have the whole Python script already done, it runs well, and now I have to put it into the Django project. It works like this: the user provides an image and captions, and it saves the meme to a directory.
While I have an idea of how I could incorporate it, I bet it can be done better and more simply. The obvious way to go about it is to create a model that requires a file upload. The model created upon file upload would then be passed into another view via the URL. Based on the image, the script would create a new, processed image and display it in the template.
But is there a better solution? I think the problem with this approach is that over time, and after some usage, my VPS would be cluttered with images. So I would have to wipe the whole meme generator image directory, but how, without interrupting a user who is creating an image? Plus, I think that making the script write to the directory could be troublesome. Is there a way to do it without model creation?
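One way to avoid a cluttered media directory is to never persist the meme at all: build it in a throwaway temporary directory (or entirely in memory) and stream the bytes back in the response. A stdlib-only sketch of the temp-directory pattern; `generate_meme` here is a trivial placeholder standing in for the existing meme script:

```python
import os
import tempfile

def generate_meme(image_bytes, caption, out_path):
    # Placeholder for the existing meme script: write the processed image.
    with open(out_path, 'wb') as f:
        f.write(caption.encode() + b'\n' + image_bytes)

def render_meme(image_bytes, caption):
    # The directory (and the file inside it) is deleted when the block
    # exits, so nothing accumulates on the VPS.
    with tempfile.TemporaryDirectory() as tmp:
        out_path = os.path.join(tmp, 'meme.png')
        generate_meme(image_bytes, caption, out_path)
        with open(out_path, 'rb') as f:
            return f.read()

data = render_meme(b'raw-image-bytes', 'top text')
```

In a Django view you would wrap the returned bytes in `HttpResponse(data, content_type='image/png')`, so no model instance and no write under MEDIA_ROOT are needed at all.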
/r/djangolearning
https://redd.it/7nu2l6
Please share with me your method(s) of determining a specific location's timezone.
Hi folks; hope all is well.
If, for whatever reason, you must determine the timezone in which a location resides, how do you go about acquiring the information? You have the location's address.
The solution seems trivial on the surface -- query some existing location service, query an existing geo database, etc. -- but, as is often the case, things aren't always that trivial; or they are, and many solutions exist. So, as I'm doing my research, I'd like to have your input/enlightenment.
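Whichever lookup you end up with (a geocoder to turn the address into coordinates, then a coordinates-to-timezone service or library is one common pairing, though that's just an assumption about your stack), the useful output is an IANA zone name like `America/New_York`. From there the standard library handles the rest (Python 3.9+):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Assume the lookup for the address returned this IANA name.
tz_name = "America/New_York"
tz = ZoneInfo(tz_name)

# Attach the zone to a timestamp and inspect the UTC offset.
local = datetime(2018, 1, 4, 9, 0, tzinfo=tz)
print(local.utcoffset())  # -1 day, 19:00:00 (i.e. UTC-5, EST in January)
```

Storing the IANA name (rather than a fixed offset) also keeps daylight-saving transitions correct for free.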
I appreciate your time.
/r/django
https://redd.it/7nvnak
[AF] How to use a docker image of flask on an instance
I am trying to use [this](https://github.com/tiangolo/uwsgi-nginx-flask-docker/tree/master/python3.6) Docker image of Flask with Nginx and uWSGI on an Amazon EC2 instance. I am following the README in the same GitHub repository.
I am really new to this, so I have no idea how this is supposed to look or what obvious stuff I am supposed to do beforehand. I copied all the files from the GitHub repo above to my instance, so my directory looks like [this](https://i.imgur.com/fDnkiOH.png). I ran `docker build -t myimage .` and `docker run -d --name mycontainer -p 80:80 myimage`. `docker ps -s` looks like [this](https://i.imgur.com/6cpDjrM.png). Yet when I go to my website [here](http://ec2-18-217-22-81.us-east-2.compute.amazonaws.com/), nothing shows up. I have done nothing else. What am I missing/doing wrong?
/r/flask
https://redd.it/7nsnwn
Simple Machine Learning Tutorials
https://elitedatascience.com/machine-learning-projects-for-beginners
/r/Python
https://redd.it/7nvqz9
pomegranate v0.9.0 released: probabilistic modeling for Python
Howdy all!
I just released a new version of pomegranate. The focus of this version is missing value support in the model fitting, structure learning, and inference steps for all models (probability distributions, k-means, mixture models, hidden Markov models, Bayesian networks, naive Bayes/Bayes classifiers). The general approach is to collect sufficient statistics only from observed values and ignore missing ones. This can frequently achieve better results than common, simple imputation methods.
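The "sufficient statistics from observed values only" idea is independent of pomegranate's API. In plain Python it amounts to the following (a sketch of the concept, not pomegranate code): fitting a normal distribution from data with missing entries by letting only the observed values contribute to the count, sum, and sum of squares.

```python
import math

def fit_normal_ignoring_missing(values):
    """Estimate mean/std of a normal from data with missing entries.

    Missing values (None or NaN) are skipped; only observed values
    contribute to the sufficient statistics.
    """
    observed = [v for v in values if v is not None and not math.isnan(v)]
    n = len(observed)
    mean = sum(observed) / n
    var = sum((v - mean) ** 2 for v in observed) / n
    return mean, math.sqrt(var)

data = [1.0, float('nan'), 3.0, None, 5.0]
mean, std = fit_normal_ignoring_missing(data)
print(mean)  # 3.0
```

Contrast this with mean imputation, which would first replace the missing entries and thereby shrink the estimated variance.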
* I've added documentation to the [readthedocs page](https://pomegranate.readthedocs.io/en/latest/) under the "Missing Values" section
* I've added an [extensive tutorial](https://github.com/jmschrei/pomegranate/blob/master/tutorials/Tutorial_9_Missing_Values.ipynb) on how missing value support is handled here
* I recently [gave a talk at ODSC west](https://www.youtube.com/watch?v=oF8BKWe9_i8) about pomegranate and the features recently incorporated (sadly before missing values were added).
The modular nature of pomegranate means that one can now use missing value support in conjunction with any of the other features. For example, one can easily add multi-threading to speed up models, or do out-of-core learning with incomplete data sets, or have both missing data and missing labels to do semi-supervised learning with missing data as well!
You can install pomegranate either by cloning the [GitHub repo](https://github.com/jmschrei/pomegranate), or with `pip install pomegranate`. Wheels should be built for all platforms soon, but some issues have delayed that. I hope to have them up soon, so you don't even need to deal with Cython.
As always, I'd love any feedback or questions!
/r/Python
https://redd.it/7nw5t0
Simple way to ship Python/Flask web app using Docker
https://github.com/chhantyal/flask-docker
/r/Python
https://redd.it/7nvy3l
dplyr-style Data Manipulation with Pipes in Python
http://www.allenkunle.me/dplyr-style-data-manipulation-in-python
/r/Python
https://redd.it/7o1edg
File format determination library for Python
https://github.com/floyernick/fleep
/r/Python
https://redd.it/7o1ty8
Best practices for building an API consumer? Is it good to have a different class defined for each end point?
I am working on a project that displays stats about users for the game Destiny 2. Part of it consists of lots of API requests for pulling in data about the users. I will be dealing with about a dozen different endpoints at the server. I'm new to Django/APIs, and I'm wondering what the best strategy is for structuring things. I have not found much of anything written on this topic.
The big decision point I am facing now is whether to create an object corresponding to each end point. For instance:
    class GetProfile:
        base_url = 'http://destiny.core/getprofile/'

        def __init__(self):
            self.response = {}

        def make_request(self, parameters):
            # create a session, make the request, return the response
            ...

        def extract_profile_data(self, response):
            # pull pertinent information about the user
            ...
And I'd have one of those for each endpoint, e.g., GetUserStats.
I guess a key question is: what am I using this for? I am using the data to fill the database for a Django project, so it will all go into model instances. Unfortunately, there isn't a 1:1 correspondence between endpoints and models: most model instances draw information from multiple endpoints. It seems objects like the above could prove useful as containers of information about a request, along with methods for working with it.
Why am I asking this question at all? One random person online said that the above strategy is redundant, that responses in the requests library have enough structure, and urged me to just get the data and fill the database using more procedural programming. I feel like that person is wrong, but I'm enough of a noob to really not be sure.
I just wanted to check in and see if there's any hive wisdom about this, obvious pitfalls or whatever, before I make this major design choice for my project.
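If the per-endpoint classes end up sharing most of their code, one common middle ground is a small base class that owns the request mechanics, with each endpoint supplying only its URL and extraction logic. A sketch of that shape (the endpoint name and URL are the hypothetical ones from the post; the fetch function is injectable so the classes stay testable without the network):

```python
import json
from urllib.request import urlopen

class Endpoint:
    base_url = None  # subclasses set this

    def __init__(self, fetch=None):
        # fetch is injectable so tests can stub out the network.
        self.fetch = fetch or (lambda url: json.load(urlopen(url)))

    def get(self, **params):
        query = '&'.join(f'{k}={v}' for k, v in params.items())
        url = f'{self.base_url}?{query}' if query else self.base_url
        return self.extract(self.fetch(url))

    def extract(self, response):
        # Default: hand back the raw response; subclasses pull what they need.
        return response

class GetProfile(Endpoint):
    base_url = 'http://destiny.core/getprofile/'

    def extract(self, response):
        return {'name': response.get('displayName')}

# Stubbed fetch in place of a real HTTP call:
profile = GetProfile(fetch=lambda url: {'displayName': 'Guardian'}).get(user='42')
print(profile)  # {'name': 'Guardian'}
```

Because models draw from multiple endpoints, a layer like this keeps the "talk to the API" code separate from the "fill the database" code, which can then combine several endpoint results per model instance.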
/r/django
https://redd.it/7nyfvj
Reducing the Variance of A/B Test using Prior Information
http://www.degeneratestate.org/posts/2018/Jan/04/reducing-the-variance-of-ab-test-using-prior-information/
/r/pystats
https://redd.it/7o19si
Djangobook.com Has anyone read the book?
I am wondering if this book is worth buying. I have been reading the free tutorial and it is a very good learning tool so far, better than the documentation because it is human-readable.
/r/djangolearning
https://redd.it/7nzniv
[N] TensorFlow 1.5.0 Release Candidate
https://github.com/tensorflow/tensorflow/releases/tag/v1.5.0-rc0
/r/MachineLearning
https://redd.it/7o21w7
Hacking WiFi to inject cryptocurrency miner to HTML requests with Python
http://arnaucode.com/blog/coffeeminer-hacking-wifi-cryptocurrency-miner.html
/r/Python
https://redd.it/7o23lb
Shared db with restricted views per user.
I have a DB that is to be shared, where if a specific object has a certain value, I need specific users to be able to see and edit it. No other users can see or edit these objects.
I am trying to understand how one should do this. I don't know Django very well. I am currently looking at conditionals for views, but I'm not sure that is even possible, since I don't know whether conditionals in views can filter objects from models.
If anyone could point me in the right direction I would greatly appreciate it.
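In Django the usual place for this is the view's queryset (e.g. overriding `get_queryset()` on a class-based view so restricted rows never reach the template), but the underlying idea is plain row-level filtering. A framework-free sketch of that rule; the model and field names are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    restricted: bool
    allowed_users: tuple  # usernames who may see this record when restricted

def visible_to(records, username):
    """Return only the records this user may see.

    Unrestricted records are visible to everyone; restricted ones only
    to the users explicitly allowed. In Django, this same rule would be
    a queryset filter inside the view's get_queryset().
    """
    return [
        r for r in records
        if not r.restricted or username in r.allowed_users
    ]

rows = [
    Record(1, False, ()),
    Record(2, True, ('alice',)),
    Record(3, True, ('bob',)),
]
print([r.id for r in visible_to(rows, 'alice')])  # [1, 2]
```

Filtering in the view (rather than hiding rows in the template) is the safer choice, because the restricted objects never leave the server-side query at all.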
/r/djangolearning
https://redd.it/7nqkiu
Wrecking ball animation in 14 lines of code in Blender 3d
http://slicker.me/blender/wreck.htm
/r/Python
https://redd.it/7o2xy3