[P] PyTorch under the hood
I've made available the slides from a PyData Montreal presentation called **"PyTorch under the hood"**. For those interested in knowing more about how PyTorch works, here is the link to the slide deck:
https://speakerdeck.com/perone/pytorch-under-the-hood
/r/MachineLearning
https://redd.it/avfoso
Speaker Deck
PyTorch under the hood
Presentation about PyTorch internals, presented at PyData Montreal in February 2019.
Dependent cached properties with on-change refresh management via decorators (not yo' JavaScript Promise)
https://github.com/Andrew-Hogan/Promised
/r/Python
https://redd.it/avkwhj
GitHub
Andrew-Hogan/Promised
A flexible cached property with get/set/del/init/cached-mapping capabilities.
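For context, here is a minimal sketch of the plain cached-property descriptor pattern a library like this builds on; this is not the Promised API (which layers get/set/delete hooks and dependency-aware refresh on top), just an illustration:

    # A bare-bones cached property: compute once, then store the result on the
    # instance so later lookups skip the descriptor entirely. Not the Promised API.
    class cached_property:
        def __init__(self, func):
            self.func = func
            self.name = func.__name__

        def __get__(self, instance, owner=None):
            if instance is None:
                return self
            value = self.func(instance)
            instance.__dict__[self.name] = value  # cache on the instance
            return value

    class Report:
        def __init__(self, data):
            self.data = data

        @cached_property
        def total(self):
            print("computed once")
            return sum(self.data)

Adding refresh-on-change for dependent properties means turning this into a data descriptor with `__set__`/`__delete__` that invalidates cached dependents, which is the part the linked library handles for you.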
My first Python turtle race game, built in an incredible Android app - Pydroid3
/r/Python
https://redd.it/avk9va
Django Book
I posted here a couple of months ago regarding a tutorial book on Django. I am getting ready to release it and want to go through it a few times for accuracy. However, I love the idea of open-sourcing education, so once I am OK with the release, I was thinking about publishing it to a public GitHub repo and letting people add their two cents as well. Has anybody done something like this? What do you think? Since there wouldn't be a huge amount of content like Wikipedia, I want to know if anyone has done something similar based on single articles/books.
/r/django
https://redd.it/avno12
First time coding anything other than HTML on my first day at university (Except the cls function)
/r/Python
https://redd.it/avnpue
The purpose of this sub-reddit
*I don't mean to be rude in this post; if it sounds that way, it's only because English is not my first language.*
The description of the subreddit says: "news about the dynamic, interpreted, interactive, object-oriented, extensible programming language Python"
At the same time, I very often see posts like "My first project...", "my 24h project", "I'm a 99-year-old man, I started learning Python, check out my project", and so on. Those kinds of posts get an insane number of upvotes, which is cool; it means the community wants to cheer on what people have done (even if it isn't very interesting in general).
My question is: "Is this relevant to the purpose of the subreddit?"
I'm not a very active Reddit user; I use a Reddit app that sends me a weekly notification about the most popular posts in a subreddit, and too often I see posts about someone's pet project.
What do you think? Does what I said make sense?
Thank you, and sorry again for being rude about someone's pet project (I'm the founder of a couple of coding projects and understand how hard that can be to hear).
/r/Python
https://redd.it/avpi5h
[P] Implementations of 7 research papers on Deep Seq2Seq learning using Pytorch (Sketch generation, handwriting synthesis, variational autoencoders, machine translation, etc.)
GitHub repo - [https://github.com/GauravBh1010tt/DL-Seq2Seq](https://github.com/GauravBh1010tt/DL-Seq2Seq)
Reproducible PyTorch code for deep seq2seq learning, covering the following papers:
* Sketch Generation - [A Neural Representation of Sketch Drawings](https://openreview.net/pdf?id=Hy6GHpkCW)
* Machine translation - [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/pdf/1508.04025.pdf)
* Handwriting synthesis - [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/pdf/1308.0850.pdf)
* Variational Autoencoders (VAE) - [Auto-Encoding Variational Bayes](https://arxiv.org/pdf/1312.6114.pdf)
* Scheduled Sampling - [Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks](https://arxiv.org/pdf/1506.03099.pdf)
* Conditional VAE - [Learning Structured Output Representation using Deep Conditional Generative Models](https://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf)
* Mixture Density Networks - [Mixture Density Networks](https://publications.aston.ac.uk/373/1/NCRG_94_004.pdf)
Let me know if you have any suggestions or comments.
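As a taste of the kind of building block these papers share, here is a minimal sketch of Luong-style dot-product attention for a single decoder step (an illustrative example, not code taken from the DL-Seq2Seq repo):

    # Luong-style (dot-product) global attention for one decoder step.
    import torch
    import torch.nn.functional as F

    def luong_attention(decoder_state, encoder_outputs):
        # decoder_state:   (batch, hidden)
        # encoder_outputs: (batch, src_len, hidden)
        scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2))  # (batch, src_len, 1)
        weights = F.softmax(scores, dim=1)                               # normalize over source positions
        context = torch.bmm(weights.transpose(1, 2), encoder_outputs)    # (batch, 1, hidden)
        return context.squeeze(1), weights.squeeze(2)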
/r/MachineLearning
https://redd.it/avq80b
GitHub
GauravBh1010tt/DL-Seq2Seq
Implementation of papers on deep seq2seq learning using PyTorch.
Blog categories and blog posts filtered by request.user.is_staff (need some help, guys)
https://www.reddit.com/r/django/comments/aviu7c/i_have_some_problem_description_below/
/r/djangolearning
https://redd.it/avphfg
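Without seeing the linked code, here is a minimal sketch of the usual pattern for this kind of staff-only filtering in a Django list view; the model and field names are hypothetical:

    from django.views.generic import ListView

    from .models import BlogPost  # hypothetical model

    class BlogPostListView(ListView):
        model = BlogPost
        template_name = "blog/post_list.html"

        def get_queryset(self):
            qs = super().get_queryset()
            if not self.request.user.is_staff:
                # Non-staff visitors only see published posts.
                qs = qs.filter(is_published=True)  # hypothetical field
            return qs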
Better workflow using django-rest-framework-simplejwt?
It's my first time working with JSON Web Tokens and I'm using the [django-rest-framework-simplejwt](https://github.com/davesque/django-rest-framework-simplejwt) package. My understanding is that in order to log in, the front end has to hit the token endpoint first to get the token and then redirect back to the desired view. How should I implement the API view so that I don't have to get the token from an endpoint separate from the login endpoint?
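One thing worth noting: simplejwt's TokenObtainPairView is itself just a POST endpoint that takes credentials and returns the token pair, so the front end can call it directly without any redirect. If you still want a single custom login view that authenticates and returns tokens in one request, here is a hedged sketch (the view name and response shape are choices, not part of the package):

    from django.contrib.auth import authenticate
    from rest_framework import status
    from rest_framework.response import Response
    from rest_framework.views import APIView
    from rest_framework_simplejwt.tokens import RefreshToken

    class LoginView(APIView):
        authentication_classes = []  # credentials come in the POST body
        permission_classes = []

        def post(self, request):
            user = authenticate(
                username=request.data.get("username"),
                password=request.data.get("password"),
            )
            if user is None:
                return Response({"detail": "Invalid credentials."},
                                status=status.HTTP_401_UNAUTHORIZED)
            refresh = RefreshToken.for_user(user)  # simplejwt helper
            return Response({
                "refresh": str(refresh),
                "access": str(refresh.access_token),
            })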
/r/django
https://redd.it/avs4q3
GitHub
jazzband/djangorestframework-simplejwt
A JSON Web Token authentication plugin for the Django REST Framework.
Data passed Across Forms
Hello,
I am currently working with Django, specifically the form wizard, to create a multi-step form that can pass data across its steps. I have three forms (form 0, form 1, form 2) that I want to pass data through: I want to get the data from form 0 and pass it to form 1, and then pass certain data elements from form 1 to form 2. I have been playing around with the get_form_initial() method but have had no success. What kinds of implementations have you used before? I am also open to checking out new tech and ideas.
Thanks in advance
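For reference, here is a hedged sketch of how get_form_initial() is usually wired up with django-formtools' SessionWizardView, pulling earlier steps' cleaned data via get_cleaned_data_for_step(); the form classes and field names are hypothetical:

    from django.shortcuts import redirect
    from formtools.wizard.views import SessionWizardView

    from .forms import StepZeroForm, StepOneForm, StepTwoForm  # hypothetical

    class SignupWizard(SessionWizardView):
        form_list = [StepZeroForm, StepOneForm, StepTwoForm]
        template_name = "wizard_step.html"

        def get_form_initial(self, step):
            initial = super().get_form_initial(step)
            if step == "1":
                data = self.get_cleaned_data_for_step("0") or {}
                initial["email"] = data.get("email")   # hand a form 0 field to form 1
            elif step == "2":
                data = self.get_cleaned_data_for_step("1") or {}
                initial["plan"] = data.get("plan")     # hand a form 1 field to form 2
            return initial

        def done(self, form_list, **kwargs):
            # Combine the cleaned data from every step and finish up.
            return redirect("wizard_done")  # hypothetical URL name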
/r/django
https://redd.it/avtmul
Flask Background Task in Celery unable to use URL_FOR() function? Help?
Hey Guys,
I've got a Flask app with a Celery background task and an API that my front end calls to get the status of the job. When the task finishes, I want to pass back the URL for the output file. However, I have an issue: the Celery background job can't run `url_for()` to give me a link back to the downloads folder in static.
I get the following error:
    RuntimeError: Application was not able to create a URL adapter for request independent URL generation. You might be able to fix this by setting the SERVER_NAME config variable.
I know that `url_for()` works for foreground requests, as I don't have any issues with it in other sections of my app, but the Celery task seems to have problems. I don't understand how to fix it, and I am not quite sure why the application cannot see the SERVER_NAME configuration.
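A hedged sketch of the usual fix: give the app a SERVER_NAME (and preferred scheme) and build the URL inside an application context, since the Celery worker has no request context to derive the host from. The app wiring and names below are illustrative:

    from celery import Celery
    from flask import Flask, url_for

    app = Flask(__name__)
    app.config["SERVER_NAME"] = "example.com"        # lets url_for work without a request
    app.config["PREFERRED_URL_SCHEME"] = "https"

    celery = Celery(app.name, broker="redis://localhost:6379/0")

    @celery.task
    def build_report():
        # ... long-running work that writes static/downloads/report.csv ...
        with app.app_context():
            # _external=True builds an absolute URL from SERVER_NAME.
            return url_for("static", filename="downloads/report.csv", _external=True)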
/r/flask
https://redd.it/avph7p
[R] Accelerating Self-Play Learning in Go
I just released a paper about improving AlphaZero-like self-play learning in Go. Although we have not yet been able to test full-scale runs, it turns out that, at least for reaching levels as strong as strong human professionals, combining a variety of new and old techniques makes it possible to greatly improve the efficiency of learning.
While many of the techniques involve game-specific properties or tuning to the domain more than AlphaZero did, some of them and most of the ideas and general principles presented could generalize to other games besides Go or possibly more broadly to other reinforcement-learning environments with sequential actions.
Additionally, as a result of the large speedup from all of these techniques combined, we found that the hardware and cost necessary to do meaningful research is much reduced. Although our runs were not nearly as long, *we only needed dozens of GPUs rather than thousands* - we hope this is a first step toward putting the AlphaZero process, in domains with state spaces as large as Go, within reach of smaller research groups!
**Abstract:** By introducing several new Go-specific and non-Go-specific techniques along with other tuning, we accelerate self-play learning in Go. Like AlphaZero and Leela Zero, a popular open-source
/r/MachineLearning
https://redd.it/avv5dj
Adding a "time elapsed" wrapper
I read a little about endpoint decorators and I'm wondering if this is a good idea:
If I add something like a `@timed` decorator, it will wrap my request in a timer and append a `"_time": "[elapsed: 0:00:00.234] xxxx"` field to the resulting JSON response.
Questions:
1. Is this a good approach, or is there a better (more Pythonic?) way?
2. Is there a good example to follow?
3. Perhaps there is already a similar library?
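A hedged sketch of one way to do this by hand in Flask: a plain decorator that times the view and injects the `_time` key into a dict payload before jsonify-ing it (the key name and format mirror the post; nothing here is an existing library):

    import time
    from datetime import timedelta
    from functools import wraps

    from flask import Flask, jsonify

    app = Flask(__name__)

    def timed(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            payload = view(*args, **kwargs)          # the view returns a plain dict
            elapsed = timedelta(seconds=time.perf_counter() - start)
            if isinstance(payload, dict):
                payload["_time"] = f"[elapsed: {elapsed}]"
            return jsonify(payload)
        return wrapper

    @app.route("/items")
    @timed
    def items():
        return {"items": [1, 2, 3]}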
/r/flask
https://redd.it/avsois
/r/Python Job Board
Top Level comments must be **Job Opportunities.**
Please include **Location** or any other **Requirements** in your comment. If you require people to work on site in San Francisco, *you must note that in your post.* If you require an Engineering degree, *you must note that in your post*.
Please include as much information as possible.
If you are looking for jobs, send a PM to the poster.
/r/Python
https://redd.it/avrws2
Pywebcopy: A pure-Python website and web page cloning library.
https://github.com/rajatomar788/pywebcopy
/r/pystats
https://redd.it/avqh7g
GitHub
rajatomar788/pywebcopy
Locally saves webpages to your hard disk with images, CSS, JS & links as is.
Estimate count with conditions when using PostgreSQL
Hi everyone, I've been banging my head against this issue for a while now and would greatly appreciate any advice.
I have a model `Rating` with a foreign key relationship to a model `Media`. In the get_queryset method of the Media ViewSet (REST framework), I need to annotate the count of the rating_set ('ratings') in order to get the total number of ratings for each Media:
    def get_queryset(self):
        queryset = Media.objects.all()
        queryset = queryset.annotate(ratings_count=Count('ratings'))
This works perfectly fine with a small number of rows, but when I simulate production data with a relatively large number of rows (1,000 Media each with 1,000 Ratings, therefore 1,000,000 Ratings total) then the response time for retrieving one page of 15 Media is prohibitively long (10+ seconds). If I remove the annotation then the response time is a few milliseconds.
If I replace Count() with a RawSQL() statement then I can improve the response time slightly, but it is still far too slow.
    def get_queryset(self):
        queryset = Media.objects.all()
        queryset =
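One commonly suggested workaround, sketched below under the assumption that Rating's foreign key field to Media is named `media` (and that the serializer already exists): compute the per-row count in a correlated subquery instead of a GROUP BY over the whole join, which often performs much better once the outer queryset is paginated.

    from django.db.models import Count, IntegerField, OuterRef, Subquery
    from rest_framework import viewsets

    from .models import Media, Rating
    from .serializers import MediaSerializer  # assumed to exist already

    class MediaViewSet(viewsets.ModelViewSet):
        serializer_class = MediaSerializer

        def get_queryset(self):
            # One correlated subquery per Media row instead of a GROUP BY over
            # the whole Rating join; assumes Rating's FK field is named "media".
            ratings = (
                Rating.objects
                .filter(media=OuterRef("pk"))
                .order_by()
                .values("media")
                .annotate(c=Count("pk"))
                .values("c")
            )
            return Media.objects.annotate(
                ratings_count=Subquery(ratings, output_field=IntegerField())
            )

Whether this beats the plain Count() depends on having an index on the Rating foreign key column and on the page size; if it is still too slow, the usual next step is a denormalized ratings_count field maintained on save/delete.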
/r/django
https://redd.it/aw0vdd
Help reducing Celery Beat memory usage
I have a Django app with a few tasks, and we use celery beat to run a few long-running tasks. The Django app is deployed with docker-compose, and we use a separate container for the Celery instance.
But inside the Celery container I see three running processes, each the size of the main app, so right away our memory usage is up 300%:
    05:51 0:01 /usr/local/bin/python /usr/local/bin/celery -A myApp worker --beat --scheduler django --loglevel=info
    05:51 0:01 /usr/local/bin/python /usr/local/bin/celery -A myApp worker --beat --scheduler django --loglevel=info
    05:51 0:01 /usr/local/bin/python /usr/local/bin/celery -A myApp worker --beat --scheduler django --loglevel=info
In the compose file I have tried to set --concurrency 1, but I'm still seeing three processes. I assume it does not make sense to use the entire app for the Celery process, and that I should make a smaller dedicated app for that?
I know this is not much memory, but I have many instances of the app (50-100), so it eats up many gigs very quickly. Does anyone know the best way to run minimal Celery tasks with Django?
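A hedged note on the process count: `worker --beat` typically runs the worker parent, at least one pool child, and the embedded beat scheduler, which is likely where the three processes come from even with --concurrency 1. One common way to slim this down is to split beat and the worker into separate services and use the solo pool so the worker spawns no prefork children; a sketch of the two commands (the split and pool choice are suggestions, not from the OP's setup):

    # scheduler only (executes no tasks itself)
    celery -A myApp beat --scheduler django --loglevel=info

    # worker only; the solo pool runs tasks in the worker's own process,
    # so no extra prefork children are spawned
    celery -A myApp worker --pool=solo --loglevel=info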
/r/django
https://redd.it/aw1f7f
Download zip from admin the async way, best practices?
Hi all,
When in an admin model list view, I define a custom action 'download selected objects as a zip': I get the data from the DB, create CSVs, zip them, and return the zip as an HTTP response.
As this blocks the request-response cycle when there are a lot of objects to be CSV'd and zipped, I'm looking for a way to offload the ZIP creation to an async worker and send the ZIP back to the user whenever it's ready.
The zip should not be preserved, just kept in memory to send back over the wire.
I would be grateful for any opinions on how to do this most efficiently. Using Django Channels and WebSockets on the admin page seems doable, but is it best practice to send a zip of possibly a few tens (hundreds?) of MBs over a WebSocket?
The fallback would be to create the zip in a Celery task, store it in media, make it available for download somewhere, notify the user that it is ready, and delete it automatically after download, but that feels like a lot of overhead for what is needed.
It all reminds me a bit of the 'please wait till your download
/r/django
https://redd.it/aw3uub
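A hedged sketch of the Celery fallback described above: the task builds the zip, saves it to default file storage, and a small polling endpoint hands back a download URL once the task has finished. The model, file names, and URL wiring are illustrative, and a Celery result backend must be configured for AsyncResult to work:

    import csv
    import io
    import zipfile

    from celery import shared_task
    from celery.result import AsyncResult
    from django.core.files.base import ContentFile
    from django.core.files.storage import default_storage
    from django.http import JsonResponse

    from .models import MyModel  # hypothetical

    @shared_task
    def build_zip(object_ids):
        buffer = io.BytesIO()
        with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
            for obj in MyModel.objects.filter(pk__in=object_ids):
                out = io.StringIO()
                csv.writer(out).writerow([obj.pk, str(obj)])
                zf.writestr(f"{obj.pk}.csv", out.getvalue())
        name = default_storage.save("exports/export.zip", ContentFile(buffer.getvalue()))
        return default_storage.url(name)

    def export_status(request, task_id):
        # Polled by the admin page; requires a Celery result backend.
        result = AsyncResult(task_id)
        if result.successful():
            return JsonResponse({"state": "done", "url": result.result})
        return JsonResponse({"state": result.state})

The admin action itself would then just call `build_zip.delay([obj.pk for obj in queryset])` and show a "your download is being prepared" message.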
Emailing with django
I am hoping to do two things with email: 1. communicate with existing clients via confirmation emails from form submissions, along with reminder emails for upcoming events and other things like that; 2. send emails to prospects who are not my clients, i.e. who do not have accounts. What is the best way to avoid getting blacklisted as a spammer? I'm just getting started with Django and am building my first app now, but email will be very important for it, so I want to start thinking about this now. I'm not really sure where to begin, but I have found a few tutorials on email. I assume I will need to learn to use something like Celery as well. I currently have an email provider that uses cloud exchange, and I would like emails to come from my domain.
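A hedged sketch of the Django side: point the EMAIL_* settings at your provider's SMTP server and send through django.core.mail (the host, credentials, and addresses below are placeholders):

    # settings.py -- values are placeholders for what your provider gives you
    EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
    EMAIL_HOST = "smtp.example.com"
    EMAIL_PORT = 587
    EMAIL_USE_TLS = True
    EMAIL_HOST_USER = "noreply@yourdomain.com"
    EMAIL_HOST_PASSWORD = "..."
    DEFAULT_FROM_EMAIL = "noreply@yourdomain.com"

    # in a view, signal handler, or Celery task
    from django.core.mail import send_mail

    send_mail(
        subject="Booking confirmed",
        message="Thanks, your booking is confirmed.",
        from_email=None,                    # falls back to DEFAULT_FROM_EMAIL
        recipient_list=["client@example.com"],
    )

Deliverability is mostly a domain question rather than a Django one: sending from your own domain with SPF/DKIM configured (or through a reputable transactional email service) does more to keep you off blacklists than anything in the code.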
/r/djangolearning
https://redd.it/aw78mu
I just published a 17-part video series on learning regex in Python
https://www.youtube.com/watch?v=xp1vX15inBg&list=PLyb_C2HpOQSDxe5Y9viJ0JDqGUCetboxB&index=1
/r/Python
https://redd.it/aw18cc
YouTube
RegEx in Python (Part-1) | Introduction
Welcome to the first video of my series "RegEx in Python". This series focuses on learning the basics of regular expressions by implementing them in the Python programming language.
A regular expression is a sequence of characters that defines a search pattern.
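As a minimal illustration of that definition (not code from the video series), here is a short example using Python's built-in re module:

    import re

    # Find an ISO-style date anywhere in a string.
    match = re.search(r"\b\d{4}-\d{2}-\d{2}\b", "Released on 2019-02-27 in Montreal")
    if match:
        print(match.group())  # -> 2019-02-27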