Data Passed Across Forms
Hello,
I am currently working with Django, specifically the form wizard, to create a multi-step form that can pass data between its steps. I have three forms (form 0, form 1, form 2). I want to take data from form 0 and pass it to form 1, and then pass certain data elements from form 1 to form 2. I have been playing around with the get_form_initial() method and have had no success. What kinds of implementations have you used before? I am open to checking out new tech and ideas as well.
Thanks in advance
/r/django
https://redd.it/avtmul
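One common pattern is to override get_form_initial() on the wizard and seed each step from the cleaned data of an earlier step. A minimal sketch, assuming django-formtools' SessionWizardView; get_form_initial and get_cleaned_data_for_step are real formtools methods, while Form0/Form1/Form2 and the `email`/`plan` field names are placeholders:

```python
# Sketch only: Form0, Form1, Form2 and their fields are placeholders.
from formtools.wizard.views import SessionWizardView

class MyWizard(SessionWizardView):
    form_list = [Form0, Form1, Form2]

    def get_form_initial(self, step):
        initial = super().get_form_initial(step)
        if step == "1":
            # Seed step 1 from whatever step 0 validated.
            data = self.get_cleaned_data_for_step("0") or {}
            initial["email"] = data.get("email")
        elif step == "2":
            data = self.get_cleaned_data_for_step("1") or {}
            initial["plan"] = data.get("plan")
        return initial
```

Note that get_cleaned_data_for_step returns None until that step has validated, hence the `or {}`.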
Flask Background Task in Celery unable to use URL_FOR() function? Help?
Hey Guys,
Got a Flask app with a Celery background task and an API that my front end calls to get the status of the job. When the task finishes I want to pass back the URL for the output file. However, I have an issue: the Celery background job can't run `url_for` to give me a link back to the downloads folder in static.
I get the following error:
RuntimeError: Application was not able to create a URL adapter for request independent URL generation. You might be able to fix this by setting the SERVER_NAME config variable.
I know that `url_for` works in foreground tasks, as I don't have any issues with it in other sections of my app, but the Celery task seems to have issues. I don't understand how to fix it, and I am not quite sure why the application cannot see the SERVER_NAME configuration.
/r/flask
https://redd.it/avph7p
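The usual fix is exactly what the error suggests: set SERVER_NAME and push an application context inside the task, since a Celery worker has no request context. A minimal sketch (the route and host name are assumptions):

```python
from flask import Flask, url_for

app = Flask(__name__)
app.config["SERVER_NAME"] = "example.com"  # assumption: your real host

@app.route("/downloads/<name>")
def download(name):
    return name

def build_download_url(name):
    """Celery tasks run outside a request; an app context plus
    SERVER_NAME is enough for url_for to build an absolute URL."""
    with app.app_context():
        return url_for("download", name=name, _external=True)

print(build_download_url("report.zip"))
```

One caveat: SERVER_NAME also affects request routing (host matching), so it must match the host you actually serve the app on.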
[R] Accelerating Self-Play Learning in Go
I just released a paper about improving AlphaZero-like self-play learning in Go. Although we have not yet been able to test full-scale runs, it turns out that, by combining a variety of new and old techniques, it's possible to greatly improve the efficiency of learning for reaching levels at least as strong as strong human professionals.
While many of the techniques involve game-specific properties or tuning to the domain more than AlphaZero did, some of them and most of the ideas and general principles presented could generalize to other games besides Go or possibly more broadly to other reinforcement-learning environments with sequential actions.
Additionally, as a result of the large speedup of all of these techniques combined, we found the hardware and cost necessary to do meaningful research is much reduced. Although our runs were not nearly as long, *we only needed dozens of GPUs rather than thousands* - we hope this is a first step to putting the AlphaZero process in domains with state spaces as large as Go within reach of smaller research groups!
**Abstract:** By introducing several new Go-specific and non-Go-specific techniques along with other tuning, we accelerate self-play learning in Go. Like AlphaZero and Leela Zero, a popular open-source
/r/MachineLearning
https://redd.it/avv5dj
adding a "time elapsed" wrapper
I read a little about endpoint decorators and am wondering if this is a good idea:
If I add something like a `@timed` decorator, it will wrap my request in a timer and append a `"_time": "[elapsed: 0:00:00.234] xxxx"` field to the resulting JSON response.
Questions:
1. Is this a good approach, or is there a better (more Pythonic?) way?
2. Is there a good example to follow?
3. Perhaps there is already a similar library?
/r/flask
https://redd.it/avsois
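A decorator like this is a reasonable approach for per-endpoint timing. A minimal sketch, assuming the view returns a JSON response built with jsonify (the route and the `_time` format are placeholders):

```python
import json
import time
from functools import wraps

from flask import Flask, jsonify

app = Flask(__name__)

def timed(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        response = f(*args, **kwargs)
        elapsed = time.perf_counter() - start
        # Only touch JSON responses; leave everything else unchanged.
        if response.is_json:
            payload = response.get_json()
            payload["_time"] = f"elapsed: {elapsed:.4f}s"
            response.set_data(json.dumps(payload))
        return response
    return wrapper

@app.route("/ping")
@timed
def ping():
    return jsonify(status="ok")
```

For app-wide timing without decorating every view, before_request/after_request hooks do the same job, and Werkzeug's ProfilerMiddleware is an existing option for deeper profiling.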
/r/Python Job Board
Top Level comments must be **Job Opportunities.**
Please include **Location** or any other **Requirements** in your comment. If you require people to work on site in San Francisco, *you must note that in your post.* If you require an Engineering degree, *you must note that in your post*.
Please include as much information as possible.
If you are looking for jobs, send a PM to the poster.
/r/Python
https://redd.it/avrws2
Pywebcopy: A pure python website and webpages cloning library.
https://github.com/rajatomar788/pywebcopy
/r/pystats
https://redd.it/avqh7g
GitHub - rajatomar788/pywebcopy: Locally saves webpages to your hard disk with images, css, js & links as is.
Estimate count with conditions when using PostgreSQL
Hi everyone, I've been banging my head against this issue for a while now and would greatly appreciate any advice.
I have a model `Rating` with a foreign key relationship to a model `Media`. In the get_queryset method of the Media ViewSet (REST framework) I need to annotate the count of the rating_set ('ratings') in order to get the total number of ratings for each Media:

    def get_queryset(self):
        queryset = Media.objects.all()
        queryset = queryset.annotate(ratings_count=Count('ratings'))

This works perfectly fine with a small number of rows, but when I simulate production data with a relatively large number of rows (1,000 Media, each with 1,000 Ratings, so 1,000,000 Ratings total), the response time for retrieving one page of 15 Media is prohibitively long (10+ seconds). If I remove the annotation, the response time is a few milliseconds.
If I replace Count() with a RawSQL() statement I can improve the response time slightly, but it is still far too slow.

    def get_queryset(self):
        queryset = Media.objects.all()
        queryset =
/r/django
https://redd.it/aw0vdd
Help reducing Celery Beat memory usage
I have a Django app with a few long-running tasks that we run with Celery Beat. The Django app is deployed with Docker Compose, and we use a separate container for the Celery instance.
But inside the Celery container I see three running processes, each the size of the main app, so right away our memory usage is up 300%:

    05:51 0:01 /usr/local/bin/python /usr/local/bin/celery -A myApp worker --beat --scheduler django --loglevel=info
    05:51 0:01 /usr/local/bin/python /usr/local/bin/celery -A myApp worker --beat --scheduler django --loglevel=info
    05:51 0:01 /usr/local/bin/python /usr/local/bin/celery -A myApp worker --beat --scheduler django --loglevel=info

In the Compose file I have tried to set --concurrency 1, but I am still seeing three processes. I assume it does not make sense to use the entire app for the Celery process, and that I should make a smaller dedicated app for that?
I know this is not much memory, but I have many instances of the app (50-100), so it eats up many gigs very quickly. Does anyone know the best way to run minimal Celery tasks in Django?
/r/django
https://redd.it/aw1f7f
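With `worker --beat`, one container holds at least three processes: the worker parent, the embedded beat scheduler, and a prefork child even at concurrency 1. A Compose sketch that may trim this is to run beat as its own service and use the solo pool so the worker stays a single process (service and image names are assumptions):

```yaml
services:
  worker:
    image: myapp
    # solo pool: tasks run in the worker process itself, no prefork children
    command: celery -A myApp worker --loglevel=info --pool=solo
  beat:
    image: myapp
    command: celery -A myApp beat --scheduler django --loglevel=info
```

The per-process footprint mostly comes from importing the full Django app at worker startup, so a trimmed settings module for the worker is the other lever.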
Download zip from admin the async way, best practices?
Hi all,
In an admin model list view, I define a custom action 'download selected objects as a zip': I get the data from the DB, create CSVs, zip them, and return the zip as an HTTP response.
As this blocks the request-response cycle when there are a lot of objects to be CSV'd and zipped, I'm looking for a way to offload the zip creation to an async worker and send the zip back to the user whenever it is ready.
The zip should not be preserved, just kept in memory to send back over the wire.
I would be grateful for any opinions on how to do this most efficiently. Using Django Channels and WebSockets on the admin page seems doable, but is it best practice to send a zip of possibly a few tens (hundreds?) of MBs over a WebSocket?
The fallback would be to create the zip in a Celery task, store it in media, make it available for download somewhere, notify the user when it is ready, and delete it automatically after download, but that feels like a lot of overhead for what is needed.
It all reminds me a bit of the 'please wait till your download
/r/django
https://redd.it/aw3uub
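Whichever transport you pick, the in-memory part is straightforward with the standard library. A small sketch of building the zip of CSVs without touching disk (the data shape is an assumption):

```python
import csv
import io
import zipfile

def build_zip(rows_by_name):
    """Zip one CSV per entry of {filename: rows}; returns the zip as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, rows in rows_by_name.items():
            text = io.StringIO()
            csv.writer(text).writerows(rows)
            zf.writestr(f"{name}.csv", text.getvalue())
    return buf.getvalue()

data = build_zip({"orders": [["id", "total"], [1, "9.99"]]})
```

A Celery task could stash these bytes in the cache under a token, and a tiny polling view could serve and then delete them; that avoids both WebSockets and files lingering in media.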
Emailing with django
I am hoping to do two things with email: 1. communicate with existing clients via confirmation emails from form submissions, along with reminder emails for upcoming events and the like; 2. send emails to prospects who are not my clients, i.e. do not have accounts. What is the best way to avoid getting blacklisted as a spammer? I'm just getting started with Django and building my first app now, but email will be very important for it, so I want to start thinking about this early. I'm not really sure where to begin, but I have found a few tutorials on email. I assume I will need to learn something like Celery as well. I currently have an email provider that uses a cloud exchange, and I would like emails to come from my domain.
/r/djangolearning
https://redd.it/aw78mu
I just published a 17-part video series on learning regex in Python
https://www.youtube.com/watch?v=xp1vX15inBg&list=PLyb_C2HpOQSDxe5Y9viJ0JDqGUCetboxB&index=1
/r/Python
https://redd.it/aw18cc
RegEx in Python (Part-1) | Introduction
Welcome to the first video of my series "RegEx in Python". This series focuses on learning the basics of regular expressions by implementing them in Python.
[Noob] Are there security issues from loading data from csv files?
Hi, so I built my first app using Flask and WTForms (Flask-WTF).
I probably have a dumb question.
I noticed SQL is more commonly used, and I'm familiar with it and SQLAlchemy.
It's a quiz, and I wonder if there are any security issues with using CSV files to load all the data for each question.
The reason I like CSV is that I plan to edit it frequently, I have lots of different files, and I find it more comfortable.
These are small quizzes, and speed has thus far not been an issue at all.
Thank you for any insight you can share; it is much appreciated!
/r/flask
https://redd.it/aw6hle
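Reading a CSV you author yourself is not dangerous by itself; the usual risks are rendering its text without escaping (Jinja autoescapes by default, so don't disable it) or trusting the file path or field layout. A defensive loader sketch (the column names are assumptions):

```python
import csv
import io

REQUIRED = ("prompt", "answer")

def load_questions(fp):
    """Parse quiz rows from a CSV file object, rejecting malformed rows."""
    questions = []
    for lineno, row in enumerate(csv.DictReader(fp), start=2):
        if any(not (row.get(k) or "").strip() for k in REQUIRED):
            raise ValueError(f"line {lineno}: missing one of {REQUIRED}")
        questions.append({k: row[k].strip() for k in REQUIRED})
    return questions

sample = io.StringIO("prompt,answer\nCapital of France?,Paris\n")
qs = load_questions(sample)
print(qs[0]["answer"])  # Paris
```

Validating up front like this also means a typo in one file fails loudly at load time instead of corrupting a quiz.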
Charts and data visualization
Hello! I am working on my first real Django project, so bear with me, I'm new to this! I am pretty new to Python too, but I have some basic knowledge. HTML and CSS are solid for me!
I am trying to create a web app where users can create new "tables" and input data into them. Then each table would have its own line chart showing the data they entered. What would be the way to go with this?
The only chart I've been able to get to show on my page is Google Charts with some dummy data, and it felt a bit weak. I have also tried FusionCharts, but I was not able to get it to work, and I cannot afford the commercial license.
I don't need a library as powerful as FusionCharts, but I need very good and simple documentation. Any suggestions?
/r/django
https://redd.it/aw64y6
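Chart.js is a common free choice for this: the Django view only has to serialize each table's rows to JSON, and the client-side library draws the line chart. A sketch of the server-side shaping (the payload mirrors Chart.js's documented config structure; the names are placeholders):

```python
import json

def line_chart_payload(label, xs, ys):
    """Serialize one table's rows into a Chart.js-style line-chart config."""
    return json.dumps({
        "type": "line",
        "data": {
            "labels": xs,
            "datasets": [{"label": label, "data": ys}],
        },
    })

payload = line_chart_payload("My table", ["Mon", "Tue"], [3, 7])
```

In a template, Django's `json_script` filter (or a small JSON endpoint) hands this to a few lines of Chart.js on the page.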
[AF] WTForms/Flask: Dynamic min_entries
I'm looking to use the min_entries WTForms parameter dynamically i.e. without hardcoding numbers.
It would look something like this in the form.py:
    class TestSpecForm(FlaskForm):
        student_number = IntegerField('Number of Students')

    class StudentForm(FlaskForm):
        answer = StringField('')

    class TestInputForm(FlaskForm):
        students = FieldList(FormField(StudentForm))  # I'd like to insert the dynamic min_entries here
        submit = SubmitField('Submit')
and something like this in the views.py:
    def input(key_id):
        key = Testspecs.query.get_or_404(key_id)
        student_number = key.student_number
/r/flask
https://redd.it/aw9ys9
Useful Python tricks
So, I saw a while ago somebody posted a thread asking for ways that people have used Python to automate their lives or just make them easier in general. Most of the replies, as can be expected, involved making tasks at work easier, which doesn't really apply to someone like me who doesn't have an office job or anything similar.
So, what are some cool uses for Python for younger people, or people without office jobs in general? It would be fun to find a project to work on or something I can use Python for in my daily life, but as far as I'm aware I can't really think of anything that I need automated.
Thanks!
/r/Python
https://redd.it/awb9v5
pydis - A redis clone in Python 3 to disprove some falsehoods about performance
https://github.com/boramalper/pydis
/r/Python
https://redd.it/awav6k
Pipenv and Poetry are both in the weeds - what can we do to help?
I discovered that neither project has merged a PR in roughly a month. Meanwhile, PRs and issues are flowing in much faster than they're getting resolved. Intrigued by this, I checked dependency/package managers in other languages and didn't see the same problems (Python/Conda, Rust/Cargo, Ruby/Bundler, Node/npm).
I think this is a big issue. It doesn't just mean that bugs aren't getting fixed, but that they're not accepting help from the community.
Is it a matter of the project owners needing to delegate more? I.e., is one person, or a tiny group, responsible for all decisions? Is there something that we, the stakeholders who aren't the maintainers, can do to help?
-----
Source: I used my open-source repocheck app to collect the data. E.g.:
* [Analysis: Pipenv](https://repocheck.com/#https%3A%2F%2Fgithub.com%2Fpypa%2Fpipenv) score: **1.6** out of 10
* [Analysis: Poetry](https://repocheck.com/#https%3A%2F%2Fgithub.com%2Fsdispater%2Fpoetry) score: **2.8** out of 10
* [Analysis: Conda](https://repocheck.com/#https%3A%2F%2Fgithub.com%2Fconda%2Fconda) score: 6.4 out of 10
* [Analysis: Ruby Bundler](https://repocheck.com/#https%3A%2F%2Fgithub.com%2Fbundler%2Fbundler) score: 7.0 out of 10
/r/Python
https://redd.it/awbr99
What are some well designed PyQt apps?
I'm looking for some design inspiration / best practices for developing a PyQt app.
/r/Python
https://redd.it/awavhh
Your migrations are bad and you should feel bad
https://djrobstep.com/talks/your-migrations-are-bad-and-you-should-feel-bad
/r/django
https://redd.it/awcisv
Django Object Persistence
Hey guys, so here's the issue I'm facing: I have a function that interpolates datasets via a scipy object. Generating these datasets takes ~7 seconds each time.
I don't have a database for the object and can't find a reliable way to just call the function one time and use its dicts repeatedly.
How can I store this object/keep persistence so that I can call it for a view function? Trying to lower the amount of time that it takes to run functions for users.
Thanks
/r/django
https://redd.it/awas61
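If the interpolator only depends on the datasets, the simplest persistence is process-level memoization: build it once per worker process and reuse it across requests. A sketch with functools.lru_cache (the build step here is a stand-in for the ~7-second scipy work):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1)
def get_interpolator():
    """Expensive build runs once per process; later calls reuse the object."""
    time.sleep(0.01)  # stand-in for the ~7 s dataset generation
    return {"grid": [0.0, 0.5, 1.0]}

a = get_interpolator()
b = get_interpolator()
print(a is b)  # True
```

Each gunicorn/uwsgi worker builds its own copy; to share across processes, pickling the object into Django's cache framework (e.g. Redis) is the usual next step.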