Another minimalistic python figure in a recently accepted publication
/r/Python
https://redd.it/ccvcqb
Knowing some MATLAB and some basic Java, I want to master Python. This is my first project, and I'm pretty pleased with the result!
/r/Python
https://redd.it/cctgti
Give me your feedback
I created a simple NASA image search engine using Django. Tell me what you think and give me your constructive criticism.
[https://nasaimagesearch109.herokuapp.com/?p=sun](https://nasaimagesearch109.herokuapp.com/?p=sun)
/r/django
https://redd.it/ccyiyo
How to use Python to measure average word/phrase occurrence per amount of time in a CSV?
Note: I'm a complete beginner to Python.
I have a CSV file of tweets with the date of each tweet.
I'd like to generate a second spreadsheet from that spreadsheet that shows, not a list of the most frequently used words, but a list of words that are prioritized by highest average occurrence per, say, 10 days.
But I don't want to select a subset of the data and say "Give me the average occurrence of these words in these specific 10 days" - I want it to spit out an average of *all* word/phrase occurrences per 10-day intervals.
E.g. "The phrase 'climate change' has been mentioned 4 times in the past 10 days, but over all the years of data it has averaged 1 mention per 10 days."
Then I'd like it to prioritize by the highest average.
Is that possible to achieve? If so, what modules or fields or tools should I explore further? Any specific suggestions of what to do also welcome.
I'm essentially trying to prioritize by the 'steepest slopes'.
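For what it's worth, here is a stdlib-only sketch of the idea (the date format, the `(date, text)` row shape, and the naive whitespace tokenization are all assumptions about your CSV; pandas would make the real thing easier):

```python
from collections import Counter
from datetime import datetime

def average_per_window(rows, window_days=10):
    """rows: (date_string, tweet_text) pairs, e.g. from csv.reader.
    Returns {word: average occurrences per window_days-day window},
    averaged over the whole span of the data."""
    parsed = [(datetime.strptime(d, "%Y-%m-%d"), t) for d, t in rows]
    start = min(d for d, _ in parsed)
    end = max(d for d, _ in parsed)
    # number of window_days-day windows covering the full span
    n_windows = (end - start).days // window_days + 1
    totals = Counter()
    for _, text in parsed:
        totals.update(text.lower().split())  # naive tokenization
    return {w: c / n_windows for w, c in totals.items()}

# Toy data spanning two 10-day windows:
rows = [
    ("2019-01-01", "climate change is real"),
    ("2019-01-05", "climate talks today"),
    ("2019-01-15", "climate march"),
]
avg = average_per_window(rows)
# rank words by highest average occurrence per window
ranked = sorted(avg.items(), key=lambda kv: kv[1], reverse=True)
```

Comparing a recent window's raw count against this long-run average then gives the "steepest slopes" ranking; `pandas` with `pd.Grouper(freq="10D")` is the natural next step once the basics work.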
/r/pystats
https://redd.it/ccw2mt
Best way to implement user settings Django/React
Hi. I'm wondering what the best approach to user settings would be. Say I want to give users the option to set the timezone in which times are displayed, so every user can pick their own. I'm thinking of a one-to-one relation between the user model and a settings model, but what next? How do I pass it from the backend? All I could think of was custom middleware that fetches the serialized settings and sends them to the frontend with every request, but I'm not sure it's a good idea to query the database for settings every time the user reloads or clicks a link on the site. Do you have any better ideas, or have you seen something like this in a project? Every input would be valuable.
/r/django
https://redd.it/ccsq1h
Learn Python 3: From Beginner to Professional 2019
https://www.youtube.com/watch?v=jE2HLArCS38
/r/flask
https://redd.it/cd1kfe
Model creation. Multiple IDs all have same description and text. There is no ArrayField. How should I do this?
App functionality (simplified): Person enters order ID, webpage spits out a list of products ordered with product description, and some text for each product. This text is a checklist for factory workers who assemble the product.
I'm getting the product IDs for the order from the WooCommerce API; they are stored in an array.
I need to create the Django Model. I think I can shove all this in 1 Model without needing foreign keys?
WooCommerce variations (product in different color/size) have a different ID than the parent product, but they should all have the same product name and instruction text as the parent.
e.g. product IDs between 100 and 200 are all "Shoe X" and should all have the same text (for example: put red laces in, put blue insoles in, glue on a black sole, get the right type of box to put them in).
What's the best way to put this in a Model so that I can loop over the product_id array and get the matching description and text, but don't have to enter all 800+ IDs in a separate table row?
PK | product_shop_ids | product_description | the_text
---|---|---|---
0 | 0 up to 125 | Shoe X | checklist for Shoe X
1 | 126-140 |
/r/django
https://redd.it/cd3rqp
How would I minify all my pages?
Hi, I'm getting fairly close to publishing.
Most of my pages can be made 20% or so smaller by minifying them.
Would it be worth it to minify my content, and if so, how?
I have looked into doing it with nginx (as I will use it as a proxy with gunicorn), but I'm wondering if there is a more Django way of doing it?
Thanks.
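A minimal Django-flavored sketch, under the (unsafe in general) assumption that whitespace between tags can be collapsed; `minify_html` and `minify_middleware` are hypothetical names, not a real library's API:

```python
import re

def minify_html(html: str) -> str:
    """Collapse whitespace between tags. Crude: a real minifier must
    leave <pre>, <textarea>, and inline <script> blocks untouched."""
    return re.sub(r">\s+<", "><", html).strip()

def minify_middleware(get_response):
    """Django middleware factory: minify HTML responses on the way out."""
    def middleware(request):
        response = get_response(request)
        if response.get("Content-Type", "").startswith("text/html"):
            response.content = minify_html(response.content.decode()).encode()
        return response
    return middleware
```

Note that gzip at the nginx layer usually shrinks HTML far more than minification alone, so in many cases minifying on top of gzip buys little.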
/r/django
https://redd.it/cd3gaa
Convert .py to .exe | PyInstaller - Python 3.6
https://www.youtube.com/watch?v=XHI1J7Wt4EU&t=4s
/r/Python
https://redd.it/cd0uc1
snoop: a debugging library designed for maximum convenience
https://github.com/alexmojaki/snoop
/r/Python
https://redd.it/cd1moz
[R] BERT and XLNET for Malay and Indonesian languages.
I released BERT and XLNET for Malay, trained on around 1.2GB of data (public news, Twitter, Instagram, Wikipedia, and parliament text), and ran some comparisons between them. They work really well in both social media and formal contexts, and I believe they are also good for Bahasa Indonesia, since on Wikipedia we share a lot of similar context and assimilation with Indonesian text. The official multilingual BERT model (around 714MB) is great, but too heavy for low-cost deployment.
BERT-Bahasa (read more at https://github.com/huseinzol05/Malaya/tree/master/bert) comes in two models:
1. Vocab size 40k, case-sensitive, trained on the 1.21GB dataset, BASE size (467MB).
2. Vocab size 40k, case-sensitive, trained on the 1.21GB dataset, SMALL size (184MB).
XLNET-Bahasa (read more at https://github.com/huseinzol05/Malaya/tree/master/xlnet) has one model:
1. Vocab size 32k, case-sensitive, trained on the 1.21GB dataset, BASE size (878MB).
All comparison studies are in both README pages; comparisons for abstractive summarization and neural machine translation are on the way, and XLNET-Bahasa SMALL is in training.
/r/MachineLearning
https://redd.it/cd0osl
I used Flask and Flask-SocketIO for my real-time dashboard web server; check out the code at https://gitlab.com/t3chflicks/smart-buoy
/r/flask
https://redd.it/cd4vvs
Using Python, I made an AI that runs a music video channel on YouTube.
[Channel Link](https://m.youtube.com/channel/UCQ-qpDRT1oyu7Yo-D_fwI4g)
Named "[Nightcore Mechanica](https://m.youtube.com/channel/UCQ-qpDRT1oyu7Yo-D_fwI4g)", this AI creates nightcore music videos from [the top 50 music videos on YouTube trending](https://charts.youtube.com/charts/TopVideos/us).
[(*If you don't know what nightcore is*)](https://en.m.wikipedia.org/wiki/Nightcore)
I'm not sure if YouTube allows robots to make content, but I hope this channel grows to be monetizable [after hitting 1K subscribers](https://www.google.com/amp/s/www.theverge.com/platform/amp/2018/1/16/16899068/youtube-new-monetization-rules-announced-4000-hours).
I am also not releasing the program that makes these videos yet, but I will briefly describe the process used to create the videos.
This program, written in Python, first gets the lyrics and audio for the nightcore video using [youtube-dl](https://github.com/ytdl-org/youtube-dl). Then, using a seq2seq GAN trained on the most-watched nightcore videos, it derives a "template image" from the lyrics, with the desired color and "feeling" of the image. Using cv2 and the website anime-pictures.net, it compares anime images against the template image to find the best match.
After finding the image, it uses [FFmpeg](https://ffmpeg.org/) to modify the audio to make it sound "nightcore-y" and compiles a video with visual effects like [showwaves](https://ffmpeg.org/ffmpeg-filters.html#showwaves).
With the help of ["youtube-upload"](https://stackoverflow.com/questions/2733660/upload-a-video-to-youtube-with-python), I was able to upload the videos to YouTube from the command line.
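The audio-modification step is classically done by resampling the track upward, which raises pitch and tempo together. A sketch that only builds the command line (the helper name is mine, not from the original program; assumes `ffmpeg` is on PATH when actually run):

```python
def nightcore_args(src, dst, factor=1.25, rate=44100):
    """Build an ffmpeg command that raises pitch and tempo together
    by resampling -- the classic 'nightcore' sound."""
    audio_filter = f"asetrate={rate}*{factor},aresample={rate}"
    return ["ffmpeg", "-y", "-i", src, "-filter:a", audio_filter, dst]

cmd = nightcore_args("original.mp3", "nightcore.mp3")
# run with: subprocess.run(cmd, check=True)
```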
Recently, I added captioning capabilities. Using [Aeneas](https://github.com/readbeyond/aeneas), the program aligns the lyrics to the audio in the video, and
/r/Python
https://redd.it/cd7fra
Hello, guys. What have you been doing with Django/DRF/GraphQL in the past few months?
What kinds of jobs are you doing? What problems are you solving?
/r/django
https://redd.it/cdbw54
[D] Pointless PhD for Machine Learning career advancement?
Dear [r/MachineLearning](https://www.reddit.com/r/MachineLearning/), I am a STEM graduate who became interested in doing machine learning research after my Master's. I was promised a PhD, the opportunity to do cutting-edge research with real-world applications, and "close" cooperation with industrial partners. But after spending a few months reading and discussing with supervisors, much of the work I am expected to do centers on metaheuristic search and evolutionary computation. Although I find it fascinating, it has some application to machine learning/DNNs, and companies like Uber and Cognizant are adopting it, it feels too niche, and mainstream interest doesn't seem to be catching up with it, if there is any to begin with.
I thought it might be helpful to ask you guys, to get a neutral outside-of-the-box opinion.
Particularly since, over the last month, I have been living and working in a scientific bubble, and my prior background is not AI/ML or computer science to begin with. So
/r/MachineLearning
https://redd.it/cd9qga
[R] Virtual Adversarial Lipschitz Regularization
https://arxiv.org/abs/1907.05681
/r/MachineLearning
https://redd.it/cdeh9u
9 Data Visualization Techniques You Should Learn in Python
https://www.marsja.se/python-data-visualization-techniques-you-should-learn-seaborn/
/r/pystats
https://redd.it/cdg4ze
Django's Test Case Classes and a Three Times Speed-Up
https://adamj.eu/tech/2019/07/15/djangos-test-case-classes-and-a-three-times-speed-up/
/r/django
https://redd.it/cdes6i