snoop: a debugging library designed for maximum convenience
https://github.com/alexmojaki/snoop
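For context, a minimal usage sketch based on the project's README (details may vary between versions): decorating a function with @snoop traces every executed line and variable change.

    import snoop

    @snoop
    def factorial(n):
        result = 1
        for i in range(2, n + 1):
            result *= i  # each iteration and value change is logged
        return result

    factorial(5)  # the trace is printed to stderr as the function runs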
/r/Python
https://redd.it/cd1moz
[R] BERT and XLNET for Malay and Indonesian languages.
I released BERT and XLNET for the Malay language, trained on around 1.2GB of data (public news, Twitter, Instagram, Wikipedia and parliament text), and did some comparisons between them. They work well on both social media and formal text, and I believe they are also useful for Bahasa Indonesia, since, at least in Wikipedia text, Malay shares a lot of similar context and assimilation with Indonesian. Google's multilingual BERT model, at around 714MB, is great but too heavy for some low-cost deployments.
BERT-Bahasa, you can read more here: https://github.com/huseinzol05/Malaya/tree/master/bert
Two models for BERT-Bahasa:
1. Vocab size 40k, case-sensitive, trained on a 1.21GB dataset, BASE size (467MB).
2. Vocab size 40k, case-sensitive, trained on a 1.21GB dataset, SMALL size (184MB).
XLNET-Bahasa, you can read more here: https://github.com/huseinzol05/Malaya/tree/master/xlnet
One model for XLNET-Bahasa:
1. Vocab size 32k, case-sensitive, trained on a 1.21GB dataset, BASE size (878MB).
All comparison studies are in both README pages; comparisons for abstractive summarization and neural machine translation are on the way, and XLNET-Bahasa SMALL is still training.
/r/MachineLearning
https://redd.it/cd0osl
I used Flask and Flask-SocketIO for my real-time dashboard web server; check out the code at https://gitlab.com/t3chflicks/smart-buoy
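A minimal sketch of the Flask + Flask-SocketIO pattern for pushing live readings to a dashboard (the `update` event and `sensor_loop` are illustrative names, not taken from the linked repo):

    import random
    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app)
    started = False

    def sensor_loop():
        # Stand-in for reading real sensors; emits a new value to every client each second.
        while True:
            socketio.emit("update", {"wave_height": random.random()})
            socketio.sleep(1)  # cooperative sleep provided by Flask-SocketIO

    @socketio.on("connect")
    def on_connect():
        # Start the background emitter once, on the first client connection.
        global started
        if not started:
            started = True
            socketio.start_background_task(sensor_loop)

    if __name__ == "__main__":
        socketio.run(app)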
/r/flask
https://redd.it/cd4vvs
Using Python, I made an AI that runs a music video channel on YouTube.
[Channel Link](https://m.youtube.com/channel/UCQ-qpDRT1oyu7Yo-D_fwI4g)
Named "[Nightcore Mechanica](https://m.youtube.com/channel/UCQ-qpDRT1oyu7Yo-D_fwI4g)", this AI creates nightcore music videos from [the top 50 music videos on YouTube trending](https://charts.youtube.com/charts/TopVideos/us).
[(*If you don't know what nightcore is*)](https://en.m.wikipedia.org/wiki/Nightcore)
I'm not sure if YouTube allows robots to make content, but I hope this channel grows to be monetizable [after hitting 1K subscribers](https://www.google.com/amp/s/www.theverge.com/platform/amp/2018/1/16/16899068/youtube-new-monetization-rules-announced-4000-hours).
I am also not releasing the program that makes these videos yet, but I will briefly describe the process used to create the videos.
This program, created in Python, first gets the lyrics and audio for the nightcore video using [youtube-dl](https://github.com/ytdl-org/youtube-dl). Then, using a seq2seq GAN trained on the most-watched nightcore videos, it derives a "template image" from the lyrics, with the desired color and "feeling" of the image. Using cv2 and the website anime-pictures.net, it compares anime images against the template image to find the best-matching anime image.
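The post doesn't say how the similarity is computed; one plausible sketch using colour-histogram comparison in cv2 (the metric and file names are assumptions, not the author's actual method):

    import cv2

    def hsv_histogram(path):
        # Build a normalized hue/saturation histogram for an image file.
        img = cv2.imread(path)
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    template = hsv_histogram("template.png")
    candidates = ["candidate_1.jpg", "candidate_2.jpg"]
    # Pick the candidate whose colour distribution correlates best with the template.
    best = max(candidates,
               key=lambda p: cv2.compareHist(hsv_histogram(p), template, cv2.HISTCMP_CORREL))
    print(best)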
After finding the image, it uses [FFMPEG](https://ffmpeg.org/) to modify the audio to make it sound "nightcore-y" and compiles a video with visual effects like [showwaves](https://ffmpeg.org/ffmpeg-filters.html#showwaves).
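The exact filters aren't given in the post; a hedged sketch of that FFMPEG step, speeding up and pitching the audio and overlaying a showwaves visualisation (the filter values and file names are illustrative):

    import subprocess

    subprocess.run([
        "ffmpeg", "-y",
        "-i", "song.mp3",
        "-loop", "1", "-i", "background.png",
        "-filter_complex",
        # Raising the sample rate speeds up and pitches up the audio (the "nightcore" effect);
        # asplit feeds one copy to the output and one to the waveform visualisation.
        "[0:a]asetrate=44100*1.25,aresample=44100,asplit=2[a][aw];"
        "[aw]showwaves=s=1280x200:mode=cline[waves];"
        "[1:v][waves]overlay=0:520[v]",
        "-map", "[v]", "-map", "[a]",
        "-shortest", "output.mp4",
    ], check=True)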
With the help of ["youtube-upload"](https://stackoverflow.com/questions/2733660/upload-a-video-to-youtube-with-python), I was able to upload the videos to YouTube from the command line.
Recently, I added captioning capabilities. Using [Aeneas](https://github.com/readbeyond/aeneas), the program aligns the lyrics to the audio in the video, and
/r/Python
https://redd.it/cd7fra
Hello, guys. What have you been doing with Django/DRF/Graphql in the past few months?
What kinds of jobs are you doing? What problems are you solving?
/r/django
https://redd.it/cdbw54
[D] Pointless PhD for Machine Learning career advancement?
Dear [r/MachineLearning](https://www.reddit.com/r/MachineLearning/), I am a STEM graduate who, after my Master's, became interested in doing research in Machine Learning. I was promised a PhD, the opportunity to do cutting-edge research with real-world applications, and "close" cooperation with industrial partners. But after spending a few months reading and discussing with supervisors, a lot of the work I am expected to do is centered around metaheuristic search and evolutionary computation. Although I find it fascinating, there is some application to machine learning / DNNs, and companies like Uber and Cognizant are adopting it, I feel it is too much of a niche and mainstream interest does not seem to be catching up with it, if there is any to begin with.
I thought it might be helpful to ask you guys, to get a neutral outside opinion.
Particularly as, over the last month, I have been living and working in a scientific bubble, and my prior background is not AI/ML or Computer Science to begin with. So
/r/MachineLearning
https://redd.it/cd9qga
[R] Virtual Adversarial Lipschitz Regularization
https://arxiv.org/abs/1907.05681
/r/MachineLearning
https://redd.it/cdeh9u
9 Data Visualization Techniques You Should Learn in Python
https://www.marsja.se/python-data-visualization-techniques-you-should-learn-seaborn/
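As a taste of what the linked tutorial covers (the article works through nine plot types; this sketch shows two of them on Seaborn's bundled "tips" dataset):

    import matplotlib.pyplot as plt
    import seaborn as sns

    tips = sns.load_dataset("tips")  # small example dataset bundled with Seaborn

    sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
    plt.show()

    sns.boxplot(data=tips, x="day", y="total_bill")
    plt.show()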
/r/pystats
https://redd.it/cdg4ze
Django's Test Case Classes and a Three Times Speed-Up
https://adamj.eu/tech/2019/07/15/djangos-test-case-classes-and-a-three-times-speed-up/
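The speed-up described in the article comes from swapping the test case class in use; as a rough, not-from-the-article illustration, Django's test case classes trade isolation for speed:

    from django.contrib.auth.models import User
    from django.test import SimpleTestCase, TestCase, TransactionTestCase

    class PureLogicTests(SimpleTestCase):
        # No database access allowed; by far the cheapest per test.
        def test_addition(self):
            self.assertEqual(1 + 1, 2)

    class OrmTests(TestCase):
        # Each test runs inside a transaction that is rolled back afterwards.
        def test_user_creation(self):
            User.objects.create(username="alice")
            self.assertEqual(User.objects.count(), 1)

    class FlushTests(TransactionTestCase):
        # Tables are flushed between tests; the slowest option, only needed
        # when the code under test relies on real transaction behaviour.
        def test_commit_behaviour(self):
            pass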
/r/django
https://redd.it/cdes6i
I tested my Python library SwagLyrics, a faster approach to displaying lyrics for the currently playing Spotify song right in the terminal (or browser), for speed and accuracy on the US Top 50 chart.
Here are the results: it takes 0.4s per track on average, given a decent internet connection. Even so, it's way faster than the alternatives, since there's only one direct request and no API is used, thanks to a clever approach that formats the URL directly (I'm so proud of it!).
[https://colab.research.google.com/gist/aadibajpai/06a596ad753007b0faea132e96f372e0/swaglyrics\_test.ipynb](https://colab.research.google.com/gist/aadibajpai/06a596ad753007b0faea132e96f372e0/swaglyrics_test.ipynb#scrollTo=vtE04ylUGxO-)
The repository in question: [https://github.com/SwagLyrics/SwagLyrics-For-Spotify](https://github.com/SwagLyrics/SwagLyrics-For-Spotify)
The method for getting the track from Spotify is novel too, since it doesn't use the API.
(I really wish I could write a paper on it haha, I researched naming conventions on Spotify and Genius to identify patterns so as to bridge them using nothing but string manipulations.)
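I haven't read the repo's exact implementation, but the idea described above might look roughly like this (the stripping rules, URL pattern and HTML selector are guesses for illustration, not the library's actual code):

    import re
    import requests
    from bs4 import BeautifulSoup

    def genius_url(song, artist):
        # Hypothetical normalisation: keep letters, digits, spaces and hyphens only.
        text = re.sub(r"[^A-Za-z0-9 -]", "", f"{artist} {song}").strip()
        return "https://genius.com/" + "-".join(text.split()) + "-lyrics"

    def fetch_lyrics(song, artist):
        # One direct request to the formatted URL; no API involved.
        page = requests.get(genius_url(song, artist))
        soup = BeautifulSoup(page.text, "html.parser")
        block = soup.find("div", class_="lyrics")  # selector is an assumption; Genius markup changes
        return block.get_text("\n", strip=True) if block else None

    print(fetch_lyrics("Hello", "Adele"))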
What do you think? I'd love to know what could be better.
/r/Python
https://redd.it/cdgkqf
Receive data from Redis RQ worker process
Hi,
I am currently trying to get my Flask Web Application working with Redis RQ.
My application gets an input file and analyzes it for approx. 20 seconds. During the analysis it keeps filling a dictionary, which I want to access from my Flask application to display its contents in a nice way, like plots and graphs.
Now my idea is to allow the user to upload the file and hit "Analyze", which starts my program. Instead of making the user wait 20 seconds until the file has been processed and then redirecting them to the page with all the plots, I want to let my program run in the background. So if you click "Analyze", the browser directs the user straight to the page with the plots while the analysis runs in the background.
For the task queue I am using Redis RQ. The problem is that I have no idea how I can access the dictionary that is being filled by the worker process from my Flask application. Because the contents of the dictionary keep growing, I want to be able to present the user with live output while the analysis is still running.
I tried using
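One common pattern for this (not necessarily what the author tried) is to have the worker write its partial results into the job's meta dict, which lives in Redis and can be read from the web process:

    # Worker side: task function executed by the RQ worker.
    from rq import get_current_job

    def analyze(path):
        job = get_current_job()
        results = {}
        for section in ["header", "body", "footer"]:  # stand-in for the real ~20s analysis
            results[section] = len(section)
            job.meta["results"] = dict(results)       # partial results so far
            job.save_meta()                           # persist meta back to Redis
        return results

The Flask side can then poll the job and hand whatever exists so far to the page, e.g. via a small JSON endpoint the plots refresh from:

    # Web side: a polling endpoint that returns the dictionary produced so far.
    from flask import Flask, jsonify
    from redis import Redis
    from rq.job import Job

    app = Flask(__name__)
    redis_conn = Redis()

    @app.route("/progress/<job_id>")
    def progress(job_id):
        job = Job.fetch(job_id, connection=redis_conn)
        job.refresh()  # re-read meta from Redis
        return jsonify(job.meta.get("results", {}))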
/r/flask
https://redd.it/cdgs20
Using python to generate artistic images (Neural Style Transfer)
I made a [repository](https://github.com/Nick-Morgan/neural-style-transfer) which explores two methods ([Gatys, 2015](https://arxiv.org/pdf/1508.06576.pdf) and [Johnson, 2016](https://arxiv.org/pdf/1603.08155.pdf)) of Neural Style Transfer. I've included a link to Google Colab in the repository, making it easy to run the code on your own images.
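Not taken from the repository itself, but as a reminder of the core of the Gatys method: style is matched through Gram matrices of CNN feature maps. A tiny PyTorch sketch of that loss:

    import torch

    def gram_matrix(features):
        # features: (channels, height, width) activations from some CNN layer
        c, h, w = features.size()
        f = features.view(c, h * w)
        return (f @ f.t()) / (c * h * w)

    def style_loss(generated_feats, style_feats):
        # Mean squared difference between the Gram matrices of the two feature maps.
        return torch.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)

    # Random tensors standing in for real VGG activations:
    print(style_loss(torch.rand(64, 32, 32), torch.rand(64, 32, 32)).item())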
I would love feedback on the repository, as well as any suggestions for future reading. I find this topic fascinating.
/r/Python
https://redd.it/cdk82d
I work at IBM, Django on the side.. need advice
Hey everyone, quick question. In my spare time, on the side, I like to build websites and recently finished one for a client (more like a friend who paid me ha) and it's something I really enjoy. (If you'd like to see the site click here: http://hendersonconstruction.org)
Just curious if anyone knows of any decent material (online video series, etc.) that covers more advanced topics. Obviously I read the documentation, but I was hoping to find something covering things such as optimization and best practices, or anything that goes beyond the purely technical aspects.
Now that I feel comfortable with Django I just want to improve the way my sites are built. I want to make them faster, make the code cleaner, and see different ways to do things and why each way may be useful given the circumstance.
If anyone could point me in the right direction I'd really appreciate it! God bless.
/r/django
https://redd.it/cdkvrv
Server redirect for JWT cookie storage (Flask-JWT-Extended)
I am storing JWT in httponly cookies via this method: https://flask-jwt-extended.readthedocs.io/en/latest/tokens_in_cookies.html
What I notice is that when I use the @jwt_required decorator, it looks for the JWT access token in a cookie, but since it's httponly there is no JWT access token.
Instead, I am forced to do the redirect on the client side (i.e. if the error is 401 Unauthorized, redirect to the login page). This, however, is not preferred.
Anybody have any suggestions for doing a server redirect with httponly jwt cookie storage?
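One option worth checking (I believe Flask-JWT-Extended supports these callbacks, but verify against your version's docs): register loaders that return a redirect instead of the default 401 JSON response.

    from flask import Flask, redirect, url_for
    from flask_jwt_extended import JWTManager

    app = Flask(__name__)
    app.config["JWT_TOKEN_LOCATION"] = ["cookies"]
    app.config["JWT_SECRET_KEY"] = "change-me"
    jwt = JWTManager(app)

    @jwt.unauthorized_loader
    def missing_token(reason):
        # Called when @jwt_required finds no token (e.g. the cookie is absent).
        return redirect(url_for("login"))

    @jwt.expired_token_loader
    def expired_token(*args):
        # Callback signature differs between versions, hence *args.
        return redirect(url_for("login"))

    @app.route("/login")
    def login():
        return "login page"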
/r/flask
https://redd.it/cdkc2o
Project Suggestion Site
Hi all,
I'm posting here to ask whether this is a good side-project idea to put on my resume.
Currently I have an idea to host a Flask app that requires users to register (passwords are hashed with a randomly generated salt). Once registered, they may post suggestions for side projects for programmers to undertake. These posts then need approval from an admin user, who can approve them from within the site and can also make more admins from within the site.
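For the salted hashing part, a minimal sketch using Werkzeug (which Flask already depends on); generate_password_hash produces a random salt automatically:

    from werkzeug.security import check_password_hash, generate_password_hash

    hashed = generate_password_hash("s3cret")      # salt is generated and embedded in the hash
    print(hashed)                                  # e.g. "pbkdf2:sha256:...$<salt>$<digest>"
    print(check_password_hash(hashed, "s3cret"))   # True
    print(check_password_hash(hashed, "wrong"))    # False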
Once basic functionality is completed I will add more advanced features such as: comments, upvotes, etc.
The posts and users will be hosted on a Heroku PostgreSQL database.
Do people think that this is enough? Thanks in advance. :)
/r/flask
https://redd.it/cdk4ma
How to render latex formatted output from a code cell?
I've seen that SymPy can somehow produce LaTeX-formatted output, and that it is immediately rendered correctly.
I was wondering how that is done, and how I would go about implementing it for other things.
More specifically, I'm currently using the [octave kernel](https://github.com/Calysto/octave_kernel), where you can do `latex(something)` and get LaTeX-formatted output.
I'm wondering how SymPy achieves what it does and whether it uses magic commands for it, and maybe some tips on how I'd go about implementing that in Octave if that's not the case.
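As far as I know, SymPy does this through IPython's rich display protocol rather than magic commands: objects that define a `_repr_latex_` method are rendered as LaTeX by the notebook, and you can also render strings explicitly. A small sketch:

    from IPython.display import Math, display

    # 1. Render an explicit LaTeX string.
    display(Math(r"\int_0^1 x^2 \, dx = \frac{1}{3}"))

    # 2. Any object with a _repr_latex_ method is rendered automatically
    #    when it is the last expression in a cell.
    class Fraction:
        def __init__(self, num, den):
            self.num, self.den = num, den

        def _repr_latex_(self):
            return rf"$\frac{{{self.num}}}{{{self.den}}}$"

    Fraction(1, 3)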
/r/IPython
https://redd.it/cdo2kr
Please Help Me Understand Why My Times and Dates Won't Update After Server Start
Hi everyone,

I am having trouble getting accurate times. My page loads a form that is either empty or, if another form has been submitted within the last hour, pre-populated with the last submission's entries.

The logic to check this condition compares the last record's date/time (which are strings, e.g. 2019-07-15 or 22:34) to one of the functions below, and retrieves the last model object if it is within an hour.

    import datetime

    datetime.datetime.now().date()            # for dates
    str(datetime.datetime.now().time())[0:5]  # for times

I also set the defaults in models.py and forms.py using these lines.

I have just noticed that when posting the form and creating a new model record, it is saved with the time at which I started the server (I am running locally). I'll load a blank form at 22:30, but because I ran runserver at 21:59, that earlier time shows instead.

Can someone please help me get string representations of the date (yyyy-mm-dd) and the time (##:## in 24-hour format), or some other way to compare dates and times that actually reflect the moment they are called?

I have searched and found datetime.date.today(), but the condition fails on
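The symptom described above (timestamps frozen at server-start time) usually means datetime.now() was called once when the default was defined, instead of passing the callable so it is evaluated on every save. A hedged sketch of the difference, with an illustrative model and field name:

    from django.db import models
    from django.utils import timezone

    class Submission(models.Model):
        # Wrong: now() runs once at import time (server start), so every row gets that timestamp.
        # created_at = models.DateTimeField(default=timezone.now())

        # Right: pass the callable; Django calls it each time a default is needed.
        created_at = models.DateTimeField(default=timezone.now)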
/r/django
https://redd.it/cdrh3n
My enterprise tech company in the Bay Area still has a lot of its codebase in Python 2.7. What happens five months from now, when it supposedly becomes unsupported? Will it be Y2K20?
/r/Python
https://redd.it/cdnhp3
How do I order multiple dates in django?
Hi there, I have one model with a couple of fields; some are date fields and some are datetime fields. The models.py file is like this:
    # models.py
    class Stocks(models.Model):
        date_and_time_investment_was_made = models.DateTimeField(null=True)
        date_dividend_was_deposited = models.DateField(null=True)
        date_deposited = models.DateField(null=True)
        date_withdrawn = models.DateField(null=True)
I need to filter it based on the date and time, if there is one. I have been trying to order it this way in my views.py file:
    # views.py
    # ------------------------------ Dashboard Views ------------------------------ #
    class DashBoardView(LoginRequiredMixin, ListView):
        model = Stocks
        template_name = 'dashboard.html'
        context_object_name = 'investments'

        def get_queryset(self):
            return Stocks.objects.filter(who_by=self.request.user).order_by(
                'date_and_time_investment_was_made',
                'date_dividend_was_deposited',
                'date_deposited',
                'date_withdrawn',
            )
    # ------------------------------------------------------------------------------ #
Now the problem with this approach is that in
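The post is cut off here, but if the intent is to sort by whichever date a row actually has, one approach (an assumption about the goal, untested against this schema) is to coalesce the fields into a single annotated sort key inside get_queryset:

    from django.db.models import DateTimeField
    from django.db.models.functions import Cast, Coalesce

    # Inside DashBoardView:
    def get_queryset(self):
        return (
            Stocks.objects.filter(who_by=self.request.user)
            .annotate(
                # Use the datetime when present, otherwise fall back to the other dates,
                # casting the DateFields so all branches share one output type.
                sort_date=Coalesce(
                    'date_and_time_investment_was_made',
                    Cast('date_dividend_was_deposited', DateTimeField()),
                    Cast('date_deposited', DateTimeField()),
                    Cast('date_withdrawn', DateTimeField()),
                )
            )
            .order_by('-sort_date')
        )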
/r/django
https://redd.it/cdokuq