Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
[AF] New project, site like Issue Tracker

Hello! For starters, I did the `Mega-tutorial Part I` and some other tutorials about 2 months ago, but I still don't understand a lot of things.

**What I need:**

- The user types `issues` into the terminal; this starts a Flask app stored on our company's shared local disk space.
- Chrome/Firefox opens and asks the user to select the `ANSA` and `META` PDF files (they will click a browse button and choose the file, not type a path; that would be too complicated for them).
- Then they push a `process` button that calls a function `pdf_to_issues(ANSAfile, METAfile)` saved in some specific folder.
- This script returns a LOT of issues in the form of a list: `['ANSA-34123', 'ANSA-35349', 'META-13303', '#44001', ...]`
- Flask would somehow load this list and compare it with the already saved issues. If some of them match, it tags them as `resolved` (a rough sketch of this upload-and-compare flow follows this list).
- The user will see a list of the issues that were resolved.
- Then they click on `View all issues`, which displays a table of all saved issues, coloured red (unresolved) or green (resolved).
- In the future, the user can edit those issues and modify their `description` and `priority`.
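For orientation, here is a minimal sketch of what the upload-and-compare part could look like in Flask. It is only a sketch: the import path for `pdf_to_issues`, the JSON file used as storage, and the form field names are all assumptions, and a real app would more likely keep the issues in SQLite via Flask-SQLAlchemy.

```python
# Rough sketch only; pdf_to_issues' import path, the JSON store and the
# field names are assumptions, not part of the original project.
import json
from pathlib import Path
from flask import Flask, request, render_template_string

from issue_tools import pdf_to_issues  # hypothetical module name

app = Flask(__name__)
ISSUE_STORE = Path("issues.json")      # hypothetical storage file

UPLOAD_FORM = """
<form method="post" action="/process" enctype="multipart/form-data">
  ANSA: <input type="file" name="ansa">
  META: <input type="file" name="meta">
  <button type="submit">process</button>
</form>
"""

@app.route("/")
def index():
    return render_template_string(UPLOAD_FORM)

@app.route("/process", methods=["POST"])
def process():
    ansa = request.files["ansa"]  # uploaded ANSA PDF
    meta = request.files["meta"]  # uploaded META PDF
    # pdf_to_issues may expect file paths instead of file objects;
    # if so, save the uploads to a temp folder first.
    found = set(pdf_to_issues(ansa, meta))
    saved = json.loads(ISSUE_STORE.read_text()) if ISSUE_STORE.exists() else {}
    resolved = [key for key in saved if key in found]
    for key in resolved:
        saved[key]["status"] = "resolved"
    ISSUE_STORE.write_text(json.dumps(saved, indent=2))
    return {"resolved": resolved}
```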

[Here is a picture](http://i.imgur.com/6GuNPnJ.png) I made to illustrate what I want.

I would be *grateful* for any help: pointers, where to start, which libraries to use, which articles to read, what to expect to go wrong, anything. This will be my first bigger Flask project. Any help would be really appreciated.

Thanks!!!



/r/flask
https://redd.it/69atvr
[Ask Flask] is it possible to POST without moving?

I'm working on a small app that uses a form POST to send data to Flask, but I can't figure out if there's a way to post in pure HTML/CSS without navigating to the new page, i.e. the POST target page.

Is there?
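As far as I know, a plain HTML form POST always navigates to the target URL (short of tricks like submitting into a hidden `<iframe>`); the usual way to stay on the page is a few lines of JavaScript using `fetch`, with the Flask route returning JSON. A minimal sketch, with made-up route and field names:

```python
# Minimal sketch (names are made up): the page stays put because fetch()
# sends the POST in the background and the Flask route answers with JSON.
from flask import Flask, request, render_template_string, jsonify

app = Flask(__name__)

PAGE = """
<form id="f"><input name="msg"><button>send</button></form>
<script>
document.getElementById("f").onsubmit = async (e) => {
  e.preventDefault();  // suppress the normal form navigation
  await fetch("/submit", {method: "POST", body: new FormData(e.target)});
};
</script>
"""

@app.route("/")
def index():
    return render_template_string(PAGE)

@app.route("/submit", methods=["POST"])
def submit():
    return jsonify(received=request.form.get("msg"))
```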

/r/flask
https://redd.it/697mhq
Require login for entire site?

I found [this](http://onecreativeblog.com/post/59051248/django-login-required-middleware#code) blogpost, but it's very old. Is this still a viable approach or are there better alternatives?
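The idea from that post still translates to current Django; the main change is the newer middleware style introduced in Django 1.10. A rough sketch, with a hypothetical module path and only the login page whitelisted:

```python
# myproject/middleware.py -- rough sketch, not a drop-in package.
from django.conf import settings
from django.shortcuts import redirect

class LoginRequiredMiddleware:
    """Redirect anonymous users to LOGIN_URL for every request,
    except the whitelisted paths below."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        exempt = {settings.LOGIN_URL}  # extend with e.g. a signup URL
        if not request.user.is_authenticated and request.path not in exempt:
            return redirect(f"{settings.LOGIN_URL}?next={request.path}")
        return self.get_response(request)

# settings.py: add "myproject.middleware.LoginRequiredMiddleware" to
# MIDDLEWARE, somewhere after AuthenticationMiddleware.
```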


/r/djangolearning
https://redd.it/67mjq9
Can Jupyter (or similar) replace Excel?

I hope this question fits here. If it's a dumb question, tell me and I'll delete it.

1. I don't want to encourage a war.
2. **I admit that I don't know Jupyter or Excel well!**

Both programs can be used to analyze data.

*As far as I know*, Jupyter is a bit like Python in interactive mode, with some extra amenities and easy plotting of graphs. It's mostly used by scientists.

*As far as I know*, in Excel you have files that consist of big tables, and each cell can hold either data, a computation, or some explanation like a column name. You can probably also connect to dedicated data files/databases and to dedicated files with code. You can also use it to make graphs. It's used in "business".

As I said: That is probably not entirely true - that's why I'm asking.

I'm a computer science student, and we learn that you should separate data, metadata, and computation, and that relying on "locations" for data is "bad", in the sense that `goto [line]` commands are bad and pointers are bad if you want maintainability and productivity (of course pointers have their place). To me it seems like Excel makes exactly these mistakes. (I know that you can give cells names.)

Jupyter can't be used to store and edit structured data (well), I think.

Is anyone of you familiar with both technologies?
What are some good use cases for Excel?
If Jupyter isn't it, do you know other potential replacements for Excel?

…Excel is reactive/"live", which is nice – you don't have to press "run".
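For context, the Excel-style tabular work people do in Jupyter usually goes through pandas, roughly like this (the file name and column names are made up):

```python
# A small example of Excel-style work in a Jupyter cell with pandas
# (the file name and column names are made up).
import pandas as pd

df = pd.read_excel("sales.xlsx")                 # or pd.read_csv(...)
df["revenue"] = df["units"] * df["price"]        # computed column, like a formula
summary = df.groupby("region")["revenue"].sum()  # like a small pivot table
summary.plot(kind="bar")                         # inline chart in the notebook
df.to_excel("sales_with_revenue.xlsx", index=False)  # write the result back
```

That also touches the separation point above: the data stays in files or a database, while the computation lives in the notebook.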

/r/IPython
https://redd.it/69nmpa
quickest way to browse web-scraped data?

Hello there.
I'm looking for a house, so I wrote a crawler that scrapes data from several local real estate agencies and does some filtering (number of rooms, price, etc.). All the data is currently saved in an SQLite database.

Now, the problem is: what is the quickest way to present the data? I was thinking about making a small Flask-based website to browse the data, delete records, etc., but that seems like a lot of work for an application that will have just two users (me and my GF).

Is there any framework which can help me?
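For what it's worth, the Flask option doesn't have to be much work; a read-only browser over the SQLite file can be a single view. A sketch, with made-up table and column names:

```python
# Minimal sketch of a read-only browser over the scraped SQLite data
# (table and column names are made up; adjust to your schema).
import sqlite3
from flask import Flask, render_template_string

app = Flask(__name__)
DB = "houses.db"  # hypothetical path to the scraper's database

TABLE = """
<table border="1">
  <tr><th>Address</th><th>Rooms</th><th>Price</th><th>Link</th></tr>
  {% for row in rows %}
  <tr><td>{{ row[0] }}</td><td>{{ row[1] }}</td><td>{{ row[2] }}</td>
      <td><a href="{{ row[3] }}">ad</a></td></tr>
  {% endfor %}
</table>
"""

@app.route("/")
def listings():
    con = sqlite3.connect(DB)
    rows = con.execute(
        "SELECT address, rooms, price, url FROM listings ORDER BY price"
    ).fetchall()
    con.close()
    return render_template_string(TABLE, rows=rows)
```

An even lazier option is to load the query into pandas and dump it with `DataFrame.to_html()`.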

Thanks!

/r/Python
https://redd.it/69mixz
A forensic toolkit in Python

https://github.com/MonroCoury/Forensic-Tools

A project I've been working on: a bunch of Python scripts that facilitate digital forensic analysis.


Features:

- Document metadata extraction.

- Image EXIF metadata extraction.

- Firefox database parsing, including extracting cookies, history, form history, Google searches, and downloads. Can limit results to a certain time range.

- Skype database parsing, including account details, contacts with full details, call log, and messages. Ability to look for messages/calls within a given time range and/or from/to a specific partner.

- Results are saved to HTML tables with row background highlighting for easier reading.

- I'm trying to make it as simple and easy to use as possible. The Firefox scanner attempts to find the default databases across different platforms on its own, should the user forget to point it at one (a rough sketch of the general Firefox-history idea follows this list).
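Not the repo's actual code, but for readers curious how the Firefox part of this kind of tool generally works: Firefox keeps history in an SQLite file (`places.sqlite`), so a minimal version of the history/time-range feature looks roughly like this:

```python
# Rough sketch of Firefox history parsing, not taken from the repo.
import sqlite3
from datetime import datetime, timedelta

def firefox_history(places_db, days=None):
    """Return (url, title, last_visit) rows from a copy of places.sqlite,
    optionally limited to the last `days` days."""
    query = ("SELECT url, title, last_visit_date FROM moz_places "
             "WHERE last_visit_date IS NOT NULL")
    params = []
    if days is not None:
        # Firefox stores visit times as microseconds since the Unix epoch
        cutoff = (datetime.now() - timedelta(days=days)).timestamp() * 1_000_000
        query += " AND last_visit_date >= ?"
        params.append(cutoff)
    con = sqlite3.connect(places_db)
    rows = [(url, title, datetime.fromtimestamp(ts / 1_000_000))
            for url, title, ts in con.execute(query, params)]
    con.close()
    return rows
```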


Still a work in progress. Planned features:

- Chrome browsing data extraction

- Internet Explorer browsing data extraction

- Network traffic analysis

- Windows registry parsing

- PDF, zip, and rar password cracking

Feedback is most welcome!

/r/Python
https://redd.it/69nc8c
[D] Oxford deep nlp 2017 solutions

My solutions here: https://github.com/mleue/oxford-deep-nlp-2017-solutions

I've recently been going through the lectures of Oxford's 2017 deep NLP course (https://github.com/oxford-cs-deepnlp-2017). The course was well presented and I've really deepened my understanding of modern NLP methods.

Naturally I am going through the practicals as well. I've linked to the repo with my current progress but I feel a bit stuck atm.

The main task revolves around multi-class classification of ~2k TED talk transcripts. However, the dataset is heavily skewed, with one class covering ~50% of the data and some classes only around 3-5%.

Practical 2 wants you to try a basic averaging-over-word-vectors approach and then push that through a single-hidden-layer NN. I've been tweaking the preprocessing and tokenization a lot, but I can't get beyond ~66% accuracy on the test set.
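For anyone who hasn't seen the practical, the averaging baseline is roughly the following (a sketch, not the course's starter code; the word-vector dictionary, documents and labels are placeholders for pretrained embeddings and the tokenized transcripts):

```python
# Sketch of the averaging-over-word-vectors baseline; the inputs are
# placeholders, not the course's data loaders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def doc_vector(tokens, word_vectors, dim=300):
    """Average the embeddings of the tokens we actually have vectors for."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_baseline(docs, labels, word_vectors):
    """docs: list of token lists; labels: one class per doc;
    word_vectors: dict token -> np.ndarray (e.g. loaded from GloVe)."""
    X = np.stack([doc_vector(doc, word_vectors) for doc in docs])
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
    clf.fit(X, labels)
    return clf
```

Given the skew mentioned above, it's also worth comparing against a majority-class baseline: always predicting the biggest class should already land near ~50% accuracy.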

In Practical 3 you are then supposed to try the same task with an RNN approach. I thought this might do better, but I am basically stuck at around the same test-set accuracy of ~66%.

Maybe not much more is possible, especially given that there is very little data for some of the classes. Basically I'm wondering whether anyone else has gone through the course (or even attended the real deal at Oxford) so we can get a discussion going.

Thanks in advance! //Michael

/r/MachineLearning
https://redd.it/69pzdg