Total beginner: got a Python project and want to convert it to Flask to use as an API in Flutter
I have a Python project that works perfectly fine in the console, but I want to use it in my Flutter app. I can't use it directly; I have to deploy it on a web server, which means converting it into a Flask app, and I'm a total beginner in Flask and Python, so please help me do this. Briefly, the project is a chatbot: it handles questions according to the intents file and gives answers that come from the SQL Server database. I want to create a Flask app that does the same thing as this project. I'm providing the project's whole codebase =>
import json
import os

import numpy as np
import pyodbc
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder
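Only the imports of the console script made it into the post above, so here is a minimal sketch of how the existing predict-and-answer logic could sit behind a Flask endpoint that the Flutter app calls over HTTP. The get_bot_response helper, the route name, and the JSON shape are assumptions for illustration, not part of the original project; the helper's body is where the original script's tokenizer, Keras model, label encoder, and pyodbc lookup would go.

# Minimal Flask wrapper sketch (assumes a get_bot_response() helper built
# from the console script's model, tokenizer, label encoder and SQL lookup).
from flask import Flask, jsonify, request

app = Flask(__name__)

def get_bot_response(message: str) -> str:
    # Hypothetical helper: tokenize/pad the message, run the Keras model,
    # decode the predicted intent, then look the answer up in SQL via pyodbc.
    raise NotImplementedError

@app.route("/chat", methods=["POST"])
def chat():
    data = request.get_json(silent=True) or {}
    message = data.get("message", "")
    if not message:
        return jsonify({"error": "message is required"}), 400
    return jsonify({"response": get_bot_response(message)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The Flutter side would then send a POST with a JSON body like {"message": "hi"} to /chat and read the "response" field.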
/r/flask
https://redd.it/12p9c49
pytheus: a modern Python library for collecting Prometheus metrics, built with multiprocessing in mind
Hello, I wanted to share a library I've been working on for quite some time now.
It's a new Python library for collecting Prometheus metrics, with a focus on flexibility and multiprocessing. The latter is one of the main reasons this project was born, as I found it difficult to get multiprocessing working correctly with the existing ecosystem.
Going from single process to multiprocess is a matter of a single function call, and everything should work as expected with no difference in features between the two.
The library also supports default labels and partial labels (incrementally building your child instance), and offers ways to plug in your own version of a component via protocols if you need to (for example a custom Registry, or even a custom Backend for multiprocessing that uses something other than the default, Redis).
A lot of focus was also given to the documentation to make it clear.
I hope that if you struggled in the past this might make the experience better! :)
repo: **https://github.com/Llandy3d/pytheus**
docs: **https://pythe.us/**
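As a rough illustration of the label features described above, here is a sketch of what creating and incrementing a labelled counter might look like. The names and signatures are written from memory of the project's docs and may not match the actual pytheus API, so treat them as illustrative only and check https://pythe.us/ for the real interface.

# Illustrative only: names and signatures may differ from the real pytheus API.
from pytheus.metrics import Counter

http_requests = Counter(
    name="http_requests_total",
    description="Total HTTP requests",
    required_labels=["method", "path"],
)

# partial labels: build the child incrementally, then increment it
get_requests = http_requests.labels({"method": "GET"})
get_requests.labels({"path": "/home"}).inc()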
/r/Python
https://redd.it/12pcvg0
What concerns should I have for a multiple hour long @transaction.atomic django background process?
I have a Celery background task that can take 5+ hours to complete. It primarily creates new objects and will generate around 100k of them. If something were to go wrong during this, I would like to roll it all back as if it never happened. My strategy for ensuring this is to wrap the background task in transaction.atomic. Since this is such a long-running transaction, is there anything I should be particularly concerned about? If it happens to fail after 4 hours, do I need to worry about my Postgres database freezing up in production? Is there another way I should be doing this, or is transaction.atomic the best strategy?
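For reference, a minimal sketch of the pattern being asked about: the whole run inside one atomic block, so that any exception rolls back every object created. The task and model names here are made up for illustration.

# Sketch of the pattern in question; import_items and Item are hypothetical names.
from celery import shared_task
from django.db import transaction

from myapp.models import Item  # hypothetical model

@shared_task
def import_items(rows):
    # One transaction around the whole run: either everything commits or
    # nothing does, at the cost of holding the transaction open for hours.
    with transaction.atomic():
        objs = [Item(**row) for row in rows]
        # bulk_create in batches keeps memory and statement size bounded
        Item.objects.bulk_create(objs, batch_size=1000)

One commonly cited trade-off: a single multi-hour transaction holds its locks and keeps old row versions alive for the whole run, so a chunked, resumable design (for example tagging rows with a run ID and deleting that run's rows on failure) is often suggested as an alternative.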
/r/django
https://redd.it/12po5xw
Created a small model-document based db
https://pypi.org/project/pexicdb/
/r/flask
https://redd.it/12poe6m
[Discussion] Translation of Japanese to English using GPT. These are my discoveries after ~100 hours of extensive experimentation and ways I think it can be improved.
Hello. I am currently experimenting with the viability of LLMs for Japanese-to-English translation. I've been experimenting with GPT-3.5, GPT-3.5 utilizing the DAN protocols, and GPT-4 for this project for around 3 months now with very promising results, and I think I've identified several limitations of GPT that, if addressed, could significantly improve the efficiency and quality of translations.
The project I'm working on is an attempt to translate a light novel series from Japanese to English. During these tests I did a deep dive, asking GPT how it attempts the translations and asking it to modify its translation methodology through various means (I am considering doing a long video outlining all this and showing off the prompts and responses at a later date). Notably, this includes asking it to use its understanding of the series it's translating, from its training knowledge, to aid in the translation, and providing it with a "seed" translation. Basically, the seed is a side-by-side Japanese and English translation that shows GPT what I'm looking for in terms of grammar and formatting; the English half is notably a human translation, not a machine translation. The results from these tests
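To make the "seed" idea concrete, here is a minimal sketch of what such a prompt could look like with the OpenAI chat API of that period (openai<1.0). The example pair, model choice, and wording are placeholders, not the author's actual prompts.

# Sketch of a "seed" translation prompt; all text here is placeholder, not the author's.
import openai  # assumes OPENAI_API_KEY is set in the environment

seed_jp = "吾輩は猫である。名前はまだ無い。"
seed_en = "I am a cat. As yet I have no name."

messages = [
    {"role": "system",
     "content": "You are translating a Japanese light novel into natural English. "
                "Match the grammar and formatting style of the example pair."},
    {"role": "user",
     "content": f"Example:\nJP: {seed_jp}\nEN: {seed_en}\n\n"
                f"Now translate:\nJP: {{next_japanese_passage}}"},
]

resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(resp["choices"][0]["message"]["content"])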
/r/MachineLearning
https://redd.it/12pqqg6
Noob Problem: Cannot seem to get the webpage to find the url route for my app.
I'm probably not using the right terminology, and I'm sorry, but when I try to make a basic webpage that displays Hello World, using a URL like http://127.0.0.1:8002/polls/, the webpage shows:
Using the URLconf defined in project.urls, Django tried these URL patterns, in this order:
1. admin/
(I should be expecting polls/ here)
The current path, polls, didn't match any of these.
The home webpage works fine and I've checked ten times that the code is correct.
I've put 'polls' in INSTALLED_APPS in settings.py, and views.py and both urls.py files are fine.
I don't think it's the venv either. I've watched a bunch of videos and they always just take it for granted that this setup works, ChatGPT isn't helping, and I've no idea what to do.
Any help would be greatly appreciated.
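For comparison, here is the wiring from the Django tutorial that makes /polls/ resolve. If the project-level urls.py never include()s the app's URLs, you get exactly the "Django tried these URL patterns" page quoted above, no matter how correct polls/urls.py is. File names assume the standard project/ and polls/ layout.

# project/urls.py  (the module named in ROOT_URLCONF)
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("polls/", include("polls.urls")),  # without this line, only admin/ matches
    path("admin/", admin.site.urls),
]

# polls/urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("", views.index, name="index"),
]

# polls/views.py
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello World")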
/r/django
https://redd.it/12q4o9z
How to Create CRUD API in Django Rest Framework
https://labpys.com/how-to-create-crud-api-in-django-rest-framework/
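In case the linked article goes away, here is a minimal sketch of the usual DRF CRUD setup it covers: a model serializer, a ModelViewSet, and a router. The Post model and field names are placeholders, not taken from the article.

# serializers.py / views.py / urls.py condensed; Post is a placeholder model.
from rest_framework import routers, serializers, viewsets
from myapp.models import Post  # hypothetical model with title/body fields

class PostSerializer(serializers.ModelSerializer):
    class Meta:
        model = Post
        fields = ["id", "title", "body"]

class PostViewSet(viewsets.ModelViewSet):
    # ModelViewSet provides list/retrieve/create/update/destroy out of the box
    queryset = Post.objects.all()
    serializer_class = PostSerializer

router = routers.DefaultRouter()
router.register(r"posts", PostViewSet)
# then in urls.py:  path("api/", include(router.urls))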
/r/django
https://redd.it/12q7o2i
[R] Foundation Model Alignment with RAFT🛶 in LMFlow
https://reddit.com/link/12pnwp8/video/bj5ks4001hua1/player
## Introduction
General-purpose foundation models, especially large language models (LLMs) such as ChatGPT, have demonstrated extraordinary capabilities in performing various tasks that were once challenging. However, we believe that one model cannot rule them all. Further fine-tuning is necessary to achieve better performance on specialized tasks or domains. The standard approaches for fine-tuning these models include:
* Continuous pretraining on specific domains, so that LLMs can acquire knowledge in those domains
* Task tuning on specific tasks, so that LLMs can handle downstream tasks
* Instruction tuning, to give LLMs the ability to follow specialized natural-language instructions and complete the tasks those instructions require
* Alignment tuning, to teach LLMs conversational skills in accordance with human preferences
Alignment, in particular, is crucial for ensuring the safety of LLMs before deployment in the real world. Today we introduce a new alignment algorithm, RAFT [1], which is more effective than traditional methods such as PPO. RAFT mitigates the bias that can emerge in LLM responses. Using RAFT to align LLMs offers numerous benefits, including the ability to disentangle unwanted biases from the LLM's language production while maintaining consistent fluency.
Check out the paper https://arxiv.org/abs/2304.06767.
Its implementation is available from https://github.com/OptimalScale/LMFlow.
## RAFT Alignment
Alignment is
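Although the post is cut off here, the core loop of reward-ranked fine-tuning is simple enough to sketch: sample several responses per prompt, score them with a reward model, keep only the top-ranked ones, and fine-tune on those. The sketch below is a schematic of that idea, not the LMFlow implementation; generate, reward_fn and finetune are hypothetical stand-ins.

# Schematic of reward-ranked fine-tuning (RAFT); all methods are hypothetical stand-ins.
def raft_round(model, reward_fn, prompts, k=8, keep_ratio=0.125):
    selected = []
    for prompt in prompts:
        candidates = [model.generate(prompt) for _ in range(k)]   # sample k responses
        ranked = sorted(candidates, key=reward_fn, reverse=True)  # score with the reward model
        top = ranked[:max(1, int(k * keep_ratio))]                # keep the best ones
        selected.extend((prompt, response) for response in top)
    model.finetune(selected)   # supervised fine-tune on the highest-reward responses
    return model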
/r/MachineLearning
https://redd.it/12pnwp8
Tuesday Daily Thread: Advanced questions
Have some burning questions on advanced Python topics? Use this thread to ask more advanced questions related to Python.
If your question is a beginner question, we hold a beginner Daily Thread tomorrow (Wednesday) where you can ask any question! We may remove questions here and ask you to resubmit tomorrow.
This thread may be fairly low in replies; if you don't receive a response, we recommend looking at r/LearnPython or joining the Python Discord server at https://discord.gg/python, where you stand a better chance of receiving a response.
/r/Python
https://redd.it/12q2gyd
What do you use for encrypting data at rest?
Aside from password hashes.
What libraries do you use?
Directly in a model?
Do you monitor the overhead of encryption?
What do you choose not to encrypt?
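One common point of reference (not a claim about what the poster should do): symmetric encryption of individual column values with the cryptography package's Fernet, with the key kept outside the database. A minimal sketch:

# Minimal field-level encryption sketch using the cryptography package's Fernet.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: load from env/KMS, never from the database
fernet = Fernet(key)

token = fernet.encrypt(b"4111 1111 1111 1111")   # store this ciphertext at rest
plaintext = fernet.decrypt(token)                # decrypt on read
assert plaintext == b"4111 1111 1111 1111"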
/r/flask
https://redd.it/12qasa7
Issues with @login_required in Movie Watchlist app
With all the heavy lifting being done by ChatGPT, I'm confident the outputted code is trash, but it's a proof of concept: I am trying to make a simple movie watchlist app.
The general idea is to query a DB I have that is full of all the titles currently available to stream, with a column that designates which provider each title is on. A user registers and states the services they pay for, Netflix for example, and is then shown a list of all the titles on Netflix. They click like or dislike, I save the choice, and they are shown the next title.
So far I have a working database connection; registering and login are working, as is the dashboard page showing the user's details. But when I try to add the base functionality, showing them titles to like or dislike, I get into trouble. First they need to be logged in, then I need to query the User table to extract which services they have subscribed to and filter the title list by provider.
This is where I run into issues, and the all-knowing ChatGPT is sending me in circles.
Here is the current "working" code minus the
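Since the post's code got cut off, here is a minimal sketch of the kind of route being described, assuming Flask-Login and Flask-SQLAlchemy. The Title model, the current_user.services relationship, and the seen_title_ids field are hypothetical; the point is only the shape of the query, not the poster's actual models.

# Sketch only: app, Title, and the current_user fields below are hypothetical
# and would come from the poster's own Flask app and models.
from flask import jsonify
from flask_login import current_user, login_required

@app.route("/titles/next")
@login_required
def next_title():
    # services the logged-in user subscribes to, e.g. ["Netflix", "Hulu"]
    providers = [s.name for s in current_user.services]
    title = (Title.query
             .filter(Title.provider.in_(providers))
             .filter(~Title.id.in_(current_user.seen_title_ids))
             .first())
    if title is None:
        return jsonify({"done": True})
    return jsonify({"id": title.id, "name": title.name, "provider": title.provider})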
/r/flask
https://redd.it/12qh84r
How does collectstatic work for sites that are constantly updated
Let's say I have a blog in production that I'm hosting with cPanel. If I publish a new blogpost that has new images, will I need to run the collectstatic command each time?
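A point worth illustrating (general Django behaviour, not specific to cPanel): collectstatic only gathers files found by the staticfiles finders, i.e. your apps' static/ directories and STATICFILES_DIRS. Images uploaded with a blog post are user media stored under MEDIA_ROOT and are never touched by collectstatic, so a typical settings split looks like this (BASE_DIR is the Path defined in the generated settings.py):

# settings.py sketch: site static assets vs. user-uploaded media
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"   # collectstatic copies app/static files here

MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR / "media"          # uploaded blog images land here; no collectstatic needed

So collectstatic generally only needs to be re-run when the site's own CSS, JS, or bundled images change, not every time a post with uploaded images is published.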
/r/django
https://redd.it/12q9f0e
For stock trading/finance, which kind of Python library are you looking for and can’t find?
Note: if you have time, post the URL or name of a tentative solution, even if it's not perfect.
/r/Python
https://redd.it/12qfd3z
I’m developing a programming game where you use Python to automate all kinds of machines, robots, drones and more and solve exciting bite-sized coding challenges. (playtesting now)
Earlier this year, I first announced JOY OF PROGRAMMING here on r/python and it was met with an overwhelmingly positive reception. Your interest and support really mean a lot! In case you missed it, the game is all about using Python to solve challenging tasks in realistic, physically simulated 3D environments. It covers a wide range of topics, and hopefully presents interesting challenges and fun for all skill levels.
If you are interested in the game, you can find a lot more information on the Steam page.
https://store.steampowered.com/app/2216770/JOY_OF_PROGRAMMING__Software_Engineering_Simulator
Today, I'd also like to invite you all to finally try an early version of the game! This alpha version focuses mainly on the beginner tutorials (6 at the moment) with one advanced level. Your feedback on how difficult, engaging, and ultimately fun the game and these levels are would be invaluable. I'm running this playtest on a newly created Discord server to make providing feedback and fixing bugs as seamless as possible. Please find the download link and all further details on Discord.
https://discord.com/invite/2ZrdzkNeBP
Happy Coding!
/r/Python
https://redd.it/12qn0ku
Kangas V2: Explore multimedia data
Project: [https://github.com/comet-ml/kangas](https://github.com/comet-ml/kangas)
Demo: [https://kangas.comet.com/](https://kangas.comet.com/)
We've just released version 2 of Kangas, our open source platform for exploring large, multimedia datasets. At a high-level, Kangas provides:
* A Python interface for constructing large tables of multimedia data (DataGrids), which should be very familiar to any Pandas user.
* A backend built on SQLite and Flask for storing, querying, and serving DataGrids.
* A UI built on React Server Components with Next 13 that enables fast, interactive exploration of your data.
https://i.redd.it/ldpbbkb70nua1.gif
Kangas provides out-of-the-box support for complex querying operations, as well as a variety of computer vision functionality (bounding boxes, labels, annotations, etc.). Additionally, the UI is customizable: you can resize, filter, and reorder columns as you like.
You can run Kangas from within a notebook, as a local app via the Kangas CLI, or even deploy it as a standalone web application (as we've done at [https://kangas.comet.com](https://kangas.comet.com))
Finally, I want to include a thank you here. About 5 months ago, I shared Kangas' initial V1 release here in r/Python, and several of you made your way over to the repo to share feedback and support. This was massively helpful for us. It helped us figure out what to prioritize, and opened our eyes to new features we hadn't considered.
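For a feel of the Python interface mentioned above, here is a rough sketch of building and viewing a DataGrid. It is written from memory of the project's examples, so the exact constructor arguments and method names may differ from the real Kangas API; check the repo's README for the actual interface.

# Rough sketch; verify against the Kangas README before relying on it.
import kangas as kg

dg = kg.DataGrid(name="demo", columns=["Category", "Score", "Image"])
for i in range(100):
    dg.append(["cat" if i % 2 else "dog", i / 100, kg.Image("photo_%d.jpg" % i)])

dg.save()   # persist the DataGrid (SQLite under the hood)
dg.show()   # launch the UI / render in a notebook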
/r/Python
https://redd.it/12qlmup
Using React with Django
My Django app is now getting bigger, and the front-end part (in vanilla JS) is becoming pretty unmaintainable. I would like to change the front end to React, since it is component-based and therefore more organized. It also has a dedicated testing framework for front-end use (unlike vanilla JS), so I can ensure my front-end logic is robust. Has anyone done a migration from vanilla JS to React? How do the Django templates work there? Do they work the same, and if so, was it hard to write tests for them?
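For context, the most common split when React enters a Django project is to keep Django for data (often via JSON endpoints) and let React own the DOM inside a mostly empty template that just provides a mount point. A minimal sketch of the Django side (view and URL names are illustrative, not a prescription):

# views.py: a JSON endpoint React can fetch instead of data rendered into templates
from django.http import JsonResponse

def items_api(request):
    data = [{"id": 1, "name": "example"}]   # in practice: serialize a queryset
    return JsonResponse({"items": data})

# urls.py:  path("api/items/", items_api)
# the template then only needs a <div id="root"></div> for React to mount into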
/r/django
https://redd.it/12qqxrv