Intro to PDB, the Python Debugger
https://bitecode.substack.com/p/intro-to-pdb-the-python-debugger
/r/Python
https://redd.it/13cpbh7
www.bitecode.dev
Intro to PDB, the Python Debugger
Looks like crap, but tastes great
Invalid Argument on Passing refresh_token as parameter on Google OAuth Secret Manager Tutorial - "The provided Secret ID [] does not match the expected format [[a-zA-Z_0-9]+]"
Hello, I'm following a tutorial here: https://www.youtube.com/watch?v=-LXrLVPmlfI. I've been stuck on this issue for almost a month. This program is attempting to log in to a Flask app using Google OAuth and Secret Manager. In the code, auth.py sends a refresh_token to secret.py, which is supposed to create a secret. A lot of this code is taken directly from the Google documentation, and the tutorial was released by Google. The relevant pieces of code are as follows: auth.py
def oauth2callback(passthrough_val, state, code, token):
    if passthrough_val != state:
        message = "State token does not match the expected state."
        raise ValueError(message)

    flow = Flow.from_client_secrets_file(_CLIENT_SECRETS_PATH, scopes=[_SCOPE])
    flow.redirect_uri = _REDIRECT_URI

    # Pass the code back into the OAuth module to get a refresh token.
    flow.fetch_token(code=code)
    refresh_token = flow.credentials.refresh_token

    secret = Secret(token)
    secret.create_secret_version(refresh_token)
secret.py
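(The secret.py snippet is cut off in the post. For reference only, and not the tutorial's actual code, here is a rough sketch of what a Secret Manager helper usually looks like with the google-cloud-secret-manager client; the function names and the project_id parameter are assumptions. The error in the title means the secret ID reaching the API is empty, so the first thing to check is the token value passed to Secret(token).)
```
from google.cloud import secretmanager

def create_secret(project_id: str, secret_id: str) -> None:
    # Secret Manager rejects IDs that are empty or don't match the pattern
    # quoted in the title, so an empty `token` reproduces this exact error.
    if not secret_id:
        raise ValueError("secret_id is empty - check the token passed to Secret()")
    client = secretmanager.SecretManagerServiceClient()
    client.create_secret(
        request={
            "parent": f"projects/{project_id}",
            "secret_id": secret_id,
            "secret": {"replication": {"automatic": {}}},
        }
    )

def add_secret_version(project_id: str, secret_id: str, refresh_token: str) -> None:
    # Store the refresh token as a new version of the already-created secret.
    client = secretmanager.SecretManagerServiceClient()
    parent = client.secret_path(project_id, secret_id)
    client.add_secret_version(
        request={"parent": parent, "payload": {"data": refresh_token.encode("UTF-8")}}
    )
```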
/r/flask
https://redd.it/13cvs9g
YouTube
[Live Demo] Building a Google Ads API Web App - Part 5: Google Secret Manager API
This is the fifth part of an 8-episode series, in which we’ll take a deep dive into developing web apps with the Google Ads API, with a focus on the OAuth flow, by building a multi-tenant app entirely from scratch.
This episode will demonstrate how to use…
abacus - minimal accounting framework
Hello everyone, I wrote a small library that can create a chart of accounts, process accounting entries, and produce a balance sheet and income statement. I recently added "contra accounts" (e.g. depreciation), which make the library conceptually fit as a working double-entry general ledger.
My initial interest was to make a proof of concept that a compact accounting library is possible, and now I'm at a point where I need to find a rationale for its development. So far it is a research demo, not accounting software. Maybe useful for teaching code to accountants or accounting to programmers, but these seem to be very distant professions (e.g. a thematic LinkedIn group on accounting and programming has 10 members).
While writing the code I really enjoyed pattern matching, which helps a lot to branch code, as well as subclasses to distinguish between types of accounts. I was able to show that the workflow Chart -> Ledger -> ListEntry -> Ledger -> Report generally works for accounting, and one can save and process a list of Entries in order to get the state of the Ledger. Anything stateful caused concern and needed extra handling (e.g. closing entries).
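That workflow is easier to picture with a tiny sketch. This is not the abacus API, just an illustration (with made-up class names) of account subclasses plus structural pattern matching deciding which side of an entry increases a balance:
```
from dataclasses import dataclass

class Account:
    def __init__(self):
        self.debits: list[float] = []
        self.credits: list[float] = []

class Asset(Account): ...
class Expense(Account): ...
class Liability(Account): ...
class Income(Account): ...
class Capital(Account): ...

@dataclass
class Entry:
    debit: str    # name of the account to debit
    credit: str   # name of the account to credit
    amount: float

def balance(account: Account) -> float:
    # Debit-normal accounts grow on the debit side, credit-normal on the credit side.
    match account:
        case Asset() | Expense():
            return sum(account.debits) - sum(account.credits)
        case Liability() | Income() | Capital():
            return sum(account.credits) - sum(account.debits)
        case _:
            raise TypeError(f"Unknown account type: {type(account).__name__}")

def post(ledger: dict[str, Account], entry: Entry) -> None:
    ledger[entry.debit].debits.append(entry.amount)
    ledger[entry.credit].credits.append(entry.amount)

# Chart -> Ledger -> Entries -> Report, in miniature:
ledger = {"cash": Asset(), "equity": Capital()}
post(ledger, Entry(debit="cash", credit="equity", amount=1000))
print({name: balance(acc) for name, acc in ledger.items()})
```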
Will appreciate your comment about
/r/Python
https://redd.it/13cmb21
Reddit
r/Python on Reddit: abacus - minimal accounting framework
Posted by u/iamevpo - 36 votes and 3 comments
I made a tool to analyze and visualize data using ChatGPT, Pyodide, and Plotly
Hi everyone,
I made a small tool that helps you gather insights and build graphs from a dataset by "chatting" with it in plain English. It's built on top of ChatGPT, Pyodide, Plotly, and Litestar.
All the data is processed in the client, so there's no size limit to the dataset. The only limit is the memory on your device. The server only receives a brief summary of the data (column names, distributions, and 3 samples), which is how I generate the code to answer the questions.
It works as follows:
1. The user makes a query.
2. The query and the dataset summary (generated on the client using Pyodide) are sent to the server to generate the code (a rough sketch of this summary step follows the list).
3. The code is generated and sent from the server (ChatGPT).
4. The code is executed on the client side (Pyodide). If it's a graph, then it's rendered using Plotly. Otherwise, a string is printed with the answer.
5. Rinse and repeat.
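(The app's actual summary code isn't shown; as a hedged illustration of step 2, something along these lines could produce the column names, distributions, and 3 samples with pandas inside Pyodide. The function name and exact structure are assumptions.)
```
import json
import pandas as pd

def summarize(df: pd.DataFrame, n_samples: int = 3) -> str:
    # Compact summary sent to the server instead of the raw data.
    summary = {
        "columns": {
            col: {
                "dtype": str(df[col].dtype),
                # Numeric columns get basic distribution stats; others get top values.
                "stats": (
                    df[col].describe().to_dict()
                    if pd.api.types.is_numeric_dtype(df[col])
                    else df[col].value_counts().head(5).to_dict()
                ),
            }
            for col in df.columns
        },
        "samples": df.head(n_samples).to_dict(orient="records"),
    }
    return json.dumps(summary, default=str)
```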
Here's a demo of how the whole process looks (2x speed):
https://reddit.com/link/13cyll7/video/yvf04r522uya1/player
You can try the app here.
For those interested, I'm planning on writing a tutorial about it in the next few days.
/r/Python
https://redd.it/13cyll7
deepsheet.dylancastillo.co
deepsheet - Time for some sheet talking!
Ask your questions in plain English and unlock hidden insights from your data.
Implementing an audit trail on all specific model fields - Best solution ?
I have a django model called Problem.
Problem has 3 fields:
- problem_id, which is its primary key
- status, which is a foreign key field
- user, which is a foreign key field that references Django's built-in AUTH_USER_MODEL
The Status model has 2 entries in its SQL table: {ID: 1, Name: "Open"} and {ID: 2, Name: "Closed"}.
Let's say I have a Problem instance with ID = 1. Today a user updated this instance's status to 1; the next day the status was updated to 2 by another user.
I want to track, and be able to show in my front end, all of this instance's changes in an intuitive way. Two solutions come to mind:
- Create another table called "status_history" that tracks all Status updates on each model instance. This sounds tedious, as I will have to do the same for every other Problem field (the model also has other fields like type, difficulty, etc.).
- Use a package like django-simple-history.
What is your experience with such implementations?
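For what it's worth, the second option usually comes down to one extra field per model. A hedged sketch with django-simple-history (field names are from the post; the rest is illustrative, not a drop-in solution):
```
from django.conf import settings
from django.db import models
from simple_history.models import HistoricalRecords

class Status(models.Model):
    name = models.CharField(max_length=50)

class Problem(models.Model):
    status = models.ForeignKey(Status, on_delete=models.PROTECT)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    # django-simple-history snapshots the whole row on every save, so status,
    # user, and any future fields (type, difficulty, ...) get an audit trail
    # without a hand-written status_history table.
    history = HistoricalRecords()

# problem.history.all() then returns the ordered list of past versions;
# with simple_history's HistoryRequestMiddleware enabled it also records
# which user made each change.
```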
/r/djangolearning
https://redd.it/13cxazp
Reddit
r/djangolearning on Reddit: Implementing an audit trail on all specific model fields - Best solution ?
Posted by u/doing20thingsatatime - 1 vote and 1 comment
How to setup an anonymous Flask server with VPN
I'm serving up a Flask server using proton-vpn / openvpn on my desktop running Linux Mint. It was a pain in the ass to set up, but the directions on the Proton VPN site helped tremendously.
1. Pay for a ProtonVPN account (the only way to set up port forwarding).
2. Log in to Proton Mail, proceed to the ProtonVPN login and pick OpenVPN.
3. Choose a server that supports port forwarding and download the config file (see the directions).
4. In Linux Mint, click on Network Connections, add (+) a new connection and choose "Import a saved VPN configuration". Browse to the config file you downloaded and import it.
5. Enter the username and password provided by ProtonVPN for the OpenVPN connection; don't forget to add +pmp to your username!! This is IMPORTANT.
6. Refer to the directions and test the connection using the 'natpmpc' command in Linux.
7. Make sure your Python app points to the correct port, e.g. if __name__ == "__main__": app.run(host='0.0.0.0', port=39242, threaded=True) (a minimal sketch follows this list).
8. Launch your Python application and make sure the app and its supporting directories (templates, static, etc.) reside in the same folder (Flask 101).
9. Click the link Flask provides to test your application, e.g. https://127.0.0.1:39242 (your port might be different; use the one that natpmpc reports).
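A minimal sketch of the step-7 entry point (the route is made up for illustration; the port is the one quoted above and will differ depending on what Proton VPN assigns):
```
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Reachable through the ProtonVPN forwarded port."

if __name__ == "__main__":
    # Bind to all interfaces on the port that natpmpc reported as forwarded.
    app.run(host="0.0.0.0", port=39242, threaded=True)
```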
/r/flask
https://redd.it/13d5lw9
Proton VPN
How to manually set up port forwarding | Proton VPN
A guide to manually configuring port forwarding for Proton VPN using the NAT-PMP protocol on macOS and Linux
Pip or Anaconda or Miniconda or Poetry or Docker or Nothing for Package/Container Management?
I'm trying to find out which, if any, package or container manager I should use, and it's hard because there doesn't seem to be any consensus.
Some say pip is horrible and Anaconda will square away your conflicts far better; some say Anaconda is horrible and pip will handle all your needs and every permutation. Some say using multiple solutions like X+Y together is best but V+Z will screw up your computer; others say V+Z is best and X+Y will screw up your computer for the very same thing. Some say X is okay but just gets a bad rap because it's not designed for the Y case. Others say X really is just outright useless.
There are around 7-8 major 'package/container management' options and tons of ways to combine them, and it's not really clear which is best for what or which is outright useless, so can anyone clarify further? Maybe list out the major options and describe whether they are still needed and, if so, what their strengths and weaknesses are? And also make some recommendations?
/r/Python
https://redd.it/13cnf3c
Reddit
r/Python on Reddit: Pip or Anaconda or Miniconda or Poetry or Docker or Nothing for Package/Container Management?
Posted by u/blaher123 - 31 votes and 79 comments
Data Visualization Libraries?
Hey guys, I work on a Flask + React app where Flask serves as an API and React is our frontend. We want to do data visualization for some of our data and data analytics.
Does anyone here recommend any libraries/frameworks? We are currently looking at using Plotly or Streamlit, leaning more toward Plotly at the moment since we wouldn't need a separate web server like Streamlit requires.
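If it helps, a hedged sketch of the Plotly route: the Flask API returns a figure serialized with fig.to_json(), and the React side renders it with something like react-plotly.js. The endpoint name and data here are made up for illustration:
```
import plotly.express as px
from flask import Flask, Response

app = Flask(__name__)

@app.route("/api/sales-chart")
def sales_chart() -> Response:
    fig = px.bar(x=["Q1", "Q2", "Q3"], y=[120, 90, 150], title="Sales by quarter")
    # to_json() produces a Plotly figure spec that react-plotly.js can render directly.
    return Response(fig.to_json(), mimetype="application/json")
```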
/r/flask
https://redd.it/13d0i9u
Reddit
r/flask on Reddit: Data Visualization Libraries?
Posted by u/Kaiser_Wolfgang - 3 votes and 3 comments
Wednesday Daily Thread: Beginner questions
New to Python and have questions? Use this thread to ask anything about Python, there are no bad questions!
This thread may be fairly low volume in replies, if you don't receive a response we recommend looking at r/LearnPython or joining the Python Discord server at https://discord.gg/python where you stand a better chance of receiving a response.
/r/Python
https://redd.it/13dahzw
Discord
Join the Python Discord Server!
We're a large community focused around the Python programming language. We believe that anyone can learn to code. | 412982 members
how does the timeit library get such high precision?
It seems that C can get about 10-millisecond precision (https://stackoverflow.com/questions/5248915/execution-time-of-c-program),
but when I had a look at timeit (https://docs.python.org/3/library/timeit.html), it can give outputs like
> 0.19665591977536678
How is this possible? I am actually looking to time functions in C (I want to generate a report on how much faster C is than Python in various use cases), so I want to measure the time for executing a function in C with extremely high precision. Is that even possible?
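For context, timeit's default clock is time.perf_counter(), which on most platforms resolves to well under a microsecond; the coarse readings people get in C usually come from clock() or time(), not from a hardware limit, and clock_gettime(CLOCK_MONOTONIC, ...) in C reports nanosecond-scale values too. A quick way to see the Python side (the example function is arbitrary):
```
import time
import timeit

# perf_counter is the clock timeit uses by default; check its resolution.
print(time.get_clock_info("perf_counter"))

def work():
    return sum(range(10_000))

# timeit amortizes clock overhead by running the statement many times,
# which is why sub-microsecond per-call figures are meaningful.
total = timeit.timeit(work, number=100_000)
print(f"{total / 100_000:.9f} s per call")
```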
/r/Python
https://redd.it/13dfi21
Stack Overflow
Execution time of C program
I have a C program that aims to be run in parallel on several processors. I need to be able to record the execution time (which could be anywhere from 1 second to several minutes). I have searched ...
Currently learning flask and need a bit of help. Please see below
Hi all, I am currently using Flask to create a budget API application. Below are two tables:
```
from flask_sqlalchemy import SQLAlchemy
from flask_login import UserMixin
from datetime import datetime

db = SQLAlchemy()

class User(db.Model, UserMixin):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    password = db.Column(db.String(255), nullable=False)
    expenses = db.relationship('Expense', backref='user', lazy=True)

class Expense(db.Model):
    __tablename__ = 'expenses'
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100), nullable=False)
    amount = db.Column(db.Float, nullable=False)
    date_created = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    # Foreign key back to users; without it the User.expenses relationship cannot resolve.
    user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)
```
/r/flask
https://redd.it/13d2uw7
Reddit
r/flask on Reddit: Currently learning flask and need a bit of help. Please see below
Posted by u/Consistent_Essay1139 - 5 votes and 4 comments
Do you really need microservices?
I was recently asked by a friend if they needed to use microservices in their project.
My answer is: it depends.
In general, microservices are a great way to structure your code so that it is modular, easy to maintain, and easy to scale.
However, there are some tradeoffs that you need to be aware of before deciding to use microservices.
Mainly because microservices add complexity to your system. This can make debugging and troubleshooting more difficult. It can also lead to increased latency due to the overhead of communication between services.
However, microservices (in my experience) are great for large-scale projects where you need the flexibility to add or remove components as needed. They also allow greater control over how individual services are deployed and managed.
Goa and Kong are some of the best frameworks to develop and deploy microservices. They provide features such as out-of-the-box support for service discovery, routing and authentication that make it easier to build more complex applications. There are also newer architectural frameworks with less steep learning curves like GPTDeploy that lets you build and deploy microservices with a single command.
But if you do decide to go with microservices, make sure you have a good reason for doing so.
/r/Python
https://redd.it/13d227c
Goa – Design. Generate. Scale.
Transform the way you build APIs in Go. Goa bridges the gap between design and implementation, generating clean, scalable, and production-ready microservices.
GitHub - griptape-ai/griptape: Python framework for AI workflows and pipelines with chain of thought reasoning, external tools, and memory.
https://github.com/griptape-ai/griptape
/r/Python
https://redd.it/13djuec
GitHub
GitHub - griptape-ai/griptape: Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and…
Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory. - GitHub - griptape-ai/griptape: Modular Python framework for AI agents and workflows with ...
HarvardX CS50's Introduction to Programming with Python
I highly recommend the online course created by Harvard on the fundamentals of Python.
Very good teaching and very easy to understand.
Oh, and it's free.
/r/Python
https://redd.it/13dls6c
Reddit
r/Python on Reddit: HarvardX CS50's Introduction to Programming with Python
Posted by u/Andymanden33 - No votes and 2 comments
Cleanest way to install python
Hi, I'm on Linux and I was asking myself: what is the best and cleanest way to install Python (with Docker, using a virtual environment, the classic way, etc.)?
/r/Python
https://redd.it/13dioxr
Reddit
r/Python on Reddit: Cleanest way to install python
Posted by u/VeterinarianHuman505 - 3 votes and 18 comments
End-to-End Tutorial on Combining AWS Lambda, Docker, and Python
https://www.youtube.com/watch?v=gvfoZq258gA&list=PLbn3jWIXv_ibGQml3zlXi1TfmdcIl6Afy&index=1
/r/flask
https://redd.it/13cqx6g
YouTube
Python + AWS Lambda - Part 1: Introduction
Please consider supporting me on Patreon: https://www.patreon.com/programmingwithalex
GitHub link: https://github.com/programmingwithalex/aws_lambda_demo
The video series will cover:
1. Writing a Python script that pulls the current day's weather from an…
"Should" i normalize everything ? Data modeling question
Hey guys,
I have a model called Problem that contains several fields: difficulty, status, category.
Each of these fields has 3 possible values. For example, the difficulty field has these values: "Easy", "Normal", "Hard".
Should I create a whole model with its own table just for the difficulty field and make it a foreign key of the Problem model? As below:
from django.db import models

class Difficulty(models.Model):
    name = models.CharField(max_length=50)

    def __str__(self):
        return self.name

class Problem(models.Model):
    name = models.CharField(max_length=50)
    difficulty = models.ForeignKey(Difficulty, on_delete=models.CASCADE)

    def __str__(self):
        return self.name
Or should I just create a multiple-choice field and keep the logic in my code:
from django.db import models
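(The post is cut off after this import. As a hedged sketch only, the choices-based alternative usually looks something like this with Django's TextChoices; it may not be exactly what the OP had in mind.)
```
from django.db import models

class Problem(models.Model):
    class Difficulty(models.TextChoices):
        EASY = "EASY", "Easy"
        NORMAL = "NORMAL", "Normal"
        HARD = "HARD", "Hard"

    name = models.CharField(max_length=50)
    # Stored as a plain string column; no extra table or join required.
    difficulty = models.CharField(
        max_length=10, choices=Difficulty.choices, default=Difficulty.NORMAL
    )

    def __str__(self):
        return self.name
```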
/r/djangolearning
https://redd.it/13dixxh
Reddit
r/djangolearning on Reddit: "Should" i normalize everything ? Data modeling question
Posted by u/doing20thingsatatime - 4 votes and 3 comments
Bevy v2.0
I've created a dependency injection framework that works similarly to FastAPI using type annotations. You only need to use the inject decorator and the dependency function to indicate what should be injected.
Installation
pip install bevy
Simple Example
from bevy import dependency, inject

class Demo:
    def __init__(self):
        self.message = "Hello World"

@inject
def example(thing: Demo = dependency()):
    print(thing.message)

example()
That'll handle creating an instance of Demo and injecting it into the example function.
Useful Links
Blog post explaining in more detail
Documentation
GitHub
/r/Python
https://redd.it/13dvdk8
Zech Zimmerman
Bevy v2.0 - The Simple Python Dependency Injection Framework
Bevy v2.0 - The Python Dependency Injection Framework That Empowers You to Build Better Applications
Your Django-Docker Starter Kit: Streamlined Development & Production Ready
Hey there,
I've crafted a Django-Docker starter kit titled "**Django-Docker Quickstart**" to kickstart your Django projects in no time.
This kit includes Django, PostgreSQL, Redis, Celery, Nginx, and Traefik, all pre-configured for your ease. Nginx and Traefik are set up for your production environment to handle static files, proxy requests, route requests, and provide SSL termination.
You'll also find tools such as Pytest, Pytest plugins, Coverage, Ruff, and Black, making your development and testing process smoother.
Check it out here: **Django-Docker Quickstart**
Enjoy coding and please star the repo if you find it helpful!
P.S: Feedback and suggestions are always welcome! 🚀
/r/django
https://redd.it/13e1t5v
GitHub
GitHub - godd0t/django-docker-quickstart: Your all-in-one Django-Docker starter kit. Pre-configured services including PostgreSQL…
Your all-in-one Django-Docker starter kit. Pre-configured services including PostgreSQL, Redis, Celery, with Nginx and Traefik for production. Streamlined development with included tools for testin...
Took a web development job without much experience, am I doomed?
Okay so please don't ask how or why, but for the next year or so 50% of my 40 hour work week will be dedicated to developing a web application for a public authority.
The goal is to develop an application that lets users fill out an extensive evaluation about sustainability.
Afterwards, they should receive information and visualizations/diagrams building on their answers. They should also receive a score for their sustainability in different categories and suggestions to improve on.
Both the answers and the suggestions should be stored in a connected database.
That’s about it.
I have a little bit of programming experience.
I know the basic principles like classes, objects, control structures like if/for/etc.
I don’t really have experience in web development though apart from fooling around a bit in Django. Honestly that’s why I want to choose Django as a framework for it.
Do y’all think it is possible if I spent around 20hrs/week on this project?
Obviously the first step would be to learn Django and web development in general the next couple weeks.
I would appreciate any input and also tips for any resources to start off.
/r/django
https://redd.it/13e1b5o
Reddit
r/django on Reddit: Took a web development job without much experience, am I doomed?
Posted by u/Dreamville2801 - No votes and 11 comments