Parallel and Concurrent Programming in Python: A Practical Guide
Hey, I made a video about Parallel and Concurrent Programming in Python with threading and multiprocessing.
First we build a program that uses neither technique, then we apply threading and multiprocessing and compare the performance difference.
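The comparison in the video can be sketched with `concurrent.futures` (a minimal illustration, not the video's exact code; `io_task` here simulates I/O-bound work with a sleep):

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def io_task(n):
    time.sleep(0.1)  # simulates waiting on I/O; threads shine here
    return n

def measure(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    items = range(10)
    # sequential: ~1.0s (10 sleeps back to back)
    measure("sequential", lambda: [io_task(n) for n in items])
    # threaded: ~0.3s (4 sleeps overlap at a time)
    measure("threads", lambda: list(ThreadPoolExecutor(4).map(io_task, items)))
    # processes: similar wall time here, but pays process start-up cost;
    # processes win for CPU-bound work, where the GIL blocks threads
    measure("processes", lambda: list(ProcessPoolExecutor(4).map(io_task, items)))
```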
https://www.youtube.com/watch?v=IQxKjGEVteI
/r/Python
https://redd.it/1lhgxek
Fast, lightweight parser for Securities and Exchange Commission Inline XBRL
Hi there, this is a niche package but it may help a few people. I noticed that the SEC XBRL endpoint sometimes takes hours to update and is missing a lot of data, so I wrote a fast, lightweight Inline XBRL parser to fix this.
https://github.com/john-friedman/secxbrl
# What my project does
Parses SEC Inline XBRL quickly using only the Inline XBRL HTML file, without the need for linkbases, schema files, etc.
# Target Audience
Algorithmic traders, PhD students, Quant researchers, and hobbyists.
# Comparison
Other packages such as python-xbrl, py-xbrl, and brel focus on parsing most forms of XBRL. This package only parses SEC XBRL, which allows for dramatically faster performance since no additional files need to be downloaded, making it suitable for running on small instances such as t4g.nano.
The README contains links to the other packages, as they may be a better fit for your use case.
# Example
from secxbrl import parseinlinexbrl

# load data
path = '../samples/000095017022000796/tsla-20211231.htm'
with open(path, 'rb') as f:
    content = f.read()

# get all EarningsPerShareBasic
basic = [{'val': item['_val'], 'date': item['_context']['context_period_enddate']} for item
/r/Python
https://redd.it/1lhdspc
[P] This has been done like a thousand times before, but here I am presenting my very own image denoising model
https://redd.it/1lhny9b
FastAPI Guard v3.0 - Now with Security Decorators and AI-like Behavior Analysis
Hey r/Python!
So I've been working on my FastAPI security library (fastapi-guard) for a while now, and it's honestly grown way beyond what I thought it would become. Since my last update on r/Python (I wasn't able to post on r/FastAPI until today), I've basically rebuilt the whole thing and added some pretty cool features.
What My Project Does:
Still does all the basic stuff - IP whitelisting/blacklisting, rate limiting, penetration attempt detection, cloud provider blocking, etc. But now it's way more flexible and you can configure everything per route.
What's new:
The biggest addition is Security Decorators. You can now secure individual routes instead of just using the global middleware configuration. Want to rate limit just one endpoint? Block certain countries from accessing your admin panel? Done. No more "all or nothing" approach.
from fastapi_guard.decorators import SecurityDecorator

@app.get("/admin")
@SecurityDecorator.access_control.block_countries(["CN", "RU"])
@SecurityDecorator.rate_limiting.limit(requests=5, window=60)
async def admin_panel():
    return {"status": "admin"}
Other stuff that got fixed:
- Had a security vulnerability in v2.0.0 with header injection through X-Forwarded-For. That's patched now
- IPv6 support was broken, fixed that too
- Made IPInfo completely optional - you can now use your own geo IP handler.
- Rate limiting is now proper sliding window instead of fixed window
- Other improvements/enhancements/optimizations...
Been using it in production for months
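For readers unfamiliar with the distinction, a sliding-window limiter can be sketched generically like this (illustrative only, not fastapi-guard's internals):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests in any `window`-second span."""

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.hits: deque = deque()  # timestamps of accepted requests

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # drop hits that have fallen out of the window
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=2, window=1.0)
print(limiter.allow(0.0), limiter.allow(0.1), limiter.allow(0.2))  # True True False
print(limiter.allow(1.05))  # True: the hit at 0.0 has expired
```

Unlike a fixed window, this never admits a burst of 2×limit requests straddling a window boundary.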
/r/Python
https://redd.it/1lhxwee
Monday Daily Thread: Project ideas!
# Weekly Thread: Project Ideas 💡
Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.
## How it Works:
1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.
## Guidelines:
* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.
# Example Submissions:
## Project Idea: Chatbot
**Difficulty**: Intermediate
**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar
**Description**: Create a chatbot that can answer FAQs for a website.
**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)
## Project Idea: Weather Dashboard
**Difficulty**: Beginner
**Tech Stack**: HTML, CSS, JavaScript, API
**Description**: Build a dashboard that displays real-time weather information using a weather API.
**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)
## Project Idea: File Organizer
**Difficulty**: Beginner
**Tech Stack**: Python, File I/O
**Description**: Create a script that organizes files in a directory into sub-folders based on file type.
**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)
Let's help each other grow. Happy coding!
/r/Python
https://redd.it/1li2gwg
Run background tasks in Django with zero external dependencies. Here's an update on my library, django-async-manager.
Hey Django community!
I've posted here before about **django-async-manager**, a library I've been developing, and I wanted to share an update on its progress and features.
**What is django-async-manager?**
It's a lightweight, database-backed task queue for Django that provides a Celery-like experience without external dependencies. Perfect for projects where you need background task processing but don't want the overhead of setting up Redis, RabbitMQ, etc.
**✨ New Feature: Memory Management**
The latest update adds memory limit capabilities to prevent tasks from consuming too much RAM. This is especially useful for long-running tasks or when working in environments with limited resources.
# Task with Memory Limit
@background_task(memory_limit=512)  # Limit to 512MB
def memory_intensive_task():
    # This task will be terminated if it exceeds 512MB
    large_data = process_large_dataset()
    return analyze_data(large_data)
# Key Features
* **Simple decorator-based API** - Just add `@background_task` to any function
* **Task prioritization** - Set tasks as low, medium, high, or critical priority
* **Multiple queues** - Route tasks to different workers
* **Task dependencies** - Chain tasks together
* **Automatic retries** - With configurable exponential backoff
* **Scheduled tasks** - Cron-like scheduling for periodic tasks
* **Timeout control** - Prevent tasks from running too long
* **Memory limits** - Stop tasks from consuming too much RAM
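For context, one common way a per-task memory cap can be enforced on POSIX systems is `resource.setrlimit` in the worker process; this is a sketch of the general technique, not necessarily django-async-manager's implementation:

```python
import resource

def set_memory_limit(megabytes: int) -> None:
    # Cap the process's address space; allocations beyond it raise MemoryError
    limit = megabytes * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

set_memory_limit(512)
try:
    data = bytearray(1024 * 1024 * 1024)  # try to allocate 1 GB
except MemoryError:
    print("task exceeded its memory budget")
```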
/r/django
https://redd.it/1lhxz7r
I made Flask-Squeeze which minifies and compresses responses!
https://github.com/mkrd/Flask-Squeeze
/r/flask
https://redd.it/1lhn2yu
[P] I made a website to visualize machine learning algorithms + derive math from scratch
/r/MachineLearning
https://redd.it/1lhtkr4
Fenix: I built an algorithmic trading bot with CrewAI, Ollama, and Pandas.
Hey r/Python,
I'm excited to share a project I've been passionately working on, built entirely within the Python ecosystem: Fenix Trading Bot. The post was removed earlier for missing some sections, so here is a more structured breakdown.
GitHub Link: https://github.com/Ganador1/FenixAI_tradingBot
# What My Project Does
Fenix is an open-source framework for algorithmic cryptocurrency trading. Instead of relying on a single strategy, it uses a crew of specialized AI agents orchestrated by CrewAI to make decisions. The workflow is:
1. It scrapes data from multiple sources: news feeds, social media (Twitter/Reddit), and real-time market data.
2. It uses a Visual Agent with a vision model (LLaVA) to analyze screenshots of TradingView charts, identifying visual patterns.
3. A Technical Agent analyzes quantitative indicators (RSI, MACD, etc.).
4. A Sentiment Agent reads news/social media to gauge market sentiment.
5. The analyses are passed to Consensus and Risk Management agents that weigh the evidence, check against user-defined risk parameters, and make the final BUY, SELL, or HOLD decision. The entire AI analysis runs 100% locally using Ollama, ensuring privacy and zero API costs.
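The consensus step in (5) can be sketched in plain Python (hypothetical names and thresholds, not Fenix's actual CrewAI agent code):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str    # e.g. "technical", "sentiment", "visual"
    score: float   # -1.0 (strong sell) .. +1.0 (strong buy)
    weight: float  # how much the consensus trusts this agent

def consensus(signals, buy_threshold=0.3, sell_threshold=-0.3):
    # weighted average of all agents' scores
    total = sum(s.score * s.weight for s in signals)
    norm = total / sum(s.weight for s in signals)
    if norm >= buy_threshold:
        return "BUY"
    if norm <= sell_threshold:
        return "SELL"
    return "HOLD"

signals = [Signal("technical", 0.6, 0.5),
           Signal("sentiment", -0.2, 0.3),
           Signal("visual", 0.4, 0.2)]
print(consensus(signals))  # → BUY (weighted average 0.32 ≥ 0.3)
```

A risk-management layer would then veto or resize the trade before execution.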
# Target Audience
This project is aimed at:
Python Developers & AI Enthusiasts: Who want to see a real-world, complex application of modern Python libraries like CrewAI, Ollama, Pydantic, and Selenium working together. It serves as a great case study for building multi-agent systems.
Algorithmic Traders & Quants: Who are looking for a flexible, open-source framework that goes beyond
/r/Python
https://redd.it/1li8id5
sodalite - an open source media downloader with a pure python backend
Made this as a passion project, hope you'll like it :) If you did, please star it! I built it as part of a hackathon and I'd appreciate the support.
What my project does
It detects a link you paste from a supported service, parses it via a network request and serves the file through a FastAPI backend.
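The detection step can be sketched like this (hypothetical service table and function names, not sodalite's actual code):

```python
from urllib.parse import urlparse

# hypothetical mapping of hostnames to downloader handlers
SUPPORTED = {
    "www.youtube.com": "youtube",
    "youtu.be": "youtube",
    "soundcloud.com": "soundcloud",
}

def detect_service(url: str):
    """Return the handler name for a pasted link, or None if unsupported."""
    return SUPPORTED.get(urlparse(url).netloc)

print(detect_service("https://youtu.be/abc123"))  # → youtube
print(detect_service("https://example.com/x"))    # → None
```

A FastAPI route would then dispatch to the matched handler and stream the file back.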
Intended audience
Mostly someone who's willing to host this, production ig?
Repo link
https://github.com/oterin/sodalite
/r/Python
https://redd.it/1li6ek4
pandas/python functions (pushing and calling dataframe)
Hello all,
I am fairly new to Python, so I'm having difficulty with the following.
I wanted to create a dim table in a separate file, push a few of its columns to SQL, and allow a few other columns to be pulled into another Python file, where I would merge them with that data frame (creating ID keys, basically).
But I am having difficulties doing that; it gives me a long error when I call it in the other file: product_table = Orders_product()
Could someone point me in the right direction?
Product table:
import pandas as pd
from MySQL import getmysqlengine

# getting file
File = r"ExcelFilePath"
Sheet = "Orders"
df = pd.read_excel(File, sheet_name=Sheet)
product_columns = ["Product Category", "Product Sub-Category", "Product Container", "Product Name"]

def Orders_product():
    # cleaning text / dropping duplicates
    df_products = df[product_columns].copy()
    for product_col in product_columns:
        df_products[product_col] = df_products[product_col].str.strip()
    df_products['ProductKeyJoin'] = df_products[product_columns].agg('|'.join, axis=1)
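For reference, a common pattern for the dim-table/ID-key workflow described above (toy data and a hypothetical two-column subset for brevity): build the dim table with a surrogate key, push it to SQL, and merge the key back into the fact table.

```python
import pandas as pd

orders = pd.DataFrame({
    "Product Name": ["Pen", "Pad", "Pen"],
    "Product Category": ["Office", "Office", "Office"],
})

# dim table: unique products with a surrogate key
dim_product = (orders[["Product Category", "Product Name"]]
               .drop_duplicates()
               .reset_index(drop=True))
dim_product["ProductID"] = dim_product.index + 1
# dim_product.to_sql("dim_product", engine, ...) would push it to SQL here

# merge the key back into the fact table
fact = orders.merge(dim_product,
                    on=["Product Category", "Product Name"],
                    how="left")
print(fact["ProductID"].tolist())  # → [1, 2, 1]
```

Importing the function that returns `dim_product` from the other file then gives both scripts the same keys.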
/r/Python
https://redd.it/1lhyni4
I built a new package for processing documents for LLM applications: SplitterMR
Hi!
Over the past few months, I've been mulling over the idea of making a Python library. I work as an AI engineer, and I was a little tired of reinventing the wheel every time I had to build a RAG to process documents: chunking, reading, image processing, etc.
So, I've started working on a personal project and developed a library that converts the files you pass in to Markdown and then easily chunks them. I have called it SplitterMR. This library does several cool things: it has support for Docling, MarkItDown, and PDFPlumber; it can split tables, describe images using VLMs, split text recursively, or split by tokens. It is very simple to use!
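Recursive splitting, one of the strategies mentioned, can be sketched generically (illustrative only, not SplitterMR's API): try the coarsest separator that fits inside the size budget, and fall back to finer ones.

```python
def recursive_split(text, max_len=100, seps=("\n\n", "\n", " ")):
    """Split text into chunks of at most max_len characters,
    preferring to break at the coarsest separator available."""
    if len(text) <= max_len:
        return [text]
    for sep in seps:
        # find the last occurrence of this separator within the budget
        cut = text.rfind(sep, 0, max_len)
        if cut > 0:
            return [text[:cut]] + recursive_split(text[cut + len(sep):], max_len, seps)
    # no separator inside the window: hard cut
    return [text[:max_len]] + recursive_split(text[max_len:], max_len, seps)

print(recursive_split("aaa bbb ccc ddd", max_len=7))  # → ['aaa', 'bbb', 'ccc ddd']
```

Token-based splitting follows the same shape, measuring length in tokens rather than characters.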
It's still in development, and I need to keep working on it, but if you could take a look at it in the meantime and tell me how it goes, I'd appreciate it :)
The code repository is: https://github.com/andreshere00/Splitter_MR/, and the PyPI package is published here: https://pypi.org/project/splitter-mr/
I've also published a documentation site with several plug-and-play examples so you can try them out and take a look: https://andreshere00.github.io/Splitter_MR/
And as I said, I'm here for anything. Let me know!
/r/Python
https://redd.it/1liepo1
[Showcase] leetfetch – A CLI tool to fetch and organize your LeetCode submissions
**GitHub**: [https://github.com/Rage997/leetfetch](https://github.com/Rage997/leetfetch)
**Example output repo**: [https://github.com/Rage997/LeetCode](https://github.com/Rage997/LeetCode)
# What It Does
**leetfetch** is a command-line Python tool that downloads all your LeetCode submissions and problem descriptions using your browser session (no password or API key needed). It groups them by problem and language, and creates Markdown summaries.
# Target Audience
Anyone who solves problems on LeetCode and wants to:
* Back up their work
* Track progress locally or on GitHub
# How It’s Different
Compared to other tools, leetfetch:
* Uses the current GraphQL API
* Filters by accepted (or all) submissions
* Generates a clean, browsable folder structure
# Example Usage
# Download accepted Python3 submissions
python3 main.py --languages python3
# Download all submissions in all languages
python3 main.py --no-only-accepted --all-languages
# Only fetch problems not yet saved
python3 main.py --sync
No separate login step needed – you just need to be signed in to LeetCode in your browser.
Let me know what you think.
/r/Python
https://redd.it/1liej6o
django-hstore-field, An easy to use postgres hstore field that is based on django-hstore-widget
Hello everyone,
Today I released django-hstore-field, an easy-to-use Postgres hstore field that is based on `django-hstore-widget`.
This project is built on the Stencil.js framework and uses web components.
# 🧐 Usage:
# yourapp/models.py
from django.db import models
from django_hstore_field import HStoreField

class ExampleModel(models.Model):
    data = HStoreField()
# 🚀 Features:
Drop-in replacement for `django.contrib.postgres.HStoreField`
It leverages Postgres hstore to give developers a key:value widget in the admin field.
It includes an admin panel widget to input and visualize the data.
It has error detection, to prevent malformed JSON in the widget.
It has a fallback JSON textarea (the same one shipped with Django's default implementation)
The widgets have the same style as the admin panel.
Only one [file](https://github.com/baseplate-admin/django-hstore-field/blob/master/src/django_hstore_field/fields.py).
# ⚖ Comparison with other projects:
django-postgres-extensions: As far as I checked, it does not offer a built-in admin panel widget. Also, that package doesn't align with my philosophy of "do one thing and do it well".
# 😎 Example:
Picture:
Rendered using django-hstore-field
Thank you all! If you like the project, please leave a ⭐.
/r/django
https://redd.it/1lig4t8