What Free Host Providers do you Use for deploying RESTful API ?
Until now I have been using Render, which provides a limited free plan for deploying Python (or any other) APIs. PythonAnywhere is another option that allows deploying for free.
If you're testing a project and need to deploy the API, where do you do it for free?
/r/Python
https://redd.it/1gk5ayh
Flask OpenAPI Generation?
I've been exploring Python frameworks as part of my blog on Python OpenAPI generation and I was quite surprised to see that Flask requires an extension like flask-smorest to generate an OpenAPI specification. Is OpenAPI just not popular in the Flask API community or is smorest just so good that built-in support is not needed?
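(For context, a minimal sketch of how flask-smorest is typically wired into a Flask app to get a generated spec; the configuration keys follow the extension's documentation, while `ItemSchema` and the route are invented for illustration:)

```python
from flask import Flask
from flask.views import MethodView
from flask_smorest import Api, Blueprint
from marshmallow import Schema, fields

class ItemSchema(Schema):          # illustrative schema
    id = fields.Int()
    name = fields.Str()

app = Flask(__name__)
app.config["API_TITLE"] = "Items API"
app.config["API_VERSION"] = "v1"
app.config["OPENAPI_VERSION"] = "3.0.3"
api = Api(app)                     # serves the generated OpenAPI spec

blp = Blueprint("items", __name__, url_prefix="/items")

@blp.route("/")
class Items(MethodView):
    @blp.response(200, ItemSchema(many=True))
    def get(self):
        """List items."""
        return [{"id": 1, "name": "example"}]

api.register_blueprint(blp)
```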
/r/flask
https://redd.it/1gkcdnu
Effect size calculation for repeated-measures ANOVA
Hello! I'm running an analysis using Python's statsmodels repeated-measures ANOVA (AnovaRM). I have a two-way repeated-measures ANOVA and a series of one-way repeated-measures ANOVAs, and I want to calculate the effect sizes.
Since there isn't a direct function for retrieving the partial eta squared measure, I figured I would have to calculate it. But to do that I need the sum-of-squares values, and as far as I can tell, I can't retrieve those either.
So my questions are:
1. Is there a way to retrieve or compute the sum of squares values? (Maybe I just missed it?)
2. Can I calculate the partial eta squared value using the variables in the ANOVA table (like the F value, degrees of freedom, p value, etc.)?
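(For what it's worth, partial eta squared can be recovered from the F value and the degrees of freedom alone: since eta_p^2 = SS_effect / (SS_effect + SS_error) and F = (SS_effect/df_effect) / (SS_error/df_error), it follows that eta_p^2 = F * df_effect / (F * df_effect + df_error). A hedged sketch against the `anova_table` returned by statsmodels' AnovaRM; the data and column names are illustrative and may differ slightly between versions:)

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Tiny illustrative long-format dataset: 4 subjects x 2 within-subject conditions
data = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4],
    "condition": ["A", "B"] * 4,
    "rt":        [1.1, 1.4, 0.9, 1.3, 1.2, 1.5, 1.0, 1.2],
})

res = AnovaRM(data, depvar="rt", subject="subject", within=["condition"]).fit()
tbl = res.anova_table  # columns like "F Value", "Num DF", "Den DF", "Pr > F"

# partial eta squared = F * df_effect / (F * df_effect + df_error)
tbl["partial_eta_sq"] = (tbl["F Value"] * tbl["Num DF"]) / (
    tbl["F Value"] * tbl["Num DF"] + tbl["Den DF"]
)
print(tbl)
```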
/r/pystats
https://redd.it/1gkbqjv
ELI5: Flask vs React (framework vs. library)
Flask: a micro-framework
React: a library
Since React is a library and libraries are considered to be un-opinionated, how is the (very proudly un-opinionated) Flask still considered a framework? Is it because of routing, WSGI handling, etc. coming out of the box?
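(One way to see the distinction: with a library you call the code, with a framework the code calls you. Even a minimal Flask app hands routing, request dispatch, and the WSGI entry point to the framework, which then invokes your view functions, roughly like this:)

```python
from flask import Flask

app = Flask(__name__)

@app.route("/hello")          # Flask owns the URL map and dispatching
def hello():
    return "Hello, world!"    # Flask calls this for you when a request matches

if __name__ == "__main__":
    app.run()                 # Flask also provides the WSGI app / dev server
```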
/r/flask
https://redd.it/1gkhr2i
[R] Never Train from Scratch
https://arxiv.org/pdf/2310.02980
The authors show that when transformers are pre-trained, they can match the performance of S4 on the Long Range Arena benchmark.
/r/MachineLearning
https://redd.it/1gk7dny
Data analytics
Hi, I'm in a course on data analytics. Our teacher keeps saying that we will find our niche within the spectrum of visualisation, machine learning, or coding.
I'm not sure how that works. How are we supposed to get better at visualisation without mastering coding? At times he says coding is important if you are interested in becoming a junior data analyst. How does the job market work? Can someone explain it to me? I'm not sure where my strength lies.
/r/IPython
https://redd.it/1gkhjzb
Bokeh Plot Problem
Hi all, I'm trying to show two Bokeh plots in a Flask app using Bootstrap columns; I need both side by side.
Each plot is saved as an HTML file; one loads fine but the other is not showing up.
In my main app.py:
from flask import Flask, render_template

app = Flask(__name__)

# Tell Flask to serve the dashboard page
@app.route('/dashboard')
def dashboard():
    # Read content of plot1.html
    with open("plot1.html", "r") as f:
        plot1_html = f.read()
    # Read content of plot2.html
    with open("plot2.html", "r") as f:
        plot2_html = f.read()
    # Pass both plots to the template
    return render_template("dashboard.html", plot1_html=plot1_html, plot2_html=plot2_html)
In the dashboard.html:
<!-- map and chart in bootstrap setup-->
<div class="container-fluid">
<div class="row">
<div class="col-md-6">
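(The post's template snippet is cut off here. As a hedged sketch using the variable names from the view above, the two fragments would typically be rendered inside the Bootstrap columns with Jinja's `| safe` filter so the Bokeh-generated HTML is not escaped:)

```html
<div class="container-fluid">
  <div class="row">
    <div class="col-md-6">{{ plot1_html | safe }}</div>
    <div class="col-md-6">{{ plot2_html | safe }}</div>
  </div>
</div>
```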
/r/flask
https://redd.it/1gkiu8s
Wednesday Daily Thread: Beginner questions
# Weekly Thread: Beginner Questions 🐍
Welcome to our Beginner Questions thread! Whether you're new to Python or just looking to clarify some basics, this is the thread for you.
## How it Works:
1. Ask Anything: Feel free to ask any Python-related question. There are no bad questions here!
2. Community Support: Get answers and advice from the community.
3. Resource Sharing: Discover tutorials, articles, and beginner-friendly resources.
## Guidelines:
This thread is specifically for beginner questions. For more advanced queries, check out our [Advanced Questions Thread](#advanced-questions-thread-link).
## Recommended Resources:
If you don't receive a response, consider exploring r/LearnPython or join the Python Discord Server for quicker assistance.
## Example Questions:
1. What is the difference between a list and a tuple?
2. How do I read a CSV file in Python?
3. What are Python decorators and how do I use them?
4. How do I install a Python package using pip?
5. What is a virtual environment and why should I use one?
Let's help each other learn Python! 🌟
/r/Python
https://redd.it/1gkl9r8
A recommendation for a simple job queue for a LAN/power-outage-resilient app?
I'm developing a Flask application to handle incoming data via webhooks. The primary goal is to ensure reliable processing and delivery of this data, even in the face of potential power outages or network interruptions.
To achieve this, I'm considering a queue-based system to store incoming data locally, preventing data loss if anything happens to my infrastructure.
I initially explored Celery and Redis, but I'm facing challenges in implementing simple, resilient tasks like sending a request and waiting for a response. This leads me to believe that these tools might be overkill for my specific use case.
Given my lack of experience with queue systems, I'm seeking guidance on the most suitable approach to meet my requirements. Are there any recommended best practices or alternative solutions that could be more efficient and straightforward?
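(As a rough sketch of the "persist locally first, process later" idea using nothing but the standard library; the table layout and function names are invented for illustration, and a real app would also need to manage connection sharing across Flask worker threads:)

```python
import json
import sqlite3

# A SQLite file survives process restarts and power loss far better than an
# in-memory queue; each webhook payload is persisted before any processing.
conn = sqlite3.connect("webhook_queue.db", check_same_thread=False)
conn.execute(
    "CREATE TABLE IF NOT EXISTS queue ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT, done INTEGER DEFAULT 0)"
)
conn.commit()

def enqueue(payload: dict) -> None:
    """Persist an incoming webhook payload immediately."""
    conn.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(payload),))
    conn.commit()

def process_pending(handler) -> None:
    """Replay unprocessed payloads, e.g. from a background loop or cron job."""
    rows = conn.execute(
        "SELECT id, payload FROM queue WHERE done = 0 ORDER BY id"
    ).fetchall()
    for row_id, raw in rows:
        handler(json.loads(raw))  # forward the data; raise here to retry later
        conn.execute("UPDATE queue SET done = 1 WHERE id = ?", (row_id,))
        conn.commit()
```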
/r/flask
https://redd.it/1gk9mo6
Just published an article to understand Python Project Management and Packaging, illustrated with uv
Hey everyone,
I've just finished writing the first part of my comprehensive guide on Python project management and packaging. Now that I think about it, it's more an article for understanding the many concepts of Python packaging and project management than a guide in and of itself.
The article: A Comprehensive Guide to Python Project Management and Packaging: Concepts Illustrated with uv – Part I
In this first part, I focused on:
- The evolution of Python packaging standards through key PEPs.
- Detailed explanations of the main concepts like `pyproject.toml`, the packaging nomenclature, the dependency groups, locking and syncing, etc.
- An introduction to `uv` and how it illustrates essential packaging concepts.
- Practical workflows using `uv` that I use with data science projects.
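(As a small, hedged illustration of those concepts: the project name and dependencies below are invented, and the dependency-group table follows PEP 735 as supported by uv.)

```toml
# pyproject.toml (excerpt)
[project]
name = "demo"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["requests"]

[dependency-groups]
dev = ["pytest", "ruff"]
```

Typical workflow commands would then be `uv lock` to resolve and pin everything into `uv.lock`, `uv sync` to install the locked environment, and `uv run pytest` to run a tool inside it.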
Mainly what it lacks is a deeper section or paragraph on workspaces, scripts, building and publishing. That's for part 2!
Working on this article was mainly a journey for me through the various PEPs that have shaped the current Python packaging standards. I delved into the history and rationale behind these PEPs. I just wanted to understand all the discussions around packaging; it's something we deal with daily, so I wanted to understand it deeply.
/r/Python
https://redd.it/1gkmrsg
Dendrite: Interact with websites with natural language instead of using css selectors
What my project does:
Dendrite is a simple framework for interacting with websites using natural language. Interact and extract data without having to hunt for brittle CSS selectors or XPaths; instead you write things like:
browser.click("the sign in button")
For the developers who like their code typed, specify what data you want with a Pydantic BaseModel and Dendrite returns it in that format with one simple function call. Built on top of Playwright for a robust experience. This is an easy way to give your AI agents the same web browsing capabilities as humans have. Integrates easily with frameworks such as LangChain, CrewAI, LlamaIndex and more.
We are planning on open sourcing everything soon as well so feel free to reach out to us if you’re interested in contributing!
Github: https://github.com/dendrite-systems/dendrite-python-sdk
Overview
Authenticate Anywhere: Dendrite Vault, our Chrome extension, handles secure authentication, letting your agents log in to almost any website.
Interact Naturally: With natural language commands, agents can click, type, and navigate through web elements with ease.
Extract and Manipulate Data: Collect structured data from websites, return data from different websites in the same structure without having to maintain different scripts.
Download/Upload Files: Effortlessly manage file interactions to and from websites, equipping agents to handle documents,
/r/Python
https://redd.it/1gkg23q
ParScrape v0.4.7 Released
# What My Project Does
Scrapes data from sites and uses AI to extract structured data from it.
# What's New:
* BREAKING CHANGE: the --pricing CLI option now takes a string value of 'details', 'cost', or 'none'.
* Added a pool of user agents that gets randomly pulled from.
* Updated pricing data.
* Pricing token capture and computation are now much more accurate.
* Faster startup
# Key Features:
* Uses Playwright / Selenium to bypass most simple bot checks.
* Uses AI to extract data from a page and save it in various formats such as CSV, XLSX, JSON, and Markdown.
* Has rich console output to display data right in your terminal.
# GitHub and PyPI
* PAR Scrape is under active development and getting new features all the time.
* Check out the project on GitHub or for full documentation, installation instructions, and to contribute: [https://github.com/paulrobello/par\_scrape](https://github.com/paulrobello/par_scrape)
* PyPI [https://pypi.org/project/par\_scrape/](https://pypi.org/project/par_scrape/)
# Comparison:
I have seen many command-line and web applications for scraping, but none that are as simple, flexible, and fast as ParScrape.
# Target Audience
AI enthusiasts and data-hungry hobbyists
/r/Python
https://redd.it/1gkhl3c
[D] Want to move away from coding-heavy ML but still want to complete the PhD
Hi Folks,
I come from a traditional electrical engineering background, doing things like industrial automation and computer vision. I decided to pursue a PhD in ML as I thought it would be a good field to enter given my past experience. Now I have been doing the PhD for the past three years. While I like my group and research, I am getting discouraged/depressed by (1) the publication rat race, (2) post-graduation opportunities being mostly coding-heavy, and (3) the inability to carve out a name for myself in the field given how crowded it has become.
Thus, ideally I would like to complete my PhD and move into a more relaxed-paced (even if not as high-paying as ML jobs), non-coding-heavy but technical job, where I do not have to constantly up-skill myself. Do you folks have any suggestions on what jobs I could look into, or would you suggest dropping the PhD and doing something else?
TL;DR: 4th-year ML PhD student unsure about sticking with the PhD, as they want a technical but not coding-heavy industry job after graduation. Seeking advice on what to do.
/r/MachineLearning
https://redd.it/1gkx6o7
Prototyping with Nanodjango, uv and ninja
https://www.youtube.com/watch?v=0-iuJgfQMOw
/r/django
https://redd.it/1gkzl07
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1gld3ic
An article on lazy fetching in Django
I published my article about lazy fetching on Medium today, aimed at Django developers, especially those new to the framework. I wrote everything based on personal experience.
https://medium.com/@mikyrola8/understanding-lazy-fetching-in-django-a-deep-dive-8159c4822cd4
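(For readers who haven't run into the term, "lazy fetching" here refers to Django QuerySets deferring their SQL until the results are actually needed; a tiny illustration, where the `Article` app and model are hypothetical:)

```python
from myapp.models import Article  # hypothetical app and model, for illustration only

qs = Article.objects.filter(published=True)   # no SQL has run yet: QuerySets are lazy
qs = qs.exclude(title__startswith="Draft")    # still no query; filters just stack up

first_ten = list(qs[:10])                     # evaluation happens here: one query runs
```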
/r/django
https://redd.it/1gl4ajx
Keep your code snippets in README up-to-date!
# Code-Embedder
Links: GitHub, GitHub Actions Marketplace
What My Project Does
Code Embedder is a GitHub Action that automatically updates code snippets in your markdown (README) files. It finds code blocks in your README that reference specific scripts, then replaces these blocks with the current content of those scripts. This keeps your documentation in sync with your code.
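As a hedged sketch of the idea (the `language:path` tag on the fence follows the convention shown in the project's README, but check the repo for the exact syntax; `main.py` is an invented path):

````markdown
```python:main.py
# On each workflow run, Code-Embedder replaces the body of this block
# with the current contents of main.py in the repository.
```
````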
✨ Key features
🔄 Automatic synchronization: Keep your README code examples up-to-date without manual intervention.
🛠️ Easy setup: Simply add the action to your GitHub workflow and format your README code blocks.
📝 Section support: Update only specific sections of the script in the README.
🧩 Object support: Update only specific objects (functions, classes) in the README. The latest version v0.5.1 supports only 🐍 Python objects (other languages to be added soon).
Find more information in GitHub 🎉
Target Audience
It is a production-ready, tested GitHub Action that can be part of your CI/CD workflow to keep your READMEs up-to-date.
Comparison
It is a lightweight package whose primary purpose is to keep the code examples in your READMEs up-to-date. MkDocs is a full solution for creating documentation as code, and it also offers embedding external files; Code-Embedder, by contrast, is lightweight, can be used for projects with or without MkDocs, and additionally syncs not only full scripts but also specific sections and objects.
/r/Python
https://redd.it/1gl1hla
/r/Python
https://redd.it/1gl1hla