How do I code it so that section 1 automatically links to discussion a? I'm using topics 1, 2, 3 to organize the discussions and a {% for post in page.get_parent %} loop in the template to build a navigation system. I want section 1 to always link to discussion a.
/r/django
https://redd.it/10gsn1x
Best way to load initial data and maintain it in production?
We develop surveys for regulatory purposes and we expose them via an API (DRF).
Each survey has questions, and we'll have a different set of questions for each of our clients (wording and number of questions can vary).
We are going live soon, and we haven't settled yet on how to initially load that data and how to maintain it (i.e. wording may change over time, new questions can be added, etc.).
Here's our feeling so far:
Fixtures: We use them extensively in our unit tests and our test environments but it doesn't feel right for production for several reasons:
No tracking of what has been run or what has not yet been run
Forces us to deal with PKs
A fixture can easily be forgotten
Migrations with raw SQL: the idea would be to have a folder for each client with the different scripts to be run
Harder to generate and maintain (no loaddata / dumpdata)
What's your take on this?
Thanks
/r/django
https://redd.it/10gcrw8
Aggregate posts from your Mastodon timeline (built with Django)
Hey all, I built https://fediview.com/ with Django. It surfaces posts from your Mastodon timeline with a very simple algorithm -- super useful to just "catch up" on posts you might have missed.
Hopefully it's useful if anyone is on Mastodon. I'd love to hear any feedback!
/r/django
https://redd.it/10glm2u
Jupyter + copilot
Anyone had success setting these two up together?
/r/JupyterNotebooks
https://redd.it/10gnx7k
How long will data be stored in RAM?
Since I use notebooks as demos for images and plots when I do data analysis, I often leave my Jupyter notebook server on for weeks in case I need to use/check certain notebooks occasionally.
In a Jupyter notebook there are often multiple cells, with some variables stored in one cell and passed to the next. My question is: how long will that intermediate data be kept, and can I run a cell even after weeks and trust the output, as long as no error is reported?
My guess is that if the RAM throws away certain groups of data, then I should not be able to run the cell, since the intermediate data it needs is no longer available; in other words, as long as the cell can run, the data is still there.
Also, I am using an M1 MacBook, which I know will use the hard drive as swap in certain cases. I'm not sure if this means the intermediate data will be kept in temporary files on the hard drive, which sounds like a safer place to store it.
/r/JupyterNotebooks
https://redd.it/10gjv8j
Today I re-learned: Python function default arguments are retained between executions
https://www.valentinog.com/blog/tirl-python-default-arguments/
/r/Python
https://redd.it/10gt7tv
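The gotcha in the linked post is easy to reproduce: a mutable default argument is evaluated once, when the function is defined, so every call that omits the argument shares the same object. A quick illustration:

```python
def append_item(item, bucket=[]):
    # `bucket` defaults to ONE list, created at def time and reused across calls
    bucket.append(item)
    return bucket


print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the default list remembered the first call


def append_item_fixed(item, bucket=None):
    # Conventional fix: use None as a sentinel and build a fresh list per call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket


print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

Passing a list explicitly sidesteps the default entirely; only calls relying on the default share state.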
Parsing and validating pyproject.toml files with msgspec
The upcoming release of msgspec (my fast & friendly serialization library) adds builtin support for YAML & TOML formats, on top of the existing JSON & msgpack support.
To demonstrate this, I wrote up a quick example of parsing and validating pyproject.toml files using msgspec: https://jcristharif.com/msgspec/examples/pyproject-toml.html
It applies the following standard msgspec workflow to the pyproject.toml file format:
1. Define a schema using standard python type annotations. If you're already a dataclasses/attrs/pydantic user this should feel pretty familiar. msgspec supports a wide selection of common stdlib types. Any additional needed types can also be added via an extension mechanism.
2. Pass that schema to one of the supported decode functions (here we're using msgspec.toml.decode). The decoder will parse and validate that the input data matches the schema. If it's valid the specified type is returned, otherwise a user-friendly error message is raised detailing where the data is malformed.
The same technique can be applied for any of the formats msgspec supports, allowing msgspec to be a one-stop-shop for serialization & validation in Python.
Note that unlike the existing JSON & msgpack support, these new formats rely on external parser libraries (msgspec includes a fast, custom JSON parser).
/r/Python
https://redd.it/10gzbgo
Getting Started With Property-Based Testing in Python π With Hypothesis and Pytest - Semaphore
https://semaphoreci.com/blog/property-based-testing-python-hypothesis-pytest
/r/Python
https://redd.it/10gv7b7
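For anyone who wants the gist before reading the tutorial: with property-based testing you state a property that should hold for *all* inputs, and Hypothesis searches for counterexamples (shrinking any failure to a minimal case). A minimal sketch, assuming `hypothesis` is installed; the sorting properties are just illustrative examples:

```python
from collections import Counter
from hypothesis import given, strategies as st

# Property: sorting is idempotent -- sorting a sorted list changes nothing
@given(st.lists(st.integers()))
def test_sort_idempotent(xs):
    once = sorted(xs)
    assert sorted(once) == once

# Property: sorting preserves the multiset of elements
@given(st.lists(st.integers()))
def test_sort_preserves_elements(xs):
    assert Counter(sorted(xs)) == Counter(xs)
```

Run these with pytest as usual; calling a decorated function directly (e.g. `test_sort_idempotent()`) also runs the search.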
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
Found a neat resource related to Python over the past week? Looking for a resource to explain a certain topic?
Use this thread to chat about and share Python resources!
/r/Python
https://redd.it/10hdaol
Pynecone: New Features and Performance Improvements
Hi everyone, wanted to give a quick update on Pynecone because there have been major improvements in the past month since our initial release.
For those who have never heard of Pynecone, it is a way to build full-stack web apps in pure Python. The framework is easy to get started with even without previous web dev experience, and is entirely open source / free to use.
# Improvements:
Here are some of the notable improvements we implemented. Along with these were many bug fixes to get Pynecone more stable.
Components/Features:
Added Windows support!
Added built-in graphing libraries using Victory.
Added Dynamic Routes.
Performance:
Switched to WebSockets (no more new requests for every event!)
Compiler improvements to speed up event processing.
Community:
Grown from ~30 to ~2,400 GitHub stars.
70 [Discord](https://discord.gg/T5WSbC2YtQ) members.
13 more contributors.
Testing:
Improved unit test coverage and added integration tests for all PRs.
Next Steps:
Add components such as upload and date picker.
Show how to make your own Pynecone 3rd party libraries.
And many more features!
/r/Python
https://redd.it/10h6l7e
Migrating from Flask to FastAPI
Part of the work I've done with Forethought is leading an effort to migrate from Flask to FastAPI.
Here's the first blog post out of 3, with all the tips and tricks to migrate a huge, real-life production code base.
I hope it's useful!
https://engineering.forethought.ai/blog/2022/12/01/migrating-from-flask-to-fastapi-part-1/
/r/Python
https://redd.it/10h9fb5
NiceGUI now has a subreddit.
Hi Folks... Just a heads up that NiceGUI now has its own subreddit at r/nicegui. If you want to do really fast and efficient web page development using Python, this is definitely worth looking at. Cheers!
/r/Python
https://redd.it/10hg2i4
Add Watermarks To PDF, JPG & PNG files with no restrictive licensing
I created a small Python package to add watermarks to PDF, JPG & PNG files.
**Why?** Most Python PDF packages have licensing that requires you to release your source code, which isn't ideal for everyone. In this package, I've utilised PIL (open source HPND License) & Pypdfium2 (either Apache-2.0 or BSD-3-Clause, at your choice), and the code itself is released under the MIT license.
Check it out - [https://github.com/bowespublishing/pythonwatermark](https://github.com/bowespublishing/pythonwatermark)
It can probably be massively improved but it works :)
## Installation
Installing the latest PyPI release (recommended)
    python3 -m pip install -U pythonwatermark
This will use a pre-built wheel package, the easiest way of installing pythonwatermark.
## Dependencies
pythonwatermark uses two awesome open-source Python packages to work its magic. They are:
Pillow - [https://github.com/python-pillow/Pillow](https://github.com/python-pillow/Pillow)
Like PIL, Pillow is licensed under the open source HPND License
Pypdfium2 - [https://github.com/pypdfium2-team/pypdfium2](https://github.com/pypdfium2-team/pypdfium2)
PDFium and pypdfium2 are available by the terms and conditions of either Apache-2.0 or BSD-3-Clause, at your choice.
These are both fantastic packages and are liberally licensed, meaning that, unlike with other options, you don't need to release your source code to the public.
## Usage
Import watermark utils:

    from pythonwatermark import watermarkutils

Add watermark to file:

    watermarkutils.put_watermark(inputfile, outputfile, watermark,
/r/Python
https://redd.it/10hfz75
Experimental MiniJinja Bindings for Python
https://pypi.org/project/minijinja/0.30.1/
/r/Python
https://redd.it/10hcmdj
Am I over thinking this question?
Just for some context, this is my first coding class and I read what I was supposed to read in the textbook. All it taught us was how to use print and how to do basic math.
This is the first question on the homework, and it seems complex for a first question. How am I supposed to know how to set this up while knowing little to nothing about coding?
The US Census Bureau projects population based on the following
assumptions:
One birth every 7 seconds
One death every 13 seconds
One new immigrant every 45 seconds
Write a program to display the population for each of the next five years.
Assume the current population is 312032486 and one year has 365 days.
/r/Python
https://redd.it/10hde5l
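For what it's worth, the exercise really does only need print and basic arithmetic: convert a year to seconds, divide by each event interval to get counts per year, then apply the net change five times. One straightforward reading (using integer division, which is an assumption; the textbook may intend floats):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 with the 365-day assumption

births = SECONDS_PER_YEAR // 7        # one birth every 7 seconds
deaths = SECONDS_PER_YEAR // 13       # one death every 13 seconds
immigrants = SECONDS_PER_YEAR // 45   # one new immigrant every 45 seconds
yearly_change = births - deaths + immigrants

population = 312032486
for year in range(1, 6):
    population += yearly_change
    print(f"Year {year}: {population}")
```

Each year the population grows by the same net amount, so the loop is just repeated addition.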
Build a custom Python linter in 5 minutes
https://blog.sylver.dev/build-a-custom-python-linter-in-5-minutes
/r/Python
https://redd.it/10hnsxq
Message Queueing: Using Postgres Triggers, Listen and Notify as a replacement for Celery and Signals
A common pattern in modern web development is the requirement to process data asynchronously after some user action or database event. In the article below, we describe via a concrete example a traditional approach to solving this problem for a Django/Postgres based application using django signals and Celery. We then proceed to discuss some of the shortcomings of this approach and demonstrate how using PostgreSQL triggers alongside the PostgreSQL LISTEN/NOTIFY protocol can offer a more robust solution.
Asynchronous processing of database events in a robust and lightweight manner using django-pgpubsub.
/r/django
https://redd.it/10hn5ij
PyI18n - Simple and easy-to-use internationalization library
Attention all Python developers! Are you tired of struggling with internationalization in your projects? Look no further! Introducing our new Python library, PyI18n, the ultimate solution for all your internationalization needs. With easy-to-use API and detailed documentation, localizing your projects has never been easier. This library is fully compatible with the Django framework, so you can easily integrate it into your existing Django projects. Don't miss out on this powerful tool, try it out today and take your internationalization to the next level!
repository: https://github.com/sectasy0/pyi18n
/r/django
https://redd.it/10hmu3p
Skinny models and fat views? Where should I be writing this code?
Hey folks,
I'm writing an app using Django that has fairly minimal locally stored data and is heavily dependent on hitting an external API for data, taking the returned data, manipulating it and mixing it up with locally stored objects, before sending it back to the API.
I'm doing a lot of the work in my views.py file, even though what gets rendered to the client side is nearly irrelevant to the project ("here's a sign up form", "you've signed up for X" or "success, you've unsubscribed from y")
So that, combined with a) not a ton of local objects being used and b) a lot of taking external data and manipulating it for my needs, means I'm getting relatively large views -- more logic happening there than in my models, for sure.
So I'm wondering if there's an inherent problem with writing 'skinny models, fat views' in a case like this? Or if convention or something suggests I should pull some/much of my code into a separate file (a utils.py or bad_architecture.py or whatever) file and just import at runtime? Is there any reason I should be doing that besides DRY when/if that becomes an issue?
Hopefully that makes sense
/r/django
https://redd.it/10hhe87
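One common convention for exactly this situation is a thin "service layer": the API-juggling logic moves into plain functions in a module like services.py, and views just orchestrate. Beyond DRY, the payoff is that the logic becomes unit-testable without a request or a database. A sketch with entirely hypothetical names:

```python
# services.py -- plain functions, no request/response objects involved
def merge_subscriber(local: dict, remote: dict) -> dict:
    """Combine a locally stored subscriber record with external API data."""
    merged = {**remote, **local}  # local fields win on conflict
    merged["active"] = remote.get("status") == "subscribed"
    return merged

# views.py then stays skinny -- fetch, delegate, render:
# def signup(request):
#     payload = merge_subscriber(local_record(request), fetch_remote(request))
#     return render(request, "signup_done.html", {"subscriber": payload})
```

Whether that module is called services.py, utils.py, or something domain-specific matters less than keeping views limited to HTTP concerns.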