Sifaka: Simple AI text improvement using research-backed critique (open source)
## What My Project Does
Sifaka is an open-source Python framework that adds reflection and reliability to large language model (LLM) applications. The core functionality includes:
- 7 research-backed critics that automatically evaluate LLM outputs for quality, accuracy, and reliability
- Iterative improvement engine that uses critic feedback to refine content through multiple rounds
- Validation rules system for enforcing custom quality standards and constraints
- Built-in retry mechanisms with exponential backoff for handling API failures
- Structured logging and metrics for monitoring LLM application performance
The framework integrates seamlessly with popular LLM APIs (OpenAI, Anthropic, etc.) and provides both synchronous and asynchronous interfaces for production workflows.
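To make the reflection loop concrete, here is a minimal sketch of a critique-and-revise cycle of the kind described above. It is illustrative only - the function names, prompts, and model are hypothetical and this is not Sifaka's actual API.
```
# Illustrative critique-and-revise loop (hypothetical code, not Sifaka's API).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def improve(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, then critique and revise it up to max_rounds times."""
    draft = generate(task)
    for _ in range(max_rounds):
        critique = generate(
            "Critique this answer for accuracy and clarity. "
            "Reply APPROVED if no changes are needed.\n\n"
            f"Task: {task}\n\nAnswer: {draft}"
        )
        if "APPROVED" in critique:
            break
        draft = generate(
            f"Revise the answer using the critique.\n\nTask: {task}\n\n"
            f"Answer: {draft}\n\nCritique: {critique}"
        )
    return draft
```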
## Target Audience
Sifaka is (eventually) intended for production LLM applications where reliability and quality are critical. Primary use cases include:
- Production AI systems that need consistent, high-quality outputs
- Content generation pipelines requiring automated quality assurance
- AI-powered workflows in enterprise environments
- Research applications studying LLM reliability and improvement techniques
The framework includes comprehensive error handling, making it suitable for mission-critical applications rather than just experimentation.
## Comparison
While there are several LLM orchestration tools available, Sifaka differentiates itself through:
vs. LangChain/LlamaIndex:
- Focuses specifically on output quality and reliability rather than general orchestration
- Provides research-backed evaluation metrics instead of generic chains
- Lighter weight with minimal dependencies
/r/Python
https://redd.it/1m59s5f
GitHub: sifaka-ai/sifaka – Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
[P] Chess Llama - Training a tiny Llama model to play chess
https://lazy-guy.github.io/blog/chessllama/
/r/MachineLearning
https://redd.it/1m4s65p
Hosting Open Source LLMs for Document Analysis – What's the Most Cost-Effective Way?
Hey fellow Django devs,
Anyone here have experience working with LLMs?
Basically, I'm running my own VPS (basic $5/month setup). I'm building a simple web app where users upload documents (PDF or JPG), I OCR/extract the text, run some basic analysis (classification/summarization/etc.), and return the result.
I'm not worried about the Django/backend side – my main question is how to approach the LLM side in a cost-effective and scalable way:
I'm trying to stay 100% on free/open-source models (e.g., Hugging Face), at least during prototyping.
Should I download the LLM locally (e.g., GGUF / GPTQ / Transformers) and run it via something like text-generation-webui, llama.cpp, vLLM, or even FastAPI + transformers? Or is there a way to call free hosted inference endpoints (Hugging Face Inference API, Ollama, Together.ai, etc.) without needing to host models myself?
If I go self-hosted: is it practical to run 7B or even 13B models on a low-spec VPS? Or should I use something like LM Studio, llama-cpp-python, or a quantized GGUF model to keep memory usage low?
I'm fine with hacky setups as long as it's reasonably stable. My goal isn't high traffic, just a few dozen users at the start.
What
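For the self-hosted route, a rough sketch using llama-cpp-python with a quantized GGUF model on CPU is below; the model file name is an example, and a $5 VPS will realistically only fit very small quantized models in the 1-3B range.
```
# CPU-only sketch: serve a small quantized GGUF model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-1.5b-instruct-q4_k_m.gguf",  # example file name
    n_ctx=4096,    # context window
    n_threads=2,   # match the VPS vCPU count
)

def summarize(text: str) -> str:
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "Summarize the document in 3 bullet points."},
            {"role": "user", "content": text[:8000]},  # crude length guard
        ],
        max_tokens=256,
    )
    return out["choices"][0]["message"]["content"]
```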
/r/django
https://redd.it/1m59mzz
What is the best way to deal with floating point numbers when you have model restrictions?
I could equally have called this "How restrictive should my models be?".
I am currently working on a hobby project using Django as my backend, and I keep running into problems with floating point errors when I add restrictions to my models. Let's take a single column as an example, one that keeps track of the weight of a food entry:
```
food_weight = models.DecimalField(
    max_digits=6,
    decimal_places=2,
    validators=[MinValueValidator(0), MaxValueValidator(5000)],
)
```
When writing this, it seemed sensible that I did not want users to give me data with more than two decimal places of precision. I also enforce this in the client-side UI.
The problem is that client-side enforcement also has floating point errors. So when I use a JavaScript function such as `toFixed(2)` and then send these numbers to my endpoint, a value such as `0.3` can actually fail to serialize, because the backend ends up trying to serialize `0.300000004` and breaks the `max_digits=6` constraint.
Whenever I write a backend with restrictions, they seem sensible at
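One common workaround (a sketch, not the only answer) is to quantize the incoming value on the server before field validation runs, for example in a DRF serializer - the serializer name below is hypothetical. Having the client send the value as a string (toFixed(2) already returns one) also avoids the float round-trip entirely.
```
# Sketch: round incoming values to 2 decimal places before DecimalField
# validation, so float noise like 0.300000004 no longer breaks max_digits.
from decimal import Decimal, ROUND_HALF_UP
from rest_framework import serializers

class FoodEntrySerializer(serializers.Serializer):
    food_weight = serializers.DecimalField(max_digits=6, decimal_places=2)

    def to_internal_value(self, data):
        raw = data.get("food_weight")
        if raw is not None:
            data = dict(data)  # avoid mutating the caller's dict/QueryDict
            data["food_weight"] = Decimal(str(raw)).quantize(
                Decimal("0.01"), rounding=ROUND_HALF_UP
            )
        return super().to_internal_value(data)
```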
/r/django
https://redd.it/1m58xk6
Preferred way to structure polars expressions in a large project?
I love polars. However, once your project hits a certain size, you end up with a few "core" dataframe schemas/columns re-used across the codebase, and intermediary transformations that can sometimes be lengthy.
I'm curious what approaches other people use to organize and split things up.
The first point I would like to address is the following:
given a dataframe where you have a long transformation chain, do you prefer to split things up into a few functions to separate the steps, or centralize everything?
For example, which way would you prefer?
```
# This?
def chained(file: str, cols: list[str]) -> pl.DataFrame:
    return (
        pl.scan_parquet(file)
        .select(*[pl.col(name) for name in cols])
        .with_columns()
        .with_columns()
        .with_columns()
        .group_by()
        .agg()
        .select()
        .with_columns()
        .sort("foo")
        .drop()
    )
```
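For contrast, here is a sketch of the split-up style, composing named steps with LazyFrame.pipe; the column names are invented for illustration.
```
# Sketch: the same kind of pipeline split into named, individually testable steps.
import polars as pl

def load(file: str, cols: list[str]) -> pl.LazyFrame:
    return pl.scan_parquet(file).select(cols)

def add_features(lf: pl.LazyFrame) -> pl.LazyFrame:
    return lf.with_columns((pl.col("price") * pl.col("qty")).alias("revenue"))

def summarize(lf: pl.LazyFrame) -> pl.LazyFrame:
    return lf.group_by("foo").agg(pl.col("revenue").sum()).sort("foo")

def pipeline(file: str, cols: list[str]) -> pl.DataFrame:
    return load(file, cols).pipe(add_features).pipe(summarize).collect()
```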
/r/Python
https://redd.it/1m5jcot
When working in a team, do you run makemigrations when the DB schema is not updated?
Pretty simple question really.
I'm currently working in a team of 4 django developers on a large and reasonably complex product, we use kubernetes to deploy the same version of the app out to multiple clusters - if that at all makes a difference.
I was wondering: if you were in my position, would you run makemigrations for all of the apps when you're just - say - updating the choices of a CharField or reordering potential options, i.e. changes that wouldn't update the DB schema?
I won't say which way I lean to prevent the sway of opinion but I'm interested to know how other teams handle it.
/r/django
https://redd.it/1m5l189
Is it ok to use Pandas in Production code?
Hi, I recently pushed some code where I was using pandas, and got a review saying that I should not use pandas in production. I would like to check other people's opinion on it.
For context, I used pandas in code where we scrape pages to get data from HTML tables; instead of writing the parser myself, I used pandas, as it does this job seamlessly.
Would be great to get different views on it. Thanks.
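For reference, the pandas feature at issue here is presumably pd.read_html, which turns every <table> on a page into a DataFrame (it requires lxml or beautifulsoup4/html5lib to be installed):
```
# Sketch: parsing HTML tables with pandas.
from io import StringIO
import pandas as pd

html = """
<table>
  <tr><th>name</th><th>price</th></tr>
  <tr><td>apple</td><td>1.20</td></tr>
  <tr><td>pear</td><td>0.95</td></tr>
</table>
"""

tables = pd.read_html(StringIO(html))  # one DataFrame per <table> found
df = tables[0]
print(df)
```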
/r/Python
https://redd.it/1m5lm8e
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1m5z8b2
[D] Gemini officially achieves gold-medal standard at the International Mathematical Olympiad
https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/
>This year, our advanced Gemini model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions – all within the 4.5-hour competition time limit.
/r/MachineLearning
https://redd.it/1m5qudf
Tutorial: Send push notifications from Django. (No web-sockets, just push notifications)
https://youtube.com/playlist?list=PLoxOJUuMedAoAGYDEqAPo0az2a_9agXTC&si=GMIce36DB1HJbjGP
/r/djangolearning
https://redd.it/1m33tm6
Need help figuring out why it is not working.
Has anybody used django-tailwind-cli on their projects?
For the love of god, I could not figure out what is wrong with my setup. I am unable to load CSS on a template.
Any help would be greatly appreciated.
/r/django
https://redd.it/1m6aun3
GitHub: django-commons/django-tailwind-cli – Django and Tailwind integration based on the prebuilt Tailwind CSS CLI.
Installing djangorestframework
I have a fresh Lightsail install with the Django stack. I now want to install djangorestframework. How do I install it so Django can use it? Do I install it into a venv or globally using pip?
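For what it's worth, the usual pattern (a sketch; adapt names and paths to your setup) is to install it into the project's virtual environment rather than globally, then register the app in settings.py:
```
# Shell, from the project directory (example paths):
#   python -m venv .venv
#   source .venv/bin/activate
#   pip install djangorestframework
#
# settings.py: add rest_framework to INSTALLED_APPS.
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "rest_framework",  # provided by the djangorestframework package
]
```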
/r/djangolearning
https://redd.it/1m30n13
PEP 798 – Unpacking in Comprehensions
PEP 798 – Unpacking in Comprehensions
https://peps.python.org/pep-0798/
# Abstract
This PEP proposes extending list, set, and dictionary comprehensions, as well as generator expressions, to allow unpacking notation (`*` and `**`) at the start of the expression, providing a concise way of combining an arbitrary number of iterables into one list or set or generator, or an arbitrary number of dictionaries into one dictionary, for example:
```
[*it for it in its]   # list with the concatenation of iterables in 'its'
{*it for it in its}   # set with the union of iterables in 'its'
{**d for d in dicts}  # dict with the combination of dicts in 'dicts'
(*it for it in its)   # generator of the concatenation of iterables in 'its'
```
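For comparison, the same results in today's Python require itertools.chain or a nested loop, which is the verbosity the proposal removes:
```
# Current-Python equivalents of the proposed unpacking comprehensions.
from itertools import chain

its = [[1, 2], [3], [4, 5]]
dicts = [{"a": 1}, {"b": 2}]

flat_list = list(chain.from_iterable(its))            # [1, 2, 3, 4, 5]
flat_set = set(chain.from_iterable(its))              # {1, 2, 3, 4, 5}
merged = {k: v for d in dicts for k, v in d.items()}  # {'a': 1, 'b': 2}
flat_gen = (x for it in its for x in it)              # lazily yields 1..5
```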
/r/Python
https://redd.it/1m607oi
[D] Is it me or is ECAI really bad this year?
I have one accepted paper and another one rejected. The review and meta-review quality was really subpar. It felt like most of the responses we got, on both sides of the spectrum, came from underexperienced reviewers. I am all for letting undergrads read, review, and gain experience, but I always review the paper myself first and would never submit theirs as is. This really bothers me, because I always thought ECAI was a good conference, but this year I can't help feeling a little embarrassed to even go.
I have not submitted to other conferences yet, so I wonder if there is a trend.
/r/MachineLearning
https://redd.it/1m69wc3
Wii tanks made in Python
What My Project Does
This is a full remake of the Wii Play: Tanks! minigame using Python and Pygame. It replicates the original 20 levels with accurate AI behavior and mechanics. Beyond that, it introduces 30 custom levels and 10 entirely new enemy tank types, each with unique movement, firing, and strategic behaviors. The game includes ricochet bullets, destructible objects, mines, and increasingly harder units.
Target Audience
Intended for beginner to intermediate Python developers, game dev enthusiasts, and fans of the original Wii title. It’s a hobby project designed for learning, experimentation, and entertainment.
Comparison
This project focuses on AI variety and level design depth. It features 19 distinct enemy types and a total of 50 levels. The AI is written from scratch in plain Python, using A* pathfinding and state-machine logic.
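As an illustration of the pathfinding half only (not code from this project), a bare-bones grid A* looks roughly like this:
```
# Illustrative A* on a 0/1 grid (0 = free, 1 = wall), 4-way movement.
import heapq
from itertools import count

def a_star(grid, start, goal):
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic

    tie = count()  # tiebreaker so the heap never compares node tuples
    open_heap = [(h(start, goal), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb, goal), next(tie), nb))
    return None  # no path found
```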
GitHub Repo
https://github.com/Frode-Henrol/Tank_game
/r/Python
https://redd.it/1m6lzvk
Anyone else doing production Python at a C++ company? Here's how we won hearts and minds.
I work on a local LLM server tool called Lemonade Server at AMD. Early on we made the choice to implement it in Python because that was the only way for our team to keep up with the breakneck pace of change in the LLM space. However, C++ was certainly the expectation of our colleagues and partner teams.
This blog is about the technical decisions we made to give our Python a native look and feel, which in turn has won people over to the approach.
Rethinking Local AI: Lemonade Server's Python Advantage
I'd love to hear anyone's similar stories! Especially any advice on what else we could be doing to improve native look and feel, reduce install size, etc. would be much appreciated.
This is my first time writing and publishing something like this, so I hope some people find it interesting. I'd love to write more like this in the future if it's useful.
/r/Python
https://redd.it/1m6g0jx
Advice needed on coding project!
Hi! I only recently started coding and I'm running into some issues with my recent project, and was wondering if anyone had any advice! My troubles are mainly with the button that's supposed to cancel the final high-level alert. The button is connected to pin D6, and it works fine when tested on its own, but in the actual code it doesn't stop the buzzer or reset the alert counter like it's supposed to. This means the system just stays stuck in the high alert state until I manually stop it. Another challenge is with the RGB LCD screen I'm using: it doesn't support a text cursor, so I can't position text exactly where I want on the screen. That makes it hard to format alert messages, especially longer ones that go over the 2-line limit. I've had to work around this by clearing the display or cycling through lines of text. The components I'm using include a Grove RGB LCD with a 16x2 screen and backlight, a Grove PIR motion sensor to detect movement, a Grove light sensor to check brightness, a red LED on D4 for visual alerts, a buzzer on D5 for sound alerts, and a
/r/Python
https://redd.it/1m6ow2y
Superfunctions: solving the problem of duplicating the Python ecosystem into sync and async halves
Hello r/Python! 👋
For many years, Pythonistas have been writing asynchronous versions of old synchronous libraries, violating the DRY principle on a global scale. Just to add async and await in a few places, we have to write entire new libraries! I recently wrote [transfunctions](https://github.com/pomponchik/transfunctions) - the first solution I know of to this problem.
# What My Project Does
The main feature of this library is superfunctions. A superfunction is fully sync/async agnostic - you can use it whichever way you need. An example:
```
from asyncio import run
from transfunctions import superfunction, sync_context, async_context

@superfunction(tilde_syntax=False)
def my_superfunction():
    print('so, ', end='')
    with sync_context:
        print("it's just usual function!")
    with async_context:
        print("it's an async function!")

my_superfunction()
#> so, it's just usual function!
run(my_superfunction())
#> so, it's an async function!
```
As you can see, it works very simply, although there is a lot of magic under the hood. We just got a function that works both as a regular function and as a coroutine, depending on how we use it. This allows you to write very powerful and versatile libraries that no longer need to be divided into synchronous and asynchronous - they can be any
/r/Python
https://redd.it/1m6rzqv
I was paid for creating an app but I don't know if Django is my best choice
Hi there, I'm a backend dev with experience mostly in python.
Recently a real estate agent contacted me to create a property management system.
I know Django and a little bit of Django templates, but I see a lot of people using Node + React or Django + React, and I don't know if using pure Django will be a headache.
Any suggestions or advice on what stack to use would be highly appreciated.
Thanks in advance.
/r/django
https://redd.it/1m6tz2f