Python Daily
Daily Python News
Questions, Tips and Tricks, and Best Practices for the Python programming language
Find more reddit channels over at @r_channels
Run Celery tasks on Railway

I deployed a Django project on Railway, and it uses Celery and Redis to perform a scheduled task. The project is successfully online, but the Celery tasks are not performed.

If I execute the Celery worker from my computer's terminal using the Railway CLI, the tasks are performed as expected, the results are saved in Railway's PostgreSQL, and those results are displayed on the online site. The Redis server used is also the one from Railway.

However, Celery is running locally. I need the Celery tasks to be performed without my computer's terminal running the worker. This is the log showing that Celery is running locally while the Redis broker is the one hosted on Railway:

celery@MacBook-Pro-de-Corey.local v5.2.7 (dawn-chorus)
macOS-13.1-arm64-arm-64bit 2023-01-11 23:08:34

[config]
.> app:         suii:0x1027e86a0
.> transport:   redis://default:@containers-us-west-28.railway.app:7078//
.> results:
.> concurrency: 10 (prefork)
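One common fix (not something stated in the post) is to run the worker as its own Railway service instead of from a local terminal, with a start command along these lines; the app name suii is taken from the banner above, and running beat embedded in the worker is an assumption:

celery -A suii worker --beat --loglevel=info

Railway then keeps that process alive next to the web service, so the scheduled tasks no longer depend on a local machine.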


/r/django
https://redd.it/1096phz
Architecture Setting up a structure/architecture for a constant crawling and data saving platform.

Our company's client has multiple websites that need to be monitored to see whether they're down, and the CEO of our company wants us to build a new platform that constantly crawls the added websites and notifies the user in the platform.

In our old system, the previous developer team used APIs to save the data coming from the crawler. If you add 1,000 websites, 1000x5 API calls happen within a specific time gap, which made the system slow. We also added subdomain tracking, which made the system even slower. And sending data through websockets to the frontend to show what's going on made it slower still.

I've been given the responsibility of building a robust system that is very fast, so I'm here asking how I can make the system robust and fast.

Where should we host the crawler? Should we call APIs to save the data it collects? What's the most optimized solution for this?
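Not from the post, but a minimal sketch of the kind of design usually suggested for this: check many sites concurrently with asyncio and persist the results in one batch instead of one API call per site. The library choice (httpx) and the timeout/status heuristics below are assumptions:

import asyncio
import httpx

async def check(client: httpx.AsyncClient, url: str) -> tuple[str, bool]:
    # treat a site as "up" if it answers with a non-5xx status within 10 seconds
    try:
        resp = await client.get(url, timeout=10.0, follow_redirects=True)
        return url, resp.status_code < 500
    except httpx.HTTPError:
        return url, False

async def check_all(urls: list[str]) -> list[tuple[str, bool]]:
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(check(client, u) for u in urls))

# results = asyncio.run(check_all(urls))
# Persisting the whole list in one bulk write (e.g. Django's bulk_create)
# avoids the 1000x5 individual API calls described above.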

/r/django
https://redd.it/1095ryn
Errors With "urlpatterns"

I wanted to display an HttpResponse, but I get the following errors when I try to run my project.
Django default template being displayed: https://imgur.com/a/QdfjMhW

Command Prompt: https://imgur.com/a/UNm2gmZ

urls.py file in Django Project: https://imgur.com/a/s8izstU

views.py file: https://imgur.com/a/Htqlxjb

urls.py file in Django App: https://imgur.com/a/KWaPGfH
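The screenshots aren't reproduced here, but for reference this is the usual three-file wiring being debugged; the names myapp and home are placeholders, not taken from the post:

# views.py (in the app)
from django.http import HttpResponse

def home(request):
    return HttpResponse("Hello, world")

# urls.py (in the app, e.g. myapp/urls.py)
from django.urls import path
from . import views

urlpatterns = [
    path("", views.home, name="home"),
]

# urls.py (in the project)
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", include("myapp.urls")),
]

If the Django welcome page still appears, the project-level urlpatterns is usually not including the app's urls.py.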

/r/django
https://redd.it/10941p2
deploy one api (flask/fastapi/sanic/etc) as many Lambdas, one per endpoint?

Just an idea - I haven't given it a lot of thought, but I'm wondering if this is possible or even worth considering.

It seems like it might be nice to develop an API as one cohesive "thing" using a framework such as Flask/Quart/etc., but then, instead of deploying it as one site, deploy it so that each public endpoint becomes its own Lambda on AWS, or its own function on Azure, GCP, etc.
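One way to prototype the idea (not from the post): slice the API so each endpoint lives in its own tiny ASGI app and wrap each slice with an adapter such as Mangum, yielding one Lambda handler per endpoint. The module layout, route, and names below are a hypothetical sketch:

# users_lambda.py - hypothetical "one endpoint, one Lambda" slice
from fastapi import FastAPI
from mangum import Mangum  # ASGI-to-AWS-Lambda adapter

app = FastAPI()

@app.get("/users/{user_id}")
def get_user(user_id: int):
    return {"id": user_id}

# Lambda entry point; API Gateway would route /users/* to this function
handler = Mangum(app)

The trade-off is that shared code (auth, models, settings) has to be packaged into every function, which is part of what makes the idea worth discussing.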

Hoping to generate a discussion and see what others think of this idea.

/r/Python
https://redd.it/1093wwf
App to practice 500+ free Python challenges from beginner to advanced topics.

Hi friends! Last month I posted about my app that offers hundreds of free challenges for practicing and learning Python. Today we already have around 500 free challenges and a clear learning path that helps people practice everything from the basics to the most advanced topics. You can download it via this link. Feedback is appreciated. https://apps.apple.com/us/app/id1632477791

/r/Python
https://redd.it/1098ju2
Add StreamBlock child items programmatically in Wagtail

Hello, I have the following setup for a pricing page on Wagtail:

# Imports for Wagtail 4.1; blocks reordered so each class is defined before use
from wagtail import blocks
from wagtail.fields import StreamField
from wagtail.models import Page


class PlanCardBlock(blocks.StructBlock):
    """Price card with a plan's name and price."""
    title = blocks.CharBlock(required=True, help_text="Plan's title")
    currency_symbol = blocks.CharBlock(required=True, max_length=3)
    unit_amount = blocks.DecimalBlock(min_value=0, max_value=100)


class PlanListBlock(blocks.StreamBlock):
    """A collection of price cards."""
    plan = PlanCardBlock()


class PricingPage(Page):
    plans = StreamField(
        [("plans", PlanListBlock())],
        use_json_field=True,
    )

    @property
    def plan_count(self) -> int:
        try:
            return self.plans[0].value.count
        except (IndexError, AttributeError):
            return 0


I'm having trouble testing the plan_count property. Specifically, I'm stuck trying to programmatically add a new plan to an existing pricing page that has no plans (I'm on Wagtail 4.1).

My challenge is to take a PricingPage instance I use for tests and
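A hedged sketch of one commonly seen way to add a plan to the page programmatically in a test: assign the StreamField its raw JSON representation and save. The field and block names match the models above; the plan values, and the exact shape Wagtail 4.1 expects, are assumptions that may need adjusting:

import json

# give the page one "plans" stream child containing a single "plan" card
page.plans = json.dumps([
    {
        "type": "plans",
        "value": [
            {
                "type": "plan",
                "value": {
                    "title": "Basic",            # made-up plan data
                    "currency_symbol": "$",
                    "unit_amount": "10",
                },
            }
        ],
    }
])
page.save()
# page.plan_count is what the test ultimately wants to check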

/r/django
https://redd.it/109bphy
Conventional Commits with a data prefix

I uploaded this for myself, mostly for committing blog articles. It would work just as well for many other non-code, non-documentation commits. It's the Commitizen conventional-commits implementation with one more field: `data:`
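Presumably a commit using the extra field looks like a standard Conventional Commits message with data as the type; a hypothetical example, not taken from the package's docs:

data(blog): add draft of the January article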

conventional-with-data · PyPI

This is a branch of my conventionalish project.

ShayHill/conventionalish: Extend the Commitizen Conventional-Commits implementation (github.com)

/r/Python
https://redd.it/109gfc4
Data Analysis

I started learning Python and I've already got the basics down. I love statistics and I'm taking some statistics classes at university.

So here's my question: where should I start with data analysis in Python?

I'm asking which path I should follow.

Thanks for all answers.
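Not part of the post, but the usual starting point for data analysis in Python is pandas; a tiny sketch with a made-up file name:

import pandas as pd

# load a dataset and look at basic descriptive statistics
df = pd.read_csv("grades.csv")   # hypothetical file
print(df.head())                 # first rows
print(df.describe())             # mean, std, quartiles per numeric column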

/r/Python
https://redd.it/109fqlv
Django project complexity

How can I know if my Django projects are good enough and complex enough to present as portfolio projects?
Currently I have a to-do list (creating, updating, and deleting tasks, with login/register), a blog (similar to the to-do list, a bit more complex), an app that tracks stock prices using an API, and I'm currently working on an e-commerce store (combining Django with JS).
Should I focus on improving these apps or should I create more complex apps?

/r/django
https://redd.it/109gll2
Chronological list of Resources to Learn Django from Complete Beginner to Advanced Level
https://www.codelivly.com/resources-to-learn-django-from-complete-beginner-to-advanced-level/

/r/django
https://redd.it/109pws3
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!

Discussion of using Python in a professional environment, getting jobs in Python, and questions about courses to further your Python education!

This thread is not for recruitment; please see r/PythonJobs or the thread in the sidebar for that.

/r/Python
https://redd.it/109k9to
Chronological list of Resources to Learn Django from Complete Beginner to Advanced Level
https://www.codelivly.com/resources-to-learn-django-from-complete-beginner-to-advanced-level/

/r/Python
https://redd.it/109px3s
Scriptum: A command line utility for storing, documenting, and executing your project's scripts.

Scriptum is a command line utility for storing, documenting, and executing your project's scripts, written in Python.

With scriptum, you can easily configure and run your project's scripts by defining them in a configuration file. Here is an example of a configuration file:

/*
  Scripts for my project!
*/
{
    "serve": "python3 server.py",  // runs the server
    "test": {  // tests the code
        "permissions": {
            "set": "chmod 777 main.py"
        },
        "run": "./main.py $1",
        "checks": ["mypy **/*.py", "python3 tests/main.py"]
    },
    "package":
/r/Python
https://redd.it/109qxzw
Updated documentation for ScreenPy which should help you understand Screenplay Pattern, we think, we hope.

'Sssup, snakes—

I'm back with some updated documentation for ScreenPy, which is our Screenplay Pattern base suite for Python. If you haven't heard of it—that's not surprising—it lets you write tests that look like this:

def test_example(Snoo: AnActor) -> None:
    given(Snoo).was_able_to(LogIn.using(USERNAME, PASSWORD))

    when(Snoo).attempts_to(
        Visit(POST_URL),
        MakeNote.of_the(VotesOnThePost()).as_("votes before"),
        Click.on_the(UPVOTE_ARROW),
    )

    then(Snoo).should(
        See.the(VotesOnThePost(), IsEqualTo(the_noted("votes before") + 1)),
    )

We recently rewrote our entire documentation with a focus on showing how to build a suite using a made-up Ability, which showcases way more of the modular, composition-based approach that Screenplay Pattern uses.

Our goal is to present ScreenPy and Screenplay Pattern in a way that is easy to follow and hopefully exciting to read about. Please let

/r/Python
https://redd.it/109s091
Enforcing fixed time step in FMI++ (fmipp)

I am learning Python the hard way (at least I feel like it). I need to get a fixed-step FMU running to co-simulate with a multibody simulation tool (MSC ADAMS). I chose fmipp, as it is quite powerful, but I can't get it to run with a fixed step size.

Has anyone here ever done it?

/r/Python
https://redd.it/109ut9w
D The Open Deep Learning Toolkit for Robotics v2.0 was just released

The Open Deep Learning Toolkit for Robotics version 2.0 was just released! This new version of the toolkit includes several improvements, such as new tools for object detection, efficient continual inference, tracking, emotion estimation and high-resolution pose estimation. Furthermore, this version includes a refined ROS interface, along with support for ROS2.

You can download it here: https://github.com/opendr-eu/opendr

We look forward to receiving your feedback, bug reports, and suggestions for improvements!

/r/MachineLearning
https://redd.it/109w09k
R Git is for Data (CIDR 2023) - Extending Git to Support Large-Scale Data

Paper: https://www.cidrdb.org/cidr2023/papers/p43-low.pdf

Abstract:

Dataset management is one of the greatest challenges to the application of machine learning (ML) in the industry. Although scaling and performance have often been highlighted as the significant ML challenges, development teams are bogged down by the contradictory requirements of supporting fast and flexible data iteration while maintaining stability, provenance, and reproducibility. For example, blobstores are used to store datasets for maximum flexibility, but their unmanaged access patterns limit reproducibility. Many ML pipeline solutions to ensure reproducibility have been devised, but all introduce a degree of friction and reduce flexibility.

In this paper, we propose that the solution to the dataset management challenges is simple and apparent: Git. As a source control system, as well as an ecosystem of collaboration and developer tooling, Git has enabled the field of DevOps to provide both speed of iteration and reproducibility to source code. Git is not only already familiar to developers, but is also integrated into existing pipelines, which facilitates adoption. However, as we (and others) demonstrate, Git, as designed today, does not scale to the needs of ML dataset management. In this paper, we propose XetHub; a system that retains the Git user experience and ecosystem, but can scale to

/r/MachineLearning
https://redd.it/10a4mns
D Microsoft ChatGPT investment isn't about Bing but about Cortana

I believe that Microsoft's 10B USD investment in ChatGPT is less about Bing and more about turning Cortana into an Alexa for corporates.
Examples: Cortana prepare the new T&Cs... Cortana answer that client email... Cortana prepare the Q4 investor presentation (maybe even with PowerBI integration)... Cortana please analyze cost cutting measures... Cortana please look up XYZ...

What do you think?

/r/MachineLearning
https://redd.it/1095os9