Help needed: I'm under a time constraint.
Long story short, I need to make Rs. 60k in the span of 30 days; broken down, that's Rs. 2k per day. What can I do to achieve this goal? I'm highly skilled in Django, gen AI integrations, payments API integrations, etc.
It doesn't matter how much time is taken per day.
Don't mention freelancing; the market is already saturated and I can't generate enough from Indian clients.
At this point anything will do, just tell me what I can do that will help me achieve the target of Rs. 2k per day.
If anyone is interested in a paid collaboration, please hit me up.
Please refrain from negative comments.
I genuinely need help.
/r/django
https://redd.it/1e4su8t
Finance Toolkit - Calculate 200+ Financial Metrics for Any Company
This project originates from the frustration I had with financial websites and platforms not listing how they have calculated their financial metrics, and from seeing deviations when going to platforms such as Stockopedia, Morningstar, Macrotrends, Finance Charts, the Wall Street Journal and many more. The metrics shouldn't deviate, but they do (sometimes quite heavily), which encouraged me to start this project.
The whole idea behind the Finance Toolkit is to write every type of metric down in the most simplistic way (e.g. see here) so that the only thing that matters is the data that you put into it. I've worked on this project for around a year or two now, and it currently has over 200 different metrics such as Price-to-Earnings, Return on Equity, DuPont, WACC, Bollinger Bands, Williams %R, GARCH, Value at Risk, CAPM, Jensen's Alpha and all option-related Greeks (and many, many more).
Does this pique your interest? Have a look at the project on GitHub here: **https://github.com/JerBouma/FinanceToolkit**
Using the Finance Toolkit is pretty simple, you can download the package by using:
pip install financetoolkit -U
Then, for example, to perform an Extended DuPont analysis you initialise the Finance Toolkit and run the related functionality:
from financetoolkit import Toolkit
/r/Python
https://redd.it/1e4ntif
Chunkit: Convert URLs into LLM-friendly markdown chunks for your RAG projects
Hey all, I am releasing a Python package called chunkit which allows you to scrape and convert URLs into markdown chunks. These chunks can then be used for RAG applications.
The reason it works better than naive chunking (for example, splitting every 200 words with a 30-word overlap) is that chunkit splits on the most common markdown header levels instead, leading to much more semantically cohesive paragraphs.
https://github.com/hypergrok/chunkit
Have a go and let me know what features you would like to see!
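The header-based splitting idea can be sketched in a few lines (an illustrative sketch of the concept, not chunkit's actual implementation; the function name and defaults are made up):

```python
import re

def chunk_by_headers(markdown: str, max_level: int = 2) -> list[str]:
    """Split markdown at headers up to max_level, keeping each section whole."""
    # match lines starting with 1..max_level '#' characters
    pattern = re.compile(rf"^#{{1,{max_level}}}\s", re.MULTILINE)
    chunks, last = [], 0
    for match in pattern.finditer(markdown):
        if match.start() != last:
            chunks.append(markdown[last:match.start()].strip())
        last = match.start()
    chunks.append(markdown[last:].strip())
    return [c for c in chunks if c]
```

For example, `chunk_by_headers("intro\n# A\ntext\n## B\nmore")` yields three chunks: the intro, section A, and section B, each a cohesive unit rather than an arbitrary word window.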
/r/Python
https://redd.it/1e4v0sw
Are PyPI Trove Classifiers worthwhile to maintain?
Are PyPI Trove Classifiers worthwhile to maintain? From what I can tell, many of the classifiers are redundant to modern pyproject.toml entries. For example, the Python versions are typically specified by the requires-python = <version> entry, and so the Trove Classifiers specifying the Python version become something that needs to be manually maintained, which, at least in my case, means they'll almost certainly go out of sync.
Trove Classifiers also seem to be left up to the package author, without much in the way of guidelines as to when a classifier should be used. For example, it's unclear which cases Framework :: Matplotlib should be applied to, since it might be a package that is dependent on matplotlib, extends matplotlib, or is just usually expected to be used in conjunction with matplotlib but is not explicitly dependent on it. I'm having a difficult time finding guides explaining what is expected in these cases. Most (though certainly not all) articles I can find about them seem to be from a decade ago.
With that previous point, it seems the classifiers are fairly arbitrary. Do many systems still heavily rely on these classifiers? For example, are search results related to
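The redundancy being described looks like this in practice (a hypothetical pyproject.toml, for illustration):

```toml
[project]
name = "example-package"        # hypothetical package name
requires-python = ">=3.9"       # the authoritative, machine-checked constraint
classifiers = [
    # redundant with requires-python and maintained by hand:
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
]
```

Bumping requires-python without touching the classifier list silently leaves stale metadata on PyPI.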
/r/Python
https://redd.it/1e51adb
Wednesday Daily Thread: Beginner questions
# Weekly Thread: Beginner Questions 🐍
Welcome to our Beginner Questions thread! Whether you're new to Python or just looking to clarify some basics, this is the thread for you.
## How it Works:
1. Ask Anything: Feel free to ask any Python-related question. There are no bad questions here!
2. Community Support: Get answers and advice from the community.
3. Resource Sharing: Discover tutorials, articles, and beginner-friendly resources.
## Guidelines:
This thread is specifically for beginner questions. For more advanced queries, check out our [Advanced Questions Thread](#advanced-questions-thread-link).
## Recommended Resources:
If you don't receive a response, consider exploring r/LearnPython or join the Python Discord Server for quicker assistance.
## Example Questions:
1. What is the difference between a list and a tuple?
2. How do I read a CSV file in Python?
3. What are Python decorators and how do I use them?
4. How do I install a Python package using pip?
5. What is a virtual environment and why should I use one?
Let's help each other learn Python! 🌟
/r/Python
https://redd.it/1e53wm4
Does the Gaussian mixture modeling package from sklearn require normalization?
I'm using the Gaussian mixture model package from scikit-learn in Python and I'm wondering whether my data needs to be normalized, and in what way (min-max scaling, standard scaling, etc.)? This is a dataset with both continuous variables, like age and test scores, and categorical variables, like gender (as dummy binary variables). I can answer further questions in the comments. Thanks!
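A common approach is to standard-scale the continuous columns before fitting, since the GMM's likelihood surface and initialisation are sensitive to feature scale. A minimal sketch with scikit-learn on made-up data (the variables and shapes here are hypothetical stand-ins for the dataset described above):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# toy data: age in years, test score out of 100, binary gender dummy
X = np.column_stack([
    rng.normal(40, 12, 300),   # age
    rng.normal(70, 15, 300),   # test score
    rng.integers(0, 2, 300),   # gender dummy
])

# standard-scale only the continuous columns; whether to scale the 0/1
# dummy too is a judgment call -- a binary variable violates the
# Gaussian-component assumption either way
X_scaled = X.copy()
X_scaled[:, :2] = StandardScaler().fit_transform(X[:, :2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X_scaled)
labels = gmm.predict(X_scaled)
```

Without scaling, the feature with the largest variance (age or score, depending on units) dominates the initial k-means step and the fitted covariances.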
/r/Python
https://redd.it/1e4a2e0
Pic2Pix: A script to turn pictures and drawings into sprites usable in 2d game engines.
I had a lot of trouble managing art assets as a solo dev in the first game jam I participated in; I ended up wasting far too much time making assets that simply weren't worth the time. I wondered if my (mediocre) knowledge of programming in Python could help on that front.
This script takes pictures and drawings, samples them, then filters the image based on the sample. All filtered pixels are simply turned transparent, leaving only the sections that you (hopefully) desire and rendering an image that is usable as a sprite in 2D game engines like GameMaker.
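The sampling-and-filtering step amounts to colour keying; a hypothetical sketch of the idea on raw RGBA tuples (not the actual script; in practice the pixel list would come from something like Pillow's Image.getdata() after convert("RGBA")):

```python
def key_out(pixels, sample, tolerance=40):
    # make every pixel within `tolerance` of the sampled background
    # colour fully transparent; keep all other pixels unchanged
    def close(rgb):
        return all(abs(a - b) <= tolerance for a, b in zip(rgb, sample[:3]))
    return [(r, g, b, 0) if close((r, g, b)) else (r, g, b, a)
            for r, g, b, a in pixels]
```

Writing the result back as a PNG preserves the alpha channel, which is what game engines read as sprite transparency.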
The script is hosted on my GitHub. Have a looksee:
https://github.com/BobDev94/Pic2Pix
The target audience would be anyone who likes to doodle and turn their doodles into sprites. Or maybe people who want to turn themselves into a sprite. (Yes, the script can do that, provided they don't look the same as the background.)
I think I've managed to create something unique. If not, drop me a link.
/r/Python
https://redd.it/1e5fn4c
ReadingBricks - Flask app for searching personal Zettelkasten-like knowledge base
What does this project do?
Given a knowledge base in the supported format, this tool provides a web interface for reading and searching notes. Features are as follows:
* Search by single tag
* Search by expressions consisting of tags, logical operators, and parentheses
* Full-text search with TF-IDF
* Links between notes
* Separate spaces for fields of knowledge
To use the tool, your notes must be converted to the special format. Namely, each note must be a Markdown cell in a Jupyter notebook. Such cells may have tags ('View -> Cell Toolbar -> Tags' in Jupyter) and may contain formulas in MathJax. One or more notebooks can be placed in a directory that represents a field of knowledge. A top-level directory with such nested directories is a knowledge base in the proper format.
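The tag-expression search can be illustrated with a small evaluator (a hypothetical sketch, not the project's actual implementation; it assumes tag names are alphanumeric identifiers):

```python
import re

def matches(expression: str, note_tags: set) -> bool:
    # replace each tag token with its True/False membership in the
    # note's tag set, keep the boolean operators and parentheses,
    # then evaluate the resulting expression
    def replace(match):
        token = match.group(0)
        if token in ("and", "or", "not"):
            return token
        return str(token in note_tags)
    safe = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", replace, expression)
    return bool(eval(safe, {"__builtins__": {}}))
```

For example, matches("(ml or stats) and not draft", {"ml"}) evaluates to True, so that note would be returned by the query.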
Target audience
People who have their professional notes and want to easily navigate through them.
Alternatives
Obsidian
Pros:
* It is an end-to-end tool for both writing and reading
* It generates interactive graphs of connections between notes
Cons:
* It cannot search by tag expressions (only by tags or terms), at least not without plugins
* My tool has a more minimalist interface (just a search bar and a cloud of tags)
Links
GitHub: https://github.com/Nikolay-Lysenko/readingbricks
PyPI: https://pypi.org/project/readingbricks
/r/Python
https://redd.it/1e5j8f3
What is the best platform for Django deployment?
I have completed my first CRUD project and want to deploy it. Which platform might be easiest for beginners?
/r/django
https://redd.it/1e5iu2g
AWS Lambda Tutorial: Using Selenium with Chromedriver in Python
https://www.youtube.com/watch?v=8XBkm9DD6Ic
I created a concise tutorial on building a Docker image for deploying a Python Selenium package to AWS. One common challenge for beginners is downloading and configuring ChromeDriver, which is essential for using Selenium, a browser automation tool. In my video, I guide you through this process step by step. Additionally, I provide the necessary code in the Medium article linked in the video description to simplify your setup. Automating this functionality in AWS is quite impressive, and I hope this information proves valuable to many.
Consider subscribing or following me on Medium for more useful AWS tutorials in the future. I also do a bunch of IoT stuff!
Thanks, Reddit.
/r/Python
https://redd.it/1e5cc8g
R Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?
A new benchmark for multimodal AI agents, focused on real-world Data Engineering tasks.
Project page: link, paper: link, code: link.
=====
TLDR: Autonomous LLM-agents can’t replace Data Engineers…yet. But at least we can track progress 🫡
Overview:
As AI technologies become more advanced, we need increasingly complex benchmarks to evaluate the quality of systems and measure progress. A distinct branch of benchmarks has emerged, focusing on working with professional tools/applications and websites (see WorkArena, WebArena, OSWorld).
In the Spider2-V project, a benchmark is being created to evaluate AI agents in data engineering. It consists of 494 tasks covering the entire work cycle:
* Data Warehousing (tools like Snowflake, BigQuery)
* Data Ingestion (e.g., Airbyte)
* Data Transformation (e.g., dbt)
* Data Visualization (e.g., Superset, Metabase)
* Data Orchestration (e.g., Airflow, Dagster)
* (and beloved Excel files, because who can do without them?)
If you have experience with data engineering, you understand that this is a substantial set, though it doesn't cover the entire zoo of solutions you might encounter.
Preparing each task took an average of 4 hours, so they are quite atomic and do not require very long-horizon thinking. Tasks are divided into three levels of difficulty:
* Easy (20%, no more than 5 steps to solve)
* Medium (63%, 6-15 steps)
* Hard (17%, 16-40 steps)
/r/MachineLearning
https://redd.it/1e5qt1r
POST request help flask and react
Hey guys, I'm new to Flask and have been trying to get my front-end application to send some data to the Flask server so the back-end methods can use it, but I keep getting these errors:
https://preview.redd.it/hpxc8rmnb6dd1.png?width=1028&format=png&auto=webp&s=28fa20a3d06fc11942898e3979c868f33916c6eb
I want the text that the user selects from the dropdown menu to be sent to the Flask server, because the back-end main method requires a specific race name to fetch data from the database.
Here are the two files that may be useful:
Predict.js:
import React, { useEffect, useState } from "react";
import Loader from "./components/Loading";
import "./Predict.css";
function Predict() {
  const [data, setData] = useState({});
  const [selectedRace, setSelectedRace] = useState("");
  useEffect(() => {
    if (selectedRace !== "") {
      fetch("/predict", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ race: selectedRace }),
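On the Flask side, a minimal endpoint that would accept this request might look like the sketch below (the route and the `race` field mirror the snippet above; the response shape is hypothetical). Note that fetch() must send a `Content-Type: application/json` header for `request.get_json()` to parse the body:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # parses the JSON body sent by fetch(); requires the
    # Content-Type: application/json request header
    payload = request.get_json()
    race = payload.get("race", "")
    # ... run the model / database lookup for `race` here ...
    return jsonify({"received": race})
```

You can exercise it without the React front end via Flask's test client, which is a quick way to confirm whether the error is in the server or in the fetch() call.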
/r/flask
https://redd.it/1e5yiuy
Starting projects while learning django?
Okay, so I am learning Django from Corey's channel and I am done with almost 7 videos, but I really want to make the websites I have in mind.
I want to ask: should I start the project while continuing with the course, or should I finish the course first and then start the projects?
/r/django
https://redd.it/1e5qko5
Free hosting with 3GB?
I made a simple RAG app that uses an external LLM API, but the dependencies themselves exceed 2 GB (llamaindex/langchain/huggingface). Is there any free option to host it? Or is there any way I can implement this functionality with more lightweight libraries?
/r/flask
https://redd.it/1e5azl0
In which ways is Flask better than Django?
Hey everyone! I have to develop a project for my high school computer science class, and it takes up 30% of my final grade. Anyway, I decided to develop it with Flask because I'm an absolute noob at Python haha, and I've heard that Django is better suited to more experienced people.
However, I need to write about why it is better than other API frameworks, like Django, in a "technical" way.
So anyway, what are some technical advantages of Flask compared to Django or other API frameworks that I could mention? Or maybe I should choose another API framework that would ease my development? (I'm creating a website for a booking service.)
/r/flask
https://redd.it/1e4njfq
If flask has async what would be the advantage of using an “async framework” like quart or starlette?
https://flask.palletsprojects.com/en/3.0.x/async-await/
Do the others have some capability flask doesn't? What would that be?
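The practical difference is WSGI vs ASGI: Flask runs each async view to completion within a worker, while ASGI frameworks like Quart or Starlette keep one long-lived event loop and interleave many requests on it. A small stdlib-only illustration of that event-loop concurrency (the handler is a made-up stand-in for an async view, not Flask or Quart code):

```python
import asyncio
import time

async def handle(request_id: int) -> str:
    # stand-in for an async view awaiting a slow backend call
    await asyncio.sleep(0.1)
    return f"response {request_id}"

async def serve_all() -> list:
    # one event loop interleaves all 100 waits, so the batch finishes
    # in roughly 0.1 s rather than 10 s sequentially
    return await asyncio.gather(*(handle(i) for i in range(100)))

start = time.perf_counter()
responses = asyncio.run(serve_all())
elapsed = time.perf_counter() - start
```

Under Flask's WSGI model, concurrency is still bounded by the worker/thread pool even for async views, whereas an ASGI server can hold a very large number of in-flight requests on a single loop.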
/r/flask
https://redd.it/1e64pc2
Flask-admin Wrong pagination after removing last row on the last page
Hi all. Essentially, after removing the last entry on the last page of the table, the browser remains on the now non-existent page, displaying "There are no items in the table." Refreshing doesn't help, as the browser still requests the same non-existent page. Clicking the previous page number fixes it. I'd have expected a built-in check for this issue. Is there a workaround? Regards.
/r/flask
https://redd.it/1e665va
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1e5xg1b
PySAPRPA: Automate SAP Processes Effortlessly with Python
Hi All,
What my project does:
Introducing PySAPRPA: a Python library that allows users to automate SAP processes in just a few lines of code. Leveraging the power of the GetObjectTree method, PySAPRPA automatically identifies and labels SAP objects, eliminating the need for manual script recording.
The library also allows for easy parameter setting with the set_parameters() method. Using the automatically identified objects, set_parameters() takes kwargs to set values for fields, buttons, and other interactive elements.
Target audience:
Anyone who has ever used SAP. This library is very helpful for data mining, data entry, and testing because it's simple, easy to debug, and takes minutes to set up.
Comparison:
While other libraries have similar value-setting logic, I haven't seen any others that automatically find and label objects, which is the most time-consuming part of automating.
Here's an example code block automating t-code MB51:

import pysaprpa as pysap
from pysaprpa import ObjectTree

session = pysap.connect_SAP()
conn = ObjectTree(session).start_transaction('MB51')
conn.get_objects().set_parameters(
    plant_TEXT=['100', '200'],
    movement_type_TEXT=['101', '102'],
    posting_date_TEXT=(7, 2024),
    layout_TEXT='EXAMPLE',
    database_FLAG=True,
)
conn.execute().get_objects().export(
    how='spreadsheet', directory='/fake/path', file_name='EXPORT.XLSX'
).end_transaction()

Feel free to use, and let me know if you find any bugs. Docs are on the GitHub wiki.
GitHub Repo
/r/Python
https://redd.it/1e5ssrt
Dynamic Enterprise RAG project utilizing Microsoft SharePoint as a data source
Hi r/Python,
I'm excited to share a project that utilizes Microsoft SharePoint to create dynamic Enterprise Retrieval-Augmented Generation (RAG) pipelines.
**Repo Link**: [https://pathway.com/developers/templates/enterprise\_rag\_sharepoint](https://pathway.com/developers/templates/enterprise_rag_sharepoint)
# What My Project Does:
In large enterprises, Microsoft SharePoint serves as a critical platform for document management, akin to Google Drive for individual users. This template makes it easy to build powerful RAG applications that deliver up-to-date answers and insights, enhancing productivity and collaboration.
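For readers new to RAG, the core retrieval step can be sketched in a few lines: embed the documents and the query, then return the closest matches as context for the LLM. Everything below is a toy illustration (the bag-of-letters "embedding" stands in for a real embedding model); the Pathway template handles embedding, indexing, and SharePoint sync for you:

```python
# Toy sketch of the retrieval step at the heart of any RAG pipeline.
# A real pipeline would use an embedding model and a vector index.
import math

def embed(text):
    # Bag-of-letters vector: purely illustrative, NOT a real embedding
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; return the top k
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["invoice approval workflow", "holiday calendar 2024", "security policy"]
print(retrieve("how do I approve an invoice?", docs))
# → ['invoice approval workflow']
```

The "dynamic" part of the template is that this index is kept in sync with SharePoint, so newly edited documents are retrievable without re-deploying.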
# Key Features:
* **Dynamic Real-Time Sync**: Ensures your RAG app always reflects the latest changes in SharePoint files.
* **Robust Security**: Includes comprehensive steps to set up Entra ID and SSL authentication.
* **Scalability**: Designed with optimal frameworks and a minimalist architecture for secure and scalable solutions.
* **Ease of Setup**: Allows you to deploy the app template in Docker within minutes.
# Target Audience:
Designed for enterprises needing efficient document management and retrieval. Production-ready with a focus on security, scalability, and ease of integration.
# Comparison:
Seamlessly integrates with SharePoint, ensuring real-time sync and robust security, unlike other alternatives. The scalable, minimalist architecture is easy to deploy and manage.
# Planned Enhancements:
* [Adaptive RAG](https://pathway.com/developers/templates/adaptive-rag): Implementing cost-effective strategies without sacrificing accuracy.
* [Pathway Rerankers](https://pathway.com/developers/user-guide/llm-xpack/overview/#rerankers): Integrating advanced reranking techniques for improved results.
* [Multimodal Pipelines with Hybrid Indexes](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/slides_ai_search): Using advanced parsing capabilities and indexing techniques.
I'm excited to hear your feedback!
/r/Python
https://redd.it/1e6alk8