Working with a username field on a custom user model
I have come to learn that it is advisable to create my own custom user model instead of using the default provided by Django. Most of the tutorials I have watched don't add a username field and instead derive the username from the email. When I did add a username field, I was no longer able to create a superuser without getting the error "django.core.exceptions.FieldDoesNotExist: User has no field named 'accounts.User.username'". Where should I go from here?
My custom user manager:
My custom user model:
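For reference, here is a minimal sketch of a manager/model pair that keeps both an email login field and an explicit username field. This is illustrative code, not the OP's actual files. A common cause of that exact FieldDoesNotExist message is listing the field object itself (e.g. REQUIRED_FIELDS = [username] inside the class body) instead of the field's name as a string, since str() of a Django field is "accounts.User.username".

```python
# accounts/models.py -- a minimal sketch, assuming the usual
# AbstractBaseUser setup; illustrative, not the OP's actual code.
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
from django.contrib.auth.models import PermissionsMixin
from django.db import models


class UserManager(BaseUserManager):
    def create_user(self, email, username, password=None, **extra):
        if not email:
            raise ValueError("Email is required")
        user = self.model(
            email=self.normalize_email(email), username=username, **extra
        )
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, username, password=None, **extra):
        extra.setdefault("is_staff", True)
        extra.setdefault("is_superuser", True)
        return self.create_user(email, username, password, **extra)


class User(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(unique=True)
    username = models.CharField(max_length=150, unique=True)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)

    objects = UserManager()

    USERNAME_FIELD = "email"
    # These must be plain field-name strings; putting the field object
    # here (e.g. REQUIRED_FIELDS = [username]) raises FieldDoesNotExist
    # with exactly the "accounts.User.username" message quoted above.
    REQUIRED_FIELDS = ["username"]
```

With this in place and AUTH_USER_MODEL = "accounts.User" in settings.py, `python manage.py createsuperuser` should prompt for email, username, and password.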
/r/djangolearning
https://redd.it/1pecctq
Part 1: What is Django and why Django in 2026? Learn Django from basics to advanced by building a CRM SaaS product
https://youtu.be/9JiVt_3fjIM
/r/djangolearning
https://redd.it/1pe8mlj
Part 1: What is Django and Why Django in 2026 | Learn Django from basic to advanced step by step
Welcome to Part 1 of the Django CRM SaaS Mega-Series — your complete beginner-to-advanced journey into building a real, production-ready SaaS product using Django. In this video, we explore what Django is, why it's one of the most popular backend frameworks…
qCrawl — an async high-performance crawler framework
Site: https://github.com/crawlcore/qcrawl
What My Project Does
qCrawl is an asynchronous web crawler framework built on asyncio.
Key features
Async architecture - High-performance concurrent crawling based on asyncio (see the sketch after this list)
Performance optimized - Redis queue backend with direct delivery, MessagePack serialization, connection pooling, DNS caching
Powerful parsing - CSS/XPath selectors with lxml
Middleware system - Customizable request/response processing
Flexible export - Multiple output formats, including JSON, CSV, XML
Flexible queue backends - Memory or Redis-based (+disk) schedulers for different scale requirements
Item pipelines - Data transformation, validation, and processing pipelines
Pluggable downloaders - HTTP (aiohttp) and Camoufox (stealth browser) for JavaScript rendering and anti-bot evasion
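As a point of reference for the async-architecture bullet, here is a generic asyncio + aiohttp concurrent-fetch sketch. It shows only the underlying pattern, not qCrawl's actual API; the URLs and concurrency cap are placeholders.

```python
# Generic concurrent-fetch pattern with asyncio + aiohttp.
# Illustration of the idea behind async crawling, NOT qCrawl's API.
import asyncio

import aiohttp


async def fetch(session, url, sem):
    # The semaphore caps in-flight requests, the core of bounded
    # concurrent crawling.
    async with sem:
        async with session.get(url) as resp:
            body = await resp.read()
            return url, resp.status, len(body)


async def main(urls, concurrency=10):
    sem = asyncio.Semaphore(concurrency)
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch(session, u, sem) for u in urls))
    for url, status, size in results:
        print(url, status, size)


if __name__ == "__main__":
    asyncio.run(main(["https://example.com", "https://example.org"]))
```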
Target Audience
1. Developers building large-scale web crawlers or scrapers
2. Data engineers and data scientists who need automated data extraction
3. Companies and researchers performing continuous or scheduled crawling
Comparison
1. It can be compared to Scrapy - it is Scrapy as if it were built on asyncio instead of Twisted, with memory/Redis queue backends, direct delivery, MessagePack serialization, and pluggable downloaders: HTTP (aiohttp) and Camoufox (stealth browser) for JavaScript rendering and anti-bot evasion.
2. It can be compared to Playwright/Camoufox - you can use them directly, but with qCrawl a single spider can distribute requests between aiohttp for maximum performance and Camoufox when JavaScript rendering is needed (see the sketch below).
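A sketch of the per-request downloader dispatch described in point 2, using plain aiohttp plus Playwright as a stand-in for the browser side (Camoufox exposes a Playwright-style interface, as I understand it). This illustrates the pattern, not qCrawl's code; the needs_js flag and both helpers are hypothetical.

```python
# Dispatch each request to a fast HTTP client or a real browser,
# mirroring the mixed aiohttp/browser strategy described above.
# Illustrative only; Playwright stands in for Camoufox here.
# Requires: pip install aiohttp playwright && playwright install firefox
import asyncio

import aiohttp
from playwright.async_api import async_playwright


async def fetch_http(session, url):
    # Cheap path: plain HTTP for pages that don't need JS rendering.
    async with session.get(url) as resp:
        return await resp.text()


async def fetch_browser(browser, url):
    # Expensive path: full browser for JS-rendered pages.
    page = await browser.new_page()
    try:
        await page.goto(url)
        return await page.content()
    finally:
        await page.close()


async def crawl(requests):
    # requests: list of (url, needs_js) pairs; needs_js is a
    # hypothetical per-request flag used only for this sketch.
    async with aiohttp.ClientSession() as session, async_playwright() as p:
        browser = await p.firefox.launch()
        tasks = [
            fetch_browser(browser, url) if needs_js else fetch_http(session, url)
            for url, needs_js in requests
        ]
        pages = await asyncio.gather(*tasks)
        await browser.close()
        return pages


if __name__ == "__main__":
    html = asyncio.run(
        crawl([("https://example.com", False), ("https://example.org", True)])
    )
    print([len(h) for h in html])
```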
/r/Python
https://redd.it/1pfofmq