Block's AI engineering approach includes: 95% of engineers using AI assistants, providing freedom to explore multiple tools, launching an AI Champions program focused on repo readiness and context engineering, implementing automated PRs, and planning team-based workshops for multi-agent workflows.
https://engineering.block.xyz/blog/ai-assisted-development-at-block
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
The fastest-growing personal AI agent ecosystem just became a new delivery channel for malware. Over the last few days, VirusTotal has detected hundreds of OpenClaw skills that are actively malicious.
https://blog.virustotal.com/2026/02/from-automation-to-infection-how.html
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
A practical workflow for threat modeling agentic AI systems: use a five-zone navigation lens to trace attack paths, formalize them as attack trees, and map to OWASP's threat taxonomy and playbooks.
https://christian-schneider.net/blog/threat-modeling-agentic-ai/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
👏3❤1👍1
That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends. Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique they called "AI Recommendation Poisoning".
https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤3🔥2👍1
LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks. See also the companion blog post.
https://github.com/praetorian-inc/augustus
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
Block Engineering discusses designing agent skills using three principles: make deterministic outputs script-based, let agents handle interpretation and conversation, and write explicit constitutional constraints. Skills codify tribal knowledge into executable documentation for AI agents across their organization.
https://engineering.block.xyz/blog/3-principles-for-designing-agent-skills
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
🔶🤖 Building an AI-powered defense-in-depth security architecture for serverless microservices
This AWS blog demonstrates implementing a seven-layer AI-powered defense-in-depth security architecture for serverless microservices using AWS Shield, WAF, Cognito, API Gateway, VPC, Lambda, Secrets Manager, and DynamoDB, enhanced with GuardDuty and Amazon Bedrock for intelligent threat detection and automated response.
https://aws.amazon.com/ru/blogs/security/building-an-ai-powered-defense-in-depth-security-architecture-for-serverless-microservices/
(Use VPN to open from Russia)
#aws #AI
This AWS blog demonstrates implementing a seven-layer AI-powered defense-in-depth security architecture for serverless microservices using AWS Shield, WAF, Cognito, API Gateway, VPC, Lambda, Secrets Manager, and DynamoDB, enhanced with GuardDuty and Amazon Bedrock for intelligent threat detection and automated response.
https://aws.amazon.com/ru/blogs/security/building-an-ai-powered-defense-in-depth-security-architecture-for-serverless-microservices/
(Use VPN to open from Russia)
#aws #AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
MCP servers connecting AI assistants to external tools create significant attack surfaces enabling arbitrary code execution, data exfiltration, and social engineering. Both local and remote MCP servers can be exploited through server chaining, supply chain attacks, and malicious tool implementations.
https://www.praetorian.com/blog/mcp-server-security-the-hidden-ai-attack-surface/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
🤖 caterpillar
Caterpillar is a security scanning library for AI agent skill files (e.g., Claude Code skills) for dangerous or malicious behavior.
https://github.com/alice-dot-io/caterpillar
#AI
Caterpillar is a security scanning library for AI agent skill files (e.g., Claude Code skills) for dangerous or malicious behavior.
https://github.com/alice-dot-io/caterpillar
#AI
❤1👍1🔥1
Trail of Bits used ML-centered threat modeling and adversarial testing to identify four prompt injection techniques that could exploit Perplexity's Comet browser AI assistant to exfiltrate private Gmail data. The audit demonstrated how fake security mechanisms, system instructions, and user requests could manipulate the AI agent into accessing and transmitting sensitive user information.
https://blog.trailofbits.com/2026/02/20/using-threat-modeling-and-prompt-injection-to-audit-comet/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
A prompt injection in a GitHub issue title gave attackers code execution inside Cline's CI/CD pipeline, leading to cache poisoning, stolen npm credentials, and an unauthorized package publish affecting the popular AI coding tool's 5 million users. Here's the full technical breakdown and what developers should do now.
https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/
(Use VPN to open from Russia)
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
OpenClaw, a self-hosted agent runtime, lacks built-in security controls, enabling credential exfiltration, memory/state manipulation, and host compromise via indirect prompt injection and malicious skills. Microsoft recommends isolated deployment, least-privilege identities, continuous monitoring, and Defender XDR hunting queries.
https://www.microsoft.com/en-us/security/blog/2026/02/19/running-openclaw-safely-identity-isolation-runtime-risk/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
🔥3😱2❤1
A week-long automated attack campaign targeted CI/CD pipelines across major open source repositories, achieving remote code execution in at least 4 out of 5 targets. The attacker, an autonomous bot called hackerbot-claw, used 5 different exploitation techniques and successfully exfiltrated a GitHub token with write permissions from one of the most popular repositories on GitHub. This post breaks down each attack, shows the evidence, and explains what you can do to protect your workflows.
https://www.stepsecurity.io/blog/hackerbot-claw-github-actions-exploitation#attack-6-aquasecuritytrivy---evidence-cleared
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
The "Reach" pattern is a personal CLI that hijacks existing browser sessions to query SaaS APIs (Slack, Jira, Confluence, etc.) on your behalf, feeding structured organizational context to your AI coding assistant.
https://jackdanger.com/the-reach-pattern
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
A technical deep-dive into Praetorian's multi-agent CVE research pipeline, exploring how orchestrated AI agents transform vulnerability data into validated detection templates.
https://www.praetorian.com/blog/how-ai-agents-automate-cve-vulnerability-research/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
Please open Telegram to view this post
VIEW IN TELEGRAM
🔥2❤1👍1
🤖 When an AI agent came knocking: Catching malicious contributions in Datadog’s open source repos
How Datadog discovered malicious issues and PRs in two of their public repositories as the result of attacks by hackerbot-claw, an AI agent designed to target GitHub Actions and LLM-powered workflows.
https://www.datadoghq.com/blog/engineering/stopping-hackerbot-claw-with-bewaire
#AI
How Datadog discovered malicious issues and PRs in two of their public repositories as the result of attacks by hackerbot-claw, an AI agent designed to target GitHub Actions and LLM-powered workflows.
https://www.datadoghq.com/blog/engineering/stopping-hackerbot-claw-with-bewaire
#AI
❤1👍1🔥1
🤖 Securing our codebase with autonomous agents
Cursor's security team built a fleet of security agents to find and fix vulnerabilities across a fast-changing codebase.
https://cursor.com/blog/security-agents
#AI
Cursor's security team built a fleet of security agents to find and fix vulnerabilities across a fast-changing codebase.
https://cursor.com/blog/security-agents
#AI
👍2❤1🔥1
OpenSandbox is a general-purpose sandbox platform for AI applications, offering multi-language SDKs, unified sandbox APIs, and Docker/Kubernetes runtimes for scenarios like Coding Agents, GUI Agents, Agent Evaluation, AI Code Execution, and RL Training.
https://github.com/alibaba/OpenSandbox
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1