Happy to share our first digest of 2026, covering the prominent software updates in the Cloud Native ecosystem!
1. Apache CloudStack, an Open Source IaaS solution, updated its CloudStack Kubernetes Provider, a cloud controller manager to facilitate K8s deployments, to v1.2.0. It added support for network ACLs for LB on VPC networks, a configurable source CIDR list, and ARM64 support for Docker images.
2. Harvester, a hyperconverged infrastructure solution built on Kubernetes, released v1.7.0 with experimental automatic VM workload rebalancing, support for MIG-backed vGPU devices, multipath device recognition and management, NIC hot plugging and hot unplugging, Open Virtual Format (OVF), pausable node upgrades, transparent hugepages configuration, VM VLAN trunking, and volume snapshots in guest clusters.
3. Jaeger (a CNCF Graduated project) reached v2.14.0, which added a dark theme to the UI, removed legacy v1 components (query, collector, ingester), and added a bunch of experimental features, most of which are related to ClickHouse and the FindTraces implementation for this storage.
4. Keycloak, an identity and access management solution (a CNCF Incubating project), released 26.5.0 (with 26.5.1, which followed shortly). It introduced several new features in preview, such as Workflows to automate administrative tasks, JWT Authorization Grants, exporting logs and metrics to OpenTelemetry collectors, and authenticating clients with Kubernetes service account tokens. Other release highlights are support for Caddy as a reverse proxy provider for client certificate authentication, organisation invitation management, and a guide on integrating Keycloak with MCP servers.
5. Envoy (a CNCF Graduated project) was updated to v1.37.0, which brought many new features. Some of them include global module loading and streaming HTTP callouts to HTTP filters in dynamic modules, container-aware CPU detection, a new MCP filter and router for agentic networks, a new stats-based access logger, a production-ready Proto API Scrubber filter, cluster-level retry policies, hash policies, and request mirroring, and many more.
6. Kubebuilder, an SDK for building Kubernetes APIs using CRDs, has seen its v4.11.0 release. The helm/v1-alpha projects are now automatically migrated to helm/v2-alpha, which got numerous improvements, including nodeSelector, affinity, and tolerations support, standard Helm labels for generated resources, and custom resources added to templates/extras. Newly generated projects also got their AGENTS.md files.
#news #releases
Another bunch of interesting Kubernetes-related articles recently spotted online:
1. "It works on my cluster: a tale of two troubleshooters" by Liam Mackie, Octopus Deploy.
Kubernetes has a gift for making simple problems look complicated, and complicated problems look simple. When something breaks, you often see symptoms completely unrelated to the real cause of the problem. This leads to a problem I like to call “blaming the network team”, where problems end up being diagnosed by the wrong engineers for a given issue. [..] I’ve personally experienced this dichotomy during my time as an engineer, working on both software and infrastructure, so I’m going to tell a story from two perspectives.
2. "A Brief Deep-Dive into Attacking and Defending Kubernetes" by Alexis Obeng.
My main motivation for writing this was to better understand for myself how Kubernetes works and its attack surface. I was also inspired from talking to people in the field and realizing just how prominent Kubernetes is in corporate environments. Although I did not cover every single attack vector here, I still cover a large amount of topics in the hope that this will prove useful to others seeking to understand Kubernetes’ attack surface.
3. "Exploring Cloud Native projects in CNCF Sandbox. Part 5: 13 arrivals of January 2025" by Dmitry Shurupov, Palark.
Learn about the following new CNCF projects: Podman Container Tools and Podman Desktop, bootc, composefs, k0s, KubeFleet, SpinKube, container2wasm, Runme Notebooks for DevOps, SlimFaas, Tokenetes, CloudNativePG, and Drasi.
4. "The Real State of Helm Chart Reliability: Hidden Risks in 100+ Open‑Source Charts" by Prequel.
Prequel's reliability research team audited 105 popular Kubernetes Helm charts to reveal missing reliability safeguards. The average score was ~3.98/10. 48% (50 charts) rated "High Risk" (score ≤3/10). Only 17% (18 charts) were rated "Reliable" (≥7/10).
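The quoted percentages are consistent with the raw counts; a quick arithmetic check:

```python
# Raw counts from the audit: 105 charts, 50 rated "High Risk" (score <= 3/10),
# 18 rated "Reliable" (score >= 7/10).
total_charts = 105
high_risk = 50
reliable = 18

high_risk_pct = round(high_risk / total_charts * 100)
reliable_pct = round(reliable / total_charts * 100)
print(high_risk_pct, reliable_pct)  # 48 17, matching the quoted figures
```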
5. "Reclaiming underutilized GPUs in Kubernetes using scheduler plugins" by Lalit Somavarapha, Gernot Seidler, Srujana Reddy Attunuri (HPE).
The default Kubernetes preemption mechanism (DefaultPreemption) can evict lower-priority pods to make room for higher-priority ones. But it only considers priority — not actual utilization. Pods are treated equivalently from a preemption perspective when they share the same priority, regardless of their current utilization. We evaluated several existing approaches.
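The contrast the authors describe can be sketched with a toy victim-selection function; the names and numbers below are illustrative only, not the actual logic of the HPE scheduler plugins:

```python
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    priority: int
    gpu_utilization: float  # observed fraction of the requested GPU in use

def default_victim(pods):
    # DefaultPreemption-style choice: lowest priority goes first;
    # actual utilization plays no role.
    return min(pods, key=lambda p: p.priority)

def utilization_aware_victim(pods):
    # Toy alternative: among the lowest-priority pods, evict the one
    # using the least of its GPU reservation.
    lowest = min(p.priority for p in pods)
    candidates = [p for p in pods if p.priority == lowest]
    return min(candidates, key=lambda p: p.gpu_utilization)

pods = [
    Pod("train-job", priority=100, gpu_utilization=0.95),
    Pod("batch-infer", priority=50, gpu_utilization=0.80),
    Pod("idle-notebook", priority=50, gpu_utilization=0.05),
]
# The two priority-50 pods are interchangeable to the default logic,
# while the utilization-aware pick reclaims the nearly idle GPU first.
print(default_victim(pods).name, utilization_aware_victim(pods).name)
```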
6. "How We Built Our Deployment Pipeline: GitOps, ArgoCD, and Kubernetes at Dodo Payments" by Ayush Agarwal, Dodo Payments.
The investment in GitOps pays off at a certain scale. Below that scale, simpler solutions work fine. For us, running a payment platform with strict requirements around security, auditability, and reliability — GitOps isn’t optional. It’s infrastructure.
#articles
Kubernetes-based alternatives to Heroku are real. Here’s one of them.
Canine positions itself as a “developer-friendly PaaS for your Kubernetes”. It’s focused on small development teams and simplifies using Kubernetes for them by providing:
- container builds performed via Docker BuildKit or Buildpacks;
- automatic deployments from GitHub and GitLab repositories;
- web UI to deploy, scale, and manage (e.g., configure resource constraints) apps running in Kubernetes;
- integration with existing K8s tools, such as Helm, cert-manager, and Telepresence;
- single sign-on via SAML, OIDC, and LDAP.
▶️ GitHub repo
Language: Ruby | License: Apache 2.0 | 2716 ⭐️
#tools #gui
ClickHouse just got the official Kubernetes operator
Less than 5 hours ago, the official ClickHouse Operator got its first public release, v0.0.1. It allows you to create and manage ClickHouse clusters and features ClickHouse Keeper integration, storage provisioning, TLS/SSL support, and Prometheus metrics integration.
The operator is written in Go, is Open Source (Apache 2.0) and available on GitHub.
#news #releases #databases
vCluster introduced vind, marketed as a better kind
vCluster Labs (previously known as Loft Labs) released a new tool called vind (vCluster in Docker). It is built on top of vCluster and allows you to run Kubernetes clusters directly as Docker containers, similarly to what kind (Kubernetes IN Docker) offers. However, it comes with the following extra features:
- pausing the clusters when they're not in use and resuming them;
- automatic LoadBalancer support;
- image caching (pull-through cache via local Docker daemon);
- support for connecting external nodes, which can be real cloud instances;
- support for choosing CNI and CSI plugins;
- built-in vCluster Platform UI.
You can find more details about vind on GitHub and in yesterday’s video presentation on LinkedIn.
#news #tools
Optimising resources in Kubernetes is something we all want to do at some point. This new project aims to assist in that.
CruiseKube, dubbed as “Autopilot for Kubernetes”, is a controller that watches your K8s workloads and adjusts the resources accordingly. Here’s what it does:
- Continuously evaluates current CPU/memory usage and updates resource requests.
- Considers CPU pressure (PSI metrics) and other Pods on the node when resizing.
- Watches OOM memory values in stats and triggers Pod eviction when needed.
- Uses Prometheus as the primary metrics source.
- Provides a web UI to see and manage your settings.
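Such a "watch usage, update requests" loop ultimately reduces to deriving a recommendation from recent usage samples. A minimal sketch of that idea; the percentile and headroom values are assumptions for illustration, not CruiseKube's actual algorithm:

```python
def recommend_request(usage_samples_mcpu, percentile=0.90, headroom_pct=15, floor=10):
    """Derive a CPU request (in millicores) from observed usage samples:
    take a high percentile of recent usage and add a safety headroom."""
    if not usage_samples_mcpu:
        return floor
    ordered = sorted(usage_samples_mcpu)
    idx = min(int(len(ordered) * percentile), len(ordered) - 1)
    return max(floor, ordered[idx] * (100 + headroom_pct) // 100)

# 20 recent samples of a mostly idle Pod with one spike, in millicores.
samples = [40, 45, 42, 38, 50, 47, 44, 41, 39, 43,
           46, 48, 40, 42, 44, 120, 45, 43, 41, 44]
print(recommend_request(samples))  # 57, far below a static 500m request
```

A real controller would then write the new value back to the workload, e.g. by patching the resource requests.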
▶️ GitHub repo
💬 Reddit announcement
Language: Go | License: MIT | 48 ⭐️
#tools
Node Readiness Controller for Kubernetes
Last week, a new Kubernetes SIG project was announced. The Node Readiness Controller can be used to define additional requirements for node readiness (e.g., GPU drivers are loaded). The controller manages node taints to prevent scheduling until the required conditions are satisfied. It supports bootstrap-only and continuous enforcement modes. Currently, the project is in alpha.
Find more details in the project’s documentation and on GitHub.
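The gating idea itself is easy to model: while any required condition is unmet, the node keeps a NoSchedule taint; once all are met, the taint is lifted (and, in bootstrap-only mode, never re-applied). A toy model of that decision; the taint key and condition names are hypothetical, not the controller's real API:

```python
def desired_taints(conditions, required, mode, was_ready):
    """Decide whether the readiness gate should keep a taint on the node.

    conditions: mapping of condition name -> bool (as reported in node status)
    required:   condition names that must all be True before scheduling opens
    mode:       "bootstrap-only" or "continuous"
    was_ready:  whether this node has already passed the gate once
    """
    if mode == "bootstrap-only" and was_ready:
        return []  # the gate only applies until the node first becomes ready
    all_met = all(conditions.get(name, False) for name in required)
    # Hypothetical taint key, for illustration only.
    return [] if all_met else ["readiness.example.io/not-ready:NoSchedule"]

required = ["GpuDriverLoaded", "CsiPluginHealthy"]
# GPU driver is up, CSI plugin is not: the node must stay tainted.
print(desired_taints({"GpuDriverLoaded": True}, required, "continuous", False))
```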
#news #tools
Our latest selection of interesting Kubernetes-related articles recently spotted online:
1. "Kubernetes Rolling Updates for Reliable Deployments" by James Walker, Spacelift.
In this guide, we will explain the benefits of rolling updates, describe how they work, and provide detailed examples of their use. We’ll also compare how rolling updates stack up against other popular deployment strategies.
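To recap the mechanics such a guide builds on: a Deployment's rolling update is bounded by maxSurge and maxUnavailable. A minimal sketch of the strategy fragment (shown as a Python dict rather than YAML) and the Pod-count bounds it implies:

```python
# RollingUpdate strategy fragment of a Deployment spec, as a dict.
deployment_strategy = {
    "strategy": {
        "type": "RollingUpdate",
        "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0},
    }
}

def rollout_pod_bounds(replicas, max_surge, max_unavailable):
    """Minimum available and maximum total Pods allowed mid-rollout
    (absolute values; real manifests also accept percentages)."""
    return replicas - max_unavailable, replicas + max_surge

low, high = rollout_pod_bounds(4, max_surge=1, max_unavailable=0)
print(low, high)  # 4 5: zero downtime, one extra Pod at a time
```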
2. "Experimenting with Gateway API using kind" by Ricardo Katz, Red Hat.
This document will guide you through setting up a local experimental environment with Gateway API on kind. This setup is designed for learning and testing. It helps you understand Gateway API concepts without production complexity.
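For context on what such a sandbox exercises: Gateway API splits routing into a Gateway (bound to a GatewayClass) and routes attached to it. A minimal Gateway/HTTPRoute pair, sketched as Python dicts (the class name, hostname, and backend are placeholders; the actual class depends on the implementation you install):

```python
gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "Gateway",
    "metadata": {"name": "demo-gateway"},
    "spec": {
        "gatewayClassName": "example-class",  # placeholder
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}

http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "demo-route"},
    "spec": {
        "parentRefs": [{"name": "demo-gateway"}],
        "hostnames": ["demo.example.com"],
        "rules": [{"backendRefs": [{"name": "demo-service", "port": 80}]}],
    },
}

# The route attaches to the gateway by name via parentRefs.
assert http_route["spec"]["parentRefs"][0]["name"] == gateway["metadata"]["name"]
```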
3. "Understanding the Ingress-NGINX Deprecation — Before You Migrate to the Gateway API" by Artem Lajko.
Most blog posts about the Ingress-NGINX deprecation are optimized for clicks, not for engineers who actually have to migrate production systems. You’ll find tiny demo setups, toy examples, and conclusions that fall apart the moment you apply them to an enterprise environment. That frustration is the reason this guide exists. This article is based on our real enterprise setup, built on top of the kubara framework. It documents how we approached the migration, what worked, what didn’t, and — just as important — what we decided not to migrate.
4. "Lazy-Pulling Container Images: A Deep Dive Into OCI Seekability" by Zain Malik.
This post starts with why the problem is harder than it looks at the byte level, then surveys the major approaches and what they trade off. The core of the post is a hands-on experiment: I deploy an in-cluster registry, convert images to eStargz, patch containerd with a custom snapshotter, and measure something nobody benchmarks properly. Not just pull time, but readiness, the moment a container can actually serve its first request.
5. "Kernel Archaeology: Why 36 CPUs Crash Cilium But 32 Don’t" by Pierre Magne, Qonto.
[..] The deployment looked successful. But then, over several weeks, we noticed sporadic crashes — roughly one Cilium agent per week, completely unrecoverable without restarting the entire node. No clear pattern, no obvious trigger. Rare enough to be hard to reproduce, but severe enough to block production deployment.
6. "Speeding Up FluxCD Development Without Remote Pushes: Local Git Reconciliation" by Marco Boss.
[..] I started looking for a way to develop and validate manifests locally, while still having full access to Flux features, and without resorting to brittle hacks or partial simulations. In this post, I’ll walk you through the approach I ended up with and show you how to run Flux locally in a way that actually feels usable for day-to-day development.
#articles
Kubernetes WG Serving is disbanded
Yuan Tang, on behalf of the Serving working group co-chairs, announced that the WG Serving’s goal had been accomplished and that the group is disbanded.
WG Serving was created to support the development of the AI inference stack on Kubernetes, making it "an orchestration platform of choice for inference workloads". In particular, it contributed to the design of AIBrix (a part of vLLM), while other outstanding problems were addressed by llm-d. The working group also helped shape the Kubernetes AI Conformance requirements.
All existing related efforts are now covered by other SIGs and working groups (including SIG Node, SIG Scheduling, and WG Device Management) or specific projects (such as Gateway API Inference Extension and Inference Perf).
#news #genai
Sharing our latest digest of the prominent software updates in the Cloud Native ecosystem!
1. Longhorn, a Cloud Native distributed storage for Kubernetes (a CNCF Incubating project), released v1.11.0, which brought its V2 Data Engine to the Technical Preview. Other updates include balance-aware algorithm disk selection for replica scheduling, active monitoring for node disk health, and support for Kubernetes RWOP (ReadWriteOncePod) and StorageClass allowedTopologies.
2. Argo CD (a CNCF Graduated project) reached v3.3.0. It introduced PreDelete hooks, automatic background refresh of OIDC tokens, support for resource names in clusterResourceWhitelist, shallow cloning for repositories, and KEDA support (pausing and resuming KEDA resources from the Argo CD UI and ScaledJob health checks).
3. Headlamp, a Kubernetes web UI developed by the Kubernetes SIG, was updated to 0.40.0 and got configurable keyboard shortcuts, HTTPRoute support for Gateway API, icon and colour configuration for clusters, saving selected namespaces per cluster, support for a8r.io service metadata in service views, and more.
4. Cilium (a CNCF Graduated project) released 1.19.0 with lots of new features. They include multi-level subdomain matches in DNS policies, support for VRRP and IGMP protocols in host firewall rules, strict encryption modes for both IPsec and WireGuard, enrolling namespaces into Ztunnel, support for GRPCRoute in GAMMA, TLS/mTLS support for Prometheus metrics, and CRD auto-installation for Multi-Cluster Services.
5. KEDA (a CNCF Graduated project) was updated to v2.19.0, introducing a new Kubernetes Resource Scaler, file-based authentication support for ClusterTriggerAuthentication, and other improvements.
6. Karpenter, a Kubernetes Node Autoscaler developed by the Kubernetes SIG, unveiled its v1.9.0, adding Gte and Lte operators for requirements, a NodePool cost metric, and consolidation pipeline logging.
7. Istio (a CNCF Graduated project) 1.29.0 was released with DNS capture and iptables reconciliation enabled by default for ambient workloads, CRL (Certificate Revocation List) support in Ztunnel, debug endpoint authorisation enabled by default, alpha support for wildcard hosts in ServiceEntry resources with DYNAMIC_DNS resolution, HTTP compression for Envoy metrics, pilot resource filtering capabilities, and many other changes.
#news #releases
CNCF project velocity 2025 report
Key takeaways from the latest CNCF project velocity report:
1. Kubernetes continues to lead with the largest contributor base.
2. Backstage has more than doubled its contributions since 2024.
3. OpenTelemetry saw a 39% rise in commits and a 35% rise in its contributor base.
Top 10 CNCF projects by their velocity in 2025:
1. Kubernetes
2. Cilium
3. OpenTelemetry
4. Prometheus
5. Argo
6. Meshery
7. Envoy
8. Backstage
9. Keycloak
10. Kubeflow
This GitHub repo has all the data, and here's our post on a previous velocity report published in July 2025.
#news #cncfprojects
This new UI aims to ensure “modern Kubernetes visibility” by providing comprehensive information on your cluster and its workloads, along with several management features.
Radar is a dashboard that is intended to be “blazing fast”, displays real-time information, and runs as a single binary, so nothing needs to be installed on the cluster. It comes with:
- General cluster overview, including the stats for existing resources, resource utilisation, and unhealthy workloads.
- Detailed interactive graphs for Kubernetes resources with their full hierarchy and an image filesystem viewer for Pods.
- Live network traffic visualisation (via Hubble or Caretta).
- Timeline of Kubernetes events and resource changes.
- Management for Helm releases and GitOps (Argo CD and Flux) resources.
- Automatic discovery of CRDs and integrations for Gateway API, Karpenter, KEDA, cert-manager, Prometheus Operator, and Trivy.
- MCP server for AI integration.
▶️ GitHub repo
Language: TypeScript, Go | License: Apache 2.0 | 863 ⭐️
#tools #gui
KCDs for 2026 H2 are announced
The list of Kubernetes Community Days (KCDs) events for the second half of 2026 is published. Here's what we can expect:
- KCD Vietnam; July; new
- KCD Melbourne, Australia; August; new
- KCD San Francisco Bay Area, USA; September; Tier 1
- KCD Washington DC, USA; September; Tier 1
- KCD Gujarat, India; September; new
- KCD Sao Paulo, Brazil; September; Tier 2
- KCD Sofia, Bulgaria; September; Tier 2
- KCD Buenos Aires, Argentina; October; Tier 1
- KCD UK; October; Tier 2
- KCD Bandung, Indonesia; October; Tier 1
- KCD Nigeria; October; Tier 1
- KCD Budapest, Hungary; November; Tier 1
- KCD Porto, Portugal; November; Tier 2
- KCD Hangzhou, China; November; Tier 1
- KCD Florida, USA; December; new
- KCD Suisse Romande, Switzerland; December; Tier 1
- KCD Aix-en-Provence, France; December; new
First-time events are expected to host up to 200 attendees, Tier 1 events 350+, and Tier 2 events up to 600. In our earlier post, you can also find the list of KCDs scheduled for 2026 H1.
#events #news
Sharing our new digest of the prominent software updates in the Cloud Native ecosystem!
1. Kyverno, a Kubernetes-native policy engine (a CNCF Incubating project), released 1.17, which declares its CEL policy engine stable. These CEL-based policies got numerous new function libraries for YAML/JSON parsing, X509 decoding, and more. The release also introduced support for Cosign v3, and namespaced mutation and generation.
2. OpenEverest, a Cloud Native database platform, released 1.13.0, featuring a Pod Logs Viewer displaying real-time logs from database Pods directly in the UI, dynamic value injection in LoadBalancerConfig to create reusable configurations, and support for Percona XtraDB Cluster Operator v1.19.0.
3. Backstage, a framework for building developer portals (a CNCF Incubating project), reached v1.48.0, bringing experimental refresh token support, lots of updates in the new frontend system (new navigation system, home plugin, plugin titles and icons), new UI components, experimental catalog generic SCM event handling, and module federation enabled by default.
4. Crossplane (a CNCF Graduated project) made a regular quarterly release, v2.2.0. It brought a pipeline inspector (alpha), ImageConfig configuration for DeploymentRuntimeConfig used for packages, server-side apply support in the MRD controller when updating CRDs, support for composition functions to request OpenAPI schemas, and an enhanced crossplane beta trace command.
5. Dex, an OpenID Connect identity and OAuth 2.0 provider (a CNCF Sandbox project), was updated to v2.45.0, adding PKCE support in the OIDC connector, a Vault signer for JWT, and enhanced static passwords.
6. Flux (a CNCF Graduated project) released 2.8 GA, featuring Helm v4 support, faster recovery from failed deployments, CEL-based health check expressions for Helm releases, ephemeral preview environments from GitHub PRs and GitLab MRs, and support for Cosign v3.
#news #releases
Watching your Kubernetes Pods in real-time 3D space sounds like a deal for Friday, doesn’t it? 🙃
Observatory is a visualisation dashboard that makes this possible. Originally built for K3s, it works with other Kubernetes distros as well, allowing you to watch your containers like never before. What it offers:
- Displaying your Kubernetes nodes and Pods in a 3D space you can travel through;
- Showing sidecars as orbiting moons for multi-container Pods;
- Providing the continuously updated state of Pods (running, pending, etc.), as well as their memory and CPU usage visualised as size and colour.
▶️ GitHub repo
💬 Reddit announcement
Language: Go, TypeScript | License: GPL v3 | 28 ⭐️
#tools #gui
New Kubernetes working group: AI Gateway
The “AI Gateway” term refers to network gateway infrastructure, such as proxy servers and load balancers, that implements the Gateway API specification with enhanced capabilities for AI workloads. The newly announced AI Gateway WG will create declarative APIs, standards, and guidance for AI workload networking in Kubernetes.
P.S. This announcement came shortly after the Kubernetes WG Serving was disbanded.
#news #networking #genai
AWS Load Balancer Controller now supports Gateway API
Previously, AWS Load Balancer Controller provisioned Application Load Balancers and Network Load Balancers for the Kubernetes Ingress and Service resources. Now, you can also use the Gateway API for this.
P.S. According to this GitHub issue, Azure Kubernetes Service is also expected to introduce Gateway API support for App Routing in March. Google Cloud has been recommending using its Gateway API implementation in the GKE Gateway controller to expose apps in Kubernetes for a while.
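For context, here is a minimal sketch of the Gateway API model these controllers implement: a Gateway bound to a controller-provided GatewayClass, plus an HTTPRoute directing traffic to a backend. The GatewayClass name and backend Service below are placeholders, not the names any specific controller registers:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: example-class   # placeholder: use the class your controller registers
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: web-gateway       # attaches the route to the Gateway above
  rules:
    - backendRefs:
        - name: web-svc       # an existing Service in the same namespace
          port: 8080
```

Because the Gateway and the routes are separate resources, platform teams can own the Gateway while application teams manage their own HTTPRoutes, which is the main role-separation argument for Gateway API over Ingress.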
#news #networking #aws
Interested in seeing the contents of your container images without running them? Check out this new tool.
cek (container exploration kit) is a CLI tool for exploring OCI image filesystems. Unlike Skopeo, it works with the container engine itself rather than only the container registry, i.e. it can read images directly from Docker, Podman, or containerd in addition to pulling them from remote registries. cek allows you to:
- list files in your image and display the directory tree structure;
- read file contents;
- inspect image metadata;
- export images to tar files.
▶️ GitHub repo
💬 Reddit announcement
Language: Go | License: MIT | 261 ⭐️
#tools #storage
NVIDIA introduced AI Cluster Runtime (AICR)
Yesterday, the company released its recipes for GPU-accelerated Kubernetes clusters across cloud and on-premises AI factories. These recipes are “version-locked configurations for specific environments” — the combinations of drivers, runtimes, operators, kernel modules, and system settings for AI workloads that have been validated by NVIDIA. They include specific components, their versions, constraints, and the configuration values for each environment.
You can find more details in this blog post and the AICR repo on GitHub.
#news #genai
Kyverno became a CNCF Graduated project
Kyverno, a Kubernetes-native policy engine originally developed at Nirmata, has become the latest addition to the list of CNCF Graduated projects. About 6 hours ago, the CNCF Technical Oversight Committee completed the relevant voting process for this project.
Today’s Kyverno adopters include Vodafone, Deutsche Telekom, Saxo Bank, LinkedIn, Spotify, US DoD Platform One, OVHcloud, and many other well-known organisations worldwide.
#cncfprojects #news #security