
The Linux kernel, foundational for servers, desktops, embedded systems, and cloud infrastructure, has been under heightened scrutiny. Several vulnerabilities have been exploited in real-world attacks, targeting critical subsystems and isolation layers. In this article, we’ll walk through major examples, explain their significance, and offer actionable guidance for defenders.
One of the most alarming flaws this year involves a use-after-free vulnerability in the Linux kernel’s vsock (virtual socket) implementation, which enables communication between virtual machines and their hosts.
How the exploit works: A malicious actor inside a VM (or other privileged context) manipulates reference counters when a vsock transport is reassigned. The code ends up freeing a socket object while it’s still in use, enabling memory corruption and potentially root-level access.
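For context, the sketch below shows the normal guest-side vsock API whose kernel bookkeeping the bug corrupts; the application-level calls themselves are benign. This is a minimal illustration, and the service port is hypothetical.

```c
/* Minimal guest-side vsock client: connect from a VM to a service
 * listening on the host (CID 2). The port is hypothetical, chosen
 * purely for illustration. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket(AF_VSOCK)");
        return 1;
    }

    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_HOST; /* CID 2 always addresses the host */
    addr.svm_port = 1234;           /* hypothetical service port */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* ... exchange data with the host-side service ... */
    close(fd);
    return 0;
}
```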
Why it matters: Since vsock is used for VM-to-host and inter-VM communication, this flaw breaks a key isolation barrier. In multi-tenant cloud environments or container hosts that expose vsock endpoints, the impact can be severe.
Mitigation: Kernel maintainers have released patches. If you operate virtualization hosts, hypervisors, or other environments where vsock is present, make sure the kernel is updated and virtualization subsystems are patched.
Another high-impact vulnerability involves the UNIX domain socket interface and the MSG_OOB flag. The bug was publicly detailed in August 2025 and is already under active discussion.
Attack scenario:
A process running inside a sandbox (for example a browser renderer) can exploit MSG_OOB operations on a UNIX domain socket to trigger a use-after-free or out-of-bounds read/write. That allows leaking kernel pointers or memory and then chaining to full kernel privilege escalation.
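To ground the discussion, here is what MSG_OOB on a UNIX domain socket looks like from userspace. This is the legitimate API surface the bug sits behind (AF_UNIX gained MSG_OOB support in kernel 5.15), not an exploit sketch.

```c
/* Send and receive an "out-of-band" byte over a UNIX domain socket
 * pair. The OOB byte is delivered ahead of pending in-band data,
 * mirroring TCP urgent-data semantics. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    send(sv[0], "A", 1, 0);       /* ordinary in-band byte */
    send(sv[0], "B", 1, MSG_OOB); /* out-of-band byte */

    char c;
    if (recv(sv[1], &c, 1, MSG_OOB) == 1)
        printf("out-of-band byte: %c\n", c); /* prints B */
    if (recv(sv[1], &c, 1, 0) == 1)
        printf("in-band byte: %c\n", c);     /* prints A */

    close(sv[0]);
    close(sv[1]);
    return 0;
}
```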
Why it matters: This vulnerability is especially dangerous because it bridges from a low-privilege sandboxed process to kernel-level compromise. Many systems assume sandboxed code is safe; this attack undermines that assumption.
Mitigation:
Distributions and vendors (like browser teams) have disabled or restricted MSG_OOB usage for sandboxed contexts. Kernel patches are available. Systems that run browser sandboxes or other sandboxed processes need to apply these updates immediately.
In September 2025, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) added this vulnerability to its Known Exploited Vulnerabilities (KEV) catalog.

The speculation around a successor to the Steam Deck has stirred renewed excitement, not just for a new handheld, but for what it signals for Linux-based gaming. With whispers of next-gen specs, deeper integration of SteamOS, and an evolving handheld PC ecosystem, these rumors are fueling broader hopes that Linux gaming is entering a more mature era. In this article we look at the existing rumors, how they tie into the Linux gaming landscape, why this matters, and what to watch.
Although Valve has kept things quiet, multiple credible outlets report that the Steam Deck 2 is in development and potentially arriving well after 2026. Some of the key tidbits:
Editorials note that Valve isn’t planning a mere spec refresh; it wants a “generational leap in compute without sacrificing battery life”.
A leaked hardware slide pointed to an AMD “Magnus”-class APU built on Zen 6 architecture being tied to next-gen handhelds, including speculation about the Steam Deck 2.
One hardware leaker (KeplerL2) cited a possible 2028 launch window for the Steam Deck 2, which would make it roughly 6 years after the original.
Valve’s own design leads have publicly stated that a refresh with only 20-30% more performance is “not meaningful enough”, implying they’re waiting for a more substantial upgrade.
In short: while nothing is official yet, there’s strong evidence that Valve is working on the next iteration and wants it to be a noteworthy jump, not just a minor update.
The rumoured arrival of the Steam Deck 2 isn’t just about hardware; it reflects, and could accelerate, key inflection points for Linux & gaming:
Validation of SteamOS & Linux Gaming
The original Steam Deck, running SteamOS (a Linux-based OS), helped prove that PC gaming doesn’t always require Windows. A well-received successor would further validate Linux as a first-class gaming platform, not a niche alternative but a mainstream choice.
Handheld PC Ecosystem Momentum
Since the first Deck, many Windows-based handhelds have entered the market (such as the ROG Ally and Lenovo Legion Go). Rumours of the Deck 2 keep the spotlight on the form factor and raise expectations for Linux-native handhelds. This momentum helps encourage driver, compatibility, and OS investments from the broader community.

The popular penetration-testing distribution Kali Linux has dropped its latest quarterly snapshot: version 2025.3. This release continues the tradition of the rolling-release model used by the project, offering users and security professionals a refreshed toolkit, broader hardware support (especially wireless), and infrastructure enhancements under the hood. With this update, the distribution aims to streamline lab setups, bolster wireless hacking capabilities (particularly on Raspberry Pi devices), and integrate modern workflows including automated VMs and LLM-based tooling.
In this article, we’ll walk through the key highlights of Kali Linux 2025.3, how the changes affect users (both old and new), the upgrade path, and what to keep in mind for real-world deployment.
This snapshot from the Kali team brings several categories of improvements: tooling, wireless/hardware support, architecture changes, virtualization/image workflows, UI and plugin tweaks. Below is a breakdown of the major updates.
Tooling Additions: Ten Fresh Packages
One of the headline items is the addition of ten new security tools to the Kali repositories. These tools reflect shifts in the field toward AI-augmented recon, advanced wireless simulation and pivoting, and updated attack-surface coverage. Among the additions are:
Caido and Caido-cli – a client-server web-security auditing toolkit (graphical client + backend).
Detect It Easy (DiE) – a utility for identifying file types, a useful tool in reverse engineering workflows.
Gemini CLI – an open-source AI agent that integrates Google’s Gemini (or similar LLM) capabilities into the terminal environment.
krbrelayx – a toolkit focused on Kerberos relaying/unconstrained delegation attacks.
ligolo-mp – a multiplayer pivoting solution for network-lateral movement.
llm-tools-nmap – allows large-language-model workflows to drive Nmap scans (automated/discovery).
mcp-kali-server – configuration tooling to connect an AI agent to Kali infrastructure.
patchleaks – a tool that detects security-fix patches and provides detailed descriptions (useful both for defenders and auditors).
vwifi-dkms – enables creation of “dummy” Wi-Fi networks (virtual wireless interfaces) for advanced wireless testing and hacking exercises.

In the world of modern CPUs, speculative execution, where a processor guesses ahead on branches and executes instructions before the actual code path is confirmed, has long been recognized as a performance booster. However, it has also given rise to a class of vulnerabilities collectively known as “Spectre” attacks, in which microarchitectural side state (such as the branch target buffer, caches, or predictor state) is abused to leak sensitive data.
Now, a new attack variant, dubbed VMScape, exposes a previously under-appreciated weakness: the isolation between a guest virtual machine and its host (or hypervisor) in the branch predictor domain. In simpler terms: a malicious VM can influence the CPU’s branch predictor in such a way that when control returns to the host, secrets in the host or hypervisor can be exposed. This has major implications for cloud security, virtualization environments, and kernel/hypervisor protections.
In this article we’ll walk through how VMScape works, the CPUs and environments it affects, how the Linux kernel and hypervisors are mitigating it, and what users, cloud operators and admins should know (and do).
Speculative execution vulnerabilities like Spectre exploit the gap between architectural state (what the software sees as completed instructions) and microarchitectural state (what the CPU has done internally, such as cache loads, branch predictor updates, etc). Even when speculative paths are rolled back architecturally, side-effects in the microarchitecture can remain and be probed by attackers.
One of the original variants, Spectre-BTI (Branch Target Injection, also called Spectre v2), leveraged the Branch Target Buffer (BTB) / predictor to redirect speculative execution along attacker-controlled paths. Over time, hardware and software mitigations (IBRS, eIBRS, IBPB, STIBP) have been introduced. But VMScape shows that when virtualization enters the picture, the isolation assumptions break down.
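On Linux you can check which of these mitigations the running kernel reports as active via sysfs. A minimal sketch (the vulnerabilities directory is a stable kernel interface; its entries vary by CPU and kernel version):

```c
/* Print the kernel's reported Spectre v2 mitigation status from sysfs. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/vulnerabilities/spectre_v2";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    char line[256];
    if (fgets(line, sizeof(line), f))
        printf("spectre_v2: %s", line); /* e.g. "Mitigation: Enhanced IBRS; IBPB: ..." */
    fclose(f);
    return 0;
}
```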
VMScape: Guest to Host via Branch Predictor
VMScape (tracked as CVE-2025-40300) is described by researchers from ETH Zürich as “the first Spectre-based end-to-end exploit in which a malicious guest VM can leak arbitrary sensitive information from the host domain/hypervisor, without requiring host code modifications and in default configuration.”
Here are the key elements making VMScape significant:
The attack is cross-virtualization: a guest VM influences the host’s branch predictor state (not just within the guest).

Modern computing systems rely heavily on operating-system schedulers to allocate CPU time fairly and efficiently. Yet many of these schedulers operate blindly with respect to the meaning of workloads: they cannot distinguish, for example, whether a task is latency-sensitive or batch-oriented. This mismatch, between application semantics and scheduler heuristics, is often referred to as the semantic gap.
A recent research framework called SchedCP aims to close that gap. By using autonomous LLM-based agents, the system analyzes workload characteristics, selects or synthesizes custom scheduling policies, and safely deploys them into the kernel, without human intervention. This represents a meaningful step toward self-optimizing, application-aware kernels.
In this article we will explore what SchedCP is, how it works under the hood, the evidence of its effectiveness, real-world implications, and what caveats remain.
At the heart of the issue is that general-purpose schedulers (for example the Linux kernel’s default policy) assume broad fairness, rather than tailoring scheduling to what your application cares about. For instance:
A video-streaming service may care most about minimal tail latency.
A CI/CD build system may care most about throughput and job completion time.
A cloud analytics job may prefer maximum utilisation of cores with less concern for interactive responsiveness.
Traditional schedulers treat all tasks mostly the same, tuning knobs generically. As a result, systems often sacrifice optimisation opportunities. Some prior efforts have used reinforcement-learning techniques to tune scheduler parameters, but these approaches have limitations: slow convergence, limited generalisation, and weak reasoning about why a workload behaves as it does.
SchedCP starts from the observation that large language models can reason semantically about workloads (expressed in plain language or structured summaries), propose new scheduling strategies, and generate eBPF code that is loaded into the kernel through the sched_ext interface. Thus, a custom scheduler (or modified policy) can be developed specifically for a given workload scenario, in a self-service, automated way.
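To make that concrete, here is roughly the shape of a minimal sched_ext scheduler, loosely modeled on the kernel's scx_simple sample: a global FIFO that keeps tasks on their previous CPU. Helper names have shifted across kernel releases, so treat this as an illustrative sketch, not code SchedCP itself emits.

```c
/* Minimal global-FIFO sched_ext scheduler in eBPF, loosely modeled on
 * the kernel's scx_simple sample. Illustrative only; helper names vary
 * across kernel versions. */
#include <scx/common.bpf.h>

char _license[] SEC("license") = "GPL";

/* Keep a waking task on its previous CPU when possible. */
s32 BPF_STRUCT_OPS(sketch_select_cpu, struct task_struct *p,
                   s32 prev_cpu, u64 wake_flags)
{
    return prev_cpu;
}

/* Place every runnable task on the shared global dispatch queue
 * with the default time slice. */
void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
{
    scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
}

SEC(".struct_ops.link")
struct sched_ext_ops sketch_ops = {
    .select_cpu = (void *)sketch_select_cpu,
    .enqueue    = (void *)sketch_enqueue,
    .name       = "sketch",
};
```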
SchedCP comprises two primary subsystems: a control-plane framework and an agent loop that interacts with it. The framework decouples “what to optimise” (reasoning) from “how to act” (execution) in order to preserve kernel stability while enabling powerful optimisations.
Here are the major components:

After years of debate and development, bcachefs, a modern copy-on-write filesystem once merged into the Linux kernel, is being removed from mainline. As of kernel 6.17, the in-tree code has been frozen as "externally maintained" ahead of full removal, and future use is expected via an out-of-tree DKMS module. This marks a turning point for the bcachefs project, raising questions about its stability, adoption, and relationship with the kernel development community.
In this article, we’ll explore the background of bcachefs, the sequence of events leading to its removal, the technical and community dynamics involved, and implications for users, distributions, and the filesystem’s future.
Before diving into the removal, let’s recap what bcachefs is and why it attracted attention.
Origin & goals: Developed by Kent Overstreet, bcachefs emerged from ideas in the earlier bcache project (a block-device caching layer). It aimed to build a full-featured, general-purpose filesystem combining performance, reliability, and modern features (snapshots, compression, encryption) in a coherent design.
Mainline inclusion: Bcachefs was merged into the mainline kernel in version 6.7 (released January 2024) after a lengthy review and incubation period.
“Experimental” classification: Even after being part of the kernel, bcachefs always carried disclaimers about its maturity and stability; it was not necessarily recommended for production use by all users.
Its presence in mainline gave distributions a path to ship it more casually, and users had easier access without building external modules—an important convenience for adoption.
The excision of bcachefs from the kernel was not sudden but the culmination of tension over development practices, patch acceptance timing, and upstream policy norms.
“Externally Maintained” status in 6.17
In kernel 6.17’s preparation, maintainers marked bcachefs as “externally maintained.” Though the code remained present, the change signified that upstream would no longer accept new patches or updates within the kernel tree.
This move allowed a transitional period. The code was “frozen” inside the tree to avoid breaking existing systems immediately, while preparation was made for future removal.

The Linux Mint team has officially unveiled Linux Mint 22.2, codenamed “Zara”, on September 4, 2025. As a Long-Term Support (LTS) release, Zara will receive updates through 2029, promising users stability, incremental improvements, and a comfortable desktop experience.
This version is not about flashy overhauls; rather, it’s about refinement — applying polish to existing features, smoothing rough edges, weaving in new conveniences (like fingerprint login), and improving compatibility with modern hardware. Below, we’ll delve into what’s new in Zara, what users should know before upgrading, and how it continues Mint’s philosophy of combining usability, reliability, and elegance.
Here’s a breakdown of key changes, refinements, and enhancements in Zara.
Base, Support & Kernel Stack
Ubuntu 24.04 (Noble) base: Zara continues to use Ubuntu 24.04 as its upstream base, ensuring broad package compatibility and long-term security support.
Kernel 6.14 (HWE): The default kernel for new installations is 6.14, bringing support for newer hardware.
However — for existing systems upgraded from Mint 22 or 22.1 — the older kernel (6.8 LTS) remains the default, because 6.14’s support window is shorter.
Zara is an LTS edition, with security updates and maintenance promised through 2029.
Zara introduces a first-party tool called Fingwit to manage fingerprint-based authentication. With compatible hardware and support via the libfprint framework, users can:
Enroll fingerprints
Use fingerprint login for the screensaver
Authenticate sudo commands
Launch administrative tools via pkexec using the fingerprint
In some cases, bypass password entry at login (unless home directory encryption or keyring constraints force password fallback)
It is important to note that fingerprint login on the actual login screen may be disabled or limited depending on encryption or keyring usage; in those cases, the system falls back to password entry.
The Sticky Notes app now sports rounded corners, improved Wayland compatibility, and a companion Android app named StyncyNotes (available via F-Droid) to sync notes across devices.

In early September 2025, Ubuntu users globally experienced disruptive delays in installing updates and new packages. What seemed like a fleeting outage—only about 36 minutes of server downtime—triggered a cascade of effects: mirrors lagging, queued requests overflowing, and installations hanging for days. The incident exposed how fragile parts of Ubuntu’s update infrastructure can be under sudden load.
In this article, we’ll walk through what happened, why the fallout was so severe, how Canonical responded, and lessons for users and infrastructure architects alike.
On September 5, 2025, Canonical’s archive servers—specifically archive.ubuntu.com and security.ubuntu.com—suffered an unplanned outage. The status page for Canonical showed the incident lasting roughly 36 minutes, after which operations were declared “resolved.”
However, that brief disruption set off a domino effect. Because the archive and security servers serve as the central hubs for Ubuntu’s package ecosystem, any downtime creates a massive backlog among mirror servers and client requests. Mirrors found themselves out of sync, processing queues piled up, and users attempting updates or new installs encountered failed downloads, hung operations, or “404 / package not found” errors.
On Ubuntu’s community forums, Canonical acknowledged that while the server outage was short, the upload / processing queue for security and repository updates had become “obscenely” backlogged. Users were urged to be patient, as there was no immediate workaround.
Throughout September 5–7, users continued reporting incomplete or failed updates, slow mirror responses, and installations freezing mid-process. Even newly provisioned systems faced broken repos due to inconsistent mirror states.
By September 8, the situation largely stabilized: mirrors caught up, package availability resumed, and normal update flows returned. But the extended period of degraded service had already left many users frustrated.
At first blush, 36 minutes seems trivial. Why did it have such prolonged consequences? Several factors contributed:
Centralized repository backplane
Ubuntu’s infrastructure is architected around Canonical’s central repositories (archive, security), which then propagate to mirrors worldwide. When the central system is unavailable, mirrors stop receiving updates and become stale.

Android has long been focused on running mobile apps, but in recent years, features aimed at developers and power users have begun pushing its boundaries. One exciting frontier: running full Linux graphical (GUI) applications on Android devices. What was once a novelty is now gradually becoming more viable, and recent developments point toward much smoother, GPU-accelerated Linux GUI experiences on Android.
In this article, we’ll trace how Linux apps have run on Android so far, explain the new architecture changes enabling GPU rendering, showcase early demonstrations, discuss remaining hurdles, and look at where this capability is headed.
Google’s Linux Terminal app is the core interface for running Linux environments on Android. It spins up a virtual machine (VM), often booting Debian or similar, and lets users enter a shell, install packages, run command-line tools, etc.
Initially, the app was limited purely to text- and terminal-based Linux programs; graphical apps were not meaningfully supported. More recently, Google introduced support for launching GUI Linux applications in experimental channels.
Limitations: Rendering & Performance
Even now, most GUI Linux apps on Android are rendered in software; that is, all drawing happens on the CPU (via a software renderer) rather than on the device’s GPU. This leads to sluggish UI, high CPU usage, more thermal stress, and shorter battery life.
Because of these limitations, running heavy GUI apps (graphics editors, games, desktop-level toolkits) has been more experimental than practical.
The big leap forward is moving from CPU rendering to GPU-accelerated rendering, letting the device’s graphics hardware do the heavy lifting.
Lavapipe (Current Baseline)
At present, the Linux VM uses Lavapipe (a Mesa software rasterizer) to interpret GPU API calls on the CPU. This works, but is inefficient, especially for complex GUIs or animations.
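One way to see which renderer the guest is actually using is to ask Vulkan for the physical device name: "llvmpipe" indicates Lavapipe's software path, while a real GPU name indicates hardware-backed rendering. A small sketch, assuming the Vulkan loader and headers are installed inside the VM:

```c
/* List Vulkan physical devices; "llvmpipe" means software rasterization. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "no Vulkan instance available\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(inst, &count, NULL);
    if (count > 8)
        count = 8;
    VkPhysicalDevice devs[8];
    vkEnumeratePhysicalDevices(inst, &count, devs);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devs[i], &props);
        printf("device %u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(inst, NULL);
    return 0; /* build with: gcc query.c -lvulkan */
}
```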
Introducing gfxstream
Google is planning to integrate gfxstream into the Linux Terminal app. gfxstream is a GPU virtualization / forwarding technology: rather than reinterpreting graphics calls in software, it forwards them from the guest (Linux VM) to the host’s GPU directly. This avoids CPU overhead and enables near-native rendering speeds.