KNIHOVNA 2025

23.11.25

Careless Whisper:
Exploiting Silent Delivery Receipts to Monitor
Users on Mobile Instant Messengers

With over 3 billion users globally, mobile instant messaging apps have become indispensable for both personal and professional communication.

PAPERS

9.11.25

Death by a Thousand Prompts: Open Model Vulnerability Analysis

Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and security postures of eight open-weight large language models (LLMs) to identify vulnerabilities that may impact subsequent fine-tuning and deployment.

PAPERS

9.11.25

InputSnatch: Stealing Input in LLM Services via Timing Side-Channel Attacks

Large language models (LLMs) possess extensive knowledge and question-answering capabilities and have been widely deployed in privacy-sensitive domains such as finance and medical consultation. During LLM inference, cache-sharing methods are commonly employed to improve efficiency by reusing cached states or responses for identical or similar requests.
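The cache-sharing leak described above can be modeled in a few lines. A minimal sketch with a hypothetical service class and illustrative timing constants — a real attack measures network latency rather than a returned number:

```python
# Toy model of a cache-timing probe: a serving stack that reuses cached
# states answers repeated prompts faster, so response latency leaks
# whether someone else already sent that prompt.
class CachingLLMService:
    """First request computes (slow); repeated requests hit cache (fast)."""
    SLOW, FAST = 0.80, 0.05  # seconds, illustrative values

    def __init__(self):
        self._cache = set()

    def query(self, prompt):
        if prompt in self._cache:
            return self.FAST
        self._cache.add(prompt)
        return self.SLOW

def probe(service, candidate, threshold=0.20):
    """Flag a candidate prompt as 'previously asked' if it returns fast."""
    return service.query(candidate) < threshold

service = CachingLLMService()
service.query("symptoms of diabetes")          # victim's earlier request
print(probe(service, "symptoms of diabetes"))  # True  -- cache hit leaks it
print(probe(service, "symptoms of gout"))      # False -- cold cache
```

In practice the attacker repeats the probe many times and compares latency distributions; a single threshold is only the idea in miniature.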

PAPERS

9.11.25

What Was Your Prompt? A Remote Keylogging Attack on AI Assistants

AI assistants are becoming an integral part of society, used for asking advice or help in personal and confidential issues. In this paper, we unveil a novel side-channel that can be used to read encrypted responses from AI Assistants over the web: the token-length side-channel.
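A minimal sketch of the token-length side-channel, assuming a stream cipher that preserves plaintext length and a fixed per-packet framing overhead (both values illustrative):

```python
# With a stream cipher, ciphertext length equals plaintext length plus a
# fixed header, so the per-packet sizes of a token-by-token streamed
# response reveal each token's length to a passive network observer.
HEADER_OVERHEAD = 5  # assumed constant framing overhead per packet

def token_lengths(packet_sizes, overhead=HEADER_OVERHEAD):
    """Recover the token-length sequence from observed packet sizes."""
    return [size - overhead for size in packet_sizes]

# Observing packets of 7, 10 and 6 bytes implies tokens of length 2, 5
# and 1 -- per the paper, enough signal to train a model that
# reconstructs likely response text.
print(token_lengths([7, 10, 6]))  # [2, 5, 1]
```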

PAPERS

9.11.25

WHISPER LEAK: A SIDE-CHANNEL ATTACK ON LARGE
LANGUAGE MODELS

Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses.

PAPERS

16.10.25

RMPocalypse: How a Catch-22 Breaks AMD SEV-SNP

AMD SEV-SNP offers confidential computing in the form of confidential VMs, such that the untrusted hypervisor cannot tamper with their confidentiality and integrity.

PAPERS

16.10.25

Pixnapping: Bringing Pixel Stealing out of the Stone Age

Pixel stealing attacks enable malicious websites to leak sensitive content displayed in victim websites.

PAPERS

3.10.25

WireTap

Intel's Software Guard eXtensions (SGX) is a hardware feature in Intel servers that aims to offer strong integrity and confidentiality properties for software, even in the presence of root-level attackers.

PAPERS

3.10.25

Battering RAM

With Battering RAM, we show that even the latest defenses on Intel and AMD cloud processors can be bypassed. We built a simple, $50 interposer that sits quietly in the memory path, behaving transparently during startup and passing all trust checks.

PAPERS

21.9.25

VMSCAPE: Exposing and Exploiting Incomplete Branch Predictor Isolation in Cloud Environments

Virtualization is a cornerstone of modern cloud infrastructures, providing the required isolation to customers. This isolation, however, is threatened by speculative execution attacks, which CPU vendors attempt to mitigate by extending the isolation to the branch predictor state.

PAPERS

21.9.25

Phoenix: Rowhammer Attacks on DDR5 with Self-Correcting Synchronization

DDR5 has shown increased resistance to Rowhammer attacks in production settings. Surprisingly, DDR5 achieves this without additional refresh management commands, pointing to the deployment of more sophisticated in-DRAM Target Row Refresh (TRR) mechanisms.

PAPERS

17.9.25

EMBER2024 - A Benchmark Dataset for Holistic Evaluation of Malware Classifiers

A lack of accessible data has historically restricted malware analysis research, and practitioners have relied heavily on datasets provided by industry sources to advance.

PAPERS

17.9.25

Securing DRAM at Scale: ARFM-Driven Row
Hammer Defense with Unveiling the Threat of Short
tRC Patterns

Since the disclosure of the row hammer (RH) attack phenomenon, a significant threat to system security, in 2014, it has been an active research topic in both industry and academia.

PAPERS

17.9.25

ECC.fail: Mounting Rowhammer Attacks on DDR4 Servers with ECC Memory

Rowhammer is a hardware vulnerability present in nearly all computer memory, allowing attackers to modify bits in memory without directly accessing them.

PAPERS

17.9.25

Rowhammer-Based Trojan Injection:
One Bit Flip Is Sufficient for Backdooring DNNs

While conventional backdoor attacks on deep neural networks (DNNs) assume the attacker can manipulate the training data or process, recent research introduces a more practical threat model by injecting backdoors during the inference stage.

PAPERS

31.8.25

Design Patterns for Securing LLM Agents against Prompt Injections

Large Language Models (LLMs) are becoming integral components of complex software systems, where they serve as intelligent agents that can interpret natural language instructions, make plans, and execute actions through external tools and APIs.

PAPERS

27.8.25

Sni5Gect: A Practical Approach
to Inject aNRchy into 5G NR

PAPERS

20.7.25

Matanbuchus 3.0

From a Teams Call to a Ransomware Threat: Matanbuchus 3.0 MaaS Levels Up

PAPERS

12.7.25

GPUHammer: Rowhammer Attacks on GPU Memories are Practical

Rowhammer is a read-disturbance vulnerability in modern DRAM that causes bit flips, compromising security and reliability. While extensively studied on Intel and AMD CPUs with DDR and LPDDR memories, its impact on GPUs using GDDR memories, critical for emerging machine learning applications, remains unexplored.

PAPERS

12.7.25

TapTrap: Animation-Driven Tapjacking on Android

Users interact with mobile devices under the assumption that the graphical user interface (GUI) accurately reflects their actions, a trust fundamental to the user experience.

PAPERS

24.6.25

LLMs unlock new paths to monetizing exploits

We argue that large language models (LLMs) will soon alter the economics of cyberattacks. Instead of attacking the most commonly used software and monetizing exploits by targeting the lowest common denominator among victims, LLMs enable adversaries to launch tailored attacks on a user-by-user basis.

PAPERS

AI

24.6.25

Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails

Large language model (LLM) guardrail systems are designed to protect against prompt injection and jailbreak attacks.

PAPERS

AI

15.6.25

SmartAttack: Air-Gap Attack via Smartwatches

Air-gapped systems are considered highly secure against data leaks due to their physical isolation from external networks.

PAPERS

21.4.25

KernJC: Automated Vulnerable Environment Generation for Linux Kernel Vulnerabilities

Linux kernel vulnerability reproduction is a critical task in system security. To reproduce a kernel vulnerability, both the vulnerable environment and a Proof of Concept (PoC) program are needed. Most existing research focuses on PoC generation, while the construction of the environment is overlooked. However, establishing an effective vulnerable environment to trigger a vulnerability is challenging.

PAPERS

Vulnerability

21.4.25

CDN Cannon: Exploiting CDN Back-to-Origin
Strategies for Amplification Attacks

Content Delivery Networks (CDNs) provide high availability, speed up content delivery, and safeguard the websites they host against DDoS attacks. To achieve these objectives, CDNs employ several back-to-origin strategies that proactively pre-pull resources and modify HTTP requests and responses.

PAPERS

ATTACK

21.4.25

ImageC2Gen: Customizing GenAI models to Conceal Commands in
Images for Command and Control (C2) Attacks

Command and Control (C2) attacks involve establishing an encrypted connection between victim machines and C2 servers. Using image-based C2 makes network security and forensic analysis more challenging, even when firewalls have decryption capabilities enabled.

PAPERS

AI

13.4.25

We Have a Package for You! A Comprehensive Analysis of Package Hallucinations
by Code-Generating LLMs

The reliance of popular programming languages such as Python and JavaScript on centralized package repositories and open-source software, combined with the emergence of code-generating Large Language Models (LLMs), has created a new type of threat to the software supply chain: package hallucinations.
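A hedged sketch of the obvious defense: vet every LLM-suggested dependency against the package registry before installing. Here `KNOWN_PACKAGES` stands in for a real index query (e.g. against the PyPI simple index), and the package names are purely illustrative:

```python
# Split LLM-suggested dependencies into names that exist in the registry
# and names that were hallucinated -- the latter are exactly what a
# typosquatting attacker would rush to register.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}  # stand-in for a registry index

def vet_suggestions(suggested, registry=KNOWN_PACKAGES):
    """Return (real, hallucinated) package-name lists."""
    real = [p for p in suggested if p in registry]
    hallucinated = [p for p in suggested if p not in registry]
    return real, hallucinated

real, fake = vet_suggestions(["requests", "fastjsonx-utils"])
print(real)  # ['requests']
print(fake)  # ['fastjsonx-utils'] -- a typosquat target if ever published
```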

PAPERS

AI

6.4.25

Fast Flux

Many networks have a gap in their defenses for detecting and blocking a malicious technique known as “fast flux.”
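The fast-flux signal can be sketched as a simple heuristic: many distinct resolved IPs combined with very short DNS TTLs over an observation window. Thresholds and records below are illustrative, not operational guidance:

```python
# Fast-flux domains rotate through large pools of compromised hosts via
# very short DNS TTLs, so a domain resolving to many distinct A records
# with low TTLs in a short window is suspicious.
def looks_like_fast_flux(records, min_unique_ips=10, max_avg_ttl=300):
    """records: list of (ip, ttl) pairs observed for one domain."""
    if not records:
        return False
    unique_ips = {ip for ip, _ in records}
    avg_ttl = sum(ttl for _, ttl in records) / len(records)
    return len(unique_ips) >= min_unique_ips and avg_ttl <= max_avg_ttl

benign = [("93.184.216.34", 86400)] * 20             # one stable IP, long TTL
fluxed = [(f"10.0.{i}.1", 60) for i in range(25)]    # 25 IPs, 60 s TTL
print(looks_like_fast_flux(benign))  # False
print(looks_like_fast_flux(fluxed))  # True
```

Real detectors also weigh ASN diversity and geographic spread of the returned IPs; this captures only the core intuition.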

PAPERS

MALWARE

24.2.25

SysBumps: Exploiting Speculative Execution in System Calls for
Breaking KASLR in macOS for Apple Silicon

Apple silicon is the proprietary ARM-based processor family that powers mainstream Apple devices. The move to this proprietary architecture presents unique challenges in addressing security issues, requiring substantial research into the security of Apple silicon-based systems. In this paper, we study the security of KASLR, the randomization-based kernel hardening technique, on state-of-the-art macOS systems equipped with Apple silicon processors.

PAPERS

28.1.25

Uncovering New Classes of Kernel Vulnerabilities

PAPERS

25.1.25

FLOP: Breaking the Apple M3 CPU via False Load Output Predictions

To bridge the ever-increasing gap between the fast execution speed of modern processors and the long latency of memory accesses, CPU vendors continue to introduce newer and more advanced optimizations. While these optimizations improve performance, research has repeatedly demonstrated that they may also have an adverse impact on security.

PAPERS

25.1.25

SLAP: Data Speculation Attacks via Load Address Prediction on Apple Silicon

Since Spectre’s initial disclosure in 2018, the difficulty of mitigating speculative execution attacks completely in hardware has led to the proliferation of several new variants and attack surfaces in the past six years. Most of the progeny build on top of the original Spectre attack’s key insight, namely that CPUs can execute the wrong control flow transiently and disclose secrets through side-channel traces when attempting to alleviate control hazards, such as conditional or indirect branches and return statements.

PAPERS