Hand-object interaction (HOI) is fundamental to how humans express intent. Existing HOI generation research is largely confined to fixed grasping patterns, where control is tied to physical priors such as force closure or to generic intent instructions, even when these are expressed through elaborate language. Such overly general conditioning imposes a strong inductive bias toward stable grasps and thus fails to capture the diversity of daily HOI. To address these limitations, we introduce free-form HOI generation, a task that aims to produce controllable, diverse, and physically plausible HOI conditioned on fine-grained intent, extending HOI from grasping to free-form interactions such as pushing, poking, and rotating. To support this task, we construct WildO2, an in-the-wild, diverse 3D HOI dataset derived from internet videos. It contains 4.4k unique interactions spanning 92 intents and 610 object categories, each with detailed semantic annotations. Building on this dataset, we propose TOUCH, a three-stage framework centered on a multi-level diffusion model that enables fine-grained semantic control to generate versatile hand poses beyond grasping priors. The generation process leverages explicit contact modeling for conditioning and is subsequently refined with contact-consistency and physical constraints to ensure realism. Comprehensive experiments demonstrate our method's ability to generate controllable, diverse, and physically plausible hand interactions representative of daily activities.
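For intuition, the sketch below illustrates the kind of contact-consistency and penetration terms such a refinement stage could use; it is not the authors' implementation, and all names (interaction_losses, contact_mask, etc.) are our own assumptions.

# Minimal sketch (not the paper's released code) of contact-consistency and
# penetration penalties for refining a predicted hand pose against an object.
import torch

def interaction_losses(hand_verts, obj_verts, obj_normals, contact_mask):
    """hand_verts: (H, 3) predicted hand vertices; obj_verts/obj_normals: (O, 3)
    object surface samples with outward normals; contact_mask: (H,) float in
    {0, 1}, marking hand vertices the condition says should touch the object."""
    d = torch.cdist(hand_verts, obj_verts)              # (H, O) pairwise distances
    nearest_d, idx = d.min(dim=1)                       # unsigned distance to surface
    # Contact consistency: conditioned contact vertices should lie on the surface.
    l_contact = (contact_mask * nearest_d).sum() / contact_mask.sum().clamp(min=1.0)
    # Penetration: sign the offset along the nearest point's outward normal;
    # a negative projection means the hand vertex sits inside the object.
    # (A real implementation would query a signed distance field instead.)
    offset = hand_verts - obj_verts[idx]                # (H, 3)
    signed = (offset * obj_normals[idx]).sum(dim=-1)    # (H,)
    l_pen = torch.relu(-signed).mean()
    return l_contact, l_pen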
Overview of our three-stage framework TOUCH for generating hand-object interactions from multi-level text prompts and object meshes. CIM stands for the Condition Injection Module.
The proposed data generation pipeline for WildO2. The process begins with O2HOI (Object-only to Hand-Object Interaction) frame-pair extraction from in-the-wild videos, followed by a three-stage pipeline of 3D reconstruction, camera alignment, and hand-object refinement to produce high-fidelity interaction data.
Dataset Analysis. (a) Breakdown of WildO2 reconstruction outcomes. (b) An illustration of the interplay among the most frequent object categories, interaction types, and hand contact regions. Object and action definitions are adapted and refined from Something-Something V2. Contact regions are derived from our dataset analysis. (c) Segmentation of the hand into 17 parts and their contact frequency distribution in the dataset, along with a contact heatmap of the entire hand.
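A hypothetical sketch of how the per-part contact frequencies in (c) could be aggregated, assuming each hand vertex is assigned to one of the 17 parts and each sample carries a binary per-vertex contact label; all names here are ours, not the dataset's API.

# Aggregate per-part contact frequency over a set of interaction samples.
import numpy as np

def part_contact_frequency(contacts, part_ids, num_parts=17):
    """contacts: (N, V) binary contact labels over N samples and V hand vertices;
    part_ids: (V,) index in [0, num_parts) of the part each vertex belongs to."""
    freq = np.zeros(num_parts)
    for p in range(num_parts):
        verts = part_ids == p
        # A part counts as contacted in a sample if any of its vertices touch.
        freq[p] = contacts[:, verts].any(axis=1).mean()
    return freq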
Comparisons of different methods on samples from the WildO2 test set. Each sample takes SSCs and an object mesh as input and outputs an interactive hand pose.
Visualized generations under different control conditions for the same object.
@misc{han2025touchtextguidedcontrollablegeneration,
title={TOUCH: Text-guided Controllable Generation of Free-Form Hand-Object Interactions},
author={Guangyi Han and Wei Zhai and Yuhang Yang and Yang Cao and Zheng-Jun Zha},
year={2025},
eprint={2510.14874},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2510.14874},
}