
Ruyi Ding

Final Year PhD Student, Northeastern University

Research Interests: AI Security, Hardware Security, Side-Channel Analysis

In Chinese, 'Ruyi' (如意) conveys the meaning of 'fulfilling one's aspirations,' representing the pursuit of a prosperous future and steadfast resolve.

About Me

I am Ruyi Ding, a fifth-year Ph.D. candidate at Northeastern University working with Prof. Yunsi Fei in the NUEESS group. My research focuses on AI security and hardware security, specializing in neural network robustness, privacy preservation, and side-channel analysis.
I will join the ECE Department at Louisiana State University as an assistant professor in Fall 2025.
I am actively looking for Ph.D. students. Feel free to send me an email if you are interested in AI security and hardware security!

News

  • [2025-05] Invited to serve as a TPC member for ICCAD 2025.
  • [2025-04] Received the HOST 2025 Travel Grant. Thank you, HOST!
  • [2025-04] Named a DAC Young Fellow. Thank you, DAC!
  • [2025-03] Received the Northeastern 2025 Outstanding PhD Student Research Award. Thank you, Northeastern!
  • [2025-02] Received the Northeastern PhD Network Travel Award. Thank you, Northeastern!
  • [2025-02] One paper accepted at HOST 2025.
  • [2025-02] One paper accepted at DAC 2025.
  • [2025-01] I was awarded the Internet Society Fellowship! Thank you, NDSS 2025!
  • [2024-10] One paper accepted at NDSS 2025!
  • [2024-09] One paper accepted at NeurIPS 2024!
  • [2024-07] One paper accepted at ECCV 2024!

Works

DAC 2025

Graph in the Vault: Protecting Edge GNN Inference with TEE

GNNVault introduces the first secure Graph Neural Network (GNN) deployment strategy that uses a Trusted Execution Environment (TEE) to protect model IP and data privacy on edge devices. By partitioning the model before training and employing a private GNN rectifier, GNNVault effectively safeguards GNN inference against link-stealing attacks.
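
As a rough illustration of the partition idea, here is a minimal sketch in plain PyTorch (module names are hypothetical, not the paper's implementation): a structure-agnostic backbone runs in the untrusted world, while a private rectifier combines its output with the sensitive adjacency inside the enclave, so edge structure never leaves the TEE.

    import torch
    import torch.nn as nn

    class PublicBackbone(nn.Module):        # normal world: features only
        def __init__(self, d_in, d_hid):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                     nn.Linear(d_hid, d_hid))
        def forward(self, x):               # never touches the graph structure
            return self.mlp(x)

    class PrivateRectifier(nn.Module):      # inside the TEE: sees the adjacency
        def __init__(self, d_hid, n_classes):
            super().__init__()
            self.lin = nn.Linear(d_hid, n_classes)
        def forward(self, h, adj):
            deg = adj.sum(-1, keepdim=True).clamp(min=1)
            return self.lin(adj @ h / deg)  # mean aggregation over neighbors

    x = torch.randn(5, 16)                  # 5 nodes, 16 features
    adj = (torch.rand(5, 5) > 0.5).float()  # private edge structure
    h = PublicBackbone(16, 32)(x)           # untrusted computation
    out = PrivateRectifier(32, 3)(h, adj)   # trusted computation, shape (5, 3)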

Learn More →
NDSS 2025

Probe-Me-Not: Protecting Pre-trained Encoders from Malicious Probing

EncoderLock is a novel method that safeguards pre-trained encoders from malicious probing by restricting performance on prohibited domains while preserving functionality in authorized ones. Its domain-aware techniques and self-challenging training ensure robust protection, advancing the development of responsible AI.
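
A minimal sketch of the underlying objective, with hypothetical names and a toy setup (the paper's self-challenging training alternates such a step with updates to the probe itself): the encoder descends the loss on the authorized domain while ascending the probe's loss on the prohibited one.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def encoder_lock_step(encoder, auth_head, probe,
                          auth_batch, prohib_batch, lam=1.0):
        """One encoder update: stay accurate on the authorized task while
        pushing the prohibited-domain probe toward failure."""
        xa, ya = auth_batch
        xp, yp = prohib_batch
        loss_auth = F.cross_entropy(auth_head(encoder(xa)), ya)  # preserve
        loss_prob = F.cross_entropy(probe(encoder(xp)), yp)      # suppress
        return loss_auth - lam * loss_prob

    enc, head, probe = nn.Linear(16, 8), nn.Linear(8, 4), nn.Linear(8, 4)
    auth = (torch.randn(32, 16), torch.randint(0, 4, (32,)))
    prohib = (torch.randn(32, 16), torch.randint(0, 4, (32,)))
    encoder_lock_step(enc, head, probe, auth, prohib).backward()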

Learn More →
NeurIPS 2024

GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction

GraphCroc enhances graph autoencoders (GAEs) with cross-correlation, improving representation of features like islands and directional edges in multi-graph scenarios. It ensures robust reconstruction and reduces bias, outperforming self-correlation-based GAEs.
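
The core contrast can be shown in a few lines of PyTorch (toy decoder, hypothetical names): a self-correlation decoder sigmoid(Z Zᵀ) is symmetric by construction, while decoding from two distinct embedding sets can express asymmetric structure such as directed edges.

    import torch
    import torch.nn as nn

    class CrossCorrDecoder(nn.Module):
        def __init__(self, d_in, d_emb):
            super().__init__()
            self.to_q = nn.Linear(d_in, d_emb)   # "row" embeddings
            self.to_k = nn.Linear(d_in, d_emb)   # "column" embeddings
        def forward(self, h):
            return torch.sigmoid(self.to_q(h) @ self.to_k(h).T)

    h = torch.randn(6, 32)                       # node representations
    a_self = torch.sigmoid(h @ h.T)              # self-correlation
    a_cross = CrossCorrDecoder(32, 16)(h)        # cross-correlation
    print(torch.allclose(a_self, a_self.T))      # True: always symmetric
    print(torch.allclose(a_cross, a_cross.T))    # False in general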

Learn More →
ECCV 2024

Non-transferable Pruning

Non-Transferable Pruning (NTP) safeguards pretrained DNNs by controlling their transferability to unauthorized domains via selective pruning. Using ADMM and Fisher-space regularization, NTP jointly optimizes model sparsity and a non-transferable learning loss, measured by SLC-AUC. Experiments show NTP outperforms state-of-the-art methods, rendering models unsuitable for unauthorized transfer learning in both supervised and self-supervised settings.
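
A heavily simplified sketch of the objective (hypothetical names; the paper's full ADMM formulation and Fisher-space regularizer are reduced here to a combined loss plus a hard magnitude projection, the role played by ADMM's auxiliary update in such formulations):

    import torch
    import torch.nn.functional as F

    def ntp_loss(model, src_batch, tgt_batch, lam=1.0):
        xs, ys = src_batch
        xt, yt = tgt_batch
        keep = F.cross_entropy(model(xs), ys)   # stay accurate on the source
        leak = F.cross_entropy(model(xt), yt)   # usefulness to a transferer
        return keep - lam * leak                # suppress target-domain utility

    def project_sparsity(weight, sparsity=0.9):
        """Hard projection onto the sparsity constraint: keep only the
        largest-magnitude weights, zero the rest."""
        k = max(1, int(weight.numel() * (1 - sparsity)))
        thresh = weight.abs().flatten().topk(k).values.min()
        return weight * (weight.abs() >= thresh)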

Learn More →
ICCV 2023

VertexSerum: Poisoning Graph Neural Networks for Link Inference

VertexSerum enhances graph link stealing by amplifying connectivity leakage, using an attention mechanism for accurate node adjacency inference. It outperforms state-of-the-art attacks, boosting AUC scores by 9.8% across datasets and GNN structures. Effective in black-box and online settings, VertexSerum demonstrates real-world applicability in exploiting GNN vulnerabilities for link privacy breaches.
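
A toy version of the inference head (hypothetical layout; the actual attack also poisons the training graph to amplify leakage): self-attention over the victim model's posteriors for a node pair feeds a binary linked/not-linked classifier.

    import torch
    import torch.nn as nn

    class LinkInferenceHead(nn.Module):
        def __init__(self, n_classes, d_model=32):
            super().__init__()
            self.embed = nn.Linear(n_classes, d_model)
            self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                              batch_first=True)
            self.cls = nn.Linear(2 * d_model, 1)
        def forward(self, post_u, post_v):           # (B, n_classes) each
            pair = self.embed(torch.stack([post_u, post_v], 1))
            out, _ = self.attn(pair, pair, pair)     # mix the two posteriors
            return self.cls(out.flatten(1))          # logit: edge or not

    head = LinkInferenceHead(n_classes=7)
    p_u, p_v = torch.softmax(torch.randn(2, 8, 7), -1)
    print(head(p_u, p_v).shape)                      # torch.Size([8, 1])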

Learn More →
AsiaCCS 2023

EMShepherd: Detecting Adversarial Samples via Side-channel Leakage

EMShepherd detects adversarial attacks by capturing electromagnetic (EM) traces of model execution, leveraging differences in EM footprints caused by adversarial inputs. Using benign samples and their EM traces, it trains classifiers and anomaly detectors, achieving a 100% detection rate for most adversarial types on FPGA accelerators. This air-gapped approach matches state-of-the-art white-box detectors without requiring internal model knowledge.
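
A minimal sketch of the detection idea (toy autoencoder, hypothetical names, not the paper's pipeline): fit a reconstruction model to EM traces of benign executions and flag traces whose reconstruction error exceeds a threshold calibrated on benign data only.

    import torch
    import torch.nn as nn

    class TraceAE(nn.Module):
        def __init__(self, trace_len=256, d_lat=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(trace_len, 64), nn.ReLU(),
                                     nn.Linear(64, d_lat))
            self.dec = nn.Sequential(nn.Linear(d_lat, 64), nn.ReLU(),
                                     nn.Linear(64, trace_len))
        def forward(self, t):
            return self.dec(self.enc(t))

    def is_adversarial(ae, traces, threshold):
        err = (ae(traces) - traces).pow(2).mean(-1)  # reconstruction error
        return err > threshold   # threshold calibrated on benign traces only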

Learn More →

Research Interests

  • AI Security: Exploring machine learning security and privacy issues during training, inference, and deployment.
  • Hardware Security: Security and privacy of embedded DNNs.
  • Side-Channel Analysis: Power/EM side-channel analysis and microarchitectural SCA.
  • Data Analysis: Traffic data analysis and event detection.

Contact

Feel free to reach out to me at ding.ruy[at]northeastern[dot]edu or connect with me on LinkedIn.