Workshop

Building LLMs with NVIDIA NeMo and Securing Them with Confidential Computing

ABOUT SESSION

This two-part session covers:

Part 1 – Building with NeMo: Scalable, Sovereign, and Secure AI Workflows

Discover how NVIDIA NeMo and NIM microservices enable enterprises to build agentic and generative AI workflows on cloud, edge, or on-prem, while preserving full data sovereignty and control. Explore the tools that make this a reality – NeMo Safety, NIM, the Data Flywheel Blueprint, and more – through real-world enterprise use cases. Learn how to deploy secure, air-gapped systems that scale from prototype to production, followed by a hands-on demo.

Part 2 – Full-Spectrum Model IP Protection: How Confidential Computing Secures Federated Learning and Inference Against Theft and Unauthorized Access

Deploying AI models on untrusted infrastructure presents serious IP risks, and simply running code inside a TEE is not sufficient to protect your model. We show how a zero-trust, Confidential VM-based approach with hardened images protects model assets from deployment through execution. We will walk through building and securing a federated fine-tuning pipeline using NeMo and NVIDIA FLARE, mitigating boot-time tampering, checkpoint leaks, and attestation bypass. The same approach applies to general inference workloads.

Attendees will learn how to: 

  • Build security-hardened CVMs for model IP protection
  • Apply them to federated training and inference in untrusted environments
  • Enable secure collaboration without exposing proprietary models
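To make the attestation idea above concrete, here is a minimal, purely illustrative sketch of the pattern a Confidential VM deployment relies on: a key service releases the model-checkpoint decryption key only to a CVM whose reported launch measurement matches the expected hardened image. All names and values here are hypothetical; real CVM attestation uses hardware-signed evidence (e.g. SEV-SNP or TDX attestation reports verified against vendor certificates), not a plain hash comparison.

```python
import hashlib

# Hypothetical "golden" measurement of the hardened CVM image.
EXPECTED_MEASUREMENT = hashlib.sha256(b"hardened-cvm-image-v1").hexdigest()


def verify_measurement(reported: str) -> bool:
    """Accept the environment only if its reported launch measurement
    matches the expected hardened image (stand-in for report verification)."""
    return reported == EXPECTED_MEASUREMENT


def release_model_key(reported_measurement: str):
    """Release the checkpoint decryption key only to an attested CVM;
    a tampered image gets nothing, so model weights stay encrypted."""
    if verify_measurement(reported_measurement):
        return b"\x00" * 32  # placeholder key; a real KMS would return one
    return None


# An attested CVM receives a key; a tampered image is refused.
good = hashlib.sha256(b"hardened-cvm-image-v1").hexdigest()
bad = hashlib.sha256(b"tampered-image").hexdigest()
print(release_model_key(good) is not None)  # True
print(release_model_key(bad))               # None
```

The same gate applies to federated fine-tuning: each participating site proves its measurement before it may pull encrypted checkpoints, which is how checkpoint leaks and attestation bypass are addressed in the pipeline described above.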

Shashank Verma

Data Scientist & Developer Advocate, NVIDIA
