NVIDIA is hiring an AI/ML Intern focused on applying machine learning to wireless signal processing. This article outlines the role's objectives, core responsibilities, and candidate requirements, organizing the provided details so prospective applicants can understand expectations, key tasks, required technical skills, and collaboration points within NVIDIA and with other business units.
Role overview and objectives
The AI/ML Intern will develop and optimize AI/ML modules for functional blocks in wireless signal processing. The role emphasizes selecting ML architectures of suitable type and complexity for radio access network (RAN) functions, benchmarking over-the-air (OTA) performance gains against compute needs, and iteratively improving model performance across platforms.
Core responsibilities
- Develop and optimize AI/ML modules for functional blocks in wireless signal processing.
- Conduct a literature survey to understand prior art on AI/ML for RAN.
- Analyze candidate ML architectures and identify the right architecture and complexity for each RAN functional block of interest.
- Collaborate with cross-functional teams, including DevTech and other NVIDIA business units, to optimize OTA performance and compute complexity.
- Benchmark the OTA performance improvements of AI models against their compute needs on different platforms.
- Iteratively train, test, and modify model architectures to improve performance.
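The train/test/modify loop described above can be illustrated with a minimal NumPy sketch: a linear equalizer for a toy multipath channel, trained by gradient descent on a training split and evaluated by bit error rate (BER) on held-out symbols. All parameters here (channel taps, noise level, equalizer length) are illustrative assumptions, not details from the posting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: BPSK symbols through a short FIR multipath channel plus noise.
channel = np.array([1.0, 0.4, -0.2])              # hypothetical channel taps
symbols = rng.choice([-1.0, 1.0], size=2000)       # BPSK symbol stream
received = np.convolve(symbols, channel)[: len(symbols)]
received += 0.05 * rng.standard_normal(len(symbols))

# Tapped-delay-line features for a linear equalizer predicting the current symbol.
taps = 5
X = np.array([received[i : i + taps] for i in range(len(received) - taps)])
y = symbols[: len(X)]

# Iterate: train by gradient descent on MSE over a training split...
split = 1500
w = np.zeros(taps)
for _ in range(200):
    grad = X[:split].T @ (X[:split] @ w - y[:split]) / split
    w -= 0.05 * grad

# ...then test on held-out data; in practice one would modify the
# architecture (more taps, a nonlinear model) and repeat if BER is too high.
ber = np.mean(np.sign(X[split:] @ w) != y[split:])
```

In a real RAN workload the "model" would be a neural network and the metric an OTA benchmark, but the loop structure (train, evaluate on held-out data, revise the architecture) is the same.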
Candidate requirements and technical skills
Applicants must be full-time PhD students pursuing research in AI and wireless domains, available to intern for at least six months starting in the last week of January 2026.
- Thorough understanding of wireless Layer 1/Layer 2 functions and the associated algorithms.
- Excellent grasp of AI and ML concepts; familiarity with Transformers, CNNs, and other ML architectures.
- Hands-on experience simulating signal processing algorithms in MATLAB and Python.
- Programming skills in C/C++.
- Experience analyzing problems, identifying model architectures, and developing, training, and optimizing models (preferably in signal processing domains).
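As a concrete example of the kind of signal processing simulation the skills list refers to, the following sketch estimates the bit error rate of Gray-coded QPSK over an AWGN channel by Monte Carlo in NumPy. The setup is a standard textbook exercise, not anything specific to this role.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpsk_ber(ebn0_db, n_bits=200_000):
    """Monte-Carlo BER of Gray-coded QPSK over AWGN at a given Eb/N0 (dB)."""
    bits = rng.integers(0, 2, size=n_bits)
    # Gray mapping: bit pairs -> unit-energy QPSK symbols.
    i = 1 - 2 * bits[0::2]
    q = 1 - 2 * bits[1::2]
    syms = (i + 1j * q) / np.sqrt(2)
    # With Es = 1 and 2 bits/symbol, Eb = 1/2, so N0 = (1/2) / (Eb/N0).
    ebn0 = 10 ** (ebn0_db / 10)
    n0 = 0.5 / ebn0
    noise = np.sqrt(n0 / 2) * (
        rng.standard_normal(syms.shape) + 1j * rng.standard_normal(syms.shape)
    )
    r = syms + noise
    # Per-quadrature hard decisions recover the bit pairs.
    bits_hat = np.empty_like(bits)
    bits_hat[0::2] = (r.real < 0).astype(int)
    bits_hat[1::2] = (r.imag < 0).astype(int)
    return np.mean(bits_hat != bits)

# Theory predicts BER = Q(sqrt(2 * Eb/N0)); the simulated curve should track it.
ber_6db = qpsk_ber(6.0)
```

The equivalent study in MATLAB would typically use `awgn` and `qammod`/`qamdemod`; the point of the exercise is validating a simulated curve against the closed-form expression.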
This summary presents the role, responsibilities, and candidate requirements as provided. It highlights the focus on AI/ML for wireless signal processing, the need for literature survey and benchmarking, collaboration across NVIDIA teams, and the specific technical and academic qualifications for an internship starting in the last week of January 2026.