
ML Solutions Engineer

TensorWave

Posted on Nov 18, 2025

Location: Las Vegas, Nevada

Employment Type: Full time

Location Type: On-site

Department: Engineering

ML Solutions Engineer (ROCm Portability)

At TensorWave, we’re leading the charge in AI compute, building a versatile cloud platform that powers the next generation of AI innovation and pushes the boundaries of what’s possible in intelligent computing.

About the Role:

We are seeking an exceptional ML Solutions Engineer who specializes in GPU portability and performance optimization. This is a senior-level role for someone who has significant experience with CUDA, ROCm, and kernel development, and is passionate about enabling workloads to run efficiently on AMD hardware.

As a technical expert, you will help migrate and optimize CUDA-based workloads to ROCm, working with both internal teams and third-party developers. You will play a critical role in advancing our ROCm enablement strategy and driving adoption across the ecosystem.

Key Responsibilities:

  • Partner with customers, internal engineering, and third-party developers to migrate CUDA workloads to ROCm (see the sketch after this list).

  • Profile, debug, and optimize GPU kernels for performance, scalability, and efficiency.

  • Contribute to ROCm enablement across open source ML frameworks and libraries.

  • Leverage tools such as Composable Kernel, HIP, PyTorch/XLA, and RCCL to enable and tune distributed training workloads.

  • Provide technical guidance on best practices for GPU portability, including kernel-level optimizations, mixed precision, and memory hierarchy usage.

  • Act as a technical liaison, translating customer requirements into actionable engineering work.

  • Create internal documentation, playbooks, and training material to scale knowledge across teams.

  • Represent TensorWave in the broader ROCm ecosystem through contributions, collaboration, and customer advocacy.
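
For a sense of the portability work in the first bullet, here is a minimal SAXPY sketch written against HIP (an illustrative example, not TensorWave code): the HIP runtime mirrors the CUDA runtime API, so straightforward CUDA code often ports with the hipify tools plus targeted cleanup, and compiles with hipcc for AMD GPUs.

  // Minimal SAXPY in HIP (illustrative sketch, not TensorWave code).
  // The port from CUDA is largely mechanical: cudaMalloc -> hipMalloc,
  // cudaMemcpy -> hipMemcpy, and so on.
  #include <hip/hip_runtime.h>
  #include <cstdio>
  #include <vector>

  __global__ void saxpy(int n, float a, const float* x, float* y) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;  // same indexing as CUDA
      if (i < n) y[i] = a * x[i] + y[i];
  }

  int main() {
      const int n = 1 << 20;
      std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

      float *dx, *dy;
      hipMalloc(&dx, n * sizeof(float));
      hipMalloc(&dy, n * sizeof(float));
      hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
      hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

      int threads = 256;
      saxpy<<<(n + threads - 1) / threads, threads>>>(n, 2.0f, dx, dy);

      hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
      printf("y[0] = %.1f\n", hy[0]);  // expect 4.0 (2*1 + 2)

      hipFree(dx);
      hipFree(dy);
      return 0;
  }

The mechanical port is usually the easy part; the performance work in the later bullets, profiling with rocprof and tuning for AMD's 64-wide wavefronts and LDS usage, is where most of the engineering effort goes.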

Qualifications:

Must-Have:

  • 5+ years of experience in GPU programming, ML infrastructure, or HPC roles.

  • Strong hands-on experience with CUDA, HIP, and ROCm.

  • Proficiency in kernel development (e.g., CUDA, HIP, Composable Kernel, Triton).

  • Deep knowledge of GPU performance profiling tools (Nsight, rocprof, perf, etc.).

  • Understanding of distributed ML workloads (e.g., PyTorch Distributed, MPI, RCCL).

  • Proven ability to work in customer-facing technical roles, including solution design and workload migration.

  • Strong programming skills in Python, C++, and GPU kernel languages.

Nice-to-Have:

  • Contributions to ROCm-enabled open source ML frameworks (PyTorch, Megatron, vLLM, SGLang, etc.).

  • Familiarity with compiler technology (LLVM, MLIR, XLA).

  • Experience with containerized environments and Kubernetes for GPU workloads.

  • Knowledge of performance modeling for multi-GPU and multi-node workloads.

  • Familiarity with AI/ML workload benchmarking and tuning at scale.

  • Solid foundation in networking, especially as it pertains to RDMA, RoCE, and InfiniBand.

What Success Looks Like:

  • Customers successfully migrate and optimize their CUDA workloads to ROCm, with measurable performance gains.

  • Strong collaboration between internal engineering and external developers leads to faster enablement of ROCm workloads.

  • Best practices, playbooks, and tooling are well-documented and continuously improved.

  • Make GPUs go Brrrrrrr

What We Bring:

  • Stock Options

  • 100% paid Medical, Dental, and Vision insurance

  • Life and Voluntary Supplemental Insurance

  • Short Term Disability Insurance

  • Flexible Spending Account

  • 401(k)

  • Flexible PTO

  • Paid Holidays

  • Parental Leave

  • Mental Health Benefits through Spring Health