Job Title


Software Development Engineer - SGLang and Inference Stack


Company : AMD


Location : Vancouver, Metro Vancouver Regional District


Created : 2026-04-22


Job Type : Full Time


Job Description

Overview

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.

The Role

As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your work will be instrumental in enhancing GPU kernel performance, accelerating deep learning models, and enabling RL training and SOTA LLM and multimodal inference at scale across multi-GPU and multi-node systems. You will collaborate with internal GPU software teams and engage with open-source communities to integrate and optimize cutting-edge compiler technologies, driving upstream contributions that benefit AMD's AI software ecosystem.

The Person

A skilled engineer with strong technical and analytical expertise in GPGPU C++, Triton, TileLang, or DSL development within Linux environments. The ideal candidate will thrive in both collaborative team settings and independent work, with the ability to define goals, manage development efforts, and deliver high-quality solutions. Strong problem-solving skills, a proactive approach, and a keen understanding of software engineering best practices are essential.

Key Responsibilities

- Optimize Deep Learning Frameworks: Enhance the performance of frameworks such as TensorFlow, PyTorch, and SGLang on AMD GPUs via upstream contributions to open-source repositories.
- Develop and Optimize Deep Learning Models: Profile, analyze, modify, and tune large-scale training and inference models for optimal performance on AMD hardware, including day-0 support for many SOTA models (DeepSeek 3.2, Kimi K2.5, etc.).
- GPU Kernel Development: Design, implement, and optimize high-performance GPU kernels using HIP, Triton, TileLang, or other DSLs for AI operator efficiency.
- Collaborate with GPU Library and Compiler Teams: Work closely with internal compiler and GPU math library teams to integrate, optimize, and align kernel-level optimizations with full-stack performance goals; initiate and assist with codegen optimizations at different levels.
- Contribute to SGLang Development: Support optimization, feature development, and scaling of the SGLang framework across AMD GPU platforms for LLM serving, multimodal serving, and RL training.
- Distributed System Optimization: Tune and scale performance across both multi-GPU (scale-up) and multi-node (scale-out) environments, including inference parallelism, prefill-decode disaggregation, WideEP, and collective communication strategies.
- Graph Compiler Integration: Integrate and optimize runtime execution through graph compilers such as XLA, TorchDynamo, or custom pipelines.
- Open-Source Collaboration: Partner with external maintainers to understand framework needs, propose optimizations, and upstream contributions effectively.
- Apply Engineering Best Practices: Leverage modern software engineering practices in debugging, profiling, test-driven development, and CI/CD integration.

Preferred Experience

- Strong Programming Skills: Proficient in C++ and/or Python (PyTorch, Triton, TileLang), with demonstrated ability to code, debug, profile, and optimize performance-critical code.
- SGLang and LLM Optimization: Hands-on experience with SGLang or similar LLM inference frameworks is highly preferred.
- Compiler and GPU Architecture Knowledge: Background in compiler design or familiarity with technologies such as LLVM, MLIR, or ROCm is a plus.
- Heterogeneous System Workloads: Experience running and scaling workloads on large-scale, heterogeneous clusters (CPU + GPU) using distributed training or inference strategies.
- AI Framework Integration: Experience contributing to or integrating optimizations into deep learning frameworks such as PyTorch, SGLang, vLLM, Slime, or VeRL.
- GPGPU Computing: Working knowledge of HIP, CUDA, Triton, TileLang, or other GPU programming models; experience with GCN/CDNA architecture preferred.

Academic Credentials

Bachelor's and/or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, Physics, or a related field.

Benefits and EEO Statements

Benefits offered are described in AMD Benefits at a Glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use artificial intelligence to help screen, assess, or select applicants for this position. AMD's Responsible AI Policy is available here. This posting is for an existing vacancy.