Job Title : Platform Engineer (Full-time)


Company : Strong Compute Corporation


Location : Sydney, Australia


Created : 2025-06-27


Job Type : Full Time


Job Description

We're building the operating system for AI compute: seamless, workstation-style access as a single entry point into global compute, with ultra-fast data transit connecting everything. If you love high-performance computing, distributed systems, and AI infrastructure, and have experience managing large-scale GPU clusters and storage systems, you'll fit right in.

What you'll work on:
- Scalable, distributed AI infrastructure across cloud, on-prem, and colocation environments
- GPU orchestration and fault-tolerant scheduling (Slurm, Kubernetes, Ray, and other orchestration frameworks)
- Supercomputing clusters and high-performance storage solutions for AI workloads
- Ultra-fast data pipelines for petabyte-scale AI training workloads
- Multi-cloud orchestration and on-premise AI data centers, making compute feel like a single, unified system
- DevOps & MLOps automation for streamlined model training and deployment
- Security and reliability for distributed computing across the public internet
- Scaling compute clusters 10-20x, from 128 to 1024+ GPUs, ensuring high uptime, reliability, and utilization
- Optimizing HPC clusters for AI training, including upgrade pathways and cost-efficiency strategies

Your background would include some or all of the following:
- Strong systems engineering skills with experience in distributed computing and storage for AI workloads
- Proficiency in GPU cluster management, including NVIDIA GPUs, Slurm, and Kubernetes
- Deep understanding of distributed training frameworks and multi-cloud architectures (AWS, GCP, Azure, and emerging GPU clouds)
- Experience managing large-scale clusters, including team leadership, hiring, and scaling operations
- Expertise in high-performance storage (Ceph, S3, ZFS, Lustre, and others) for massive AI datasets
- Ability to optimize cluster utilization, uptime, and scheduling for cost-effective operations
- Understanding of colocation strategies, managing AI data centers, and running HPC workloads in mixed environments
- DevOps/MLOps experience, automating training pipelines for large-scale AI models
- Experience working with AI/ML researchers, optimizing infrastructure for deep learning training

This role is perfect for senior engineers who have built and scaled large AI compute clusters and are passionate about pushing the boundaries of distributed computing and AI training infrastructure.

Our culture
- We move fast. We ship weekly: new features, improvements, and fixes go live fast.
- We test big. Every month, we stress test with large groups of users face to face, get real-world feedback, and iterate rapidly.
- We build together. On-site only, in SF or Sydney.
- We iterate relentlessly. Direct user feedback shapes our roadmap: we release, test, refine, and keep moving.
- We travel when needed. Engineers may travel between SF and Sydney to run events and meet with clients.

Location: SF or Sydney (OG startup house vibe, great food, late nights, all the GPUs)

Equipment & Benefits:
- Top-spec MacBook plus separate GPU cluster dev environments for each engineer.
- Weekly cash bonus when you work out 3+ times a week.
- Comprehensive health benefits, including a choice of Kaiser, Aetna OAMC, and HDHP (HSA-eligible) plans for our SF-based team members.
- A 20-year exercise window for options, the longest in the world.

Don't have all the skills? Apply anyway! We're looking for people who move fast, learn fast, and ship fast. If that's you, let's talk.

Want to get to know us first? Attend one of our upcoming events.