We are looking for talented individuals to join our team in 2027. As a graduate, you will get opportunities to pursue bold ideas, tackle complex challenges, and unlock limitless growth. Launch your career at our company, where inspiration is infinite. Successful candidates must be able to commit to an onboarding date by the end of 2027. Please state your availability and graduation date clearly in your resume.

Team Introduction:
With the fast growth of ByteDance's business, ByteDance's system infrastructure now operates at a massive scale and requires versatile system solutions. Our lab collaborates with HQ teams on advanced R&D projects focused on LLM/AI + infrastructure technologies, covering both infrastructure for LLM/AI and LLM/AI for infrastructure. To name a few examples, we are building a new cloud-native vector index library, our TextToSQL project ranks at the top of well-known industry benchmarks, and we develop advanced AIOps technologies used by our Volcano cloud products. Besides achieving strong business impact, we encourage publishing at top-tier conferences: in 2025 alone, our lab published nearly 20 papers at venues such as SIGMOD, VLDB, FSE, ICLR, EuroSys, and WWW. We hire students with strong technical skills, a willingness to learn and solve complex technical challenges, and a passion for making an impact on millions of users.

With the large-scale deployment of large language models (LLMs) and AI Agents, traditional cloud-native infrastructure can no longer meet the extreme performance and elasticity demands of AI workloads. This project conducts systematic research across the full stack of AI infrastructure, focusing on the following areas:

Data Management for LLMs and Agents
1. Cloud-native vector search: Optimize core technologies for vector retrieval in large-model applications. Build a cloud-native distributed vector indexing engine to support ultra-large-scale vector search with low latency and low cost.
2. Multi-modal query processing: Support seamlessly integrated multi-modal query processing, including vector, full-text, and regular SQL queries over data of various types. Supporting large-scale semantic operators in a cost-effective, low-latency way is also a research focus.

Intelligence & Agent Architecture
- Explore infrastructure auto-optimization based on AI Agent workflows. Build a self-evolving business Agent framework, enabling full-stack intelligent optimization through "AI for Infrastructure". We investigate various ways to apply AI/LLMs to infrastructure problems, such as AIOps, NL-to-SQL, and Auto Skills.

This project aims to build next-generation AI-native infrastructure to support LLMs and AI Agents, improving resource utilization, reducing costs, enabling elastic scalability, and driving the evolution of AI infrastructure technologies.

Topic Content:
With the large-scale adoption of LLMs and AI agents, traditional cloud-native infrastructure can no longer meet the ultra-high performance and elasticity requirements of AI workloads. This topic conducts systematic research across the entire AI infrastructure stack:
1. Network and Observability: Research intelligent fault localization and root cause analysis for large-scale AI clusters, combined with intelligent tuning of time-series databases to improve cluster stability.
2. Storage Systems: Develop serverless, high-performance elastic file systems and storage acceleration architectures for AI scenarios, explore hardware-software co-optimization for DPUs, and overcome AI storage performance bottlenecks.
3. Data Center Power Scheduling: Research GPU/CPU/memory heterogeneous collaborative scheduling technologies, build a heterogeneous power orchestration system for AI agents, and address scheduling challenges such as heterogeneous workloads and state dependencies.
4. Vector Retrieval: Optimize core vector retrieval technologies for LLM-powered applications, building a cloud-native distributed vector index engine to meet ultra-large-scale vector retrieval demands with low latency and low cost.
5. Intelligence and Agent Architecture: Explore automatic infrastructure optimization based on AI Agent workflows, build a self-evolving business agent framework, and enable full-stack intelligent optimization through AI for Infra.

This topic aims to build next-generation AI-native infrastructure to support the deployment of LLMs and AI agents, improve resource utilization, reduce costs, support elastic scaling, and drive the technological evolution of AI infrastructure.

Minimum Qualifications:
- Individuals who are completing or have recently completed a PhD in Software Development, Computer Science, Computer Engineering, or a related technical discipline.
- Skilled in at least one mainstream programming language (e.g., C/C++, Python, Go), with strong coding ability and solid data structure and algorithm fundamentals.
- Able to independently design and develop complex systems.
- Able to deliver design documents, independent deliverables, and demo systems.
- Familiar with state-of-the-art data management technologies.

Preferred Qualifications:
- Strong learning ability and self-motivation; good teamwork and communication skills.
- Excellent problem analysis and solving abilities.
- Capable of independently exploring solutions, with strong mental resilience and adaptability when facing challenges.
- Research experience in LLMs and infrastructure is highly preferred.
- Has published at least two papers as first author at conferences or in journals such as SIGMOD, VLDB, ASE, FSE, NSDI/OSDI, and EuroSys.
Job Title:
Research Scientist - Technologies of Data Management, LLM and AI Agents - Global Frontier Tech Recruitment Program - 2027 Start (PhD)