Find your next role
Discover amazing opportunities across our network of companies committed to gender equality in the workplace.
Amazon
Software Engineering, Data Science
Seattle, WA, USA
We’re building foundational large language model capabilities for Amazon Stores that combine general world knowledge with Amazon’s e-commerce domain expertise to create more intuitive, conversational, and personalized shopping experiences for our customers. We’re looking for pioneers who are passionate about technology, innovation, and customer experience, and who want to make a lasting impact in a rapidly evolving space. You’ll work alongside talented scientists and engineers to invent on behalf of customers and unlock the next generation of LLM-powered shopping experiences.
If you’re excited about working at the intersection of large-scale ML systems, post-training and inference optimization, and customer-facing innovation, this is a unique opportunity to join a dynamic team shaping the future of AI at Amazon.
Key job responsibilities
In this role, you will leverage your engineering expertise to develop and optimize generative AI systems for shopping. On a day-to-day basis, you will:
* Design and optimize high-performance kernels, custom operators, and low-level acceleration techniques that maximize hardware utilization and reduce computational overhead for LLM training and inference.
* Drive improvements in memory management, parallel computing, kernel fusion, attention optimization, and matrix multiplication efficiency to reduce latency and increase throughput at scale.
* Partner closely with applied scientists, engineering teams, and product managers to define requirements, support experimentation, and deliver production-ready systems.
* Move quickly in ambiguous environments, make thoughtful short- and long-term trade-offs, and deliver incrementally across a wide range of technologies, from distributed data processing to ML infrastructure and kernel-level optimization.
* Develop tooling to accelerate experimentation, improve observability, and generate insights across model quality, latency, throughput, and efficiency metrics.