Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
In this role, you will be responsible for productizing the most critical ML use cases for our company.
You will work closely with Product leadership and our ML Research and Applied ML teams to identify the most promising areas within the industry and research community for us to go after, balancing business value for our customers and ML thought leadership for Cerebras.
You will translate abstract neural network requirements into concrete deliverables for the Engineering team and work with cross-functional partners to establish roadmaps, processes, success criteria, and feedback loops to continuously improve our products.
This role combines the highly technical with the highly strategic. Successful candidates will have a deep understanding of machine learning and deep learning concepts, familiarity with common modern models (particularly in the LLM space), and the ability to understand the mathematical foundations behind them. Ideal candidates can go beyond model understanding to see the connections and commonalities across different types of neural networks in different application domains. They will also closely follow recent developments in deep learning and have a point of view on which types of models may be widely used within the next one, three, and five years.
At Cerebras, we're proud to be among the few companies globally capable of training massive LLMs with over 100 billion parameters. We're active contributors to the open-source community, with millions of downloads of our models on Hugging Face. We are already booking hundreds of millions of dollars in revenue each year, with a strong growth trajectory.
As the Cerebras ML PM, you will be in the pilot's seat, driving the transformational role of AI across multiple industries and working with some of the largest and most interesting datasets in the world, alongside a world-class ML research and engineering team.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras.
Read our blog: Five Reasons to Join Cerebras in 2025.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.