Careers
Join a small, senior team building the fastest on-device AI inference engine.
Open Positions
About us
Mirai builds the fastest on-device inference engine for Apple Silicon. In under a year, a 14-person team built a full stack, from model optimization to a proprietary runtime, outperforming MLX and llama.cpp on supported models.
We’re making local inference practical, fast, and reliable for real products.
Why us?
Mirai is founded by proven entrepreneurs who built and scaled consumer AI leaders like Reface (200M+ users, backed by Andreessen Horowitz) and Prisma (100M+ users). Our team is small (14 people), senior, and deeply technical. We ship fast and own problems end-to-end.
We’re advised by a former Apple Distinguished Engineer who worked on MLX, and backed by leading AI-focused funds and individuals.
We’re always interested in meeting exceptional people.
If you’re an engineer, researcher, or builder, write us.