Mirai builds the fastest on-device inference engine for Apple Silicon.
In under a year, a 12-person team built a full stack, from model optimization to a proprietary runtime, outperforming MLX and llama.cpp on supported models.
We’re making local inference practical, fast, and reliable for real products.
Founded by proven entrepreneurs who built and scaled consumer AI leaders like Reface (200M+ users, backed by Andreessen Horowitz) and Prisma (100M+ users).
Our team is small (12 people), senior, and deeply technical. We ship fast and own problems end-to-end.
We’re advised by a former Apple Distinguished Engineer who worked on MLX, and backed by leading AI-focused funds and individuals.
Our mission is to turn our technical lead on Apple Silicon into market dominance.
We are focusing on:
Maintaining a clear performance lead over open stacks.
Expanding model support without sacrificing speed or reliability.
Building world-class developer tooling, documentation, and benchmarks.
Powering products where latency, cost, and privacy actually matter.
Why join us?
Impactful Work
You’ll work on core infrastructure that directly shapes how AI runs on billions of devices. Not demos, not prototypes, but production systems.
Career Growth
You’ll take ownership of complex, low-level systems early, and grow alongside a team that has already shipped and scaled AI products.
Collaborative Team
We’re a small, highly collaborative team. No silos, no layers. Just smart people solving hard problems together.
Technology
You’ll work on model optimization, inference runtimes, deployment tooling, and performance-critical systems, setting new standards for on-device AI.
Open Positions:
Founding GTM / Business Development Rockstar
US (SF/Bay Area)
GTM, Business Development
Full Time
