We’re joined by Rishab Mehra at his London HQ. As founder and CTO of Pinnacle, an AI-driven human performance coaching platform that has just raised $1.6 million in funding, Rishab brings a distinctive perspective shaped by his beginnings at Stanford’s renowned Vision and Learning Lab with Fei-Fei Li and his years in Apple’s machine learning groups, where he authored more than 20 patents. His background spans computer vision research published in Nature and NeurIPS, five years building ML systems at Apple that touched billions of users, and now the startup risk of building a company that aims to transform how we optimize human performance.
Let’s start with your current venture. What is Pinnacle about, and what motivated you to leave Apple after five successful years to start this company?
Pinnacle is the realization of a long-held ambition: using the newest AI technology to directly and positively impact people’s lives. At Apple, I was fortunate to build ML systems used daily by billions of people, but I became increasingly aware of the potential of applying the same principles of personalization and scale to something even more personal: human performance coaching. Coaching today is fragmented and unaffordable for most, even as the science of performance keeps advancing. At Pinnacle, we want to make world-class coaching accessible to everyone by combining AI with evidence-based methods through weekly check-ins and customized training plans for any sporting, mental, or wellbeing goal. It’s rewarding to be building something that isn’t just an app but is truly about unlocking human potential.
Your research background at Stanford with Fei-Fei Li, which led to publications in Nature and NeurIPS, is impressive. How has that academic foundation influenced your approach to building AI products?
Working with Fei-Fei Li was life-changing. She taught us to treat AI and computer vision not as technical puzzles but as tools for addressing real human problems. That rigorous academic mindset, from hypothesis to experimental design to peer review, has been invaluable throughout my career.
At Stanford, we always asked “What is the fundamental problem we’re trying to solve?” rather than “What’s the coolest technology?” That held true at Apple and holds true at Pinnacle. For instance, when building ML features such as the Home Screen redesign or App Clips, I used a structured methodology to understand users at a deep level and validate hypotheses. The academic training also instilled the principles of reproducibility and rigorous testing, which often get lost in the rush to release shiny AI demos.
Filing 21 patents at Apple shows remarkable innovation. What was it like working on AI systems deployed at such a scale?
Working on AI at Apple was thrilling and humbling. When your systems run in parallel on hundreds of millions of devices, every design and architecture decision matters enormously. The challenges included performance optimization across heterogeneous hardware platforms, privacy by design, and rock-solid stability. The best part was leading cross-functional efforts that grew out of rapid prototyping exercises.
These weren’t purely technical problems; they were challenges in re-engineering the relationship between people and their devices through AI. The pace was hectic, and we constantly balanced innovation against practical constraints.
You’ve experienced AI from the angles of research, big tech, and startups. What’s your take on the current competition between open source and closed source AI models?
This is one of the most fundamental and intriguing dynamics in AI today. Closed-source models, like those from OpenAI and Anthropic, permit quicker iteration and possibly tighter safety controls, but they also centralize power and can limit innovation at the edges. Open-source models, like Meta’s Llama, democratize access so startups and researchers can experiment without exorbitant infrastructure costs. Apple’s focus on on-device AI offers yet another angle, emphasizing privacy and real-time responsiveness.
For me as a founder, access to open source AI has been a game changer, enabling small teams like Pinnacle’s to create sophisticated products at velocity. I believe the future will be hybrid, different approaches optimized for different use cases, more than a single winner-take-all.
On-device AI seems to be a major trend, especially with privacy concerns rising. Where do you see this technology heading?
On-device AI is certainly central to the future of consumer applications. My experience at Apple reinforced that privacy is a growing concern for users, and that processing on-device rather than in the cloud solves many of those problems while enabling new kinds of real-time interactive experiences.
At Pinnacle, we’re exploring how on-device models could enable real-time coaching feedback based on behavioral or biometric data. The major challenge is making these models extremely power- and storage-efficient without compromising capability. I expect rapid progress in model compression and specialized hardware that will bring AI capabilities to products and scenarios we have yet to imagine, fundamentally changing what’s possible with AI-powered experiences.
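To make the compression point concrete, here is a toy sketch of symmetric int8 post-training quantization, one common technique for shrinking on-device models. This is an illustration only, not Pinnacle’s or Apple’s actual pipeline; the function names and weight values are invented for the example.

```python
# Minimal sketch of symmetric int8 post-training quantization: float weights
# are mapped to the integer range [-127, 127] with a single per-tensor scale,
# cutting storage from 32 bits to 8 bits per weight. Real on-device pipelines
# use frameworks such as Core ML Tools or ONNX Runtime instead.

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.003, 0.51, -0.76]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half the quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)
print(max_err <= scale / 2 + 1e-12)
```

The trade-off the answer describes is visible here: storage drops fourfold, while each weight is recovered only to within half a quantization step.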
You mentioned LLM evaluation as a focus area. How do you know if large language models are working effectively in real-world applications?
Going beyond conventional metrics is one of the largest challenges we face with LLMs. BLEU scores can only tell us so much about whether the AI is actually helping people get things done. At Pinnacle, we’re building evaluation systems that put real-world usefulness, safety, and alignment with human values first. That means longitudinal studies, control groups, and measuring outcomes that genuinely matter to users, such as improved performance or behavioral change. The AI research community needs to adopt more complete evaluation paradigms covering stability, fairness, and long-term effects, not just accuracy or novelty. The real test of an AI system is whether it generates lasting, beneficial value.
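To illustrate why surface metrics alone can mislead, here is a toy sketch showing how a crude BLEU-style unigram-overlap score can disagree with a task-level success check. The functions, sentences, and success criterion are invented for illustration and are not Pinnacle’s actual evaluation code.

```python
# Toy contrast between a surface-overlap metric (a crude BLEU-1-style proxy)
# and a task-level success check. A paraphrase that helps the user can score
# low on overlap, while a near-verbatim echo can score high.

def unigram_overlap(candidate, reference):
    """Fraction of candidate tokens that also appear in the reference."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    return sum(t in ref for t in cand) / len(cand)

def task_success(response, required_fact):
    """Did the response contain the fact the user actually needed?"""
    return required_fact.lower() in response.lower()

reference = "rest 48 hours between heavy squat sessions"
paraphrase = "take two days off before squatting heavy again"
echo = "rest 48 hours between heavy squat sessions or maybe not"

print(unigram_overlap(paraphrase, reference))  # low: few shared tokens
print(unigram_overlap(echo, reference))        # high: near-verbatim copy
print(task_success(paraphrase, "two days"))    # the advice still lands
```

The paraphrase conveys the right advice yet scores far below the echo on token overlap, which is exactly the gap that task-grounded and longitudinal evaluations are meant to close.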
AI risks often focus on long-term existential threats, but you’ve pointed out more immediate dangers. Could you elaborate?
Long-term AI safety certainly matters, but I’m far more worried about near-term threats we’re already confronting, such as AI-enabled scams. Deepfakes, voice cloning, and personalized phishing are becoming so sophisticated and easy to use that malicious actors can target the most vulnerable populations, such as the elderly. These aren’t pie-in-the-sky threats of the future; they are present dangers today.
The democratization of AI power means bad applications are running amok alongside good ones. Preventing near-term harm requires a combination of private-sector self-regulation, public education, and updated policy frameworks. The technology is developing faster than regulators can keep up, so we need to act proactively.
Having just recently raised $1.6 million for Pinnacle, how hard is it to build a unicorn startup, and what’s your growth plan?
Building a unicorn startup is very hard, and the odds are against most founders. It takes more than good tech: timing, market, team, and a dash of luck all play enormous roles. For Pinnacle, our immediate focus is demonstrating strong product-market fit among high-performing individuals seeking personalized, science-based coaching. From there we’ll push into adjacent markets and scale the user base. AI can deliver a competitive advantage through personalization at scale, but technology alone isn’t enough; we have to build strong user relationships and deliver results that genuinely set us apart. My time at Apple cemented that user experience matters most, and that remains true today.
What do you think about startup accelerators? Are they worthwhile for technical founders such as yourself?
Startup accelerators can be hugely valuable, but they aren’t always the right option. For first-time founders, they provide structure, mentorship, and networks, which can be life-altering. But for a founder like me, with industry experience and a network already in place, the real benefit would be targeted investor introductions and domain expertise, specifically in health tech or AI. We haven’t applied to any accelerators, partly to avoid over-dilution.
Looking forward, how do you see the intersection of AI and human performance evolving in the next several years?
I think we’re on the cusp of an exponential shift in human potential. AI combined with biometric sensing, behavioral science, and performance research will create whole new classes of applications far beyond today’s coaching apps. We’ll see real-time, context-aware AI that addresses physical, cognitive, emotional, and skills development holistically, offering not just generic suggestions but a deep understanding of each individual’s limitations and goals.
The convergence of brain quantification, LLMs, and growing awareness of mental performance will lead to end-to-end human enhancement ecosystems. The greatest challenge will be preserving human agency, making sure AI augments rather than substitutes for human judgment and intuition. We want Pinnacle to be at the forefront of this field.