#StartupsEverywhere: Stanford, Calif.

#StartupsEverywhere: Drew Barvir, Co-Founder, Sonar Mental Health
This profile is part of #StartupsEverywhere, an ongoing series highlighting startup leaders in ecosystems across the country. This interview has been edited for length, content, and clarity.

Building private and safe digital spaces

Drew Barvir is the co-founder and CEO of Sonar Mental Health, a mental health and coaching platform supporting the mental welfare of students. We sat down with Drew to discuss his company, data privacy, the use of AI in sensitive spaces, and more.

Can you tell us about your background and what led you to Sonar Mental Health? 

I started Sonar Mental Health in 2022 while I was doing my MBA at Stanford. I'd always been passionate about mental health, and the adolescent mental health space in particular, because it ties into my family history. I wanted to focus on preventative, scalable support for adolescents, so I started talking to as many people as possible. There's a whole cohort of young people who start off with mild to moderate conditions and don't get any support. Sonar is a personal wellbeing companion for every student, built on a combination of AI and trained staff. We partner with school districts and offer chat support to every student, 24 hours a day, 7 days a week.

Can you tell us a bit more about how Sonar works for your users and how you’ve approached supporting them at scale? 

Students can reach us through text messages, on our app, on our website, and across any device platform we partner with. Students are always talking to Sonny. Sonny is not a chatbot; we chose to use a consistent name because it gives us two things. First, there's a psychological benefit to consistency, and it lets students focus on themselves. Second, it gives us more flexibility in how we staff on the back end. Being able to support response times of under a minute, at any time of day, is a game changer.

We also work closely with Sonar Changemakers, students we engage with weekly. We're constantly testing ideas, asking questions, trying new approaches, and gathering their feedback in real time.

What role do humans play in supporting students?

We have a team of clinicians who collaborate with our engineers to develop our systems and hire and train our “Wellbeing Coaches.” Those Coaches are the ones who interface with students via their phones. They’re independent contractors, many of them mental health professionals, and they provide subclinical support. When students need more intensive care, we escalate to licensed clinicians. Our clinicians will also typically interface with families and schools when a specific situation calls for it.

How are you using AI?

We use AI to make our team of Wellbeing Coaches more efficient and effective. With AI, we're able to respond to many people in a very short period of time. We've developed an internal product that pulls in all the relevant data and information from a student's past conversations. Our Coach AI co-pilot then suggests responses, drawing on summaries of past conversations, available resources, and relevant context, and recommends the style the student responds best to. Having a human coach in the loop helps train our AI. Most importantly, it ensures that every student gets safe, helpful responses.
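
To make that workflow concrete, here is a minimal sketch of a human-in-the-loop co-pilot of the kind described above. All of the names and structure (StudentContext, draft_reply, coach_review) are hypothetical illustrations, not Sonar's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StudentContext:
    """Hypothetical container for the context the co-pilot pulls together."""
    conversation_summary: str   # summary of past conversations with this student
    resources: list[str]        # resources that may be relevant to the student
    preferred_style: str        # e.g. "warm and brief"

def draft_reply(message: str, ctx: StudentContext) -> str:
    """Stand-in for a language-model call that drafts a response using the context."""
    # A real system would prompt an LLM here; this stub just returns a template.
    return (
        f"(drafted in a {ctx.preferred_style} tone, aware that the student "
        f"{ctx.conversation_summary}) Thanks for sharing that. "
        f"Would it help to try {ctx.resources[0]} together?"
    )

def suggest_reply(message: str, ctx: StudentContext,
                  coach_review: Callable[[str], str]) -> str:
    """The AI drafts a reply; a human Wellbeing Coach approves or edits it before sending."""
    draft = draft_reply(message, ctx)
    return coach_review(draft)  # human in the loop: approve, edit, or rewrite

# Example usage with a coach who approves the draft unchanged.
ctx = StudentContext(
    conversation_summary="has mentioned exam stress twice this month",
    resources=["a short breathing exercise"],
    preferred_style="warm and brief",
)
print(suggest_reply("I'm really stressed about finals.", ctx, coach_review=lambda d: d))
```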

We switch between different models depending on the use case. For example, we’ve found that Claude is better at active listening and empathy, while others are better at information recall or summarizing conversations.
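
As a rough illustration of switching models by use case, the routing might look something like the sketch below. Only the point about Claude comes from the interview; the other model names and the task categories are placeholders, not Sonar's actual configuration.

```python
# Hypothetical routing table: pick a model family based on what a task needs.
MODEL_BY_TASK = {
    "empathetic_reply": "claude",        # noted above as strong at active listening and empathy
    "conversation_summary": "model_b",   # placeholder for a model that summarizes well
    "information_recall": "model_c",     # placeholder for a model with strong recall
}

def pick_model(task: str) -> str:
    """Return the model configured for a task, with a safe default."""
    return MODEL_BY_TASK.get(task, "claude")

print(pick_model("conversation_summary"))  # -> model_b
print(pick_model("unknown_task"))          # -> claude
```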

How do you handle sensitive or serious content students might share, like bullying or mental health concerns? And what guardrails do you have in place to ensure students are using the service as intended?

Safety is our first priority. If we see concerning behavior from a student, we'll first encourage them to seek help from someone in their life. If they aren't doing that, and we have reason to be concerned, we’ll let them know that we’re concerned and tell the student that we’re going to reach out to their emergency contacts, or even local authorities in extreme cases. When it comes to inappropriate content or misuse of the platform, we set clear boundaries with students. In some cases, we’ve suspended accounts or paused conversations for a period of time, but overall, we don’t see much of that behavior.

How do you think about the role of technology in improving kids’ lives?

We have to live with and understand the reality that kids are going to use tech. AI usage is only going in one direction. We need to think about the positives and then put safeguards around ensuring any risks are mitigated, because the positives are immense.

The national average ratio of students to school mental health professionals is almost 400 to one. At that ratio, each student could see a counselor roughly once every three months. Today's generation of kids is looking to engage within minutes, not wait weeks or months, so there's a huge accessibility issue. Another barrier to accessibility is the stigma many students still feel around seeking mental health support. For some, the school counselor’s office doesn’t feel like a judgment-free space where they can open up.

In addition to making mental health support more accessible, technology also offers cognitive benefits: students can open up to a machine knowing there’s a real human behind it who brings empathy and care. That layer of distance helps create a judgment-free space where students feel safe taking that first step. We’ve heard from many students that it’s freeing to be honest about how they’re feeling without having to sugarcoat it or feel guilty for sharing.

States are getting increasingly involved in AI policy. What do you want policymakers to keep in mind as they think about rules in this space?

We’re very pro-transparency. We think it's a good thing for the end user to know what they're engaging with and to have some agency over whether to engage with AI. When it comes to policy, the devil is in the details, because certain requirements can add operational complexity. It’s disappointing to see all of this AI policy get pushed down to the states, because we know it's going to be a huge mishmash of different regulations. It's fine to have standards around AI transparency, but we’d prefer a national standard. It would be a nightmare if we had to disclose it one way in California, another way in Illinois, and another way in Texas. Then we’d have to maintain 50 different versions of the product, and that would be a huge lift from a development perspective.

We already deal with regulatory complexity because we’re in the student mental health space, so we’re navigating requirements under the Children’s Online Privacy Protection Act, the Family Educational Rights and Privacy Act, and the Health Insurance Portability and Accountability Act. As policymakers approach these questions, there needs to be some education and thoughtfulness around the impact on startups.

What are your goals for Sonar Mental Health in the future? 

Our focus right now is on scaling through more school districts, but we also see a much broader opportunity to support the wider care system. Our goal is to build deeper integrations beyond schools, moving from a solution that delivers outcomes in education settings to one that’s embedded across the entire care ecosystem, thereby expanding access, shortening wait times, and lowering the total cost of care. We think this has so many benefits across the board.


All of the information in this profile was accurate at the date and time of publication.

Engine works to ensure that policymakers look for insight from the startup ecosystem when they are considering programs and legislation that affect entrepreneurs. Together, our voice is louder and more effective. Many of our lawmakers do not have first-hand experience with the country's thriving startup ecosystem, so it’s our job to amplify that perspective. To nominate a person, company, or organization to be featured in our #StartupsEverywhere series, email advocacy@engine.is.