#StartupsEverywhere: New York, N.Y.
#StartupsEverywhere: Chad DePue, CEO & Founder, Sitch
This profile is part of #StartupsEverywhere, an ongoing series highlighting startup leaders in ecosystems across the country. This interview has been edited for length, content, and clarity.
Updating matchmaking for the digital age
From an investor at an acquired startup to the Senior Director of Engineering at Snap Inc., Chad DePue has spent his career working throughout the tech ecosystem. Now, Chad is bringing his experience to his own startup, Sitch, an AI-powered dating app that plays the role of matchmaker for its users. We sat down with Chad to talk about Section 230, the state of AI regulation, and more.
Tell us about your background. What led you to Sitch?
I have had a long career in tech that started at Microsoft. But I always wanted to build startups, so I left after a few years with a group of really talented engineers to start a company. After it was acquired, I eventually started a consultancy for mobile apps. There, we had a lot of successful clients, but one that stands out is Whisper, which I first joined as a pre-seed investor; when they got to Series B, I joined them as CTO and helped grow the platform to 200 million users globally. In 2016, I joined Snap, where I was involved in building the trust and safety team, the developer platform, and eventually core product engineering for Messaging, Bitmoji, Memories, Music, and Games.
While I was at Snap, I was investing in startups and thinking about what comes next. I wanted to do something in consumer tech that used AI to promote human connection rather than isolation. I met Dini Mullaji, my cofounder, who was also thinking about this. We started Sitch and raised pre-seed backing from a16z Speedrun and Seed backing from M13 to help us build the product and grow the company. We're going market by market and will be in five cities by the end of this year.
What is the work you all are doing at Sitch?
Sitch is an AI matchmaking service. A user spends about 30 to 35 minutes onboarding, and AI is involved in that process, helping us understand what's important to you, generate your profile, and choose pictures to include (because people, mostly men, are bad at it). Humans review every profile to make sure users are high quality, are based in the city where we're operating, and to keep out people who should not be on the platform. We also use AI to help with matchmaking. We build with existing models to keep up with the latest tech and train on top of them for our purposes. Our use of AI is really all about enhancing human capacity: for our users, to create quality profiles; for us, to review profiles, make matches, and more.
Most of our users are high intent, usually in their 30s, with strong preferences about what they're looking for. We match or present each user with very few others daily. Users pay per match, which is actually a novel business model that aligns our goals with our users', because users won't stick around to pay for bad connections. It also helps reduce spam and deter users with malicious intent, because the economic incentives to be a bad actor on the platform aren't there. We have high conversion rates to paid users thanks to this approach.
What role do policy frameworks play in your ability to moderate profiles, curate content users are presented with, and facilitate communication between users?
We could not run a service like this without the certainty Section 230 provides. If we could not moderate low-quality profiles because those people could then sue us (even though our ability to remove low-quality profiles is protected by the First Amendment), users would not trust or find value in our matchmaking. We take escalations very seriously, but we also want users to know that we're not policing their content if it doesn't violate our policies. I know from my time at Whisper and Snap that there is so much you can't anticipate when you make a new feature available. Some users, by human nature, will use a new means of communication for nefarious purposes. We review and approve every user on our platform, but then allow them to interact with others if they both want to meet. Section 230 enables us to respond to bad actors, both before they get on the platform and when another user escalates an interaction to our team, and it means those bad actors don't ruin it for everyone else.
You underscored the importance of certainty when it comes to moderating user content. Are there other areas of tech policy where you think startups like yours could benefit from additional certainty?
We need a Section 230-like law for AI that establishes where liability lies so that we have the certainty to build startups and grow them into successful companies. Right now, I spend quite a bit of time on AI policy analysis, which I did not expect when I started the company. It eats into our resources and our capacity to focus on running the company.
It's not just AI policy, though. Individual states have different rules about consumer data privacy, whether you have to perform age verification, and what you need to do in the event of a data breach. This can put consumers at risk: age verification, for example, leads individual sites to store sensitive identity information, which puts it at risk of breach. For us, navigating those rules costs so much in legal fees, first to evaluate what we would need to do for a given market, and then to implement those frameworks. One uniform, nationwide framework for these issues would be ideal for startups scaling across the country.
Policy uncertainty also keeps us from expanding into particular markets, not just across states, but internationally. The EU AI Act prohibits social scoring, and while we don't regard our product as a social score, a regulator might. We have to rank and order user profiles so we can systematically match them with other users we believe they could have a romantic connection with. Would a regulator call that a "score"? Facilitating matchmaking is the core of our product, and it doesn't make sense to try to expand to European markets with that being such a big question mark. We will reevaluate our launch plans next year, but currently it's a significant legal and policy risk.
Is there anything else you want policymakers to know about AI and public policy?
Transparency in the AI space is important, but it's necessary to understand where that transparency is achievable. For example, consumers can expect transparency about when and for what purposes AI is used. But transparency into AI decisionmaking isn't always possible. These models are trained on trillions of data points; it would be impossible to trace back which specific piece of data informed an output. By the way, we don't necessarily have any more insight into human decisionmaking. I can't always tell you exactly what factors led me to pick the socks I'm wearing today when I opened my sock drawer this morning. AI is making so many inferences based on so much data that it can't necessarily point you to the specific reasons it's recommending you wear these socks today either!
What are your goals for Sitch moving forward?
Currently, we're live in New York City, San Francisco, and Los Angeles, and by the end of the year we are planning to open Sitch to users in Chicago and Austin. There's a lot of interest all around the country; we have a waitlist in at least 20 other U.S. cities, so hopefully we can match our growth to the demand. Overall, we want to use AI to connect real people who are serious about dating, tired of swiping, and ready to meet their person.
All of the information in this profile was accurate at the date and time of publication.
Engine works to ensure that policymakers look for insight from the startup ecosystem when they are considering programs and legislation that affect entrepreneurs. Together, our voice is louder and more effective. Many of our lawmakers do not have first-hand experience with the country's thriving startup ecosystem, so it’s our job to amplify that perspective. To nominate a person, company, or organization to be featured in our #StartupsEverywhere series, email advocacy@engine.is.