Imagine wearing a piece of technology that makes you want to punch the person next to you in the face. That's the gut-check test Kevin Rose, a seasoned investor and general partner at True Ventures, uses to evaluate AI hardware: if a device triggers that kind of visceral reaction, it's probably not worth investing in. His point is easy to miss: it's not just about the tech, but about how a device makes us feel and how it fits into our social fabric.
Rose, an early investor in household names like Peloton, Ring, and Fitbit, has largely steered clear of the AI hardware frenzy sweeping Silicon Valley. While others chase the next big thing in smart glasses or AI pendants, he's stepping back to ask harder questions. Rose argues that many of today's AI wearables violate fundamental social norms, particularly around privacy. "A lot of these devices are always on, always listening," he explains. "It's like they're trying to be the smartest person in the room, and that's just not healthy."
His perspective isn’t just theoretical—it’s rooted in experience. As a former board member of Oura, which dominates 80% of the smart ring market, Rose has seen firsthand what separates successful wearables from failures. It’s not just about technical prowess; it’s about emotional resonance and social acceptability. “As an investor, you have to ask, ‘How does this make me feel? How does it make others feel?’” he shared at TechCrunch Disrupt last week. “And a lot of AI stuff today feels like it’s missing that human touch.”
Take his personal experience with the Humane AI Pin, which briefly captured global attention. During an argument with his wife, he found himself using the device to "prove" his point by reviewing recorded conversations. "That was the last time I wore it," he admitted. "You don't want to win an argument by pulling up AI logs. It just doesn't sit right."
Rose also questions the value of AI in everyday scenarios, like using smart glasses to identify a monument. “We’re slapping AI onto everything, and it’s starting to feel like the early days of social media—decisions that seem harmless now but might haunt us later,” he warns. He shares a relatable example: a friend who used a photo app to erase a gate from their yard to improve a picture. “Their kids are going to look at that photo and wonder, ‘Didn’t we have a gate?’ It’s these small, seemingly innocent choices that add up.”
Even with his young children, Rose sees the tension. After he used OpenAI's Sora to generate videos of Labradoodles, his kids asked where they could get those puppies. "It's awkward explaining that those puppies aren't real," he says. His solution? Treat AI like movie magic: just as actors aren't really flying on screen, those puppies are a digital illusion.
But don’t mistake Rose for a Luddite. He’s deeply optimistic about AI’s potential to transform entrepreneurship and venture capital. “The barriers to entry for entrepreneurs are shrinking every day,” he notes. He recounts a colleague who built and deployed a complete app using AI coding tools during a drive from LA to San Francisco—a task that would have taken ten times longer just six months ago. “When Google’s Gemini 3 hits the market, we’re looking at near-zero errors,” he predicts. “High school coding classes will become ‘vibe coding’ classes, and the next billion-dollar business could come from a random high school. It’s only a matter of time.”
This shift is reshaping the venture capital landscape. Entrepreneurs can now delay fundraising or skip it entirely, which Rose believes will improve the VC industry. While many firms are hiring armies of engineers to keep up, Rose thinks the real value for VCs lies in emotional intelligence (EQ). "Entrepreneurs will face emotional, not just technical, challenges," he argues. "The VCs who can show up as long-term partners with high EQ, those who've been around the block and seen these problems at scale, they're the ones who'll be sought after."
So, what does Rose look for in founders? He circles back to advice from Larry Page: “A healthy disregard for the impossible.” He’s drawn to founders with bold, audacious ideas—the kind that make others say, “That’s a horrible idea. Why are you doing this?” Even if the idea fails, Rose values the mindset behind it. “We love your mind, we love where you are, and we’ll back you again,” he says.
The open question: Is Rose's focus on emotional resonance and social acceptability the right approach to evaluating AI hardware, or is he underestimating the tech's potential? And as AI continues to infiltrate every aspect of our lives, how do we strike the right balance between innovation and ethical considerations? Let's hear your thoughts in the comments.