Happy Thanksgiving from the Decagon family! 💙
Decagon
Software Development
San Francisco, California · 36,358 followers
The leading conversational AI platform empowering every brand to deliver concierge customer experience.
About us
Decagon is the leading conversational AI platform empowering every brand to deliver concierge customer experience. Our AI agents provide intelligent, human-like responses across chat, email, and voice, resolving millions of customer inquiries across every language and at any time. We partner with industry leaders like Hertz, Eventbrite, Duolingo, Oura, Bilt, Curology, and Samsara to redefine customer experience at scale. Decagon is backed by Accel, Andreessen Horowitz, Bain Capital Ventures, BOND, A*, Elad Gil, and the founders of Box, Airtable, Rippling, Okta, and more.
- Website
- https://decagon.ai
- Industry
- Software Development
- Company size
- 201-500 employees
- Headquarters
- San Francisco, California
- Type
- Privately Held
- Specialties
- AI Agents and Conversational AI
Locations
-
Primary
100 1st St
San Francisco, California 94105, US
Employees at Decagon
-
Aaref Hilaly
Partner at Bain Capital Ventures
-
Paloma Ochi
Marketing at Decagon
-
Adriana Karaboutis
Corp Board Director (Aon, Autoliv, Perrigo, Savills) | Technology Strategist and AI Advisor Decagon, iGreenTree | Drive Digital Transformation…
-
Brittany Hoffman
Product Designer | Formerly Faire, Airbnb
Updates
-
Most companies don’t want to limit their support teams. They want to show up for every customer, but the reality of finite resources often gets in the way. Decagon changes what’s possible by making high-quality, real-time support dramatically more efficient. When the cost and complexity of each interaction drop, teams can extend hours, reach more customers, and finally build the level of service they’ve always aimed to deliver. It’s a shift that leads to happier users, stronger retention, and support experiences that exceed expectations. This week, our Co-founder and CTO, Ashwin Sreenivas, joined Nataraj Sindam on the Startup Project Podcast - AI Startups with Product Market Fit to dig into how companies are making that shift today. Full episode in the comments below. ⬇️
-
Last week our teams in both SF and NYC headed out to some incredible AI community events focused on bringing women in the field together. Thanks to the team at Cursor for a great time here in SF, and to the teams at Clay and Baseten in NYC. Across both, our team learned:
• Feeling overwhelmed in such a rapidly evolving space isn’t a failure; it’s a sign you’re growing and making a real impact.
• With AI, the obstacles to building and shipping your first product are lower than ever.
• If you want something, go after it with everything you’ve got. Take the risk, trust yourself, and commit fully.
We’re grateful to be part of such a vibrant, supportive tech community and are energized by every chance to connect and build together.
-
Fast, reliable AI agents start with a specialized architecture built for real-world performance. At Decagon, we fine-tune smaller models for the many sub-tasks involved in resolving a customer’s request. Each component is deeply aligned to its role, from interpreting customer intent to selecting the right workflow. In our latest blog, our Head of Research, Max Lu, explains how we combine supervised fine-tuning with reinforcement learning to help each model become an expert in its function, boosting both performance and modularity. Fine-tuning is a critical part of how we build agents that are fast, performant, and reliable. Read the full blog in the comments.
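To make the "specialized models per sub-task" idea above concrete, here is a minimal, hypothetical sketch of how small dedicated components might be chained: one model interprets customer intent, another selects the workflow, and a thin orchestrator connects them. The model names and keyword rules are invented for illustration; Decagon's actual components and training details are described in the linked blog, not here.

```python
# Hypothetical sketch: each sub-task gets its own small, specialized model,
# and an orchestrator chains them. Real systems would call fine-tuned models;
# the keyword rules below are stand-ins for illustration only.

from dataclasses import dataclass
from typing import Callable


@dataclass
class SubTaskModel:
    """A stand-in for a small fine-tuned model dedicated to one sub-task."""
    name: str
    predict: Callable[[str], str]


def detect_intent(message: str) -> str:
    # Placeholder for a fine-tuned intent classifier.
    if "refund" in message.lower():
        return "billing.refund"
    return "general.question"


def select_workflow(intent: str) -> str:
    # Placeholder for a fine-tuned workflow-selection model.
    return {"billing.refund": "refund_flow"}.get(intent, "faq_flow")


intent_model = SubTaskModel("intent", detect_intent)
workflow_model = SubTaskModel("workflow", select_workflow)


def resolve(message: str) -> str:
    """Chain the specialized models: interpret intent, then pick a workflow."""
    intent = intent_model.predict(message)
    return workflow_model.predict(intent)


print(resolve("I want a refund for my order"))   # refund_flow
print(resolve("How do I change my password?"))   # faq_flow
```

Because each stage is a separate component, one model can be retrained or swapped without touching the others — the modularity benefit the post describes.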
-
The voice of the customer has never mattered more. Surveys and NPS scores capture only fragments of how customers feel, and often too late to make a difference. In our new eBook, “Why Voice of the Customer Matters More Than Ever,” we explore how AI is transforming everyday customer conversations into real-time intelligence that drives better decisions, stronger loyalty, and faster growth. Inside, you’ll learn how to:
✅ Turn every support interaction into structured, actionable insights
✅ Give product teams direct access to customer “ground truth”
✅ Transform support from a cost center into a strategic growth engine
Download the full eBook below. ⬇️
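As a rough illustration of the first point — turning a raw support interaction into a structured, queryable record — here is a hypothetical sketch. The field names and the keyword-based tagging are invented for illustration; a production system would use model-based extraction rather than keyword matching.

```python
# Hypothetical sketch: convert a raw support transcript into a structured
# record that product teams can query. Keyword rules are placeholders for
# what would really be model-based classification.

from dataclasses import dataclass, asdict


@dataclass
class Insight:
    topic: str
    sentiment: str
    feature_request: bool


def extract_insight(transcript: str) -> Insight:
    text = transcript.lower()
    topic = "billing" if "charge" in text else "product"
    sentiment = "negative" if any(w in text for w in ("frustrated", "angry")) else "neutral"
    feature_request = "wish" in text or "would be great" in text
    return Insight(topic, sentiment, feature_request)


record = extract_insight(
    "I'm frustrated: I was charged twice, and it would be great if invoices were emailed."
)
print(asdict(record))
```

Once interactions are records like this, they can be aggregated across every conversation — the "real-time intelligence" the eBook pitch describes.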
-
A lot of consultants reach the same turning point. Solving big problems is exciting, but eventually you want to build something real. That’s exactly where Tara Roudi was before joining Decagon. After years advising companies on AI, she wanted to work on products that ship and own outcomes end to end. As an Agent PM, she gets to do exactly that. The consulting skills she built translate naturally here, from navigating complex organizations to bringing clarity to ambiguous problems. Her story highlights why the APM role is such a compelling next step for people ready for more ownership. You get the pace and problem-solving you’re used to, paired with the satisfaction of building AI agents that go into production with some of the world’s biggest brands. If you’re thinking about making the move from advisor to builder, Decagon is a great place to start. Read the full blog in the comments. ⬇️
-
We’re hiring a Group Product Manager for our Agent PM team in NYC 🗽 In this role, you will:
💡 Lead and coach a team of high-performing Agent PMs building enterprise-grade AI agents.
📈 Develop playbooks and processes that define how this function scales.
🤝 Partner with leaders across Product, GTM, and Engineering to turn customer insights into platform impact.
🎯 Help shape strategy and execution across some of the most ambitious AI deployments in the world.
📍 Full-time, on-site in New York City
If that sounds like you, check out the full JD in the comments, or tag someone who’d be a great fit!
-
When we built the latest generation of Decagon’s voice agents, we wanted to optimize latency without compromising on accuracy. That meant rethinking both how our models were trained and how they were served in production. We pushed on two key fronts: improving model accuracy and delivering real-time inference.

On the model side, we used data augmentation, supervised fine-tuning, and reinforcement learning to train compact models that rival much larger systems in quality and robustness. On the inference side, the team optimized every layer of the serving stack to minimize latency and make voice interactions feel instantaneous and natural.

We achieved a 65% reduction in latency, along with major improvements in responsiveness and reliability. Building something this fast and consistent required deep collaboration between research and infrastructure teams who care about doing things right. Link below to read more on how we built Voice 2.0. 👇
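For readers curious how a latency win like this is typically quantified, here is a small sketch that compares response-time samples before and after an optimization and reports the percentage reduction at a chosen percentile. All numbers below are made up for illustration; they are not Decagon's measurements, and the helper functions are hypothetical.

```python
# Hypothetical sketch: quantify a latency improvement by comparing
# percentile latencies before and after an optimization. Sample values
# are invented for illustration.


def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1))))
    return ordered[rank]


def reduction(before: list[float], after: list[float], pct: float = 50) -> float:
    """Percent reduction in latency at the given percentile."""
    b, a = percentile(before, pct), percentile(after, pct)
    return 100 * (b - a) / b


before = [2.0, 2.2, 1.9, 2.1, 2.4]   # hypothetical pre-optimization latencies (s)
after = [0.7, 0.8, 0.7, 0.75, 0.9]   # hypothetical post-optimization latencies (s)
print(f"median latency reduction: {reduction(before, after):.0f}%")
```

Reporting percentiles rather than averages matters for voice: tail latency is what makes a conversation feel laggy, so teams usually track p95/p99 alongside the median.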