Yes, would love to chat. You can block my calendar here: cal.com/kabrasidhant.
Agree with everything you said. That is why we have our observability platform, which lets you monitor your live calls. The idea is to use the observability platform to run real-life simulations, so that as you make fixes, you can test them in the simulation environment.
Yes, Ishan, we are working with clients who have their own custom builds instead of using a platform. All we need is your bot context and an endpoint to run simulations.
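To make the integration concrete, here is a minimal sketch of what a simulation request to your endpoint could look like. Every field name and the function itself are illustrative assumptions for this example, not our actual API: the only real requirements are the two inputs mentioned above, your bot context and an endpoint.

```python
import json

# Hypothetical sketch: the shape of a simulation request a testing
# platform might POST to a client's bot endpoint. All field names
# below are illustrative assumptions, not a real API contract.

def build_simulation_request(bot_context: dict, scenario: str, turns: list) -> str:
    """Bundle the bot context and a simulated conversation into one JSON payload."""
    payload = {
        "bot_context": bot_context,        # e.g. system prompt, persona, tools
        "scenario": scenario,              # the situation being simulated
        "simulated_user_turns": turns,     # scripted caller utterances
    }
    return json.dumps(payload)

# Example: a simulated caller rescheduling an appointment
request_body = build_simulation_request(
    bot_context={"persona": "clinic front-desk agent"},
    scenario="caller reschedules an appointment",
    turns=["Hi, I need to move my Tuesday appointment."],
)
```

In practice the payload would be sent to whatever endpoint you expose, and the bot's replies would be scored against the scenario's expected behavior.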
Thanks a lot. Our current focus is linking the observability and testing environments so that test sets can be created automatically from findings in actual production calls. Today, turning those findings into a test scenario still requires human intervention.
Great question! We've seen significant demand for testing and observability across sectors like healthcare, insurance, home services, and e-commerce. You’re correct—many of our customers also rely on tools like Voiceflow, Vapi, and others to build their agents.
What they love about our platform is having both testing and observability in one place. Observability helps identify issues, while testing lets them simulate and prevent those problems before they escalate. This dual approach is especially helpful for teams dealing with voice-specific challenges, industry-specific nuances, or company-specific edge cases.
Our tool is particularly valuable for teams stuck with manual testing—it saves time when iterating on the bot and ensures the edge cases are taken care of.
That's great. We also generate adversarial scenarios for our customers' voice agents, like the one you tried. The roleplay example was created so you could get a sample evaluation of your performance. In practice, we generate simulations automatically and provide analytics on your AI agent's performance, as demonstrated in the demo video.