Your AI bill is growing faster than anyone can explain it. Your invoice jumped 40% week over week, but was that from your production chatbot, experimental fine-tuning, or provisioned throughput nobody used?
Join us as Amit Kinha and Rajan Bhave walk through practical steps to allocate AI costs fairly and measure what (and who) actually drives spend.
AI costs scatter across shared GPU clusters, untaggable API calls, and third-party platforms like OpenAI and Anthropic, none of which fit your existing allocation playbook.
In this session we'll explore how to:
Start with "good enough" allocation when tagging isn't sufficient
Account for training costs, provisioned throughput, and external platforms
Identify meaningful units of work that connect cost to business outcomes
Run controlled tests to establish cost baselines before scaling
Partner with ML teams so AI cost allocation becomes collaborative, not contentious
This session is for:
FinOps teams allocating AI costs across business units and product teams
Engineering teams and engineering managers who need visibility into what their AI experiments actually cost
Amit Kinha
Amit serves as Field Chief Technology Officer at DoiT, where he guides the vision and roadmap for the DoiT Cloud Intelligence™ platform and advocates for FinOps best practices across the industry.
Previously, he was Director of Cloud FinOps at Citigroup, leading product and engineering efforts to build in-house FinOps tooling, optimize multi-cloud spend, and shape the company’s broader cloud strategy.
Rajan Bhave
Rajan is a Data & AI Specialist at DoiT, based in the Stuttgart region of Germany. He is deeply passionate about AI, data, and cloud technologies, consistently seeking to learn and share insights in this dynamic field. Rajan values connecting with others, exchanging ideas, and fostering continuous learning. In his free time, he enjoys cooking, exploring trails, hiking, swimming, and playing table tennis.