CONTEXT 2025 Highlights:
How Industry Leaders Are Preparing for Enterprise AI Agents 

Enterprise AI agents cannot succeed without context. That simple reality brought 1,500+ data and AI leaders together for CONTEXT: The Metadata & AI Summit. DataHub convened this community around a shared challenge: AI agents are the next enterprise technology layer, and like every layer before, they depend on a strong foundation.

That foundation is trusted, contextualized data. Leaders from Apple, Netflix, Foursquare, Robinhood, Block, and more shared how they’re building AI-ready data stacks for the agentic era.

Across a dozen sessions, speakers explained how metadata provides the context that makes enterprise AI safe, explainable, and production-ready. From strategic vision to proven implementation, CONTEXT showed what it actually takes to move agents from proof-of-concept to production.

Key highlights of CONTEXT 2025

DataHub Co-Founder and CTO Shirshanka Das opened the summit by addressing what will dominate AI discourse over the next year: context management. Fragmented RAG pipelines and application-specific context engineering can’t keep up with enterprise agentic systems. Systematic, organization-wide context management capabilities will separate AI experiments from accelerated agentic AI adoption.

Watch Shirshanka’s opening keynote on demand →

The sessions that followed proved this thesis in practice.

Apple demonstrated how they’re using agentic workflows in their data catalog to automate metadata enrichment, flag anomalies, and write data quality rules. Netflix explained the strategic reasoning behind their investment in global catalog infrastructure. Robinhood and Block walked through how they built end-to-end lineage and the practical first steps teams can take. Aledade showed how governance stops being a bottleneck when it’s embedded in developer workflows. Financial services leaders explored how industry regulations can serve as a starting point for AI readiness.

A clear pattern emerged: organizations winning with AI aren’t just building better models. They’re building better context infrastructure.

Jeff Weiner, Executive Chairman of LinkedIn and Founding Partner of Next Play Ventures, closed the summit with a reality check for data and AI leaders: capitalizing on board-level mandates requires translating technical capabilities into business outcomes. His message connected the technical foundation explored throughout the day to the strategic leadership required to make it real.

Below, we break down what happened in each session and why you’ll want to watch on demand.

All session recordings now available

Unlocking AI’s Potential Through Context Management

Shirshanka Das, CTO & Co-Founder, DataHub

Context engineering was supposed to solve AI’s scalability challenges. Instead, tactical approaches are creating compounding technical debt. Fragmented RAG pipelines, application-specific prompt templates, and isolated memory systems can’t scale across the enterprise.

In this opening keynote, Shirshanka Das diagnosed why these approaches fail and proposed a systematic alternative: context management.

His key argument: metadata designed for human consumption differs fundamentally from context engineered for AI agents. Traditional metadata management focuses on discovery and governance for data teams. AI agents need rich, interconnected context that enables them to read, write, and act safely on enterprise data. Without this foundational layer, models fail in production environments.

I’m predicting that context engineering, which has started off as this gold rush towards making AI more reliable for application development, is going to have its management moment.

Shirshanka Das

CTO & Co-Founder, DataHub

Shirshanka outlined the essential building blocks of an enterprise context platform and explained why current approaches create bottlenecks rather than competitive advantages. The session explored emerging architectural patterns that transform context from a constraint into a strategic asset, revealing how organizations can move beyond one-off implementations to systematic context management that scales organization-wide. 

The implications are transformative: context management isn’t just about making AI work better. It’s about unlocking entirely new possibilities for enterprise intelligence that weren’t achievable before.

For teams struggling with inconsistent AI outputs and brittle systems, this keynote offers a practical path forward.

See Shirshanka’s complete vision for context management →

Agentic Workflows in Data Catalog: Beyond “Talk to Your Data”

Praveen Kanamarlapudi and Ravi Sharma, Apple

As Apple’s data landscape grew, manual catalog management became unsustainable. Ravi Sharma and Praveen Kanamarlapudi revealed how Apple uses agentic workflows to transform their data catalog from passive documentation into an active governance partner.

Their approach goes beyond conversational interfaces. Agents act as “digital stewards.” They continuously scan metadata, identify gaps, and propose updates. They help Apple’s team automate catalog quality, access management, metadata enrichment, documentation, anomaly detection, and retention recommendations. When policies change, agents adjust workflows automatically.

To do more … you have to bring in more intelligence. And that’s where agentic systems can go beyond.

Ravi Sharma

Apple

For teams wondering how to deploy agents safely in data catalogs, this session demonstrates what’s possible when you balance automation with human oversight. Apple’s phased approach and architectural patterns provide a practical blueprint for agent deployment.

See how to safely deploy agents in your data catalog →

Convergence of Context: Moving Towards a Global Catalog for Netflix

Nitin Sarma, Sr. Engineering Manager Data Discovery, Governance & Experiences, Netflix

Netflix’s expansion into ads, live events, and games created exponential data growth. New data practitioners faced an impossible challenge: finding reliable datasets when discovery was siloed across teams and institutional knowledge didn’t scale.

This challenge led Netflix to reframe the problem entirely. Why limit discovery to data? Software engineers need to understand API impacts. ML practitioners need to trace model training sources. Product managers need to connect business metrics to underlying datasets. Netflix’s answer: a global catalog spanning all technical assets.

We will be doing Netflix a huge service if we’re thinking about these problems in a more global manner.

Nitin Sarma

Sr. Engineering Manager Data Discovery, Governance & Experiences, Netflix

Nitin explained how this approach solves both present and future challenges. The metadata infrastructure Netflix is building for human discovery will serve as the context foundation for AI agents. Organizations that build robust discovery systems today are inherently preparing for agentic AI tomorrow.

For teams grappling with siloed discovery across data, APIs, and services, Netflix’s global catalog strategy offers a practical path forward.

Learn about Netflix’s vision for unified discovery →

Leading Through the AI Revolution: A Conversation with Jeff Weiner

Jeff Weiner, Founding Partner, Next Play Ventures & Executive Chairman, LinkedIn, with Swaroop Jagadish

Jeff Weiner shared hard-earned wisdom from scaling LinkedIn from 33 million to 690 million members through data-driven leadership. His core framework: “The long-term value of any organization is based on the speed and quality of its decisions.” That speed comes from getting the right information to the right people at the right time, built on a foundation of trust.

Trust is consistency over time. That applies not only to people, it applies to data. It applies to my sources of data. It applies to the pipelines through which the data is being distributed or transported.

Jeff Weiner

Executive Chairman of LinkedIn and Founding Partner of Next Play Ventures

For AI and agents, the same principles apply. Build trust through consistency, understand data sources, and know when to trust versus verify. 

Jeff’s career advice for professionals: use AI extensively to understand it, develop your sense of taste (a distinctly human quality AI hasn’t replaced), share your learnings with teams (not just outputs), and shift from fixing customer problems to helping them build what they envision.

His message for data and AI leaders: you finally have board-level mandates, but seizing this moment requires translating technical capabilities into business outcomes and understanding both intended and unintended consequences of your actions.

See Jeff Weiner’s complete conversation on leadership, trust, and navigating the AI revolution →

FinServ Compliance: Making Regulations Work for You

Ravi Josyula, Head of Enterprise Data, Webster Bank; Sid Narayan, Head of Data Governance, Valley Bank, with Stephen Goldbaum

BCBS 239 requires financial institutions to prove data quality, traceability, and governance. Most firms approach this as a compliance burden, spending heavily on manual documentation that becomes outdated immediately. Ravi Josyula and Sid Narayan shared a different approach.

Their core insight: build infrastructure for AI agents, and regulatory compliance follows naturally. Manual lineage documented in spreadsheets becomes outdated after each change. It requires expensive consultant reviews for every audit. In contrast, automated element-level lineage serves both regulatory exams and future AI models simultaneously.

The reason this works: regulations and AI require the same thing—high-quality, well-governed, traceable data. Build once, satisfy both.

To be competitive in today’s market, you have to be a leader in AI.

Sid Narayan

Head of Data Governance, Valley Bank

For financial services teams balancing compliance costs with AI ambitions, this session demonstrates how to build infrastructure that serves both goals.

Learn real-world FinServ compliance strategies →

The Rhythm of AI: Creativity, Metadata, and the Next Wave of Innovation

Alex Pall, Founder, The Chainsmokers & Mantis Venture Capital, with Shirshanka Das

In this fireside chat, Alex Pall, half of The Chainsmokers and founder of Mantis VC, brought a creative industry perspective to enterprise AI challenges. Drawing from his unique position as both artist and investor, Alex explored how AI tools augment creative workflows while artists retain creative control.

The conversation tackled a critical question for the AI era: attribution and responsible use. Alex emphasized why metadata becomes essential infrastructure for rights management and ethical AI adoption in creative industries. When The Chainsmokers’ music flows through streaming platforms, bars, and international markets, metadata ensures proper attribution at every step.

When our songs are getting distributed to bars across America, metadata is what’s attributing the rights back to us so that we can continue to earn a living.

Alex Pall

Founder, The Chainsmokers & Mantis Venture Capital

Enterprise AI faces the same challenge: tracking data provenance, maintaining audit trails, and ensuring proper attribution as data moves through training pipelines and into production models.

From his investor perspective, Alex emphasized why data foundations matter for AI success. Most AI deployments fail because companies try to bolt new technology onto legacy systems. Organizations that succeed build natively for AI and prioritize usage retention over vanity metrics.

See Alex Pall’s perspective on AI, creativity, and data infrastructure →

Data Supply Chain Visibility: Practical Benefits of End-to-End Lineage

Srikanth Devidi, Head of Data and Experimentation Platform, Robinhood Markets, Inc.; Raghavendra Rojkhird, Head of Data Engineering and Governance, Block, with John Joyce

Understanding where data comes from and where it goes is essential for operating at scale under financial industry regulations. For Robinhood’s Srikanth Devidi and Block’s Raghavendra Rojkhird, complete data traceability was non-negotiable. Both leaders explained their strategies for building end-to-end lineage across batch, streaming, and BI systems.

Their approaches differed strategically. Robinhood prioritized completeness from day one, using Spline agents to extract lineage from thousands of Spark jobs and standardizing workflows so lineage is captured automatically. Block took a bottom-up approach, starting with business-critical use cases and working backwards.

Both emphasized a crucial cultural shift: embedding lineage as a requirement in development workflows, not an afterthought.

Lineage isn’t the end goal. It’s the foundation that enables the next generation of automation.

Srikanth Devidi

Head of Data and Experimentation Platform, Robinhood Markets, Inc.

The results demonstrate why lineage matters. Robinhood unlocked UK and EU market launches by automating GDPR compliance checks across their entire data lake. Block now estimates blast radius when purging data, preventing unintended downstream impacts and broken pipelines.
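
Block’s blast-radius idea can be sketched as a breadth-first walk over downstream lineage edges. The graph, table names, and function below are hypothetical illustrations of the concept, not Block’s or DataHub’s actual implementation:

```python
from collections import deque

def blast_radius(lineage, asset):
    """Breadth-first walk over downstream edges: every asset reachable
    from `asset` would be affected if it were purged."""
    affected, queue = set(), deque([asset])
    while queue:
        node = queue.popleft()
        for downstream in lineage.get(node, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# Hypothetical lineage: a raw table feeds two derived assets,
# one of which feeds a dashboard.
lineage = {
    "raw.events": ["agg.daily_events", "ml.features"],
    "agg.daily_events": ["bi.dashboard"],
}
print(sorted(blast_radius(lineage, "raw.events")))
# ['agg.daily_events', 'bi.dashboard', 'ml.features']
```

With lineage captured automatically at development time, a check like this can run before any purge or schema change, surfacing downstream consumers that would otherwise break silently.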

This session provides practical frameworks for implementing lineage at scale, building organizational buy-in, and demonstrating business impact from leaders who’ve successfully deployed it.

Learn practical lineage implementation strategies →

Context for Agents: Fireside Chat with João Moura

João Moura, CEO, CrewAI, with Swaroop Jagadish

The MIT State of AI in Business 2025 study, which reported that 95% of AI pilots fail, made headlines. João Moura sees a different reality at CrewAI, where customers run hundreds of thousands of agents in production. In conversation with Swaroop Jagadish, he explained what separates successful deployments from failures. The disconnect isn’t technical capability. It’s organizational readiness.

The main problem right now in getting these projects that are using AI agents and AI in general to success: it’s actually the people. It’s not the tech.

João Moura

CEO, CrewAI

At CrewAI’s scale, João sees deployment patterns emerging across industries. Agents are moving beyond IT departments into business units. Back-office teams in revenue, support, and logistics are deploying agents for operational tasks. Financial services firms are testing front-facing use cases.

The biggest unlock? “Data has gravity,” João explained, meaning where and how you store data determines what’s possible with agents.

Learn the organizational shifts, data architecture decisions, and deployment patterns that separate successful agent implementations from failed pilots at enterprise scale.

See how to scale AI agents in your enterprise →

How Foursquare Built a Data Marketplace Using Metadata

Vikram Gundeti, CTO, Foursquare, with Shirshanka Das

Geospatial data delivers valuable insights across industries, from site selection to insurance risk assessment, but data scientists often lack the specialized GIS expertise to work with it.

Foursquare’s solution: the FSQ Spatial H3 Hub, a data marketplace that indexes geospatial data to a grid system, transforming complex formats into tabular data accessible through familiar tools like Spark, DuckDB, and Python.
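
The indexing idea can be illustrated with a toy square grid (real H3 uses hexagonal cells at multiple resolutions; the function and data here are made up for illustration): each coordinate maps to a cell id, turning point data into tabular rows that tools like Spark or DuckDB can group and join on.

```python
def grid_cell(lat, lon, cell_deg=0.5):
    """Map a coordinate to a coarse grid cell id (a toy stand-in for an
    H3-style spatial index; real H3 uses hexagonal cells)."""
    row = int((lat + 90) // cell_deg)
    col = int((lon + 180) // cell_deg)
    return f"r{row}c{col}"

# Hypothetical points of interest, indexed into (cell, name) rows.
# Nearby points land in the same cell, so joins and aggregations
# become ordinary tabular operations.
pois = [
    {"name": "Cafe A", "lat": 40.7421, "lon": -73.9911},
    {"name": "Cafe B", "lat": 40.7428, "lon": -73.9902},
    {"name": "Museum", "lat": 48.8606, "lon": 2.3376},
]
rows = [(grid_cell(p["lat"], p["lon"]), p["name"]) for p in pois]
```

Once data is keyed by cell, a metric like a walk score becomes a group-by over cells rather than a specialized GIS computation.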

Making these geospatial datasets accessible created new enterprise-scale challenges, including governance, versioning, and lineage. Metrics like walk scores depend on points of interest, transit, and road data. When any of that data changes, which happens constantly, the metrics require recomputation. Lineage becomes mission-critical for tracking versions and dependencies.

Every time the place data changes, which it constantly does, or the roads or transit information changes, you have to recompute it. And that lineage needs to be tracked. You need to understand which version of the data you’re using.

Vikram Gundeti

CTO, Foursquare

Vikram’s philosophy: “Treat data with the same rigor as software.” Foursquare built versioning, lineage, and access controls using DataHub to create a two-sided marketplace that benefits both data consumers and producers.

This session reveals how Foursquare uses metadata to democratize geospatial data while maintaining enterprise-grade lineage, governance, and attribution across a thriving data marketplace.

Learn how metadata unlocks governed data marketplaces →

Shift-left Governance: Enabling Engineering Teams to Define Data Policies

Enrique Sosa, Technical Product Manager, Aledade, with John Joyce

Aledade processes billions of healthcare records for 3 million lives. For them, governance is essential for customer trust and compliance. But their culture prizes shipping quickly and empowering engineers. The challenge: implement governance without killing velocity.

How do we prevent governance policies from interfering with our development velocity?

Enrique Sosa

Technical Product Manager, Aledade

Enrique introduced a shift-left approach: embed governance in existing workflows. Now, Aledade puts security mechanisms in native environments like Snowflake and Databricks. Engineers apply tags as they normally would, rather than inputting information directly into the data catalog. DataHub intercepts those tags and organizes assets automatically. The incentive for the engineers: “If you want people to discover your assets easily, document them in DataHub.”
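
The interception pattern can be sketched in miniature: a hypothetical sync job harvests tags engineers already set in the warehouse and maps them onto catalog classifications. The mapping, names, and defaults below are illustrative assumptions, not DataHub’s actual mechanism:

```python
# Illustrative mapping from native warehouse tags to catalog policies.
TAG_TO_POLICY = {
    "pii": "Restricted",
    "phi": "Restricted",
    "public": "Open",
}

def classify_assets(native_tags):
    """native_tags: {asset_name: [tags set natively in the warehouse]}.
    Returns a catalog classification per asset."""
    catalog = {}
    for asset, tags in native_tags.items():
        policies = {TAG_TO_POLICY[t] for t in tags if t in TAG_TO_POLICY}
        if "Restricted" in policies:          # most restrictive wins
            catalog[asset] = "Restricted"
        elif policies:
            catalog[asset] = policies.pop()
        else:
            catalog[asset] = "Needs review"   # untagged assets get flagged
    return catalog

catalog = classify_assets({"patients": ["phi"], "metrics": ["public"], "tmp": []})
```

The point of the pattern is that engineers never leave their native environment: tagging happens in Snowflake or Databricks as usual, and the catalog stays organized as a side effect.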

The session reveals patterns for embedding governance into developer workflows without manual gates, showing how to move policies from documents into automated enforcement.

Learn how to transform governance into a developer productivity tool →

Driving Data Catalog Adoption Through Psychology and Design

Björn Barrefors, Metadata Management Lead, ICA Gruppen

Adoption, not technology, is often governance’s biggest challenge. Björn Barrefors at ICA Gruppen, Sweden’s largest food retailer, demonstrated this by treating DataHub’s rollout like a competitive product launch, applying psychology and design principles to drive adoption.

Make governance feel like help, not homework, and you’ll build trust and momentum with users.

Björn Barrefors

Metadata Management Lead, ICA Gruppen

Björn’s design principles: clean UI matters, use familiar terminology, and think ecosystem (as with YouTube, success requires aligning incentives across all stakeholder groups). His philosophy: “Done is better than perfect.”

This session reveals tactical strategies for building user-driven governance through product thinking, user psychology, and deliberate design choices.

Learn ICA’s adoption playbook for data governance →

Metadata Masterclass: Scaling Across Global Enterprises

Mikhael Mazu, VP – Head of Metadata Management

Mikhael Mazu brought 15+ years of data strategy experience in financial services to share frameworks for metadata transformation at scale. His core insight: metadata solves data’s fundamental trust problem by creating transparency.

His clearing platform transformation demonstrated this principle. Migrating a 17-year-old system serving 36 businesses, his team made a bold bet: spend six months building canonical data models and metadata architecture before touching business logic. The result: a project expected to take 36 months completed in 22 months.

Once we had the foundation of our metadata architecture, we cut the time of the delivery of the project almost by two.

Mikhael Mazu

VP – Head of Metadata Management

For leaders building enterprise metadata strategies, this masterclass offers frameworks for driving change through strategy, leadership, and demonstrated value in regulated industries.

Learn how to build enterprise-wide metadata alignment →

Building the context layer

Throughout CONTEXT, a pattern emerged: the foundations you build today determine whether your AI initiatives scale or stall.

The teams moving fastest aren’t chasing better models. They’re building trust through end-to-end lineage, enabling discovery through metadata, and embedding governance at the source. They’re treating internal tools like products and applying software rigor to data. 

This is how production AI gets built. And these foundations set the stage for context management to power agentic AI that delivers sustainable business value.

Our CTO, Shirshanka Das, opened CONTEXT by predicting context management would dominate AI discourse over the next year. After twelve sessions from practitioners building at scale, it’s clear why: metadata isn’t just documentation anymore. It’s part of the foundation that makes AI safe, explainable, and ready for production.

Watch all sessions on demand to see the architectures, frameworks, and lessons from teams moving AI from pilot to production.
