Breaking News
Explainable AI to Drive Half of GenAI Deployments by 2028 as Trust Becomes Critical: Gartner
2026-03-31
Gartner predicts that explainable AI (XAI) will become a central pillar of enterprise AI strategies, with investments in LLM observability expected to reach 50% of generative AI deployments by 2028, up sharply from around 15% today.
The shift reflects growing pressure on organizations to ensure transparency, reliability, and accountability as generative AI systems scale across business-critical functions.
XAI refers to a set of capabilities that help interpret how AI models work—explaining their outputs, identifying potential biases, and assessing their strengths and limitations. Alongside this, LLM observability tools are designed to monitor and evaluate the behavior of large language models in real-world environments, tracking factors such as hallucinations, bias, token usage, and output quality rather than just traditional IT metrics like speed or uptime.
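To make the contrast with traditional IT monitoring concrete, here is a minimal sketch of the kind of per-response record an LLM observability tool might produce. The metric names and the grounding heuristic are illustrative assumptions, not any vendor's actual API; real tools typically use grader models or human review rather than word overlap.

```python
from dataclasses import dataclass

@dataclass
class ResponseMetrics:
    # Output-quality dimensions tracked alongside classic IT
    # metrics such as latency and token spend.
    tokens_used: int
    latency_ms: float
    grounded: bool   # does the answer stay within the retrieved context?
    refusal: bool    # did the model decline to answer?

def evaluate_response(answer: str, context: str,
                      tokens: int, latency_ms: float) -> ResponseMetrics:
    """Toy scorer: treats an answer as 'grounded' only if every
    sentence shares at least one word with the retrieved context."""
    context_words = set(context.lower().split())
    sentences = [s for s in answer.split(".") if s.strip()]
    grounded = all(
        set(s.lower().split()) & context_words for s in sentences
    )
    refusal = answer.strip().lower().startswith(("i cannot", "i can't"))
    return ResponseMetrics(tokens, latency_ms, grounded, refusal)

m = evaluate_response(
    answer="Revenue grew 12 percent.",
    context="Quarterly revenue grew 12 percent year over year.",
    tokens=18,
    latency_ms=240.0,
)
print(m.grounded, m.refusal)  # True False
```

The point of the sketch is the shape of the data: each model response is scored on quality dimensions (grounding, refusal) in addition to cost and speed, which is what distinguishes LLM observability from uptime monitoring.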
According to Gartner analyst Pankaj Prasad, the rapid expansion of generative AI is creating a widening gap between adoption and trust. He noted that explainability helps clarify why a model produces a specific output, while observability ensures that the system behaves consistently and reliably over time.
Without these capabilities, Gartner warns that organizations may be forced to limit AI deployments to low-risk or internal use cases where outputs can be easily verified, restricting the broader return on investment from AI initiatives.
The need for stronger governance is becoming more urgent as the generative AI market continues to expand. Gartner estimates that the global market for GenAI models will surpass $25 billion in 2026 and grow to $75 billion by 2029, driven by widespread adoption across industries.
As AI systems become more embedded in decision-making processes, enterprises are increasingly focused on evaluating output quality—such as factual accuracy, logical consistency, and bias—rather than simply measuring performance efficiency. This shift is also driving the adoption of new validation approaches, including human-in-the-loop systems and continuous evaluation frameworks.
Gartner advises organizations to implement explainability tracing for high-impact use cases, adopt multidimensional observability platforms that monitor both performance and output quality, and integrate evaluation metrics into development pipelines to ensure continuous validation before deployment.
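The last of these recommendations, wiring evaluation metrics into the development pipeline, can be sketched as a simple pre-deployment quality gate. The evaluation cases, metric names, and threshold values below are illustrative assumptions, not figures Gartner prescribes:

```python
# Minimal sketch of a pre-deployment evaluation gate.
# EVAL_SET and THRESHOLDS are illustrative, not Gartner-prescribed.

EVAL_SET = [
    {"prompt": "2 + 2 = ?", "expected": "4"},
    {"prompt": "Capital of France?", "expected": "Paris"},
]

THRESHOLDS = {"accuracy": 0.95, "hallucination_rate": 0.05}

def run_eval(model_fn) -> dict:
    """Score a candidate model on a fixed evaluation set."""
    correct = sum(
        model_fn(case["prompt"]).strip() == case["expected"]
        for case in EVAL_SET
    )
    accuracy = correct / len(EVAL_SET)
    # In practice hallucinations are judged by grader models or humans;
    # here any wrong answer counts as one, for simplicity.
    return {"accuracy": accuracy, "hallucination_rate": 1 - accuracy}

def deployment_gate(scores: dict) -> bool:
    """Allow deployment only if every metric clears its threshold."""
    return (
        scores["accuracy"] >= THRESHOLDS["accuracy"]
        and scores["hallucination_rate"] <= THRESHOLDS["hallucination_rate"]
    )

def toy_model(prompt: str) -> str:
    return {"2 + 2 = ?": "4", "Capital of France?": "Paris"}[prompt]

scores = run_eval(toy_model)
print(deployment_gate(scores))  # True
```

Running such a gate on every candidate build, rather than only at launch, is what turns one-off testing into the continuous validation Gartner describes.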
The firm also highlights the importance of aligning legal, compliance, and business stakeholders around explainability requirements, as governance and accountability become key differentiators in the next phase of AI adoption.
As enterprises move beyond experimentation toward scaled deployment, Gartner’s outlook underscores that trust mechanisms—built on explainability and observability—will be essential for unlocking the full potential of generative AI.