
From Digital Maturity to AI Value: An Interview With Zoho’s Ram Ramamoorthy

Research By: John Annand, Thomas Randall, Info-Tech Research Group

At ZohoDay26, Info-Tech’s John Annand sat down with Ramprakash “Ram” Ramamoorthy (Director of AI Research at Zoho). Over a wide-ranging conversation, John and Ram discussed Zoho’s go-to-market strategy for AI capabilities. This note provides a structured summary of that interview.

John Annand interviewing Ram Ramamoorthy


Three themes emerged from the discussion of Zoho’s go-to-market strategy for AI capabilities:

  • Digital and architectural foundations precede AI value.
  • Policy boundaries must be embedded directly into the development environment.
  • Smaller language models or purpose-built machine learning systems should be preferred wherever possible.

1. Digital and Architectural Foundations Precede AI Value

The core of John and Ram’s discussion is that AI value is downstream of digital maturity. Zoho’s core message to CIOs and CTOs is “fix your digital foundation.” Ram is explicit that organizations, particularly small and mid-sized enterprises, must first formalize their digital strategy. This includes moving off spreadsheets, consolidating fragmented systems, and ensuring that core business processes are embedded within integrated applications. AI maturity, in Ram’s framing, is an outcome of a coherent digital strategy.

Ram gives several examples. When one organization deployed AI for ticket assignment within Zoho Desk, it encountered an orchestration problem: if a support agent is on leave or over capacity, the assignment logic must draw on HRMS records, workload management, reporting hierarchies, escalation structures, and SLA data. AI systems therefore require a unified enterprise architecture layer that integrates data, processes, roles, and constraints.

Deploying high-value AI capabilities is fundamentally about business process mapping and systems integration. Zoho’s position is that loosely coupled integrations create clutter. AI demands natively connected systems with consistent identity management and privacy controls. For CIOs and CTOs over the next 12 months, Ram recommends de-siloing, rationalizing the application portfolio, reducing spreadsheet dependency, formalizing integration patterns, and embedding privacy controls. AI initiatives launched into fragmented environments will underperform or generate complexity rather than value.

2. Controlled Customization and the Role of Platform Guardrails

A second theme centers on the tension between customization and governance. John raises a typical infrastructure concern: how to maintain discipline across SDLC hygiene, consistency, auditability, and compliance.

Ram’s response highlights an upcoming Zoho announcement: AppOS. AppOS is Zoho’s framework that enables developers to build apps inside the Zoho platform using a wide range of LLMs. The idea is to provide an organization’s users with a shared foundation that embeds business rules, identity, and permissions into the development process. AppOS is, as Ram calls it, “vibe coding with brakes.”

Zoho’s approach addresses the evolution of citizen development. The person closest to the business problem may understand requirements best, but not necessarily security models, audit logging, access control, or compliance obligations. AppOS abstracts and standardizes those concerns so builders can focus on business logic while inheriting governance automatically.
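The idea of builders inheriting governance automatically can be illustrated with a small sketch. The decorator, the in-memory role store, and the audit log below are all hypothetical stand-ins for platform services, not AppOS APIs:

```python
import functools

AUDIT_LOG: list[str] = []
USER_ROLES = {"asha": {"sales.read"}, "guest": set()}  # stand-in identity store

def governed(permission: str):
    """Sketch of a platform guardrail: the wrapper enforces permissions
    and writes the audit trail, so the citizen developer's function body
    contains only business logic."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user: str, *args, **kwargs):
            if permission not in USER_ROLES.get(user, set()):
                AUDIT_LOG.append(f"DENY {user} {fn.__name__}")
                raise PermissionError(f"{user} lacks {permission}")
            AUDIT_LOG.append(f"ALLOW {user} {fn.__name__}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@governed("sales.read")
def quarterly_pipeline(region: str) -> str:
    # The only code the builder (or an LLM) writes: pure business logic.
    return f"pipeline report for {region}"

print(quarterly_pipeline("asha", "EMEA"))  # pipeline report for EMEA
```

The builder never touches access control or audit logging; the platform layer supplies both, which is the “vibe coding with brakes” idea in microcosm.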

3. Right-Sized AI, Explainability, and Purpose-Built Models

The third theme addresses AI design philosophy, particularly explainability and operational reliability. Zoho’s approach emphasizes “right-sizing” models. Rather than defaulting to large, highly general language models, the company prefers smaller (7B-8B parameter) language models or purpose-built machine learning systems wherever possible. The rationale is twofold:

  1. Smaller models are more explainable.
  2. Many enterprise use cases do not require broad generalization.

For constrained tasks (email summarization, spam detection, or anomaly detection), purpose-built models with clearly defined variables provide statistical traceability and debuggability. For example, spam classification can reference domain age, link structure, headers, and HTML patterns without invoking a general-purpose generative model.
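The traceability of a purpose-built model can be made concrete with a toy scorer over exactly the features named above. The weights and threshold are invented for illustration; a production system would learn them from labeled data:

```python
def spam_score(domain_age_days: int, link_count: int,
               has_spf: bool, raw_html_ratio: float) -> float:
    """Linear score over explicit, named features. Each term's
    contribution can be read off directly -- the statistical
    traceability a general-purpose generative model cannot offer.
    Weights here are illustrative, not calibrated."""
    score = 0.0
    if domain_age_days < 30:       # very new sending domains are suspect
        score += 2.0
    score += 0.3 * link_count      # link-heavy bodies raise the score
    if not has_spf:                # missing sender authentication
        score += 1.5
    score += 2.0 * raw_html_ratio  # fraction of body that is raw HTML markup
    return score

def is_spam(**features) -> bool:
    return spam_score(**features) > 3.0

print(is_spam(domain_age_days=5, link_count=12,
              has_spf=False, raw_html_ratio=0.4))   # True
```

Debugging a misclassification means inspecting four numbers, not probing an opaque generative model.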

This philosophy becomes particularly pronounced in Zoho’s ManageEngine and AIOps contexts. IT operations data is fully digital but not natural language, and LLMs optimized for conversational reasoning are poorly suited to raw operational telemetry. Instead, Zoho uses knowledge graphs to establish system relationships, anomaly detection engines to identify deviations from steady state, and forecasting models to anticipate issues. Only at the final interface layer does a smaller LLM translate outputs into human-readable explanations.

In this layered architecture, knowledge graphs establish relational context, traditional ML models detect anomalies and root causes, and smaller models summarize findings for human operators. Ram posits that this structure minimizes hallucination risk and improves explainability while preserving usability for less experienced engineers.
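The division of labor in this layered architecture can be sketched with a minimal example: a classic z-score detector plays the role of the anomaly engine, and a templated sentence stands in for the final LLM summarization layer. The metric name, series, and two-sigma threshold are all illustrative assumptions:

```python
from statistics import mean, stdev

def detect_anomalies(series: list[float], threshold: float = 2.0):
    """Toy anomaly engine: flag points more than `threshold` sample
    standard deviations from the mean (steady state)."""
    mu, sigma = mean(series), stdev(series)
    return [(i, x) for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

def summarize(metric: str, anomalies) -> str:
    """Stand-in for the interface layer: turn structured findings into a
    human-readable sentence. Templated here; in Zoho's architecture a
    smaller LLM would do this translation."""
    if not anomalies:
        return f"{metric}: steady state, no anomalies."
    pts = ", ".join(f"t={i} ({v})" for i, v in anomalies)
    return f"{metric}: {len(anomalies)} anomaly(ies) at {pts}."

latency_ms = [21, 22, 20, 23, 21, 95, 22, 21]
print(summarize("api.latency_ms", detect_anomalies(latency_ms)))
```

The detection and root-cause work happens in deterministic, inspectable layers; language generation is confined to the last step, which is why hallucination risk stays low.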

Conclusion

Across the conversation, Ram presented a consistent vision of Zoho’s strategy and positioning:

  • AI is embedded within a broader platform, not treated as a standalone feature.
  • Architectural coherence and governance precede AI success.
  • Customization must be controlled by structural guardrails.
  • Smaller, purpose-built models are often superior to large, generalized ones in enterprise contexts.
  • LLMs are most effective at translation and summarization, not as primary reasoning engines over structured operational data.

Ram’s unifying argument is that enterprise AI success depends on disciplined architecture and right-sized model selection.
