Agentic AI Nexus: When Machines Decide

By Dr. Jasmin Cowin, Associate Professor, Touro University, USA

December 7, 2024

Artificial Intelligence (AI) is redefining how organizations approach automation and decision-making. While traditional AI focuses on supporting specific tasks, two groundbreaking approaches – Agentic AI and Retrieval-Augmented Generation (RAG) – are pushing the boundaries of autonomous operation.

Agentic AI refers to AI systems capable of leveraging advanced reasoning, strategic planning, and independent decision-making to address complex, multi-step problems. By drawing on extensive datasets and sophisticated analytical techniques, agentic AI has the potential to improve productivity across a range of industries, including supply chain management, cybersecurity, and healthcare. It is important to recognize that these capabilities represent a broad conceptual framework that continues to evolve, with ongoing research refining and expanding the underlying methods.

RAG is an approach designed to enhance generative language models by integrating them with information retrieval systems. In this framework, the model draws on external knowledge sources, allowing it to produce more accurate, current, and contextually relevant outputs than those generated by the language model alone. By grounding the text in retrievable facts, RAG reduces the likelihood of unsupported claims or “hallucinations.” As with agentic AI, RAG remains an area of active development, and future refinements may further improve its reliability and versatility.
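The retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal, self-contained illustration, not a production RAG system: the knowledge base, the keyword-overlap retriever, and the `generate` function are hypothetical stand-ins for a real vector store and language model.

```python
# Minimal sketch of the RAG pattern: retrieve external knowledge first,
# then generate an answer grounded in what was retrieved.
# All data and function names here are illustrative assumptions.

KNOWLEDGE_BASE = [
    "RAG grounds model outputs in retrieved documents.",
    "Agentic AI systems plan and execute multi-step tasks.",
    "Hallucinations are unsupported claims produced by language models.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call; a real model would condition on context."""
    return f"Answer to '{query}' grounded in {len(context)} retrieved passages."

def rag_answer(query: str) -> str:
    context = retrieve(query)          # 1. retrieve external knowledge
    return generate(query, context)    # 2. generate, grounded in that context

print(rag_answer("What are hallucinations in language models?"))
```

The key design point is the ordering: generation never runs on the query alone, which is what makes the output checkable against its sources.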

AI Agents Explored

While generative AI applications like chatbots and image creators can generate content, AI agents go a step further – they don’t just create, they act. Imagine an AI system that not only generates a response but also follows multiple complex steps to implement a solution.

According to IBM, AI agents refer to:

“(…) a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools. AI agents can encompass various functionalities beyond natural language processing, including decision-making, problem-solving, interacting with external environments, and executing actions.”

To illustrate, AWS offers a compelling example on its website:

“An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals. For example, consider a contact center AI agent that wants to resolve customer queries. The agent will automatically ask the customer different questions, look up information in internal documents, and respond with a solution. Based on the customer responses, it determines if it can resolve the query itself or pass it on to a human…”

Previously, deploying AI agents for complex tasks was a daunting challenge. But the game has changed with the introduction of foundation models and Large Language Models (LLMs). As McKinsey explains: “By moving from information to action—think virtual coworkers able to complete complex workflows – the technology promises a new wave of productivity and innovation.”

Multiple industry leaders have entered the space:

  • Amazon entered the game with Amazon Bedrock Agents by announcing, “Amazon Bedrock supports multi-agent collaboration, allowing multiple specialized agents to work together on complex business challenges.”
  • Apple launched its Apple Intelligence, which Tim Cook, Apple’s CEO, describes as “Our unique approach combines generative AI with a user’s personal context to deliver truly helpful intelligence.”
  • Google recently unveiled its Google Cloud AI agent ecosystem program and advertised it as “Build, deploy, and promote AI agents through Google Cloud’s AI agent ecosystem.”
  • Meta showcases CICERO, which is described as “The first AI to play at a human level in Diplomacy, a strategy game that requires building trust, negotiating and cooperating with multiple players.”
  • Microsoft introduced new agentic functionalities for its widely popular Copilot, stating, “We’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance, and supply chain team.”
  • Nvidia developed Generative AI-Powered Visual AI Agents, which are able to “combine both vision and language modalities to understand natural language prompts and perform visual question-answering. For example, answering a broad range of questions in natural language that can be applied against a recorded or live video stream.”

AI Agents’ Triple Threat

The emergence of AI agents presents a complex web of legal challenges that demand attention from policymakers and the public. The interconnected issues of privacy risks, systemic bias, and AI’s concerning capacity for anthropomorphic deception create a particularly thorny challenge. While AI agents promise enhanced efficiency and personalization, their need for extensive personal data, their potential to perpetuate societal biases, and their increasingly sophisticated ability to present themselves as human-like entities raise serious concerns about user protection and personal and societal impact. In particular, the anthropomorphic nature of these systems – their ability to engage in empathetic conversation and present human-like personas – adds another layer of complexity, as users may be more susceptible to manipulation when interacting with seemingly human agents. These challenges extend beyond existing regulatory frameworks designed for simpler AI systems, suggesting the need for new, comprehensive legislation that addresses the unique capabilities and deceptive potential of both agentic AI and RAG.

Furthermore, Agentic AI and RAG have the potential to profoundly alter the dynamics of several key operational domains. In monetary systems, agentic AI-driven algorithms could dynamically adjust interest rates, allocate financial resources, and orchestrate complex trading strategies far faster and more autonomously than human analysts. Although this might improve market liquidity and resilience, it could also introduce systemic vulnerabilities if the underlying models incorporate biased or incomplete data. In academic and research consortia, RAG-enabled models might accelerate literature reviews, hypothesis generation, and collaborative discovery, yet the speed and scale of automated knowledge production could devalue traditional peer review processes and erode established standards of scholarly rigor. In military ecosystems, agentic AI systems with advanced planning capabilities could streamline logistics, develop strategic battle plans, and synthesize intelligence sources using RAG-based retrieval. However, there is a risk that adversarial manipulation of source materials or subtle misalignment of decision-making objectives could compromise national security. Infrastructure domains, such as energy distribution and global communication networks, could benefit from real-time optimization and fault detection. Still, rapid, automated decision-making may reduce the capacity for human oversight, potentially amplifying the impact of unforeseen failures. 

Ultimately, integrating Agentic AI and RAG into organizational workflows demands not only nuanced human oversight and critical engagement but also robust accountability mechanisms at both horizontal and vertical levels. Horizontal accountability involves humans in the loop at the same organizational tier holding one another accountable through measures such as cross-functional evaluation, peer review, and lateral oversight; vertical accountability provides structured checks and balances through hierarchical governance bodies and regulatory agencies overseeing compliance.
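One concrete way such vertical checks and balances can appear in a workflow is a human-approval gate: autonomous actions above a risk threshold are blocked until a person signs off. The sketch below is a hedged illustration under assumed names and thresholds, not a prescribed governance design.

```python
# A minimal sketch of a vertical accountability gate: actions scored above
# a risk threshold require explicit human approval before execution.
# The threshold, action names, and scores are illustrative assumptions.

RISK_THRESHOLD = 0.7

def requires_approval(risk_score: float) -> bool:
    """High-risk actions escalate to human sign-off."""
    return risk_score >= RISK_THRESHOLD

def execute(action: str, risk_score: float, human_approved: bool = False) -> str:
    if requires_approval(risk_score) and not human_approved:
        return f"BLOCKED: '{action}' awaits human sign-off (risk={risk_score})"
    return f"EXECUTED: '{action}'"

print(execute("adjust interest rate", risk_score=0.9))
print(execute("send status report", risk_score=0.2))
```

The pattern keeps routine low-risk actions fully autonomous while preserving a structural point of human oversight for consequential decisions.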