Agentic AI – Architecting the future of Credit Risk at Ninjacart – Part B

In the first chapter of our Agentic AI series, we walked through the foundation: the core platform components painstakingly built to usher in a new era of automation at Ninjacart, specifically for Credit Risk analysis. These building blocks laid the groundwork for something more powerful: intelligent Agents capable of decision-making, learning, and collaboration.
Now, in this second part of the story, we journey deeper into the heart of the system—where innovation meets execution.
Architecting the future of credit risk

Detailed Design
For the Credit Risk Automation system, beyond the foundational platform components, the following technical elements were addressed:
- System Integration: Existing microservices were encapsulated as MCP Servers to enable seamless data ingestion from legacy and external systems.
- Memory Architecture: Implemented a dual-layer custom memory management system to support both short-term and long-term contextual retention for agents.
- Risk Modelling: Integrated advanced risk models to enhance decision-making accuracy and provide explainable outputs, ensuring model transparency and regulatory compliance.
- Agent-Based Execution: Deployed multiple specialised agents, each simulating the functions of its human counterpart within a carefully drawn bounded context, to parallelise and automate domain-specific tasks.
- Agent Orchestration Framework: Established an orchestration layer to coordinate inter-agent communication and workflows, ensuring consistent and goal-aligned output generation.
From Legacy to Intelligence
To bring these Agents to life, we didn't start from scratch. Instead, we tapped into the wealth of services already in our ecosystem. Our approach was clear: maximise the potential of what we already had while integrating state-of-the-art advancements to elevate the entire process.
When we set out to transform Credit Risk decisioning, we knew that automation alone wouldn't be enough. Our agents needed more than rules; they needed context. And that's where MCP (Model Context Protocol) came in.
MCP wasn't just another tool; it became the nerve center for our agentic system. It gave our agents the ability to operate dynamically by pulling in real-time business context. For Credit Risk, this meant accessing critical borrower data scattered across multiple internal services: financial history, transaction patterns, credit exposure, and more. What once took hours of manual stitching now happened in milliseconds. MCP made context available on demand, powering every decision with relevant, up-to-date insights.
The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardised way to connect LLMs with the context they need.
The data for credit decisioning is therefore powered by our existing services, exposed as MCP Servers. Some of the services we used to enable Credit Risk Automation are listed below, followed by a sketch of what exposing one of them looks like:
- USS (User store Service)
- DAM (Digital Asset Management Service)
- GEMS (Generic Entity MicroService)
- LSP (Loan Service Provider Service)
- Risk Service
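Our MCP servers are Spring AI based Java services, but the core idea fits in a few lines. Below is a minimal, illustrative sketch using the official MCP Python SDK; the server name, tool, and returned fields are hypothetical stand-ins for what a service like USS might expose.

```python
# Illustrative only: our real MCP servers are Spring AI based Java services.
# This sketch uses the official MCP Python SDK; the tool name and the shape
# of the returned data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("uss-user-store")  # hypothetical server name

@mcp.tool()
def get_borrower_profile(borrower_id: str) -> dict:
    """Return basic KYC and exposure data for a borrower (hypothetical shape)."""
    # In a real deployment this would delegate to the existing microservice.
    return {
        "borrower_id": borrower_id,
        "kyc_status": "VERIFIED",
        "credit_exposure": 125000.0,
        "active_loans": 2,
    }

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```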
Challenges faced during MCP implementation
- We had to take a step-wise approach to implementing Spring AI-based MCP servers in our existing microservices.
- First, we converted the existing services, which were running on Java 8 / Spring 2.x, to Java 21 / Spring 3.4.x.
- Second, we added the tools each service needed to expose to the agents.
- In addition, the rule book we discussed in Part A is a newly introduced service that was built with native MCP support from the start.
Risk Models

But tools, no matter how sophisticated, are only part of the equation. Impact comes from models that know where to look.
That's why, for Eligibility Analysis, we didn't rely on a single algorithm. Instead, we deployed a curated ensemble of machine learning models, each crafted to evaluate a specific dimension of creditworthiness. One model examined historical repayment behaviours. Another looked at market signals. Others captured behavioural trends, income stability, and utilization patterns. Like expert advisors working in sync, they delivered a 360° view of borrower risk.
Together, MCP and the model suite enabled something truly powerful: context-aware, precision-grade decisioning—automated, explainable, and fast. A system where agents don’t just act—they understand.
The risk models play a critical role in assessing a borrower’s creditworthiness. To achieve optimal performance, we conducted extensive evaluations across multiple modelling approaches and ultimately adopted the FinGPT framework, leveraging Qwen for domain-specific financial analysis and FinR1 for advanced reasoning capabilities.
In addition to foundation models, we incorporated XGBoost for structured data modelling and employed a Chain-of-Thought (CoT) prompting strategy to enhance decision explainability. This hybrid approach enables the system to provide a transparent, step-by-step rationale behind eligibility decisions, thereby improving trust and auditability.
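To make the hybrid approach a bit more concrete, here is a hedged sketch of pairing a structured-data XGBoost score with a Chain-of-Thought prompt so that every eligibility call carries a step-by-step rationale. The feature names, placeholder training data, and the llm callable are assumptions made for illustration; the production stack uses the FinGPT framework with Qwen and Fin-R1 as described above.

```python
# Hedged sketch: a structured-data XGBoost score feeding a Chain-of-Thought
# prompt, so the decision ships with a step-by-step rationale.
# Feature names, training data, and the llm callable are hypothetical.
import numpy as np
from xgboost import XGBClassifier

FEATURES = ["repayment_rate", "avg_monthly_volume", "utilization", "tenure_months"]

# Train on historical, labelled repayment outcomes (placeholder rows here).
X_train = np.array([[0.96, 4.2e5, 0.35, 18], [0.61, 1.1e5, 0.92, 4]])
y_train = np.array([1, 0])  # 1 = repaid on time, 0 = defaulted
model = XGBClassifier(n_estimators=50, max_depth=3).fit(X_train, y_train)

def eligibility_with_rationale(borrower: dict, llm) -> dict:
    x = np.array([[borrower[f] for f in FEATURES]])
    score = float(model.predict_proba(x)[0, 1])  # P(good repayment)

    # Chain-of-Thought prompt: ask the reasoning model to walk through the
    # evidence step by step before concluding, which keeps the decision auditable.
    prompt = (
        "You are a credit analyst. Think step by step.\n"
        f"Structured model score (probability of repayment): {score:.2f}\n"
        f"Borrower features: {dict((f, borrower[f]) for f in FEATURES)}\n"
        "1. Assess repayment history.\n"
        "2. Assess utilization and transaction volume.\n"
        "3. State the eligibility decision and its key drivers."
    )
    return {"score": score, "rationale": llm(prompt)}
```

In a setup like this, the llm callable would be backed by a reasoning model such as Fin-R1, and the returned rationale is what later feeds the decisioning report.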
Brains behind the bots
In general, an agent's memory is something we provide via context in the prompt passed to the LLM, helping the agent plan and react better given past interactions or data that is not immediately available.
There are three types of long-term memory:
- Episodic – This type of memory contains past interactions and actions performed by the agent. After an action is taken, the application controlling the agent stores the action in some kind of persistent storage so that it can be retrieved later if needed. A good example would be using a vector database to store the semantic meaning of the interactions.
- Semantic – Any external information that is available to the agent and any knowledge the agent should have about itself. You can think of this as a context similar to one used in RAG applications. It can be internal knowledge only available to the agent or a grounding context to isolate part of the internet-scale data for more accurate answers.
- Procedural – This is systemic information like the structure of the System Prompt, available tools, guardrails, etc. It will usually be stored in Git, Prompt and Tool Registries.
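As a rough illustration of the dual-layer idea, the sketch below keeps a short-term rolling buffer that feeds the next prompt and a long-term episodic store queried by similarity. The AgentMemory class, the toy embed() function, and the in-process list standing in for a vector database are all assumptions; our production memory layer is a custom implementation.

```python
# Rough sketch of a dual-layer agent memory: a short-term rolling buffer that
# goes straight into the next prompt, plus a long-term episodic store queried
# by similarity. embed() is a toy stand-in for a real embedding model, and the
# in-process list stands in for a vector database; names are hypothetical.
from collections import deque
from math import sqrt

def embed(text: str) -> list[float]:
    # Placeholder embedding based on vowel frequencies; a real system would
    # call an embedding model and persist the vectors in a vector database.
    return [text.count(c) / (len(text) or 1) for c in "aeiou"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class AgentMemory:
    def __init__(self, short_term_size: int = 10):
        self.short_term = deque(maxlen=short_term_size)    # recent turns only
        self.episodic: list[tuple[list[float], str]] = []  # long-term episodes

    def remember(self, event: str) -> None:
        self.short_term.append(event)
        self.episodic.append((embed(event), event))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.episodic, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]

    def build_context(self, query: str) -> str:
        # The context block injected into the prompt for the agent's next step.
        return ("Recent interactions:\n" + "\n".join(self.short_term) +
                "\n\nRelevant past episodes:\n" + "\n".join(self.recall(query)))
```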

Rewiring Risk
Agentic generally refers to the capacity to act independently and achieve outcomes through self-directed action. In the context of AI, it describes a system or agent that can autonomously make decisions, take actions, and adapt to changing circumstances. This means the agent doesn't require constant human guidance but can work towards specific goals independently.
To power our automation efforts, we implemented a multi-agent architecture using LangGraph. Each agent was purpose-built, designed with a singular function in mind, and together they operate as a coordinated system to streamline the Credit Risk process. Think of it as a decentralised intelligence network, where agents specialise, collaborate, and communicate to eliminate friction in decision-making. We also include a Human-in-the-Loop (HITL) step as part of the agent workflow to accommodate human feedback when the system is in doubt.
The advantage of this architecture is that each agent can scale individually based on need. Here is a sample of how all the above components come together in a single agent.

In the Credit Risk Automation workflow, four primary agents were deployed, each responsible for a specific stage of the decisioning pipeline (a minimal wiring sketch follows this list):
1. Document Verification Agent: This agent is responsible for validating the authenticity and completeness of borrower-submitted documents. It also performs structured data extraction from unstructured inputs using OCR techniques. Extracted attributes, such as Market License Numbers, are leveraged for downstream tasks like deduplication and identity validation.
Skills: OCR, Name matching, DOB matching, Address matching
2. Screening Operations Agent: Executes pre-eligibility risk assessments, including bank statement deduplication, behavioural analysis using historical transaction patterns (where available), and checks against internal fraud rings or blacklists.
Skills: Bank Statement Dedupe, Transactions Behaviour analysis, Blacklist analysis
3. Eligibility & Lender Selection Agent: Utilises internal risk models to assess borrower eligibility and dynamically selects the most suitable lending partner based on borrower-lender profile alignment.
Skills: Eligibility check using multiple models, Lender selection
4. Reporting Agent: Aggregates and summarises outputs from all upstream agents, generating a comprehensive decisioning report. This report serves both operational traceability and as an input to further processing or manual review when necessary.
Skills: Report generation
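Here is a minimal LangGraph wiring sketch of that four-agent pipeline, including a human-in-the-loop pause when the eligibility agent is not confident. The node bodies are placeholders, and the state fields and confidence threshold are assumptions; the real agents call the MCP tools, memory layer, and risk models described earlier.

```python
# Minimal LangGraph sketch of the four-agent pipeline with an HITL pause.
# Node bodies are placeholders; state fields and the confidence threshold
# are assumptions made for illustration.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt

class CreditState(TypedDict, total=False):
    application: dict
    documents_ok: bool
    screening: dict
    eligibility: dict
    report: str

def document_verification(state: CreditState) -> CreditState:
    return {"documents_ok": True}                 # OCR + matching would run here

def screening_operations(state: CreditState) -> CreditState:
    return {"screening": {"blacklisted": False}}  # dedupe + behaviour checks

def eligibility_and_lender(state: CreditState) -> CreditState:
    decision = {"eligible": True, "lender": "NBFC-A", "confidence": 0.62}
    if decision["confidence"] < 0.7:              # when in doubt, ask a human
        decision["human_note"] = interrupt({"needs_review": decision})
    return {"eligibility": decision}

def reporting(state: CreditState) -> CreditState:
    return {"report": f"Decision: {state['eligibility']}"}

graph = StateGraph(CreditState)
graph.add_node("document_verification", document_verification)
graph.add_node("screening_operations", screening_operations)
graph.add_node("eligibility_and_lender", eligibility_and_lender)
graph.add_node("reporting", reporting)
graph.add_edge(START, "document_verification")
graph.add_edge("document_verification", "screening_operations")
graph.add_edge("screening_operations", "eligibility_and_lender")
graph.add_edge("eligibility_and_lender", "reporting")
graph.add_edge("reporting", END)

# A checkpointer is required for interrupt()-based HITL: the run pauses at the
# interrupt and resumes once a reviewer supplies feedback for the same thread.
app = graph.compile(checkpointer=MemorySaver())
```

Each node can be swapped for a full sub-agent with its own tools and memory without changing the wiring, which is what lets the agents scale and evolve independently.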
Conclusion
What you've seen so far is just the surface. In the next installment, we'll dive deep into the deployment architecture of the agents and the components used to make sure our agents have proper auditing, traceability, and scalability.
Contributors
- Arya – Agents
- Deepan – Agents & Framework
- Jahnavi – Risk Models
- Nidhi – Risk Models
- Alan – MCP Servers
- Sharukh – Vision
- Vijay – Vision
References
- MCP – Model Context Protocol
- FinGPT – AI4Finance-Foundation/FinGPT (GitHub): Open-Source Financial Large Language Models
- Fin-R1 – Fin-R1: A Large Language Model for Financial Reasoning through…
- XGBoost – XGBoost Documentation (xgboost 3.0.2)
- Chain-of-Thought – Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- LangGraph – LangGraph documentation