Response Evaluation and Monitoring Results
View Response Evaluation Results
To view evaluation results, navigate to AI Assistant → Configuration → Response Evaluation.

Evaluation Metrics
The MaiAgent platform provides response evaluation: every Q&A interaction is recorded and automatically scored on the following metrics (a simplified sketch of two of these scores follows the table):
| Metric | Description | Evaluation Target |  |  |
| --- | --- | --- | :-: | :-: |
| Faithfulness | Whether the LLM provides truthful answers rather than fabricated ones | LLM, RAG, Knowledge Base | ✅ | ✅ |
| Answer Relevance | Whether the LLM's response is on point, complete, and free of redundant text | LLM, RAG, Knowledge Base | ✅ | ✅ |
| Context Precision | Whether the RAG-retrieved content is relevant to the question | RAG, Knowledge Base | ✅ | ✅ |
| Answer Correctness | Accuracy of the response compared to the correct answer | LLM, RAG, Knowledge Base | ✅ | ✅ |
| Answer Similarity | Semantic similarity between the response and the correct answer | LLM, RAG, Knowledge Base | ✅ | ✅ |
| Context Recall | Whether RAG retrieval includes all relevant information, compared against the correct answer | RAG, Knowledge Base | ✅ | ✅ |
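Conceptually, the ground-truth metrics compare the assistant's output against a known correct answer, while the retrieval metrics compare retrieved chunks against the question or the correct answer. The Python sketch below illustrates only the idea behind Answer Similarity and Context Recall; it is a toy approximation using lexical (bag-of-words) similarity, and the function names, threshold, and scoring are assumptions for illustration, not MaiAgent's actual implementation (platforms typically compute these scores with embedding models or LLM judges).

```python
from collections import Counter
import math
import re

def _tokens(text: str) -> Counter:
    """Lowercased word counts; a toy stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def answer_similarity(response: str, ground_truth: str) -> float:
    """Cosine similarity between bag-of-words vectors, in [0, 1].
    Illustrates Answer Similarity; real systems embed the texts semantically."""
    a, b = _tokens(response), _tokens(ground_truth)
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def context_recall(retrieved_chunks: list[str], ground_truth: str,
                   threshold: float = 0.5) -> float:
    """Fraction of ground-truth sentences supported by at least one retrieved
    chunk. Mirrors Context Recall: did retrieval cover everything needed?"""
    sentences = [s for s in re.split(r"[.!?]\s*", ground_truth) if s.strip()]
    if not sentences:
        return 0.0
    supported = sum(
        1 for s in sentences
        if any(answer_similarity(s, chunk) >= threshold
               for chunk in retrieved_chunks)
    )
    return supported / len(sentences)

if __name__ == "__main__":
    gt = "Refunds are processed within 7 days. Shipping is free over $50."
    chunks = ["Our policy: refunds are processed within 7 days of the request."]
    print(answer_similarity("Refunds take 7 days to process.", gt))
    print(context_recall(chunks, gt))  # 0.5: only one of two facts was retrieved
```

In practice, substituting a sentence-embedding model for `_tokens` gives the semantic behavior these metrics are meant to capture; the lexical version is only meant to make the definitions concrete.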

Causes of Low Scores and Solutions
Low scores usually trace back to one of three causes, each with its own remedy (a triage sketch follows this list):

- **LLM capability issues**: the model cannot produce a correct answer even when the reference material contains it. Solution: switch to a more capable LLM.
- **RAG retrieval performance**: relevant information is not being retrieved for the question. Solution: contact MaiAgent official support.
- **Insufficient knowledge base content**: the knowledge base lacks the information needed to answer. Solution: supplement it with correct knowledge base data and FAQ content.
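This mapping from low metric scores to likely causes can be turned into a simple triage rule. The sketch below is an illustration only: the record shape, thresholds, and function names are hypothetical and are not part of the MaiAgent API. It flags which of the three causes above to investigate first, based on which metric family scored low.

```python
from dataclasses import dataclass

@dataclass
class EvaluationRecord:
    """Hypothetical shape for one scored Q&A interaction (not the MaiAgent schema)."""
    faithfulness: float
    answer_relevance: float
    context_precision: float
    answer_correctness: float
    answer_similarity: float
    context_recall: float

def triage(record: EvaluationRecord, threshold: float = 0.6) -> list[str]:
    """Map low metric scores to the causes listed above; threshold is illustrative."""
    advice = []
    # Retrieval metrics low -> RAG is not finding the right passages.
    if min(record.context_precision, record.context_recall) < threshold:
        advice.append("RAG retrieval performance: contact MaiAgent official support; "
                      "also check knowledge base coverage.")
    # Generation metrics low while retrieval looks fine -> LLM capability issue.
    if (min(record.faithfulness, record.answer_relevance) < threshold
            and record.context_recall >= threshold):
        advice.append("LLM capability: switch to a more capable LLM.")
    # Retrieval and grounding fine but the answer is still wrong -> content gaps.
    if min(record.answer_correctness, record.answer_similarity) < threshold and not advice:
        advice.append("Knowledge base content: supplement with correct data and FAQs.")
    return advice or ["All metrics above threshold; no action needed."]

print(triage(EvaluationRecord(0.9, 0.85, 0.4, 0.7, 0.75, 0.5)))
# -> flags retrieval first, matching the troubleshooting order above
```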
