Search Testing

Before officially deploying the LLM, you can use the search test preview feature to verify the relevance and content of the data the LLM retrieves.

Why is Search Testing Needed?

  1. Ensure Response Accuracy

  • The quality of AI assistant responses directly impacts customer satisfaction

  • Incorrect or irrelevant answers can damage brand trust

  2. Avoid Information Omissions

  • Prevent important product details or safety instructions from being overlooked

  • Optimize knowledge base usage efficiency

  3. Improve Answer Consistency

  • Ensure the same question receives consistent answers at different times

  • Avoid response instability caused by knowledge base updates

Through the search testing and fragment preview features, you can view the quality of the data fragments the AI assistant uses in real time, ensuring responses meet your Q&A standards.

How to Get Started?

  1. Go to the knowledge base page, select the knowledge base you want to inspect, and open its settings

  2. On the settings page, click Search Test to open the test page

Enter the content you want to search in the dialog box, for example: "What should I pay attention to when choosing a campsite?" After clicking the search button, the AI assistant searches for the documents most relevant to your query.

  3. View search results

The right side displays the content retrieved to answer this question. As shown in the image, there are 12 fragments (for the retrieval fragment count setting, see How to Create a Knowledge Base: Basic Settings). Each fragment shows:

  • Word count: The number of words the fragment contains

  • Hit count: The number of times the AI assistant referenced this fragment when answering (only official Q&A is counted; test Q&A is excluded from hit counts)

A higher hit count indicates that users ask more questions related to this fragment.
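Conceptually, the search test performs a top-k similarity retrieval over knowledge fragments and reports the word count and hit count metadata for each result. The product's actual retrieval implementation is not documented here; the sketch below is a minimal illustration assuming a simple bag-of-words cosine similarity, and all names (`search_test`, the fragment fields) are hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search_test(query, fragments, top_k=12):
    # Rank fragments by similarity to the query and return the top_k,
    # with the word count / hit count metadata shown in the preview pane.
    q = Counter(query.lower().split())
    scored = sorted(
        fragments,
        key=lambda f: cosine(q, Counter(f["text"].lower().split())),
        reverse=True,
    )
    return [
        {"text": f["text"], "word_count": len(f["text"].split()), "hits": f["hits"]}
        for f in scored[:top_k]
    ]

fragments = [
    {"text": "Choose a campsite on high flat ground away from water", "hits": 4},
    {"text": "Tent waterproof ratings are measured in millimeters", "hits": 1},
]
results = search_test("what to pay attention to when choosing a campsite", fragments, top_k=1)
# The campsite fragment ranks first; its hit count (4) suggests users ask about it often.
```

A real system would use vector embeddings rather than word overlap, but the ranking-plus-metadata shape of the results is the same idea.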

Search testing records past test Q&A sessions, so you can quickly select a historical record to retest and confirm that the same topics maintain or improve their data reference quality.

Checking Knowledge Fragment Quality

Search testing shows which fragments the AI assistant uses when answering, so you can check whether their content needs optimization. For example:

If the search results contain many blank fragments, and these blank fragments all belong to the file "Beginner Camping.pdf", check whether that file's content is correct and update it before official deployment.

Search Testing Helps You

Regularly Test Important Questions

  • Test core business-related questions frequently

  • Perform retesting after knowledge base updates

  • Record test results and compare improvements over time

Identify Content Gaps

  • Discover knowledge gaps: Test common questions to find queries with no matching results

  • Example: Testing "tent waterproof rating explanation" finds no relevant fragments → need to supplement waterproof specification documents
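Gap detection like the example above can be automated by flagging test queries whose best-matching fragment falls below a similarity threshold. This is a minimal sketch, assuming a simple token-overlap score; the function names and the 0.3 threshold are illustrative assumptions, not part of the product.

```python
def best_overlap(query, fragments):
    # Fraction of query tokens found in the best-matching fragment.
    q = set(query.lower().split())
    return max((len(q & set(f.lower().split())) / len(q) for f in fragments), default=0.0)

def find_gaps(test_queries, fragments, threshold=0.3):
    # Queries whose best match falls below the threshold likely indicate
    # topics missing from the knowledge base.
    return [q for q in test_queries if best_overlap(q, fragments) < threshold]

fragments = ["Pick a flat campsite away from rivers", "Pack out all trash when leaving"]
queries = ["choosing a campsite", "tent waterproof rating explanation"]
gaps = find_gaps(queries, fragments)
# The waterproof-rating query matches nothing, signaling a document to supplement.
```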

Evaluate Content Completeness

  • Check answer depth: Confirm whether AI answers cover all aspects of the question

  • Example: Query "camping safety precautions" → Check whether it covers all aspects such as weather, wildlife, fire safety, etc.

Question Testing Techniques

You can:

  • Ask the same question in different ways

  • Test complex scenarios and multi-condition queries

  • Verify the retrieval effectiveness of professional terminology and product names

These techniques let you comprehensively evaluate the accuracy and completeness of responses.
