How to Use the Web Scraping (Data Crawling) Feature
Simply enter a URL, and MaiAgent will crawl and structure the text and link data from the page, making it easy for you to quickly select and import data into your knowledge base to build your AI assistant.
Feature Purpose and Value
As an enterprise user, you may often receive instructions from management to reference or compile regulatory information from certain public websites.
If you have a technical background or engineering support, you might be able to extract data automatically by writing crawler programs. For non-technical personnel, however, this usually means manually organizing content page by page, which is not only time-consuming and labor-intensive but also prone to missing critical information.
In such cases, you can leverage MaiAgent's web crawler feature to quickly extract website content through a No-Code approach and automatically create structured data. This significantly improves information-organization efficiency and frees your time for higher-value core business activities.
How to Perform Web Crawling?
To create a page crawl request, follow these steps:
Create a Page Crawl Request
Navigate to "AI Features > AI Assistant > Crawler" in the left sidebar, and click the "+ Create Page Crawl Request" button in the upper right corner.

Enter URL
Enter the URL of the page you want to crawl and press the [Confirm] button.
Please note that the URL cannot exceed 200 characters.
If the status does not change, you can click the refresh button in the upper right corner to update the page status.
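The URL rule above can also be checked before submitting the form. Below is a minimal sketch in Python: the 200-character limit comes from this guide, while the helper name and the scheme check are our own assumptions, not part of MaiAgent's product.

```python
MAX_URL_LENGTH = 200  # limit stated in this guide for the crawl form


def validate_crawl_url(url: str) -> str:
    """Raise ValueError if the URL is too long or lacks a scheme; return it otherwise."""
    if len(url) > MAX_URL_LENGTH:
        raise ValueError(f"URL is {len(url)} characters; the limit is {MAX_URL_LENGTH}")
    if not url.startswith(("http://", "https://")):
        raise ValueError("URL must start with http:// or https://")
    return url


validate_crawl_url("https://example.com/regulations")  # passes silently
```

A check like this is handy when you are pasting long URLs with tracking parameters, which can silently exceed the limit.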


View Crawled Data
When the status shows completed, click "Import" on the right to view the crawled data entries.
Select Data
Check the boxes on the left to select the data you want to import into the knowledge base. After making your selections, click the "Import" button, and the data will be automatically imported into that AI assistant's knowledge base.


In the knowledge base, you can see the data presented as .md files, which can be configured with tags and metadata just like regular data.
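An imported entry might look roughly like the following Markdown file. The field names and front-matter layout here are illustrative only; the actual structure of MaiAgent's exported .md files may differ.

```markdown
---
source_url: https://example.com/regulations/page-1
tags: [regulations, public-site]
---

# Page Title Crawled from the Source Site

Structured body text extracted from the page, ready for
search testing and retrieval in the knowledge base.
```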

Web Crawler Usage Notes
Please ensure you have permission to crawl content from the target website
It is recommended to test with a small amount of data before performing large-scale crawling
After crawling is complete, you can verify data quality through the search test feature
Regularly update crawled data to maintain information timeliness
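On the first note above: many websites publish their crawl permissions in a robots.txt file. Here is a minimal sketch using Python's standard urllib.robotparser to check a URL against such rules; the sample rules and URLs are hypothetical, and in practice you would fetch the live robots.txt from the target site.

```python
from urllib import robotparser

# Hypothetical robots.txt content; in practice, fetch it from
# the target site, e.g. https://example.com/robots.txt.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())


def is_allowed(url: str, agent: str = "*") -> bool:
    """Return True if the parsed robots.txt rules permit crawling this URL."""
    return rp.can_fetch(agent, url)


print(is_allowed("https://example.com/docs/guide"))    # True
print(is_allowed("https://example.com/private/page"))  # False
```

Note that robots.txt expresses the site operator's crawling preferences; it does not replace checking the site's terms of service.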