How to Use the Web Scraping (Data Crawling) Feature

Simply enter a URL, and MaiAgent will crawl and structure the text and link data from the page, making it easy for you to quickly select and import data into your knowledge base to build your AI assistant.

Feature Purpose and Value

As an enterprise user, you may often receive instructions from management to reference or compile regulatory information from certain public websites.

If you have a technical background or engineering support, you can automate the extraction by writing crawler scripts. For non-technical personnel, however, this usually means compiling pages manually one by one, which is not only time-consuming and labor-intensive but also prone to missing critical information.

In such cases, you can leverage MaiAgent's web crawler feature to extract website content through a no-code workflow. It automatically produces structured data, significantly speeds up information organization, and frees your time for higher-value core business activities.
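For readers curious about what the feature automates, the sketch below shows the general idea of a page crawl: fetch a page, extract its visible text and links, and shape the result into a structured record. This is an illustrative example only, not MaiAgent's implementation; the URL and field names are assumptions.

```python
# Illustrative sketch of what a page crawl does conceptually.
# NOT MaiAgent's implementation; URL and field names are assumptions.
import requests
from bs4 import BeautifulSoup

def crawl_page(url: str) -> dict:
    """Fetch a page and return its text and links as a structured record."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # Visible text content of the page, roughly what becomes the data entries.
    text = soup.get_text(separator="\n", strip=True)

    # All hyperlinks found on the page (absolute or relative).
    links = [a["href"] for a in soup.find_all("a", href=True)]

    return {
        "url": url,
        "title": soup.title.string if soup.title else "",
        "text": text,
        "links": links,
    }

if __name__ == "__main__":
    record = crawl_page("https://example.com")  # placeholder URL
    print(record["title"], "-", len(record["links"]), "links found")
```

MaiAgent performs the equivalent work behind the scenes, so you only need to provide the URL in the interface described next.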

How to Perform Web Crawling?

To create a page crawl request, follow these steps:

  1. Create a Page Crawl Request

Navigate to "AI Features > AI Assistant > Crawler" in the left sidebar, and click the "+ Create Page Crawl Request" button in the upper right corner.

  2. Enter URL

Enter the URL of the page you want to crawl and press the [Confirm] button.

  3. View Crawled Data

When the status shows completed, click "Import" on the right to view the crawled data entries.

  4. Select Data

Check the boxes on the left to select the data you want to import into the knowledge base. After making your selections, click the "Import" button, and the data will be automatically imported into that AI assistant's knowledge base.

If you want to view more data entries on the same page, click "10 items/page" in the lower right corner to increase the number of entries shown per page.

In the knowledge base, the imported data appears as .md files, which can be configured with tags and metadata just like regular data.
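The exact file layout is determined by MaiAgent, but conceptually each imported entry behaves like a Markdown document with tags and metadata attached. The sketch below, which writes a crawled record to a .md file with YAML-style front matter, is a hypothetical illustration of that idea; the field names and format are assumptions, not the product's actual storage format.

```python
# Hypothetical illustration of storing a crawled entry as a Markdown file with
# tags and metadata. Field names and layout are assumptions, not MaiAgent's
# actual knowledge base format.
from datetime import date
from pathlib import Path

def save_as_markdown(record: dict, tags: list[str], out_dir: str = "knowledge_base") -> Path:
    """Write a crawled record to a .md file with YAML-style front matter."""
    front_matter = "\n".join([
        "---",
        f"source_url: {record['url']}",
        f"crawled_on: {date.today().isoformat()}",
        f"tags: [{', '.join(tags)}]",
        "---",
    ])
    filename = (record.get("title") or "untitled") + ".md"
    path = Path(out_dir) / filename
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"{front_matter}\n\n{record['text']}\n", encoding="utf-8")
    return path

# Example usage with a record from the crawl_page() sketch above:
# save_as_markdown(record, tags=["regulations", "public-website"])
```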

Web Crawler Usage Notes

  • Please ensure you have permission to crawl content from the target website

  • It is recommended to test with a small amount of data before performing large-scale crawling

  • After crawling is complete, you can verify data quality through the search test feature

  • Regularly update crawled data to maintain information timeliness
