Web crawling is the automated process of systematically browsing and indexing web pages. Web crawlers, employed by search engines and web scraping tools, navigate websites by following hyperlinks and collecting data such as page content and metadata. This process is essential for building search engine indexes, enabling users to retrieve relevant results. Web crawling also supports data analysis, content aggregation, and the monitoring of online information. Examples include Googlebot, which indexes pages for Google Search, and specialized bots for market research or price tracking.
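The core loop described above (fetch a page, extract its links, follow them) can be sketched in a few dozen lines. The following is a minimal illustration using only the Python standard library; all function names (`extract_links`, `crawl`) are hypothetical, and a production crawler would additionally respect `robots.txt`, rate-limit requests, and handle non-HTML content.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect href values from <a> tags as a page is parsed."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html, base_url):
    """Return absolute URLs for every <a href> found in the page."""
    parser = LinkParser()
    parser.feed(html)
    return [urljoin(base_url, href) for href in parser.links]

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, store it, queue its links."""
    seen, queue, pages = set(), deque([seed_url]), {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            with urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable or malformed pages
        pages[url] = html
        queue.extend(extract_links(html, url))
    return pages
```

The breadth-first queue plus a `seen` set is the classic way to avoid re-fetching pages and getting trapped in link cycles; real crawlers layer politeness policies and URL canonicalization on top of this skeleton.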