CNC - Search Engines and Websites (Lesson)

Search Engines and Websites

The last area of cybersecurity has to do with search engines, websites, and data.  Let's start with search engines.

Search engines started in the 1990s and have developed over the years.  Search engines work in three specific steps: crawling, indexing, and ranking.  Crawling is the discovery process: search engines send out robots, called crawlers or spiders, which discover new and updated content.  This content is stored in an index.  An index is a searchable database of all the content found.  When a user performs a search, the search engine looks through the index and "ranks" the sites by relevance.
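The three steps above can be sketched in a few lines of Python. This is a toy illustration only, not how a real search engine is built; the page names and contents are made-up examples.

```python
# "Crawling": the discovered pages. A real crawler would fetch these
# from the web by following links; here they are hard-coded examples.
pages = {
    "site-a.example": "safe passwords and account security tips",
    "site-b.example": "cooking recipes and kitchen safety",
    "site-c.example": "password managers improve account security",
}

# "Indexing": build a searchable database mapping each word to the
# set of pages that contain it (an inverted index).
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# "Ranking": for a search, score each page by how many of the query
# words it contains, then sort by relevance (highest score first).
def search(query):
    scores = {}
    for word in query.split():
        for url in index.get(word, set()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("password security"))
```

Running the search for "password security" ranks site-c first (it matches both words) ahead of site-a (which matches only "security"), showing how relevance drives the order of results.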

Now let's look at data.  When you use a search engine, several things are stored: the term(s) you searched, the IP address of the computer you used, and a cookie-based ID that lets the search engine recognize requests coming from that same computer regardless of the connection.  Over time, this stored data can depict a fairly detailed history of your searches.

Websites also hold data.  Websites can collect your IP address and information on how the website was used, such as the time you spent and where you clicked.  This is called tracking: monitoring user behavior.  Tracking is why, when you search for something, related advertisements pop up later.  These pop-ups and recommended products are tailored to you by monitoring your online activity.
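The cookie-based tracking described above can be sketched as follows. This is a simplified, hypothetical example (not a real web framework): the first time a browser visits, the site assigns it a random ID in a cookie; on later visits the same ID comes back, so all of that visitor's activity can be linked together.

```python
import uuid

activity_log = {}   # visitor ID -> list of pages that visitor viewed

def handle_request(cookies, page):
    # If the browser has no tracking cookie yet, assign it a random ID.
    visitor_id = cookies.get("visitor_id")
    if visitor_id is None:
        visitor_id = str(uuid.uuid4())
        cookies["visitor_id"] = visitor_id
    # Record which page this visitor viewed.
    activity_log.setdefault(visitor_id, []).append(page)
    return cookies

browser_cookies = {}          # the browser starts with no cookies
handle_request(browser_cookies, "/shoes")
handle_request(browser_cookies, "/running-shoes")

# Both visits are linked to the same visitor ID -- this linked history
# is what makes tailored ads and recommendations possible.
print(activity_log[browser_cookies["visitor_id"]])
```

Notice that the browser never has to log in: the cookie alone is enough to connect the two page views to one visitor.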

Let's Review: Surf the Internet Securely

Now, as a final wrap-up, watch the video below to review some guidelines to help you manage your online privacy risks.


IMAGE CREATED BY GAVS AND USED ACCORDING TO TERMS OF USE.