Regulatory impasse within the web-scraping industry

Web scraping is the use of automated data-extraction methods that allow organizations to collect data from third-party web sources efficiently. The data is used for a variety of purposes, including ad targeting, business intelligence, product management, and artificial intelligence. However, because the practice is ubiquitous and spans many platforms, regulation of the web-scraping industry still hangs in the balance.
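At its simplest, scraping means fetching a page and pulling structured fields out of its markup. As a minimal sketch using only Python's standard library (the HTML snippet here is a stand-in for a page that would normally be fetched over HTTP):

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect the href of every anchor tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# In practice the HTML would come from an HTTP response; a static
# snippet keeps the sketch self-contained.
html = '<p><a href="/products">Products</a> <a href="/pricing">Pricing</a></p>'
scraper = LinkScraper()
scraper.feed(html)
print(scraper.links)  # → ['/products', '/pricing']
```

Real-world scrapers layer crawling, rate limiting, and proxy rotation on top of this basic extraction step, which is where firms like IPRoyal come in.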

The controversial role of consulting firm Cambridge Analytica in the 2016 US elections triggered heightened scrutiny of the web-scraping industry. The firm was accused of harvesting raw data from over 87 million Facebook users and allegedly used it to provide analytical assistance to Donald Trump's 2016 presidential campaign. Although no criminal convictions resulted, the scandal sparked public interest in privacy issues and became a turning point in the regulation of the web-scraping and data-harvesting industry.

While it is widely agreed that unregulated data harvesting can harm internet users, the benefits of ethical data scraping cannot be overlooked. It has become one of the pillars of the internet as we know it today.

Digital businesses such as music streaming platforms and e-commerce companies use data scraping tools to monitor user habits and purchase history to create a personalized experience for users. Search engines use it to deliver relevant search results efficiently on-demand. It has also led to huge advancements in the areas of machine learning and artificial intelligence.

However, critics worry that there are currently no uniform international laws or active regulations governing web scraping, and that as a result companies are not doing enough to protect user data from malicious actors. Industry expert Karolis Toleikis, CEO of IPRoyal, a leading IP networking and brokering firm, shared insights on the regulatory controversy in the sector.

While stating that an independent industry regulator would be more efficient, Toleikis described the self-regulatory measures his company has taken to ensure that scraping is done ethically and that data is not used maliciously.

“We closely monitor all clients’ requests for unusual patterns,” he says. “If we notice more requests than usual, we suspend the account immediately and ask the client to provide more details about that particular case.” He adds that this practice has proved 100% effective so far.
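IPRoyal has not published how its monitoring works; as a rough sketch of the general idea (the threshold, window size, and numbers below are invented for illustration), an account could be flagged for review when its current request volume far exceeds its own recent average:

```python
from collections import deque

def should_suspend(history, current, factor=3.0, min_samples=5):
    """Flag an account whose current request count exceeds `factor`
    times its recent average (hypothetical threshold policy)."""
    if len(history) < min_samples:
        return False  # not enough data to know what "usual" looks like
    baseline = sum(history) / len(history)
    return current > factor * baseline

# Hourly request counts for one client (illustrative numbers only).
recent = deque([120, 110, 130, 125, 115], maxlen=24)
print(should_suspend(recent, 128))   # normal traffic → False
print(should_suspend(recent, 900))   # sudden spike → True
```

A production system would likely weigh more signals than raw volume (target domains, request timing, failure rates), but the suspend-and-ask-questions pattern Toleikis describes reduces to this kind of per-client anomaly check.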

In the absence of a blanket legal framework governing the practice, Toleikis argues that internet users need to become better educated about the legal implications of their online activities and of the information they share. As the provider of IPRoyal Pawns, a passive-income app that lets users share their unused bandwidth, he added, “We always ask people who want to share their internet connection with us to carefully check their country’s laws and make sure they’re not doing anything illegal.”

Besides Facebook, LinkedIn is another platform linked to a high-profile data-scraping dispute. In September 2019, San Francisco-based start-up hiQ Labs won a court injunction upholding its right to harvest publicly available data from LinkedIn user profiles, even though the practice violated LinkedIn’s terms and conditions.

The founder of IPRoyal explained that his company avoids such controversies by blocking LinkedIn scraping by default in its built-in residential proxies.

“To have the function enabled, a client has to expressly provide their company details for identification and explain how they intend to use the data. This helps us direct users to the appropriate channel in case they have any complaints,” he added.

Mr. Toleikis also stated that social media platforms and their users have a role to play in sanitizing web-scraping practice and keeping sensitive information out of the hands of unscrupulous phishers. “Everyone who puts out information on the internet and makes it public should understand anyone could use it,” he says.

The many positive use cases of web scraping and data harvesting show that it is a crucial part of how the internet functions, helping to create a more personalized experience for users across platforms. As a result, the practice cannot simply be criminalized.

As critics continue to call for strict data protection laws, the integration of data scraping continues apace in almost every area of the internet. Regulation, however, seems inevitable: it would allow all data-scraping companies to operate under a uniform set of laws and give users a framework for seeking redress when their privacy rights are violated.

Image Credit: Ed Webster / Pixabay

