Throughout the many years the internet has been around, there have been several debates about consumer privacy, mainly concerning the information consumers share through their devices with the companies that produce them. This debate has more recently spilled over to social media and the privacy of its users. One incident showcasing the negative implications involved user data and Cambridge Analytica; the company reportedly used the data to discourage African-American voters in the 2016 presidential election. This situation shows the great power that collected data can wield. In recent years, several people have used their positions to voice their opinions on the subject, among them Zeynep Tufekci, an associate professor at the University of North Carolina, an academic writer, and an occasional contributor to The New York Times. The majority of Tufekci's work focuses on technology and social media. This paper will analyze Tufekci's claim that social media's surveillance of its users has harmed society rather than furthered it, along with other articles that have extended, illustrated, and complicated her claim.
Tufekci's claim stems from her article "Facebook's Surveillance Machine." The piece's topic is the previously mentioned Cambridge Analytica situation; however, the article was written prior to the revelation of voter suppression. This does not lessen the impact of Tufekci's argument about social media's negative impact on society, as she states, "This wasn't informed consent. This was the exploitation of user data and user trust." (Machine 10) These two sentences stand alone as their own paragraph in the article, giving the message emphasis. The emphasis serves as a transition into Tufekci's rebuttal to Facebook's response that the incident fell within a form of consent. In her rebuttal she argues that the continuous surveillance of users cannot be fully seen as a mutual agreement, as she states, "… consent to ongoing and extensive data collection can neither be fully informed nor truly consensual – especially since it is practically irrevocable." (Machine 12) Tufekci believes these social media platforms should not be collecting user data to begin with, stating later in the article, "If Facebook failed to understand that this data could be used in dangerous ways… it had no business collecting anyone's data in the first place." (Machine 15) These quotes solidify her position on the subject and support her claim about social media's negative impact through surveillance.
Tufekci gave a TED Talk in 2017 that further extends the claim from her original text. The talk centered on the current state of advertising and the tactics companies use to attract customers. One tactic she mentioned being actively used is targeting individuals who are entering a manic phase or episode. She illustrates this tactic with the example of casinos, stating, "So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase." (Dystopia 7:08) This extends Tufekci's original claim because the people being targeted are identified through a computer algorithm that uses data siphoned from users, as she states earlier, "And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work." (Dystopia 6:52) Without surveillance and data harvesting, the algorithm would be unaware of the habits of people on social media who are entering a manic episode. This extends her claim because allowing the collection of user data puts those same users at risk of having their lives destroyed; in the casino example, targeted individuals could spend their life savings gambling while in a manic state. The companies that buy the ads are not the culprits behind the data siphoning; rather, it is the social media platforms that people around the globe use on a daily basis.
Some of the platforms Tufekci mentioned were Google, Amazon, Facebook, Alibaba, and Tencent, as she states, "Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others." (Dystopia 0:12) Each of these websites is visited by millions of people daily, and their users are unaware of how their search history, likes, and comments will be sold to advertisers to be used against them. This further extends the original claim because it shows the potential held within user data and its ability to ruin an individual's future through nothing more than an advertising algorithm.
Tufekci makes her opinion on social media surveillance and data harvesting well known in another article on the subject, "Mark Zuckerberg, Let Me Pay for Facebook." This article offers solutions to the problem of data harvesting and brings new ideas to the subject, further extending her original claim. One of her major pieces of evidence is Ethan Zuckerman, a media scholar and associate professor at the Massachusetts Institute of Technology, who is best known for creating the pop-up ad and the ad-financed business model in the mid-1990s. Both of Zuckerman's creations remain heavily implemented today, through pop-up ads on social media platforms and those same platforms using ads as their source of funding. However, Zuckerman has developed a concern for his creations, as Tufekci states, "He came to regret both: the pop-up and the ad-financed business model. The former is annoying but it's the latter that is helping destroy the fabric of a rich, pluralistic Internet." (Pay 3) This helps extend the original claim because it shows that the creator of the style of ads seen all throughout the internet today feels guilt for creating them in the first place; Zuckerman went so far as to appear on NBC News in 2014 apologizing for his creations. The business model he created is still used by several social media platforms today, as Tufekci also states, "Internet ads are basically worthless unless they are hyper-targeted based on tracking and extensive profiling of users." (Pay 4) This further extends the original claim, as it shows that many of these companies' profits stem from targeting users with ads tailored to them.
With ads being social media platforms' main source of income, Tufekci offers an alternate way to fund these sites, stating, "I would, as I bet many others would, happily pay more than 20 cents per month for a Facebook or a Google that did not track me." (Pay 8) An alternate source of revenue would allow users to use websites and platforms freely without the worry of having their every move analyzed; however, there is little hope that this model will be adopted, as the current one works so well that Facebook reported tens of billions of dollars in annual revenue in 2017.
While most social media platforms collect user data to enhance the user's experience, that collection also brings an aspect of personalization to the user. Personalization and data allow the system to filter out what users are uninterested in; however, this can lead to consequences, as illustrated in a TED Talk by Eli Pariser, chief executive of the viral media sharing platform Upworthy, explaining the filter bubbles personalized to each user. Pariser claims that different users have different experiences when searching for information online. He ran an experiment in which he had each of his friends search for the word "Egypt" on Google. One friend received results about the protests in Egypt, while another saw nothing of the matter, only travel information about the country. Pariser goes on to explain how this filtering is spreading, stating, "This moves us very quickly toward a world in which the internet is showing us what it thinks we want to see, but not necessarily what we need to see." (Beware online 3:41-3:50) This further illustrates Tufekci's original claim because Pariser's experiment fully shows the effect of personalization throughout the internet, and how dangerous its potential is. In this case the internet is allowed to withhold information from a general public that may be actively searching for it. Another implication of this filtering becoming a hazard is its use in a political sense. Pariser noticed on his own news feed that conservative posts were showing up less and less, stating, "I was surprised when I noticed one day that the conservatives had disappeared from my Facebook feed." (Beware online 1:24-1:30) Silencing one side of politics does not allow a person to make a fair and just examination when deciding whom to vote for.
This filtering denies people who are undecided about their vote the chance to view other political perspectives if they happen to follow more friends who are Democratic rather than Republican. Viewing the filtering from this perspective further enhances Tufekci's original claim.
A fellow author whose work extends Tufekci's is David Golumbia, a writer for Motherboard, a branch of Vice that focuses on the future of society. Golumbia recently wrote an article about the invasions of privacy that are all too common among the giants of the tech world. He claims that there are no boundaries on what counts as going too far with a user's personal privacy. He explains that companies of all kinds, from Uber to the makers of the Roomba, extract data from their users, each for a different purpose. Some may use the data to further enhance the user's experience on their platform, while others sell the preferences of the user for profit; sadly, there are more cases of the latter, as Golumbia gives several examples, stating, "A for-profit service sells prescription data… An app to help women track their periods sell that data to aggregators." (Privacy Dystopia 4) These examples further extend Tufekci's claim that social media has done more to harm society than to enhance it, because they show that there are companies in the technology industry with no interest in their users, only in profit. Companies with this mindset could be detrimental to the advancement of society: the information they sell could be used to harm users if it falls into the wrong hands, and with such a money-making mindset there is no guarantee they will not sell it to the wrong people so long as the price is right. The for-profit services are just the beginning, as there are tech giants who siphon data from their users while the public knows nothing of their intentions for that data. Examples include, as previously stated, Roomba, Uber, Facebook, and Samsung, companies with large followings and even larger numbers of users.
When Samsung released its sales figures for the second quarter of 2018, it alone had shipped 71.5 million phones. Companies that keep users in the dark about their intentions for the data could greatly impact society if that data falls into the wrong hands; billions of people would be affected. Golumbia goes on to state that so few boundaries have been drawn around these companies' invasions of privacy that determining what is rumor and what is fact has become difficult, stating, "We are already at a point now where it seems everything is permissible in tech." (Privacy Dystopia 9) This uneasiness about what is real and what is not when it comes to one's personal privacy further extends Tufekci's original claim, as both authors believe users' privacy should be kept to themselves and those they choose to share it with.
There is a bountiful number of cases agreeing that the surveillance of social media has a negative impact; however, an article by Anna Almendrala, a senior reporter at the Huffington Post, "Web Surveillance Through Social Media Sites A Powerful Tool For Local Law Enforcement," complicates the argument. Tufekci presents the negative impacts that have come with the surveillance of social media; however, she neglects the benefits of background surveillance. Almendrala's article presents ways law enforcement uses the surveillance of social media sites to prevent crimes throughout the nation. The surveillance allows investigators to find connections between suspects, as Almendrala states, "With just one suspect's name, they can do more: Draw in his or her followers from Twitter or read Facebook wall posts and status updates of their 'friends.'" (Almendrala 13) While locating suspects allows for rapid capture, law enforcement also uses the harvested data to scan for suspicious activity. Multiple products help officials sift through the data at a swift rate, one being OpenMIND, which enables police to locate suspicious behavior throughout the entire internet; Almendrala states, "It digs not just within social media, but also through blogs, online forums and the 'deep Web,' where many chat rooms exist." (Almendrala 19) While they are only two examples, they go against Tufekci's original position that surveillance and data harvesting from social media websites are wholly negative; Almendrala's article shows the benefits and the other uses of data harvesting to help society.
Overall, there is still an ongoing debate about the effects of companies harvesting data from their users; however, the debate has clearly become a one-sided battle, as many consumers are unhappy with these terms, whether companies harvest data for profit or collect it to enhance the user's experience. Tufekci's main claim is a broad one, yet there are several other articles and writers whose own claims support and further hers. Some articles delve into the other side of the argument, but they have not been as successful. In conclusion, though Tufekci's claim is broad, there is sufficient evidence to support it and little evidence against it.