
The Implications of Artificial Intelligence for Privacy Strategies in China and the West

 

Written by Abhivardhan[1]

Abstract

Artificial Intelligence, as an emerging emblem of globalization and of cyber philosophy turned reality, has nurtured the innovation that information technology continues to retain and impart to human beings, generation after generation. One of the most striking examples concerns the privacy methods and strategies that the government and authorities of the People’s Republic of China have implemented, with Alibaba as the foremost driver and beneficiary of this trend, a trend that is, in the end, justifiable only to a very limited extent. Amidst this, the Cambridge Analytica scandal and the Facebook data leak pose an uncomfortable question that Facebook has had to confront, in which the matter of credibility has remained largely political and has had little bearing on the fate and dynamics of International Law. This article offers a specific insight into how International Law, in the jurisprudential reality of privacy laws and policies, relates to Alibaba and Facebook on their respective and quite divergent facets, which nonetheless point towards a common development in the realm of the international community.

 

Introduction

International Law, as a discipline, has long been anchored in a traditionalism that it is now endeavouring to break out of in favour of a more innovative approach. If we take the foundation of treaty jurisprudence, the Vienna Convention on the Law of Treaties (VCLT), the obligatory considerations are well developed and maintained, on which Crawford remarks: “States are corporate entities that necessarily operate under a regime of representation. In order to hold them bound by consensual obligations, the normal rules of authorization under treaty law apply; in order to attribute conduct for them for the purpose of determining their compliance with such obligations[2]…”. However, there is a dilution of this perspective, which the United Nations General Assembly has come to reflect. In 2015, UNICRI launched its programme on AI and Robotics. By drawing on the knowledge of experts in the field to educate and inform stakeholders, and in particular policy-makers, UNICRI believes it will be possible to advance the discussion on the governance of robotics and artificial intelligence; building consensus amongst the concerned communities (national, regional, international, public and private), from theoretical and practical perspectives and in a balanced and comprehensive manner, is integral to its approach.[3] Thus, we may understand that the implications of AI, as a reality at once threatening and fitting, are still being mapped out.

Approximately 45 years ago, Buckminster Fuller, the American author, inventor and architect, observed that technology had reached a point at which we could use it to provide the protection and nurturing that our society requires to fulfil our needs and ensure growth. We were crossing a singularity, he felt, making things such as war obsolete. In this technical era, he questioned which political system could or should be the structure and backbone of our society, or, for that matter, whether one was required at all. In contrast, the German philosopher Martin Heidegger held a more pessimistic view of technology. While many feel that technology is something under our control, this was not the case for Heidegger: for him, once set on its course, the development of and advancements in technology were beyond our control.[4] Still, it is worth quoting that “Human progress has created unprecedented opportunities with an equal potential of being wisely or improperly exploited. We have a collective responsibility to prevent the deliberate misuse of new breakthroughs. These complex threats are not confined to a single state: in this area no country, no region can advance and play safe in isolation[5]”, which is indeed a broader and optimistic reflection on the policy framework through which such strategies might be achieved. However, the approach of Alibaba and the Chinese government to methods of privacy observation, set against the Facebook-Cambridge Analytica scandal, poses an important question as to how far these implications will reach. The following sections take up the required analysis.

 

Alibaba and Facebook: Their Competent but Different Strategies

Ant Financial, an affiliate of the e-commerce giant Alibaba Group, apologized to users after prompting an outcry by automatically enrolling in its social credit program those who merely wanted to see a breakdown of their annual spending. The program, called Sesame Credit, tracks personal relationships and behavioural patterns to help determine lending decisions. Sesame Credit is part of a broader push in China to track how people go about their day, one that could feed into the Chinese government’s ambitious and, some would say, Orwellian effort to use technology to keep a closer watch on its citizens. The episode was a rare public rebuke of a prevailing trend in China, where the country’s largest internet companies, and the government itself, have gathered ever more data on internet users. While Chinese culture does not emphasize personal privacy and Chinese internet users have grown accustomed to surveillance and censorship, the anger represents a nascent but growing demand for increased privacy and data protection online.[6] Thereafter, Alibaba’s founder Jack Ma, speaking at the Boao Forum for Asia in China’s southern Hainan province, was asked about the privacy issues that had dogged Facebook in the preceding weeks after it said the personal information of up to 87 million users may have been improperly shared with the political consultancy Cambridge Analytica. “The senior management should take responsibility, say, hey, from now we start to work on it,” Ma said after initially refraining from weighing in on the issue. “I will not make a comment about Facebook, but I will say, Facebook, 15 years ago, they never expected this thing to grow like that,” he said.[7] This, however, cannot simply be dismissed as an obsession; it is a distinct approach within its own purview, and an instrumental one.
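Sesame Credit’s actual model is proprietary, but press coverage describes it as combining behavioural signals, such as payment history, spending patterns and even the creditworthiness of one’s contacts, into a single score reported to fall between 350 and 950. The Python sketch below is purely hypothetical: the signal names, weights and scoring function are invented for illustration and do not describe Ant Financial’s system.

    # Purely hypothetical illustration of a behavioural scoring scheme.
    # Signal names and weights are invented; they do not describe Sesame Credit.
    SIGNAL_WEIGHTS = {
        "on_time_payments": 0.40,    # repayment history
        "purchase_stability": 0.25,  # consistency of spending behaviour
        "verified_identity": 0.20,   # completeness of personal information
        "network_reliability": 0.15, # creditworthiness of linked contacts
    }

    def behavioural_score(signals):
        """Map normalised signals (each in [0, 1]) onto the 350-950 range
        publicly reported for Sesame Credit."""
        weighted = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                       for name in SIGNAL_WEIGHTS)
        return 350 + weighted * (950 - 350)

    user = {"on_time_payments": 0.9, "purchase_stability": 0.7,
            "verified_identity": 1.0, "network_reliability": 0.5}
    print(round(behavioural_score(user)))  # prints 836 for this example

Even a toy model of this kind makes plain why a signal like the hypothetical "network_reliability", which scores a person partly on the behaviour of their contacts, is precisely what critics describe as Orwellian.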

In 2012, the seminal Google Brain project required 16,000 microprocessor cores to run algorithms capable of learning to identify a cat. The feat was hailed as a breakthrough in deep learning: crunching vast training data sets to find patterns without guidance from a human programmer. A year later, Chen Yunji and his brother, Chen Tianshi, who is now CEO of the Chinese chipmaker Cambricon, teamed up to design a novel chip architecture that could enable portable consumer devices to rival that feat, making them capable of recognizing faces, navigating roads, translating languages, spotting useful information, or identifying “fake news.”[8] On the other side of this story, however, lies a complicated web of relationships that explains how the Trump campaign, with the help of a political consulting firm, was able to harvest raw data from up to 87 million Facebook profiles to direct its messaging. That consulting firm, Cambridge Analytica, which is tangled up in several scandals[9] and has close ties to Steve Bannon and the Republican megadonor Robert Mercer, came under intense scrutiny after several reports raised ethical and potentially legal questions about its business practices. The New York Times and the Observer reported that Cambridge obtained private Facebook data, specifically information on tens of millions of Facebook profiles, from an outside researcher who provided it in violation of his own agreement with Facebook. “We believe the Facebook information of up to 87 million people — mostly in the US — may have been improperly shared with Cambridge Analytica,” Facebook’s chief technology officer wrote at the time, and CEO Mark Zuckerberg was called to testify before a congressional committee on the matter on April 11. Separately, Channel 4 News in the UK posted video in which Cambridge’s CEO, Alexander Nix, says his firm conducts dirty tricks such as trying to tape its candidates’ opponents accepting purported bribes or sending “some girls around to the [opposing] candidate’s house.” As a result of these reports, Cambridge announced that it would suspend Nix pending an investigation.[10]
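The phrase “finding patterns without guidance from a human programmer” can be made concrete at toy scale. The sketch below, using the open-source scikit-learn library rather than anything resembling Google Brain’s 16,000-core system, trains a small neural network to recognise handwritten digits purely from labelled examples, with no hand-written rules describing what any digit looks like; it illustrates the principle only.

    # Toy illustration of supervised pattern learning: a small neural network
    # learns to classify handwritten digits from labelled examples alone.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)          # 1,797 labelled 8x8 images
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)                  # patterns are learned, not coded
    print("test accuracy:", round(model.score(X_test, y_test), 2))  # typically > 0.9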

Facebook founder and CEO Mark Zuckerberg wrote in response to the scandal: “I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.” Former Facebook employees, however, have said that there is a tension between the security team and the legal and policy teams over how user protection is prioritised in the company’s decision-making. “The people whose job is to protect the user are always fighting an uphill battle against the people whose job is to make money for the company,” Sandy Parakilas, who worked on the privacy side at Facebook, told the New York Times. There is, admittedly, a decent chance that Cambridge Analytica’s work did not actually do much to elect Trump; the firm’s reputation in the political consulting community is less than stellar.[11] Yet the underlying question of credibility remains, and the concerns it raises still have to be addressed adequately. Thus, even if Alibaba appears to be on the safer side, Facebook looks vulnerable, and that vulnerability should not be taken lightly.

Conclusion

The fact of the matter is that Facebook and Alibaba, each pursuing its own strategy for privacy and data protection, urgently need to address this still-vague concern. We are moving towards an ever greater use of AI tools, a point Zuckerberg himself reiterated in his hearing before the Commerce and Judiciary Committees of the U.S. Senate, and one that merits careful consideration. As for the developmental implications for International Law, it must be understood that International Law is, for now, in a state of governing disarray, one that, it is to be hoped, will be adequately resolved.

 

Edited by Gaurav Agarwal

 

[1] The author is a first-year undergraduate student pursuing B.A. LL.B. (Hons.) at Amity University Uttar Pradesh, Lucknow.

[2] JAMES CRAWFORD, BROWNLIE’S PRINCIPLES OF PUBLIC INTERNATIONAL LAW 415 (7th ed., Oxford University Press, 2008).

[3] UNICRI Centre for Artificial Intelligence and Robotics, UNICRI, at: http://www.unicri.it/in_focus/on/UNICRI_Centre_Artificial_Robotics.

[4] UNICRI, The Risks and Benefits of Artificial Intelligence and Robotics, Cambridge, United Kingdom, February 6-7, 2017, report available at: http://unicri.it/in_focus/files/Report_UNICRI_Cambridge_Workshop_Feb_2017.pdf, at 5.

[5] UNICRI, Chemical, biological, radiological and nuclear (CBRN) National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence, 7 October 2015, United Nations Headquarters, New York, at: http://www.unicri.it/news/article/CBRN_Artificial_Intelligence.

[6] Paul Mozur, Internet Users in China Expect to Be Tracked. Now, They Want Privacy., (January 4, 2018), available at: https://www.nytimes.com/2018/01/04/business/china-alibaba-privacy.html.

[7] Reuters Staff, Alibaba’s Jack Ma urges Facebook to fix privacy issues, (April 9, 2018, 8:29 PM), available at: https://in.reuters.com/article/china-boao-alibaba-facebook/alibabas-jack-ma-urges-facebook-to-fix-privacy-issues-idINKBN1HG288.

[8] Christina Larson, China’s massive investment in artificial intelligence has an insidious downside, (February 9, 2018), available at: http://www.sciencemag.org/news/2018/02/china-s-massive-investment-artificial-intelligence-has-insidious-downside.

[9] Alvin Chang, The Facebook and Cambridge Analytica scandal, explained with a simple diagram, (April 10, 2018), available at: https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram.

[10] Andrew Prokop, Cambridge Analytica and its many scandals, explained, (March 21, 2018; Updated April 4, 2018), available at: https://www.vox.com/policy-and-politics/2018/3/21/17141428/cambridge-analytica-trump-russia-mueller.

[11] Id. 9.
