MICROSOFT IS THE TEAM OF TEAMS FOR SPYING ON YOU: IT PROVIDES THE INFRASTRUCTURE ON WHICH THE NWO/GR IS TO BE FOUNDED. GAVI IS THERE TO KILL YOU AND THOSE WHO OPPOSE THEM. THEY MUST BE SHUT DOWN NOW!

 All the ways Microsoft Teams tracks you and how to stop it

Microsoft's productivity scores have come under fire from privacy campaigners. Teams also collects a lot of data

The rollout of new Microsoft 365 features to track productivity, which would monitor 73 pieces of “granular data” about workers, was meant to be a boon for the technology company. But it quickly turned into a nightmare. Announced in October, and criticised heavily by technology researcher Wolfie Christl in November, the feature – Microsoft Productivity Score – was drastically scaled back this month.

The incident has proven embarrassing for Microsoft, and made people more aware than ever of the data its products collect on users. That includes Microsoft Teams, its productivity and communications tool which rivals Slack. If you’ve watched the Microsoft Productivity Score imbroglio and want to put limits on the data Teams collects on you, there are some ways to wrest back control.

Teams’ data collection

Microsoft Teams, like Skype for Business, collects three types of data: what it calls census data, usage data, and error reporting data. So-called census data includes information about your device, operating system and user language, as well as a generated user ID that is hashed (or protected) twice so that it cannot be linked back to a specific individual.
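Microsoft doesn’t publish the exact scheme it uses, but the idea of a twice-hashed pseudonymous ID can be sketched in a few lines. This is an illustrative assumption, not Microsoft’s actual implementation – the salt and the choice of SHA-256 are placeholders:

```python
import hashlib

def double_hash(user_id: str, salt: str = "example-salt") -> str:
    # First pass: hash the raw identifier together with a salt.
    first = hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()
    # Second pass: hash the digest again, so the stored value is two
    # one-way steps removed from the original identifier.
    return hashlib.sha256(first.encode("utf-8")).hexdigest()

# The same input always yields the same pseudonymous ID, but the original
# identifier cannot be recovered from the stored value.
print(double_hash("alice@example.com"))
```

The point of the double pass is that even if the first-round digest leaked alongside a rainbow table, the stored second-round value still doesn’t map directly back to the user.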

Some of the data is collected as standard, while other types require you to opt in or out of sharing information with the company. Census data is collected by default and can’t be opted out of – though Microsoft is at pains to say none of the information collected can identify an organisation or individual user.

In addition, Microsoft collects usage data, including the number of messages sent, calls and meetings joined, and the name of your organisation as registered with Teams. It also tracks when things go wrong, in order to improve its services. Much of this performance data is collected by most of the apps you use.

Microsoft also gathers data on your profile – including your email address, profile picture and phone number – and the content of your meetings, including shared files, recordings and transcripts, which are stored in the cloud for users’ personal use. That data is retained by Microsoft “until the user stops using Microsoft Teams, or until the user deletes personal data”. For individual users, data is deleted within 30 days of the user deleting the local versions of their data.

Where is your data legally stored?

Teams operates across the world, and across jurisdictions with different standards for how data should be handled. If you want to see where your individual data is physically held, you can visit the Microsoft 365 admin centre, then click Settings > Organization profile, and scroll down to Data location.

What your bosses can see

Microsoft also allows company administrators to see insights into how their workers use Teams. The Microsoft Teams admin centre – which is separate from the Microsoft 365 admin centre – allows businesses “to better understand usage patterns, help make business decisions, and inform training and communication efforts,” Microsoft claims. Administrators can see up to 19 different types of user activity data.

But for some, that’s too intrusive: administrators could theoretically produce a report on the way individuals use Teams, including the number of messages users post on any given day, the amount of time they spend on video and audio calls, whether you’ve read messages but not responded to them, and when you were last active on Teams. There doesn’t seem to be a way for users to opt out of this data collection from their company administrators – though many companies don’t seem to realise this kind of data is available.
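These per-user figures are typically pulled through the Microsoft Graph reports API, which exposes a Teams activity report as CSV. A minimal sketch of fetching it, assuming an access token with admin-consented `Reports.Read.All` permission (the token acquisition itself is out of scope here):

```python
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def teams_activity_report_url(period: str = "D7") -> str:
    # Per-user Teams activity (messages posted, calls and meetings joined,
    # last activity date) over the given period, returned by Graph as CSV.
    return f"{GRAPH_BASE}/reports/getTeamsUserActivityUserDetail(period='{period}')"

def fetch_report(token: str, period: str = "D7") -> str:
    # 'token' must be an OAuth 2.0 bearer token for Microsoft Graph.
    req = urllib.request.Request(
        teams_activity_report_url(period),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        # Graph report CSVs typically begin with a byte-order mark.
        return resp.read().decode("utf-8-sig")
```

That one endpoint is enough to reconstruct much of the per-person activity picture described above, which is why the lack of a user-side opt-out matters.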

Calm your notifications

In busy, high-membership Teams, the number of notifications pinging in the bottom right of your screen can drive you to distraction – as well as rendering a fifth of your monitor unusable. Take back control by narrowing the reasons for receiving notifications: click on the three dots at the top right of any Team, select Channel notifications, and either turn them off entirely or select “Custom”.

You’ll want to turn “All new posts” off to start with, to give yourself some peace and quiet. But you may also want to reduce Channel mentions (triggered when someone tags a channel using @channel) from “Banner and feed” to “Only show in feed” to remove the ugly pop-ups.


Another bugbear on Teams can come if you’ve been invited to a pre-arranged meeting, or left a meeting partway through, but still receive notifications every time a message is typed into the chat box of the ongoing meeting. As of publication, users had highlighted this as a problem for which there isn’t yet a proper fix. One temporary, if inelegant, solution is to change your status to “Do not disturb”, which tamps down all notifications. You can do that by clicking on your profile picture on the main page of Teams, going down to “Available”, and changing it to “Do not disturb”.

If you’re worried about shutting yourself off entirely, it is possible to offer some people a peek through the notification barrier. In Settings > Privacy, click Manage priority access and type in the names of people from whom you want notifications – useful if you’re worried your boss might think you’ve gone AWOL.

Limit access

One of the first things many people do on WhatsApp is to turn off read receipts, so they don’t feel pressured into responding to personal chat messages as soon as they see them. You can do this in Teams, too: go to Settings > Privacy and toggle off the “Read receipts” setting for some peace of mind and space to think.

But you should limit access in another way, beyond controlling who knows when you’re online and when you’ve read something: if you haven’t already, ask your organisation to turn on multi-factor authentication for your account, to add an extra layer of protection. (It goes without saying that you should be using a strong, unique password and a password manager in general.)

You may also want safe spaces to talk about work away from your bosses’ glare – the digital equivalent of the snatched chat around the watercooler, or putting the world to rights down the pub. While we’d always recommend you do so on a different app or service that isn’t ultimately overseen by your company, so they can’t ever see the information, you can have some semblance of privacy in Teams.

Any member of a team can set up a private channel, which limits access to only certain members of that team. In your chosen team, go to the channels section and click on the three dots. Select Add channel, then under Privacy, select Private. You can then choose the specific people you’d like to add to the channel – up to 250 of them. The channel creator is the only person who can add or remove people from the private channel, and any files or messages sent there aren’t accessible to anyone outside it. Just keep in mind that anything you do on a work system, or network, could be traced back to you.
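The same private channel can also be created programmatically via Microsoft Graph (`POST /teams/{team-id}/channels`), where “Private” corresponds to the `membershipType` field. A sketch of the request body, assuming the channel name and owner ID shown are placeholders:

```python
import json

def private_channel_payload(name: str, owner_user_id: str) -> dict:
    # Request body for POST /teams/{team-id}/channels in Microsoft Graph.
    # membershipType "private" restricts the channel to explicitly added
    # members, matching the UI's Privacy > Private option.
    return {
        "displayName": name,
        "membershipType": "private",
        "members": [
            {
                "@odata.type": "#microsoft.graph.aadUserConversationMember",
                "user@odata.bind": (
                    f"https://graph.microsoft.com/v1.0/users('{owner_user_id}')"
                ),
                "roles": ["owner"],  # the creator manages membership
            }
        ],
    }

# Placeholder IDs for illustration only.
print(json.dumps(private_channel_payload("watercooler", "00000000-0000-0000-0000-000000000000"), indent=2))
```

Whichever route you use, the owner role in the payload reflects the same constraint as the UI: only the creator controls who is in the channel.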

This article was originally published by WIRED UK

WHY DID THE US BECOME A POLICE STATE? THAT'S WHY.

 

U.S. Firms Are Helping Build China’s Orwellian State

Tech partnerships are empowering new methods of control.

By a senior fellow for emerging technologies at the German Marshall Fund’s Alliance for Securing Democracy
Foreign Policy illustration


When a Dutch cybersecurity researcher disclosed last month that Chinese security contractor SenseNets left a massive facial recognition database tracking the movements of over 2.5 million people in China’s Xinjiang province unsecured on the internet, it briefly shone a spotlight on the alarming scope of the Chinese surveillance state.

But SenseNets is a symptom of a much larger phenomenon: Tech firms in the United States are lending expertise, reputational credence, and even technology to Chinese surveillance companies, wittingly or otherwise.

The SenseNets database logged exact GPS coordinates on a 24-hour basis and, using facial recognition, associated that data with sensitive personal information, including national ID numbers, home addresses, personal photographs, and places of employment. Nearly one-third of the individuals tracked were from the Uighur minority ethnic group. In a bizarre juxtaposition of surveillance supremacy and security incompetence, SenseNets’ database was left open on the internet for six months before it was reported and, according to the researcher who discovered it, could have been “corrupted by a 12-year-old.”

The discovery suggests SenseNets is one of a number of Chinese companies participating in the construction of a technology-enabled totalitarian police state in Xinjiang, which has seen as many as 2 million Uighurs placed into “re-education camps” since early 2017. Eyewitness reports from inside the camps describe harsh living conditions, torture, and near-constant political indoctrination meant to strip Uighurs of any attachment to their Islamic faith. Facial recognition, artificial intelligence, and speech monitoring enable and supercharge the Chinese Communist Party’s drive to “standardize” its Uighur population. Uighurs can be sent to re-education camps for a vast array of trivial offenses, many of which are benign expressions of faith. The party monitors compliance through unrelenting electronic surveillance of online and physical activities. This modern-day panopticon requires enormous amounts of labor, but it is serving as a testing ground for new surveillance technologies that might render the process cheaper and more efficient for the state.

Toward this goal, the party is leveraging China’s vibrant tech ecosystem, inviting Chinese companies to participate through conventional government-procurement tools. Companies built the re-education camps. Companies supply the software that watches Uighurs online and the cameras that surveil their physical movements. While based in China, many are deeply embedded in the international tech community, in ways that raise serious questions about the misuse of critical new technologies. Foreign firms, eager to access Chinese funding and data, have rushed into partnerships without heed to the ways the technologies they empower are being used in Xinjiang and elsewhere.

In February 2018, the Massachusetts Institute of Technology (MIT) announced a wide-ranging research partnership with Chinese artificial-intelligence giant and global facial-recognition leader SenseTime. SenseTime then held a 49 percent stake in SenseNets, with robust cross-pollination of technical personnel. SenseNets’ parent company Netposa (also Chinese) has offices in Silicon Valley and Boston, received a strategic investment from Intel Capital in 2010, and has invested in U.S. robotics start-ups: Bito—led by researchers at Carnegie Mellon University—and Exyn, a drone software company competing in a Defense Advanced Research Projects Agency (DARPA) artificial-intelligence challenge. This extensive enmeshing raises both moral and dual-use national-security questions. Dual-use technology is tech that can be put to both civilian and military uses and as such is subject to tighter controls. Nuclear power and GPS are classic examples, but new technologies such as facial recognition, augmented reality and virtual reality, 5G, and quantum computing are beginning to raise concerns about their dual applicability.

Beyond SenseNets, Chinese voice-recognition leader iFlytek may also be supplying software to monitor electronic communications in Xinjiang. A 2013 iFlytek patent identified by Human Rights Watch specifically touted its utility in “monitoring public opinion.” Nonetheless, like SenseTime, iFlytek recently established a multiyear research partnership with MIT. These partnerships lend reputational weight to activities that undermine freedom abroad.

Equally concerning is that the details of technical and research collaborations with Chinese companies can be opaque to international partners, concealing ethically objectionable activities.

 When Yale University geneticist Kenneth Kidd shared DNA samples with a scientific colleague from the Chinese Ministry of Public Security’s Institute on Forensic Science, he had no idea they would be used to refine genetic surveillance techniques in Xinjiang. Massachusetts-based company Thermo Fisher is also implicated: Until it was reported last month, the company sold DNA sequencers directly to authorities in Xinjiang for genetic mapping. Western companies and institutions must be far more vigilant in scrutinizing how Chinese partners are using their products, especially emerging technologies.

Facial recognition is a good place to start. The industry needs to establish global standards for appropriate applications—use that respects human rights and the rule of law. In the United States, Microsoft has been an industry leader in calling for regulation and has tapped employees, customers, public officials, academics, and civil society groups to develop a set of “principles for facial recognition,” which it plans to launch formally this month. When it comes to building out regulation, the devil may be in the details. But the principles—fairness, transparency, accountability, nondiscrimination, notice and consent, and lawful surveillance—are sound. Surprisingly, SenseNets lists Microsoft itself as a partner on its website, along with American chip manufacturer AMD and high-performance computing provider Amax.

In the case of SenseNets, these partnerships could be false claims by a company looking to boost its credibility, unwitting collaboration on the part of U.S. tech firms, or genuine business relationships. “We have been able to find no evidence that Microsoft is involved in a partnership with SenseNets,” a spokesperson for Microsoft told the authors. “We will follow up with SenseNets to cease making inaccurate representations about our relationship.” But if these partnerships are real, they would violate all six of Microsoft’s principles. California-based Amax, which specializes in high-performance computing for deep-learning applications, touts a partnership with Chinese state-owned Hikvision, the world’s largest supplier of video surveillance products. AMD is also involved in a Chinese joint venture supplying proprietary x86 processor technology.

Despite a general awareness of the ways American companies and individuals are abetting surveillance in Xinjiang, the U.S. Congress and government officials have yet to call for a review of the extent of U.S. investment and research partnership entanglements. The Commerce Department’s proposed rule-making on controls for certain emerging technologies is a start, but its scope remains unclear.

THE PERFECT POLICE STATE IS A CREATION OF MICROSOFT.


A Review of “The Perfect Police State” by Geoffrey Cain
from Asia Unbound


A Chinese police officer takes his position by the road near what is officially called a vocational education centre in Yining in Xinjiang Uighur Autonomous Region, China on September 4, 2018. Thomas Peter/Reuters

Eric Schluessel is assistant professor of modern Chinese history at George Washington University.

The Xinjiang region of northwest China (or East Turkestan) is the homeland of the Uyghurs, a group numbering roughly eleven million people. Uyghurs speak a language closely related to Turkish, generally practice Islam, boast a history of political independence, and otherwise have a broad range of cultural practices that distinguish them from the majority Han Chinese. Since 2017, thousands of eyewitness reports and leaked official documents have emerged that attest to an ongoing effort on the part of the People’s Republic of China to quell dissent in the Uyghur homeland through detention, incarceration, and a now-pervasive network of surveillance. They describe a system of reeducation camps, which the Chinese government insists are merely mandatory boarding schools, where many Uyghurs, Kazakhs, and others deemed deviant, including some Han Chinese, are sent to be “cured” of their ideological diseases. Some characterize this as a systematic effort to eliminate not just dissenting voices, but an entire way of life—a genocide, powered by cutting-edge technology.

More on:

China

Authoritarianism

Human Rights

Crimes against humanity tend to imply the existence of a criminal mastermind who could someday be put on trial. However, in The Perfect Police State: An Undercover Odyssey into China’s Terrifying Surveillance Dystopia of the Future, journalist Geoffrey Cain demonstrates that no single architect designed this system. Rather, we are presented with an indictment of unfettered state power and the unethical pursuit of profit, told through the intimate stories of eyewitnesses.

Cain delineates two narratives that gradually intertwine: One is China’s drive for technological dominance. The other is China’s fumbling effort to define and defeat an internal enemy of its own creation. Ethnoreligious oppression in Xinjiang and the effort to create an all-encompassing system of digital surveillance grew together haphazardly in the hands of security-obsessed officials and amoral executives, while ordinary people paid the price.

Uyghurs’ own stories are at the core of this book, and Cain’s chief interviewees, out of a sample of 168, are a surprising mix of well-informed insiders. We meet a former IT worker who helped create the surveillance system; a onetime spy for the Chinese; and the most sympathetic and central figure, a young, bookish woman named Maysem who simply wanted to finish her degree.

Maysem’s journey guides us through the system of reeducation that quickly encompassed her whole life in and beyond China. When we meet her, Maysem is a thoughtful young woman and former student of imprisoned Uyghur scholar Ilham Tohti, now studying abroad. Maysem returns to China and faces arbitrary bureaucratic bungling that suddenly lands her in a camp, facing beatings and mind-bending interrogations. Like any clever abuser, China has learned not to leave too many marks on its victims, but instead to chip away at their grasp on reality and sense of self-worth. Maysem recounts the effects of reeducation on her psyche as she was forced to deny her own senses and internalize the state’s demands on her thoughts and comportment.

The Perfect Police State helps us understand how this kind of arbitrary violence came about through a story about technological innovation bereft of ethical guidance. In the 2000s, the Chinese state realized that, despite all outward claims to totalitarian omniscience, it did not actually know itself very well. Cain’s interviews with Irfan, the Uyghur IT worker, show how Xinjiang for many years was the testing ground for piecemeal systems of surveillance that gave the illusion of control. Those attempts at discipline accompanied a push for economic development that left Uyghurs behind and the implementation of ill-considered and divisive policies aimed at cultural assimilation.

The deadly July 2009 unrest, which began as peaceful protests against cultural restrictions and economic inequality, could have prompted China’s leadership to reverse its most arbitrary and divisive policies. Instead, it made increasingly ham-handed and invasive interventions in Uyghur communities and families. These interventions focused on rooting out what the state perceived to be Islamist “extremism,” which in most cases meant criminalizing ordinary Islamic practices. Scholars such as Sean Roberts have argued that China created a “self-fulfilling prophecy” in which the determination to locate and punish terrorists instead led to more discontent and, eventually, actual terrorist attacks.

As the perceived threat increased, so did securitization, which in turn produced new threats and the demand for better intelligence and more powerful computers to counter them. Nevertheless, the state’s ability to surveil remained limited. Even as video cameras and checkpoints multiplied across Xinjiang, the information they collected was not integrated into a single network.

That changed around 2016 with a shift in Chinese thinking about Xinjiang and its people. Previously, the government had focused on economic development as a means to achieve stability in what it had long regarded as a “restive” region. Now Chinese President Xi Jinping and other leaders finally realized that economic development, which overwhelmingly favors Han Chinese settlers, was not winning hearts and minds. (This is to say nothing of the Xinjiang Production and Construction Corps, or bingtuan, a massive pseudo-military corporation that exists in Xinjiang but is governed separately.)

However, rather than remedy the underlying sickness in Xinjiang—the inequality and discrimination that caused the most discontent—they identified Islamist extremism as a “virus” of which Uyghurs, Kazakhs, and others needed to be cured in order to become “normal people.” That is, a primarily socioeconomic problem was treated as an ideological and cultural problem, a drive, as it were, to “kill the Uyghur, save the man,” that often confuses religion and culture. According to government documents, the symptoms of “extremism” include a broad range of mostly innocuous practices that may or may not have any relation to religion per se. Indeed, the interpretation of “Islamic extremism” reflects a suspicion towards any behaviors that are perceived to deviate from Chinese norms. In The Perfect Police State, for example, Maysem is found suspicious because she enjoys reading and pursues an education in Turkey. The result is the criminalization of ordinary Uyghur and Kazakh cultural practices.

Regardless, treating the “virus” required identifying and isolating the patients. Chinese companies and global capital provided the means to do so as their “patriotic” competition for market share and government contracts produced more refined technology, such as facial recognition software and new kinds of cameras. Cain is an experienced business journalist, and he interviewed executives at such globally recognized corporations as Huawei, as well as lesser-known but key players, such as Megvii, SenseTime, and Hikvision, who produced the key components of the new surveillance state. In 2016, cutting-edge advancements in neural networks finally made it possible to integrate those disparate technologies into a single system: the Integrated Joint Operations Platform (IJOP). The IJOP could combine biometric data with masses of other information to track, monitor, and identify potentially deviant behavior.

The IJOP is intended as a “Minority Report”-style system to identify future criminals. Yet, Cain demonstrates, it even fails in that respect. The platform was trained on data from human beings, a system of nosy neighborhood monitors who tracked changes in people’s schedules and habits. Their reports in turn were driven by vague political directives issued by overambitious officials, who set strict quotas requiring them to identify certain numbers of potential terrorists. This targeted tool for surveillance and discipline, trained on bad data, created the dragnet that detained an estimated 10 percent of the Uyghur population. Even members of loyal and “good” families such as Maysem’s were labeled enemies of the state. Reports demonstrate that many senior citizens have been sent for “job training” at reeducation centers, while longtime Chinese Communist Party members, including many cadres and secular academics, were sentenced to prison or even death on absurd charges of extremism.

Cain’s story thus seems to prove the programmer’s adage: “Garbage in, garbage out.” The IJOP provides not omniscience but the illusion of control as defined by an ignorant machine. Predictive policing in the United States has rightly been found to reinforce racial prejudices in policing – we should expect no less in China, where several companies have offered the government facial recognition software that purports to identify Uyghurs automatically. At the same time, some Uyghurs themselves draw parallels with the Cultural Revolution, when people scrambling to display their loyalty to Mao Zedong reported others for fictional thought crimes.
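The “garbage in, garbage out” point can be made concrete with a toy experiment. Everything below is synthetic and illustrative: if human monitors flag people under a quota that targets an irrelevant trait, then any statistics learned from those labels simply reproduce the quota, not any real risk:

```python
import random

random.seed(0)

# Synthetic population: each person has one irrelevant trait ("reads books")
# and, by construction, nobody is actually a threat.
population = [{"reads_books": random.random() < 0.5} for _ in range(1000)]

# Quota-driven labelling: monitors disproportionately flag the trait they
# were told to look for, regardless of actual behavior.
for person in population:
    flag_rate = 0.18 if person["reads_books"] else 0.02
    person["flagged"] = random.random() < flag_rate

def learned_flag_rate(reads_books: bool) -> float:
    # What a model trained on these labels "learns" for each group is
    # just the flag rate the quota produced - the labellers' bias.
    group = [p for p in population if p["reads_books"] == reads_books]
    return sum(p["flagged"] for p in group) / len(group)

print(f"flag rate, readers:     {learned_flag_rate(True):.2f}")
print(f"flag rate, non-readers: {learned_flag_rate(False):.2f}")
```

No matter how sophisticated the downstream model, the learned association between reading and “risk” comes entirely from the biased labels, which is exactly the dynamic the book describes.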

While it would be reasonable to assume that China is deploying arbitrary punishment strategically in order to create an atmosphere of paranoia, Cain is careful not to ascribe precise motivations to leaders without textual evidence. Rather, the book presents a tragedy of errors, a series of unforced mistakes driven by ideology as, at every step, people in power strengthened their commitment to bad policy. Scholars often analyze China’s bungling in Tibet and Xinjiang as a product of bad incentives that lead to perverse implementations of directives from the top. However, as the former spy recounts, officials in Xinjiang increasingly bought into their own paranoid propaganda and began to see enemies everywhere, which the IJOP obligingly served up. This result was, frankly, predictable.

In this way, The Perfect Police State presents a lesson about the amoral pursuit of profit, as companies fulfilled state demands without considering the human consequences or even their products’ effectiveness. One interviewed executive declared, “First we need to survive as a business, and then we can build our moral values.” Pursuing government contracts is not only a priority for these companies, but something that the state can demand of them, and this “non-political” stance serves their bottom lines. Cain points to a number of international companies, from venture capital to biotech, who have similarly involved themselves in Xinjiang by providing investments and technology.

Some of those international corporations withdrew from the abuse of surveillance technology or suspended their operations in light of human rights concerns. While that is laudable, we ought to be troubled that privacy and freedom depend to such a great extent on companies choosing to act against their self-interest. Ultimately, the international community is recognizing that one of the few effective means to address the situation in Xinjiang is to pressure businesses who may be entangled in the region’s surveillance and reeducation system, or with the forced labor that some Chinese companies evidently source from the camps. The Perfect Police State provides a partial map to that tangled web.

Meanwhile, however, The Perfect Police State reminds us that Maysem’s story is far from unique. Cain points to others who have recorded countless eyewitness reports, such as the international Uyghur activist Abduweli Ayup, who introduced most of the book’s interviewees to him, or the Kazakh activist Serikzhan Bilash. People have shared their stories for four years, first in whispers, then in a torrent that has not abated. It is time that the world simply listened.


Open letter to Mr. Luigi di Maio, deputy of the Italian people

ZZZ, 04.07.2020 For the attention of deputy Luigi di Maio, both in his capacity as a deputy and in his capacity as minister of foreign affairs ...