Della Vedova, Di Maio, MAIE, Merlo, CGIE: all children of the same mother

 

Merlo (MAIE): “Della Vedova's letter to the CGIE shows how unprepared he is on Italians abroad”

“The bureaucratic mentality,” stresses MAIE president Sen. Ricardo Merlo, “should be left to the bureaucrats. Politics is something else. I consider Undersecretary Della Vedova's words absurd, and offensive to the entire world of Italian associations beyond our borders.” And further: “Everything has a limit. As the Movimento Associativo Italiani all’Estero, we are weighing whether or not to keep supporting this government. We are not party foot soldiers.”

“It is truly sad that the Undersecretary for Foreign Affairs holding the portfolio for Italians abroad has such low regard for Italian volunteer associations overseas.” This is how Sen. Ricardo Merlo, president of the MAIE, comments on the words that Undersecretary for Foreign Affairs Benedetto Della Vedova put in black and white in a letter to the Secretary General of the CGIE, Michele Schiavone.

Merlo explains: “Replying to the General Council of Italians Abroad, which was rightly asking the government for greater attention and support for Italian associations around the world, Della Vedova answered that certain peculiarities of the associative legal form ‘make financial intervention in support of Italian associations abroad particularly difficult’, partly because, he added, the majority of Italian associations abroad allegedly lack legal personality.”

“Why does a politician accept a portfolio he understands absolutely nothing about? Wouldn't a ‘no, thank you’ have been better? I say this for his sake, since he is an official with a career of his own, and for all of us Italians abroad, forced to endure yet another Roman politician who comes here on a scholarship paid for by every taxpayer to learn about the world of Italians overseas,” Sen. Merlo remarks rhetorically, and continues: “The truth is that the Italian associations around the world that need concrete help are precisely the ones that, in the vast majority of cases, do have legal personality: they are legally constituted, they have statutes and physical premises. In short, they are fully compliant in every respect. And those that are not, often for economic or other reasons, must be helped by the Italian government precisely so that they can get into compliance. Because these are associations with a history, associations that do a great deal for Italians abroad, even if a comma is missing from their bylaws.”

“The bureaucratic mentality,” the MAIE president continues, “should be left to the bureaucrats. Politics is something else. I consider Undersecretary Della Vedova's words absurd, and offensive to the entire world of Italian associations beyond our borders. Statements like these come from someone who has never truly known Italian associational life abroad. He might as well have said that there is no money, or rather that the current government has no intention of allocating a single cent to the network of Italian associations abroad, because unfortunately, so far, the current executive seems to care little about Italians around the world.”

“Precisely for this reason the MAIE, which continues to keep a vigilant eye on the government's policies for Italians abroad, is weighing whether or not to keep supporting this executive. Because everything has a limit. We will see the next budget law. We are not party foot soldiers. We are a movement born abroad that aims to represent our citizens residing beyond our borders with dignity. This is the MAIE,” Sen. Merlo concludes, “and this is what makes the difference.”

Twitter corruption: India and Italy have a lot in common

 

India Covid: Anger as Twitter ordered to remove critical virus posts


Thousands across India are outraged after the government ordered social media platform Twitter to remove posts critical of its handling of the virus.

A Twitter spokesperson confirmed it had blocked some material from being viewed in India.

The country faces a massive surge in cases, with many of its hospitals facing an oxygen shortage.

One Twitter user accused the government of "finding it easier to take down tweets than ensure oxygen supplies".

India recorded 352,991 new infections on Monday and 2,812 deaths - the highest single-day spike so far.

'A humanitarian disaster'

The government made an emergency order to censor the tweets, Twitter revealed on Lumen, a database that keeps track of global government orders around online content.

Twitter did not specify which content it had taken down, but media reports say it includes a tweet from a West Bengal politician holding Prime Minister Narendra Modi directly responsible for Covid deaths, and one from an actor criticising Mr Modi for holding political rallies while the virus raged.

Video caption: 'A person cannot even die peacefully in Delhi'

Twitter said it reviewed content when it received a "valid legal request" - in this case, the Indian government is said to have cited the Information Technology Act 2000.

"If it is determined to be illegal in a particular jurisdiction, but not in violation of the Twitter Rules, we may withhold access to the content in India only," the platform said.

An Indian official said the material in question was misleading or could spark panic.

"We cannot allow fake news that harms the country," BJP national spokesperson Gopal Agarwal told the BBC.

The crisis was being worsened by fake news, he explained, pointing out that social media content had to be in line with the rule of law.

An official of the Ministry of Electronics and IT had earlier told The Hindu newspaper that it was "necessary to take action against those who are misusing social media... for unethical purposes."

But on social media, many criticised the government for focusing on "censorship" while the country was in the midst of a "humanitarian disaster".


Many online also criticised Twitter for complying with the order, calling them "complicit".


Twitter has been overrun by reports of people falling sick, needing oxygen and beds. It has in the past been criticised for bowing to pressure from the Indian government.

In February, the platform blocked more than 500 accounts linked to ongoing farmer protests against agricultural reforms after the government issued a legal notice. If Twitter had not complied, it could have meant jail time for Twitter's employees in India.

Image caption: India is facing a deadly second wave of Covid-19 infections

Earlier this year, the Indian government believed it had beaten the virus. New cases had fallen to 11,000 by mid-February, vaccines were being exported and in March the health minister said India was "in the endgame" of the pandemic.

However, since then an increase in cases has been driven by the emergence of new variants, as well as mass gatherings, such as the Kumbh Mela festival, which drew millions of pilgrims earlier this month.

Mr Modi has faced increased criticism for lifting restrictions and resuming large gatherings.

On Sunday, the prime minister said the second wave was a storm that had "shaken the nation" but that a "positive approach" was key to fighting the pandemic.


Twitter corruption

 

Twitter’s New Features Aren’t What Users Asked For


On Friday, Sara Haider, a product-management director at Twitter, asked for feedback on some new features the company is considering. “Hey Twitter. We’ve been playing with some rough features to make it feel more conversational here,” she tweeted, sharing images of reply threading and an online-status indicator. “Still early and iterating on these ideas. Thoughts?” she asked.

While some users replied with small tweaks or suggestions (“more whitespace”), others begged Twitter to fix the one thing they feel the company continues to ignore: rampant harassment and abuse. “Talk to @jack about actually doing something instead [of] cosmetic changes,” one woman tweeted. “I don’t think Twitter shouldn’t evolve. I just think Twitter HAS to think of how features intended to help may be exploited for harm,” another user said.

Haider herself isn’t part of the team at Twitter that oversees issues related to abuse and harassment, but criticism of how the company handles such issues has reached a fever pitch. Over the past year, a slew of high-profile users, including Ed Sheeran, Millie Bobby Brown, and Wil Wheaton, have all stepped back from Twitter because of the harassment they received on the platform. Meanwhile, the company continues to announce incremental product updates that users feel ignore the real problem.

Over the past 18 months, Twitter has changed its user avatars from square-shaped to circular, redesigned Moments, added topic tags to the Explore page, spammed users’ timelines with a new “happening now” section, added endless notifications, upped the character limit to 280, promoted live video of sports events, revamped its algorithm to give older tweets more prominence, and announced plans to revoke the third-party API access that many popular apps rely on. None of these updates change the fact that outsize harassment problems have made the experience of Twitter itself miserable for many users.

While the company continues to dedicate time and resources to making minor changes aimed at boosting engagement, easy fixes for harassment are ignored. In 2016, for instance, Randi Lee Harper, the founder of the Online Abuse Prevention Initiative, laid out some options for improvement in a Medium post. Twitter has since addressed most of them, but several proposals, like auto-muting replies when someone you have blocked tweets at you or giving users with locked accounts greater ability to interact with open ones, have yet to be implemented.

There is also currently no way to opt out of inclusion in Twitter Moments. Moments was introduced in 2015 as Twitter’s way to highlight noteworthy tweets about current events, and since then it has become a widely used feature within the app. But users whose tweets are featured often immediately become targets of abuse, and, because Twitter’s curation team does not ask permission or give a heads-up before featuring a tweet, are seldom prepared for the increased attention.

Other updates, like giving users the ability to revoke DM privileges without blocking someone, or allowing users to mute words and phrases from columns on TweetDeck, could help power users curb their own experience of harassment. Some have become so desperate for a solution that they’ve endorsed radical proposals like only allowing verified accounts to tweet.

Twitter continues to emphasize that tackling abuse is a work in progress. In March, the company even solicited suggestions directly from users via a Google form. More recently, Twitter has rolled out enhanced quality filters for notifications. It’s also started giving more weight to abuse reports made by bystanders, thereby relieving victims of the burden, and temporarily restricted more accounts that demonstrate abusive behavior. When accounts are suspended for abuse, the company now tells offenders which tweets violated the platform’s rules and which rules they violated.

But despite these updates, for many users, harassment remains an inescapable part of the Twitter experience. Unless the company devotes substantial resources to tackling the problem, it’s unlikely it will be contained. Jack Dorsey, Twitter’s CEO, will likely face questions from Congress about the particular urgency of this issue as he testifies Wednesday at hearings about foreign governments’ ability to spread misinformation on the platform. As trolls and bad actors have weaponized social-media sites like Twitter, abuse and harassment campaigns can become mechanisms for spreading misinformation and divisive political content.

Some users, though, are skeptical that anything will ever change. “The annoying thing is that every few months Jack comes out with a big speech about how they’re going to fix twitter,” one user tweeted, “and ever[y] time they just continue to get it wrong.”


Taylor Lorenz is a former staff writer at The Atlantic.

Twitter corruption

 

Here's how Twitter tracked Modi's anti-corruption movement

The total volume of tweets in a 12-hour period reached a staggering 470,000, clearly indicating the public's interest in the developments








Picture source: Getty Images


Prime Minister Narendra Modi's surprise announcement abolishing the Rs 500 and Rs 1,000 currency notes then in circulation sent the Twitter world into a tizzy.

From memes to manic panic, the Twitter world was ablaze with tweets either raving or ranting about the move. The flavour of the tweets ranged from disbelieving to celebratory, congratulatory, hilarious and downright skeptical. It seemed like a lot of people had something to tweet out last night.

It figures, therefore, that the total volume of tweets in a 12-hour period reached a staggering 470,000, clearly indicating the public's interest in the developments.

While rumours and whispers regarding the move had been floating around on Twitter since the morning of 8 November 2016, it was the Prime Minister's speech at 8 pm on Tuesday evening that really fired up the Twitterati to come out in full swing.

The subject peaked at 11.30 pm last night with 2,000 tweets being generated per minute. The hashtags #IndiaFightsCorruption and #ModiFightsCorruption went viral instantly, creating a digital community.
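Those figures can be sanity-checked with some quick arithmetic. The calculation below is a back-of-the-envelope exercise using only the numbers quoted in the article, not data from Twitter:

```python
# Quick check of the quoted figures: 470,000 tweets over a 12-hour
# window, against a reported peak of 2,000 tweets per minute.

TOTAL_TWEETS = 470_000
WINDOW_MINUTES = 12 * 60   # 12 hours expressed in minutes
PEAK_PER_MINUTE = 2_000

average_per_minute = TOTAL_TWEETS / WINDOW_MINUTES
print(round(average_per_minute, 1))                    # ~652.8 tweets/min average
print(round(PEAK_PER_MINUTE / average_per_minute, 1))  # peak was ~3.1x the average
```

In other words, the 11.30 pm peak ran at roughly three times the average rate for the whole window, consistent with a burst of activity right after the 8 pm speech.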

The unique facet of this mass Twitter conversation was that the magnitude of this milestone development was such that it saw both celebrities and the 'common man' fall under the same umbrella, tweeting out under similar hashtags and echoing similar sentiments.

What could have possibly spurred the popularity of the topic at hand was its endorsement by Twitter heavyweights such as yoga gurus Sri Sri Ravi Shankar and Baba Ramdev, celebrities from the world of Bollywood, politicos like Rajnath Singh, and the Prime Minister himself.

Five spotlight tweets, embedded in the original article, came from @PMOIndia, @SriSri, @RBI, @narendramodi and @RajnathSingh.
Data source: Twitter India

Instagram corruption

 

The growing criticism over Instagram’s algorithm bias

Last Updated Apr 15, 2021 at 12:04 pm EDT

Summary

Instagram has come under fire for being biased against plus-size account holders and those from racialized communities


Biases built into algorithms can favour people who hold values similar to those of their creators


Instagram's CEO pledged the company would do a better job of serving underrepresented groups and address algorithmic bias


UPDATE: After CityNews contacted Instagram and following the publication of this story, Sarah Taylor’s Instagram account has been reinstated as of April 15.

A Facebook company spokesperson provided the following statement:

“We want our policies to be inclusive and reflect all identities, and we are constantly iterating to see how we can improve. We remove content that violates our policies, and we train artificial intelligence to proactively find potential violations. This technology is not trained to remove content based on a person’s size, it is trained to look for violating elements – such as visible genitalia or text containing hate speech. The technology is not perfect and sometimes makes mistakes, as it did in this case – we apologize for any harm caused.” 


What started out as an exciting Instagram pregnancy announcement for one Toronto woman ended with her account being deactivated, due to what she fears is a biased algorithm that is now facing increased scrutiny.

Sarah Taylor, a plus-size model and personal trainer, posted a photo of three running shoes beside a onesie and her baby's sonogram. Soon after, she said she received an alert from Instagram telling her that there was suspicious activity on her account and that she would therefore need to go through a verification process to authenticate her page. Despite doing so, she was told her account was being disabled for 24 hours.

“My page was gone, there was nothing there and it looked like I didn’t exist,” said Taylor. “I wasn’t given a reason, I had never had any community violations. To be shut down with no warning at all and no previous faults against my account made no sense at all.”

The soon-to-be mom lost over 8,000 followers, over 90 per cent of whom were women. She depends on her social media page for her livelihood, keeping her Toronto business – a fitness studio that was forced to shut down during the pandemic – running virtually.

After taking numerous steps to get her page back and following up with the social media app to appeal the removal of her account, she was told via email that her account was deactivated because community guidelines had been violated, an allegation she disputes.

“I had no hate speech, no bullying, I am not nude on my photos and mostly in fitness gear,” said Taylor. “All of my posts are all about empowering women, it’s my life’s work to help women advocate for themselves.”

More than two weeks later, Taylor still doesn’t know why her account was deactivated, adding that she is unaware of whether or not she was reported by someone else and if Instagram investigated prior to removing her page.

“The fact that no one got back to me with details is really disheartening as an influencer, as a business owner, and somebody who owns a small business and is trying to survive during COVID,” Taylor said. “I want them to give me actual reasons as to why it was shut down in the first place because there was no cause for it. I want to see change in the long run in algorithms. Stop filtering different groups if they aren’t the typical beauty standard.”

CityNews reached out to Instagram last week to ask why Taylor’s account was removed but a response has not yet been provided.

The algorithm dilemma

Taylor took to her other Instagram page to bring attention to her experiences and found that her story was just one of many that highlighted issues surrounding Instagram’s algorithm, a set of computerized rules and instructions used by the social media site.

“I discovered there were a couple of other accounts that I know of who talk about very similar topics as me that have been shut down, or have had community violations, and have been shadow banned,” said Taylor. “There are so many other things that have happened when it comes to silencing the voices who are in marginalized bodies.”

For years now, a community of social media users have criticized Instagram’s algorithms for being biased towards plus-size account holders, and especially those from racialized communities.

Yuan Stevens, Policy Lead on Technology, Cybersecurity & Democracy at Ryerson University, said a computer system's rules, in this case algorithms, can discriminate against people.

One of the issues identified by Stevens is that algorithms are assumed to be neutral and math-based, but the technology isn’t impartial, and it’s made with “biases of their creators”. The biases built into algorithms and automated technology are also reflective of their databases and can therefore favour people who hold similar values as the creators.

Stevens said that has significant implications for plus-sized people on social media.

“I’m not surprised that plus-sized models could be targeted on social media apps like Instagram,” she said. “We know that automated decision making algorithms like face recognition technologies can be extremely inaccurate.”

Just recently, over 50 plus-size content creators signed up to participate in the ‘Don’t Delete My Body’ project, calling on Instagram to ‘stop censoring fat bodies’ and saying that Queer and BIPOC account holders are targeted at higher rates. The influencers, who are from diverse backgrounds, posted photos with the caption “Why does Instagram censor my body but not thin bodies?”

“There’s a bot in the algorithm and it measures the amount of clothing to skin ratio and if there’s anything above 60 per cent, it’s considered sexually explicit,” said Kayla Logan, one of the creators of the project. “So if you’re fat and you’re in a bathing suit, compared to your thin counterpart, that’s going to be sexually explicit. It’s inherently fat phobic and discriminatory towards fat people.”
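Logan's description is an unverified claim by an interviewee, not a documented part of Instagram's systems. Still, a minimal sketch of the heuristic she describes helps show why a fixed skin-ratio threshold would be inherently biased by body size: for the same outfit, a larger body occupies more of the frame, pushing the ratio up. Every name below, and the 0.60 cutoff, is an assumption taken from the quote, not from any published API:

```python
# Hypothetical sketch of the moderation heuristic Kayla Logan describes.
# This is NOT Instagram's actual code; the function names and the 60%
# threshold are assumptions drawn from the interviewee's claim.

def skin_exposure_ratio(pixels, is_skin):
    """Fraction of pixels a classifier labels as exposed skin."""
    skin = sum(1 for p in pixels if is_skin(p))
    return skin / len(pixels) if pixels else 0.0

def flag_as_explicit(pixels, is_skin, threshold=0.60):
    """Flag an image when the skin ratio exceeds the claimed cutoff."""
    return skin_exposure_ratio(pixels, is_skin) > threshold

# Toy example: pixels as (r, g, b) tuples, with a crude skin-tone test.
def crude_skin_test(p):
    r, g, b = p
    return r > 95 and g > 40 and b > 20 and r > g and r > b

image = [(200, 120, 90)] * 70 + [(30, 30, 200)] * 30  # 70% "skin" pixels
print(flag_as_explicit(image, crude_skin_test))  # True: 0.70 > 0.60
```

Under such a rule, two people wearing identical swimwear could get different outcomes purely because one body fills more of the photo, which is exactly the discrimination Logan alleges.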

Logan, who is a body positive and mental health content creator, adds that this issue has persisted for years. That’s why dozens participated in the project, taking photos of themselves in swimwear, lingerie, and some posed semi-nude while covering parts of their bodies. Logan said the photos taken for this project, are similar to what Instagram has allowed other account holders to post without penalty.

“Instagram is doing everything they possibly can to silence you. They will delete posts, they will flag your stories and remove them,” Logan said. “Everyone shared their experience of censorship, especially on Instagram. It’s not an isolated incident being fat and being silenced on Instagram or losing your platform.”

Logan describes herself as a body-positive fat liberation activist who has posted photos in lingerie posing next to iconic places around the world. These algorithms have also impacted her account, locking her out without any notice numerous times for posting content that’s similar to non plus-sized accounts.

“I’m all about showing your body in very artistic, beautiful, non sexualized ways. But on Instagram, fat bodies are considered sexualized.”

Logan, claims she’s also been shadow-banned for years now, which is the practice of restricting content and limiting an account’s reach where photos and videos don’t appear on the explorer page. For social media users who depend on these apps for business, that may mean losing customers and opportunities. Logan also adds that her story views have decreased by half and her branded content feature was removed, which impacts her ability to do business with companies.

“I did have that confirmed by one of the largest companies in Canada, when their IT department looked into it for me,” said Logan. “A company wanted to put money into a sponsored post we were doing and I didn’t have that feature. I felt really embarrassed and ashamed and I had to tell them that I believe I’m shadow-banned.”

“It’s like this hush thing in the community where us plus-size influencers talk about it a lot but Instagram denies that it exists,” Logan added.

Both Taylor and Logan have said that contacting Instagram has been one of the biggest challenges, and there’s been a lack of transparency and accountability, especially when they’re accused of violating community rules and their content is repeatedly removed.

“There’s no human entity to speak with so we’re shouting into this void,” said Logan. “They’re losing their community and their livelihood, and even they can’t get a hold of Instagram. These are people that have half-a-million viewership and they can’t have this conversation with Instagram.”

“Start hiring people to actually look at things,” Taylor added. “If I submit an appeal to my account, someone should be looking at it and give me an actual answer rather than just a link with no details. It’s unfair and something has to change. That’s my hope in speaking up.”

Criticism over Instagram’s algorithms started long before Taylor’s account was removed.

Last June, when the world saw mass protests highlighting the deaths of Black people at the hands of police officers and calling on governments and institutions to address systemic racism, the head of Instagram made a post standing in solidarity with the Black community.

Instagram CEO Adam Mosseri wrote then that the company will be doing a better job at serving underrepresented groups on four areas, including addressing algorithmic bias.

CityNews reached out to Facebook, which owns Instagram, to ask about the issue of algorithm biases, shadow-banning, and how the company investigates flagged accounts prior to removing them. A response has not yet been provided.

“This is a greater conversation, it’s not just about you shutting my business page with no reason,” Taylor said. “I’m wondering if it’s a bigger conversation about censorship. If that’s the case, I will continue to be loud because that’s not okay.”

The algorithm debate

Algorithms used by social media sites have sparked big debates on not only censorship, but the responsibility companies have in addressing issues such as online hate, white supremacy, harassment and misogyny.

“It is worse for those who are Black, Indigenous and Asian because they do get targeted even more and that’s not okay,” Taylor said. “It’s very frustrating.”

Stevens said technology works in favour of some people and against others because of its bias and potential for discrimination, adding that Instagram’s algorithms aren’t perfect.

“Automated decision making technology is important because it speeds up human decision making processes and allows decisions to be made at a significant scale,” she explained. “Where Facebook is removing content, historically it would have relied on a person to make those decisions; automation speeds that process up and allows content to be removed at an incredible scale and speed.”

Stevens is part of a team at Ryerson Labs, looking at face recognition technology and how algorithms work. She cites the work of Shoshana Zuboff, a scholar and leader in the field of “surveillance capitalism,” saying algorithms play a role for social media companies who are collecting data.

“These companies are in the business of understanding how we think and work and nudging us in certain directions, and that’s really significant,” said Stevens. “We expect to know how technology works but algorithmic technology sometimes, it teaches itself because we feed it data.”

Stevens and her team are hoping to highlight the work of Joy Buolamwini, a computer scientist and digital activist who founded the Algorithmic Justice League, focusing on creating equitable and accountable technology.

As explained by Stevens, the organization has identified how face recognition algorithms, which are being used by social media companies, can often be inaccurate. Recently the AJL analyzed 189 face algorithms submitted by developers around the world and found concerning results.

“What they found was that the algorithms were 10 to 100 times more likely to inaccurately identify a photograph of a Black or East Asian person compared to a white person,” said Stevens. “What this means is that if you are in a data base and you are being chosen for something or if they wanted to remove content or for some reason target you in some way, the chances of you being misidentified are so much greater if you’re East Asian or Black.”
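To make the scale of that "10 to 100 times" range concrete, the multiplication can be worked through with a small illustration. The 0.01% baseline false-match rate below is an invented figure chosen for the sake of arithmetic, not a number taken from the study:

```python
# Illustrating the "10 to 100 times more likely" finding quoted above.
# The baseline rate is a hypothetical figure, not data from the study.

baseline_error = 0.0001            # assumed false-match rate for white faces (0.01%)
low_factor, high_factor = 10, 100  # multiplier range reported in the article

low = baseline_error * low_factor
high = baseline_error * high_factor
print(f"{low:.2%} to {high:.2%}")  # prints "0.10% to 1.00%"
```

Even with a tiny baseline, the upper end of the range turns one misidentification in ten thousand into one in a hundred, which is why the disparity matters at the scale social media platforms operate at.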

Stevens adds that there needs to be more research in Canada that looks at the use of algorithms and how decisions are made, not only on social media, but also when it comes to policing.

Most of the research cited comes from the U.S., where there have been instances of people being wrongly accused of committing a crime as police services have also been known to use facial recognition technology.

“There needs to be solutions,” argued Stevens. “Social media companies are increasingly using algorithms and AI to make decisions. Our work uncovered that in 2020 Facebook’s Community Standards Enforcement report demonstrated that they’re continuously expanding their use of algorithms to make content removal decisions.”

Attention has also turned to Canada’s privacy laws when it comes to facial recognition technology. Stevens said it’s important that our government’s laws advance to prevent what she calls “wrongful takedowns” and instead, require social media companies to be more transparent about how they make their decisions.

“People should understand how decisions are made. Right now companies aren’t required to make these decisions transparent,” Stevens said. “It’s incredibly important that companies are required by the Canadian government to be open about how they decide how content is removed. Right now, we don’t have that transparency.”

Unfairly targeted 

Not having access to her account has resulted in a loss of business for Taylor, who was crowned Miss Plus Canada 2014.

Since the pandemic closed down her physical gym, she’s moved her operations online and Instagram has become a key component to growing her community.

The expectant mother also depends on social media as she works with big brands like Nike, Lululemon and Penningtons. She’s created a community with people from all around the world, which is why she’s hoping Instagram will give her her page back.

“It wasn’t just a page, it was a community,” she said. “To lose that makes me really sad, and it’s also disheartening that it happened after announcing my pregnancy. I’ve definitely lost opportunities. It’s basically slowed to a halt.”

Logan, who has nearly 38,000 followers on Instagram, adds that she and others who have been unfairly targeted by algorithms have had to create backup accounts just in case their pages are removed.

“I don’t know any thin bloggers who have a backup account in case they lose their page,” Logan said. “Almost all of my friends I know, we have backup accounts because we are terrified every single day that our accounts will be gone. So just in case, we have that second platform.”

It's been said for decades now that society needs to do a better job of being representative and inclusive of communities who haven't always had a platform for representation. The same can be said about social media. Centering the empowerment of other women who haven't always seen themselves reflected has been central for both Taylor and Logan.

“I grew up as a big kid, I was a size 12 and I was bullied heavily, not just verbally. I was beat up by the guys in grade school,” Taylor said. “It became my internal dialogue and I grew up hating myself. It affected all the decisions I made and led me to marry a man who was abusive.”

Instagram corruption

 

Why Instagram Is the Worst Social Media for Mental Health

May 25, 2017 11:54 AM EDT

Instagram is the worst social media network for mental health and wellbeing, according to a recent survey of almost 1,500 teens and young adults. While the photo-based platform got points for self-expression and self-identity, it was also associated with high levels of anxiety, depression, bullying and FOMO, or the “fear of missing out.”

Out of five social networks included in the survey, YouTube received the highest marks for health and wellbeing and was the only site that received a net positive score by respondents. Twitter came in second, followed by Facebook and then Snapchat—with Instagram bringing up the rear.

The #StatusOfMind survey, published by the United Kingdom’s Royal Society for Public Health, included input from 1,479 young people (ages 14 to 24) from across England, Scotland, Wales and Northern Ireland. From February through May of this year, people answered questions about how different social media platforms impacted 14 different issues related to their mental or physical health.

There were certainly some benefits associated with social networking. All of the sites received positive scores for self-identity, self-expression, community building and emotional support, for example. YouTube also got high marks for bringing awareness of other people’s health experiences, for providing access to trustworthy health information and for decreasing respondents’ levels of depression, anxiety, and loneliness.

But they all received negative marks, as well—especially for sleep quality, bullying, body image and FOMO. And unlike YouTube, the other four networks were associated with increases in depression and anxiety.

Previous studies have suggested that young people who spend more than two hours a day on social networking sites are more likely to report psychological distress. “Seeing friends constantly on holiday or enjoying nights out can make young people feel like they are missing out while others enjoy life,” the #StatusOfMind report states. “These feelings can promote a ‘compare and despair’ attitude.”

Social media posts can also set unrealistic expectations and create feelings of inadequacy and low self-esteem, the authors wrote. This may explain why Instagram, where personal photos take center stage, received the worst scores for body image and anxiety. As one survey respondent wrote, “Instagram easily makes girls and women feel as if their bodies aren’t good enough as people add filters and edit their pictures in order for them to look ‘perfect’.”

MORE: Why You Should Let Someone Else Choose Your Tinder Photo

Other research has found that the more social networks a young adult uses, the more likely he or she is to report depression and anxiety. Trying to navigate between different norms and friend networks on various platforms could be to blame, study authors say—although it’s also possible that people with poor mental health are drawn to multiple social-media platforms in the first place.

To reduce the harmful effects of social media on children and young adults, the Royal Society is calling for social media companies to make changes. The report recommends the introduction of a pop-up “heavy usage” warning within these apps or websites—something 71% of survey respondents said they’d support.

It also recommends that companies find a way to highlight when photos of people have been digitally manipulated, as well as identify and offer help to users who could be suffering from mental health problems. (Instagram rolled out a feature last year that allows users to anonymously flag troublesome posts.)

The government can also help, the report states. It calls for “safe social media use” to be taught during health education in schools, for professionals who work with youth to be trained in digital and social media and for more research to be conducted on the effects of social media on mental health.

The Royal Society hopes to empower young adults to use social networks “in a way that protects and promotes their health and wellbeing,” the report states. “Social media isn’t going away soon, nor should it. We must be ready to nurture the innovation that the future holds.”

Microsoft settles corruption charges for $25 million

 

Microsoft pays $25 million to settle corruption charges

July 23, 2019
FILE- In this May 7, 2018, file photo Microsoft CEO Satya Nadella looks on during a video as he delivers the keynote address at Build, the company's annual conference for software developers in Seattle. Microsoft is paying more than $25 million to settle federal corruption charges involving a bribery scheme in its Hungary office and three other foreign subsidiaries, the U.S. Securities and Exchange Commission said Monday, July 22, 2019. (AP Photo/Elaine Thompson, File)

NEW YORK (AP) — Microsoft is paying more than $25 million to settle federal corruption charges involving a bribery scheme in Hungary and other foreign offices.

The U.S. Securities and Exchange Commission said Microsoft will pay about $16.6 million to settle charges that it violated the Foreign Corrupt Practices Act. While the case centered on Hungary, the SEC said it also found improprieties at Microsoft offices in Saudi Arabia, Thailand and Turkey.

The Justice Department said Microsoft will also pay an $8.75 million criminal fine stemming from the Hungarian bid-rigging and bribery scheme.


Federal prosecutors said that from 2013 through 2015, a senior executive and other employees at the Hungary office took part in a scheme to “inflate margins in the Microsoft sales channel” in connection with Microsoft software licenses sold to Hungarian government agencies.

Savings were falsely recorded as discounts and used for corrupt purposes, the prosecutors said.

Microsoft President Brad Smith said in a letter to employees Monday that the misconduct was “completely unacceptable” and involved a small number of employees.

Smith outlined changes to prevent public sector discounts from being used improperly and said the company is expanding its use of artificial intelligence to flag suspicious transactions.

Microsoft's Anti-Corruption Technology and Solutions (ACTS)

 

Microsoft launches Anti-Corruption Technology and Solutions (ACTS)


Today marks the 15th anniversary of the United Nations’ International Anti-Corruption Day. On this day, Microsoft is proud to join with others from around the world to use our voice in support of International Anti-Corruption Day and to commit to take steps to reduce corruption.

In recognition of this important day, we are launching Microsoft Anti-Corruption Technology and Solutions (ACTS) to help empower governments and other stakeholders in their corruption fight. With this initiative, we hope to bend the curve of corruption by helping governments innovate with technology, expertise, and other resources.

The UN’s Anti-Corruption Day is observed each year to educate the public on the issue of corruption, to mobilize organizations and governments to work together to help eradicate it, and to highlight successful anti-corruption efforts and initiatives. As noted by the UN, corruption is a complex political, social, and economic phenomenon that is not unique to any single country or government. It undermines democratic institutions, slows economic growth, and contributes to governmental instability. And it is not a new problem. History is rife with examples across the centuries – just this last month, archeologists decoded an inscription by the Roman Emperor Septimius Severus to the people of the ancient city of Nicopolis ad Istrum expressing gratitude and appreciation for a bribe.

The UN reports that the cost of corruption is more than $3.6 trillion a year. This means that trillions of dollars every year are diverted from needed investments in education, health care, and critical infrastructure around the world. The impact of this is profound: Well-intentioned governments are thwarted in their ability to invest in basic humanitarian causes, and the deceptions caused by corruption subvert honest endeavors to foster inclusive and sustainable growth. Tragically, the people who end up suffering most are exactly the people who can afford it least.

The global events of this year have created a world particularly vulnerable to corruption. As noted by the UN Secretary-General, Antonio Guterres, “Corruption … is even more damaging in times of crisis – as the world is experiencing now with the Covid-19 pandemic. The response to the virus is creating new opportunities to exploit weak oversight and inadequate transparency, diverting funds away from people in their hour of greatest need.” Governments around the world are scrambling to address the Covid-19 pandemic – speeding to implement measures to address the health emergency and to provide resources for those hardest hit by the resulting economic downturn. These unprecedented investments, however, have exposed vulnerabilities in supply chains, procurement processes, and corruption controls.

This year’s Anti-Corruption Day theme, Recover with Integrity, serves as an important reminder of the critical importance of ensuring that pandemic resources reach their intended recipients. Unless we reduce corruption by exposing it through greater transparency and address it through more effective controls, recovery will be jeopardized.

The opportunity

At Microsoft, we believe corruption is an urgent global issue that can and must be solved. It will require a focused and comprehensive solution, and it will require governments, civil society, and the private sector all working together to promote transparency, create effective controls, and drive accountability. It is a daunting task, but never before has the world had the kinds of tools to fight corruption that exist today. We know, for instance, that data can illuminate hidden patterns and relationships to provide governments with better tools to ensure public moneys go to their intended purposes. Technology resources such as cloud computing, data visualization, artificial intelligence (AI), and machine learning provide powerful tools for governments and corporations to aggregate and analyze their enormous and complex datasets in the cloud, ferreting out corruption from the shadows where it lives, and even preventing corruption before it happens.

Our commitment

In the next decade, Microsoft ACTS will leverage the company’s investments in cloud computing, data visualization, AI, machine learning, and other emerging technologies to enhance transparency and to detect and deter corruption. We will endeavor to bring the most promising solutions to the broadest possible audience, using our partner networks, programs, and global employee base to scale solutions through careful consideration of their priorities, technical infrastructure, and capabilities.

Over the last six months, we have already begun to make investments in support of the Microsoft ACTS initiative, including a partnership with the Inter-American Development Bank to advance anti-corruption, transparency, and integrity objectives in Latin America and the Caribbean. Announced in July 2020, we are partnering with the IDB Transparency Fund to help bring greater transparency to the use of Covid-19 economic stimulus funds, building on the Mapa Inversiones platform developed by the IDB with Microsoft support and already adopted by many countries in the region. In the coming months and years, we look forward to additional partnerships, learning as we go, and empowering the work of others.

Microsoft is excited not only by the potential for technology to make positive changes on a long-standing societal problem that burdens the lives of citizens, distorts economic development, and erodes trust in public institutions, but also by the opportunity to partner with the international community in this fight.

We stand with the United Nations and the initiatives undertaken by governments around the world to stamp out corruption, and we look forward to working with governments, civil society, and others in the private sector to help us all recover with integrity.
