What happens when Silicon Valley’s most powerful executives try to explain their acquisition strategy using the language of fear, panic, and existential dread?
You get a rare, behind-the-scenes look at how billion-dollar decisions are made, not with confidence, but with cold sweats and defensive playbooks.
This isn’t about corporate strategy as usual. It’s about the raw, unfiltered psychology of market dominance: slide decks labeled like doomsday prep kits, emails that read like confessions, and courtroom testimony that turns a legal defense into a therapy session.
If you want to understand how power really works in tech, and what happens when that power finally gets questioned, this is essential reading.
We explore this today.
FIGHTING FOR ITS MONOPOLY
Federal regulators are pushing for a major shake-up in Google’s advertising operations, calling for the tech giant to spin off key parts of its ad tech empire under court supervision.
The Department of Justice wants Google to divest its AdX exchange, a central marketplace connecting advertisers with digital publishers, as part of a broader antitrust enforcement effort targeting what the government describes as monopolistic control in online advertising.
The proposal, submitted in a legal filing, also urges the separation of Google’s DoubleClick for Publishers (DFP), a platform used by websites to manage and sell their ad space. The DOJ contends that both AdX and DFP are core to Google’s allegedly unlawful dominance, facilitating behavior that undermines fair competition.
Google pushed back strongly in its response, filed May 5, rejecting the notion that breaking up its ad tech stack is necessary or even workable. The company argued that its systems are so deeply embedded in its own infrastructure that transferring the technology would require rebuilding it from scratch for external use.
“Divestiture is not as simple as selling either the AdX or DFP source code to a willing buyer,” the filing stated, explaining that the code depends heavily on Google’s proprietary environment and cannot function independently.
Rather than selling off parts of its ad business, Google proposed a series of behavioral changes it says would resolve the court’s concerns and restore competition. Among the suggested measures: allowing equal access to AdX’s real-time bidding system for competitors managing open web display ads.
“The DOJ conceded Google’s proposed ad tech remedy fully addresses the court’s decision on liability,” said Lee-Anne Mulholland, Google’s vice president of regulatory affairs, in a statement on Tuesday. She added, “The DOJ’s additional proposals to force a divestiture of our ad tech tools go well beyond the Court’s findings, have no basis in law, and would harm publishers and advertisers.”
The company also expressed openness to oversight, agreeing to the court’s idea of appointing an external trustee to monitor the implementation of its proposed remedies for up to three years.
This legal standoff follows a decision last month by US District Judge Leonie Brinkema, who concluded that Google’s practices in the ad exchange and ad server markets violated the Sherman Antitrust Act, ultimately to the detriment of advertisers and consumers alike.
Governments and corporations are working hand in hand to control what you can say, what you can read, and soon, who you are allowed to be.
New laws promise to “protect” you but instead criminalize dissent.
Apps and sites deplatform, demonetize, and disappear accounts that step out of line.
AI-driven surveillance tracks everything you do, feeding a system built to monitor, profile, and ultimately control.
Now, they’re pushing for centralized digital IDs: a tool that could link your identity to everything you say and do online. No anonymity. No privacy. No escape.
This isn’t about safety; it’s about power.
If you believe in a truly free and open internet, where ideas can be debated without fear, where privacy is a right, and where no government or corporation dictates what’s true, please become a supporter.
By becoming a supporter, you’ll help us:
- Expose online censorship, surveillance, and the digital ID agenda
- Challenge restrictive laws that threaten free expression
- Provide independent analysis on the erosion of digital rights
- Support voices who refuse to bow to pressure from governments or Big Tech
We don’t answer to advertisers or political elites.
If you can, please become a supporter.
It takes less than a minute to set up; you’ll get a bunch of extra features, guides, analysis, and solutions; and every donation strengthens the fight for online freedom.
Thank you.
At a recent UK parliamentary hearing on social media and algorithms, lawmakers ramped up calls for increased censorship online, despite revealing that they themselves remain unclear on what the existing law, the Online Safety Act, actually covers.
The session, led by the Science, Innovation, and Technology Committee, displayed a growing appetite among MPs to suppress lawful speech based on subjective notions of harm while failing to reconcile fundamental disagreements about the scope of regulatory authority.
Rather than defending open discourse, members of Parliament repeatedly urged regulators to expand their crackdown on speech that has not been deemed illegal. The recurring justification was the nebulous threat of “misinformation,” a term invoked throughout the hearing with little consistency and no legal definition within the current framework.
Labour MP Emily Darlington was among the most vocal proponents of more aggressive action. Citing the Netflix show Adolescence, she suggested that fictional portrayals of misogynistic radicalization warranted real-world censorship.
She pushed Ofcom to treat such content as either illegal or as misinformation, regardless of whether the law permits such classifications. When Ofcom’s Director of Online Safety Strategy Delivery, Mark Bunting, explained that the Act does not allow sweeping regulation of misinformation, Darlington pushed back, demanding specific censorship powers that go well beyond the legislation’s intent.
Even more revealing was the contradiction between government officials themselves. While Bunting maintained that Ofcom’s ability to act on misinformation is extremely limited, Baroness Jones of Whitchurch insisted otherwise, claiming it falls under existing regulatory codes.
The discrepancy not only raised concerns about transparency and legal certainty but also highlighted the dangers of granting censorship powers to agencies that can’t even agree on the rules they’re enforcing.
John Edwards, the UK Information Commissioner, shifted the discussion toward algorithmic data use, arguing that manipulation of user data, especially that of children, could constitute harm. While Edwards did not advocate direct censorship, his remarks reinforced the broader push for increased state oversight of online systems, further blurring the line between content moderation and outright control over public discourse.
Committee Chair Chi Onwurah repeatedly voiced dissatisfaction that misinformation is not explicitly addressed by the Online Safety Act, implying that its exclusion rendered the law ineffective.
However, as Bunting explained, the Act does introduce a narrowly defined “false communications” offense, which only applies when falsehoods are sent with the intent to cause significant harm, a standard that is both difficult to prove and intentionally limited to avoid criminalizing protected expression.
Onwurah appeared unimpressed by these legal safeguards.
Labour MP Adam Thompson pressed Ofcom to go further, asking why platforms weren’t being forced to de-amplify what he described as “harmful content.” Once again, Bunting noted that Ofcom’s mandate does not include blanket powers to suppress misinformation, and any such expansion would require new legislation. This admission did little to curb the committee’s broader push for more centralized control over online content.
The hearing also ventured into the economics of censorship, with several MPs targeting digital advertising as a driver of “misinformation.” Despite Ofcom’s limited remit in this area, lawmakers pushed for the government to regulate the entire online advertising supply chain. Baroness Jones acknowledged the issue but offered only vague references to ongoing discussions, without proposing any concrete mechanisms or timelines.
Steve Race, another Labour MP, argued that the Southport riots might have been prevented with a fully implemented Online Safety Act, despite no clear evidence that the law would have stopped the spread of controversial, but not illegal, claims. Baroness Jones responded by asserting that the Act could have empowered Ofcom to demand takedowns of illegal content. Yet when pressed on whether the specific false claims about the attacker’s identity would qualify as illegal, she sidestepped the question.
Ofcom’s testimony ultimately confirmed what civil libertarians have long warned: the Act does not require platforms to act against legal content, no matter how upsetting or widely circulated it may be. This hasn’t stopped officials from trying to stretch its interpretation or imply that platforms should go further on their own terms, an approach that invites arbitrary enforcement and regulatory mission creep.
Talitha Rowland of the Department for Science, Innovation, and Technology attempted to reconcile the contradictions by pointing to tech companies’ internal policies, suggesting that platform terms of service might function as a substitute for statutory regulation. But voluntary compliance, directed by unelected regulators under mounting political pressure, is a far cry from a transparent legal framework.
The entire hearing revealed a troubling dynamic: politicians eager to police online speech, regulators unsure of their actual powers, and a legal environment where vague definitions of “harm” are increasingly used to justify censorship by default.
The confusion among lawmakers and regulators alike should raise red flags for anyone concerned with due process, democratic accountability, or the right to express dissenting views in an open society.
These days, no event, incident, or occasion, regardless of its nature, appears to be too big or too small to use as an excuse to promote more censorship in the name of “combating disinformation.”
Last week, Spain and Portugal lived through an embarrassing episode of widespread electricity blackouts, and the current consensus is that the reason is even more embarrassing: old infrastructure, fraught with its own problems, which are only compounded by endless attempts to work “green” energy sources into it.
Trillions of dollars is the figure experts mention as needed to get the EU’s electricity grid up to speed, or rather, to balance reality with the aggressive “progressive” policy pushes so that a similar crisis is averted going forward.
But a conversation about these topics is apparently a hard one for the EU bureaucracy to have.
Instead, speaking through Commissioner for Preparedness, Crisis Management and Equality Hadja Lahbib, it prefers to mislead, deflecting away from that and onto the key talking points that are sure to provoke a sense of paranoia among citizens: cyberattacks and the supply chain disruptions they could cause.
In other words, instead of addressing the actual problems, the focus is being shifted to how information around them should best be managed, in an effort to score public opinion points.
Speaking to Spain’s El Mundo, Lahbib mentioned the EU Preparation Strategy and the Union Strategy for Preparation, apparently her “shorthand” for the formal, and oddly phrased, “EU Preparedness Union Strategy.”
It is a set of measures meant to “counter foreign information manipulation and disinformation more systematically” by fully using the EU’s Foreign Information Manipulation and Interference (FIMI) toolbox, the censorship law known as the Digital Services Act (DSA), and the upcoming censorship initiative, the European Democracy Shield.
So, let’s talk about the state of the EU’s electrical grid, why it is the way it is, who is responsible, and what concrete steps are being taken to remedy the massive problem.
Judging by Lahbib’s statements, let’s not. Let’s just accept that the system is rotten, and then simply learn how to “survive” the crises.
She decided to take a “victory lap” regarding the supposed relevance of the Strategy in situations like this, and specifically something called “the 72-hour survival kit.”
It has to do with things like floods and fires, but those references take a backseat to “threats to our phones, computers, banks, supply chains, raw materials, and even the media we consume.”
The UK Science, Innovation, and Technology Committee’s fourth and final meeting on social media misinformation and harmful algorithms saw renewed attacks against end-to-end encryption, and against platforms like Telegram and Signal.
And once again, representatives of the authorities tried to pin the blame for the Southport riots on social media and apps that fall outside the scope of what is regulated as Big Tech.
During the session, held on April 29, the Committee sought answers about “social media, misinformation and harmful algorithms” from the regulator Ofcom, the Information Commissioner’s Office (ICO), and the Department for Science, Innovation and Technology (DSIT).
Labour MP Paul Waugh focused on end-to-end encrypted messengers, specifically the one provided by Facebook, oddly choosing to refer to the secure online technology as “a challenge” that needs to be “combated,” and suggesting it is basically a tool for enabling child sex abuse online.
Addressing Ofcom Director of Online Safety Strategy Delivery Mark Bunting, Waugh wanted to know what the regulator was doing “to combat that challenge.”
Bunting replied that encryption has been “identified as one of the areas of risk that companies have to take account of” and called it “a problem.”
And while stating that encryption provides “enormous benefits in terms of privacy and security to users,” and adding that for those reasons it is “very highly valued by users” (but not so much by the authorities?), Bunting went on to state that “it does mean that a lot of the tools that we want to see companies use, including the AI (harms) detection tools, aren’t operable in encrypted environments.”
“We think it’s a challenge for the industry. We have been clear that we’re expecting the industry to do more work on techniques that are being developed to detect harmful activity in encrypted environments,” the Ofcom official said, adding that this was “one of the priority areas of work” for his technology team.
Labour MP Emily Darlington used the meeting to go after smaller platforms that also provide users with end-to-end encryption, and to try to forge a link between their use and the riots that engulfed the UK last summer after the Southport murders of schoolchildren.
Telegram was among those singled out, despite having over one billion users. Nevertheless, it was “lumped in” with Signal, 4chan, 8chan, and others, all with the goal of connecting the dots Darlington sees between “small apps” and “far-right extremist activity,” the implication being that this is where such activity flies under the regulatory radar, and where Ofcom should be enforcing.
English politicians will never let us forget that Orwell was their compatriot, so, addressing Mark Bunting, Darlington mentioned that Ofcom has something called “a small high harms platform task force.”
The response was that this was “a really important area” for Ofcom, while DSIT Director for Security and Online Harm Talitha Rowland said her team was “really concerned” about “small but risky sites (that are) a real danger to UK citizens,” and praised Ofcom’s “small but risky” task force.
Thanks for reading,
Reclaim The Net