Automated Tyranny, becoming the NORM.
They do not want to pay employees to enslave you;
robots are their bets!
Who the Tech Boy Jews.
The Jews who bought power like they do every election.
That is all ther is to know -- they will never leave us be
until we force them to leave off!
Coming up -- Data bases, Automated Death, Normal
Data Monitoring ... Real Time Spying!
++++
Silicon Valley's military ambitions
Tech companies are replacing military contractors with
AI, drones and battlefield systems
Silicon Valley is "finally getting its chance" to sell its
vision to the Pentagon, said Paolo Confino in Fortune.
Last month, President Trump signed several executive
orders to "streamline how the Department of Defense
acquires new defense systems," putting pressure on
existing contractors whose creaky systems are
over budget and overdue. Silicon Valley has been the
engine of innovation for the United States for decades.
But it has long complained that Washington bureaucracy
left tech companies "unable to compete with existing
military contractors." In the Trump administration, tech
firms have "found a welcome audience" willing to "take
a page from their playbook."
Tech players are rapidly changing the model of warfare,
said Lizette Chapman in Bloomberg. "Instead of dozens
or even hundreds of soldiers supporting one $100
million system, one soldier using AI software could
command dozens of cheap, autonomous weapons."
That, at least, is the promise pitched by Palantir, which
recently beat out RTX Corp. for a $178 million mobile
military command contract, "the first time a software
company" has taken "the lead role on a battlefield
system." Anduril, another California startup, is raising
billions of dollars to fuel the manufacturing of "a
lengthening list of weapons, wearables, and surveillance
systems." CEO Palmer Luckey is positioning his
company as the counter to China's military, which is
rapidly moving from "hypersonic and self-guided
missiles to drone swarms that can augment or someday
replace manned fighter jets."
"We are entering a new era where machines go to war,"
said Zoë Corbyn in The Guardian. This has produced a
need for the innovation that the legacy stalwarts, like
Boeing and Lockheed Martin, can't provide. The U.S.
now "has more than 1,000 venture-capital-backed
companies working on 'smarter, faster, and cheaper'
defense," like drones that travel underwater, microwaveray
guns, and even self-flying fighter jets. But some
experts worry that the money pouring into defense tech
— $155 billion between 2021 and 2024 — could push
the U.S. and these companies toward wanting "to use
them in war."
++++
Palantir to create vast federal data platform tying
together millions of Americans' private records, stock
jumps
Palantir will create a vast federal data platform that connects millions of Americans' private records under a powerful AI system. Backed by the Trump administration, Palantir's new deal links Social Security, IRS, and immigration data into one centralized system. The platform uses Palantir's Gotham software to flag fraud, track behavior, and potentially shape government decisions. While Palantir stock jumped 5.38% after the news, privacy advocates are raising concerns about surveillance and misuse.
Palantir Technologies (NYSE: PLTR) is back in
the spotlight after securing a major federal contract that
could reshape how the U.S. government uses data.
Under the new agreement backed by the Trump
administration, Palantir will build a vast centralized data
platform that connects sensitive records from across key
agencies—including the IRS, Social Security,
immigration databases, and more. This platform,
powered by Palantir’s Gotham software, is designed to
analyze behavioral patterns in real-time, flag potential
threats, and support decisions around public safety and
fraud detection.
The stock market liked what it saw. Palantir shares
jumped 5.38% after the announcement and are now
trading over 150% higher compared to post-election
2024 levels. But behind the stock surge, there's a deeper
story about privacy, AI surveillance, and what it means
when one tech firm gets the keys to America’s data.
What exactly is Palantir building for the U.S.
government?
Palantir isn’t just improving old databases—it’s building
what some experts are calling the most expansive
civilian surveillance infrastructure in U.S. history.
Instead of scattered files and spreadsheets, the platform
will use real-time data integration and artificial
intelligence to profile behavior, detect fraud, and
identify individuals or patterns deemed risky by the
system.
At the core of the project is Palantir’s Gotham software.
Already used by defense and intelligence agencies,
Gotham will now be used on the domestic front. It
doesn’t just track information—it makes judgments. It
could influence everything from how benefits are
distributed to who gets flagged for closer scrutiny by
law enforcement or immigration officers.
According to the original TipRanks report, this platform
will act like a “central intelligence layer,” consolidating
millions of personal records under a single AI-powered
lens.
How are privacy advocates reacting to this deal?
Civil liberties groups are raising serious alarms. Their
concerns go beyond standard data centralization. The
issue isn’t just where the data goes—it’s who controls it,
how it’s used, and what happens when it’s wrong.
Groups warn that this system could easily evolve into a
digital dragnet. With no clear public oversight or legal
guardrails, it could be used for political purposes,
targeted surveillance, or even immigration crackdowns.
Critics say the move consolidates both data and power,
raising fears of misuse during an already polarized
political climate.
And there’s a question of permanence. What starts as
fraud detection could quickly morph into a tool of
control, particularly in an election year where data is
already being weaponized.
As reported by Wired and The Daily Beast, another part
of this federal data effort—led by Elon Musk’s
Department of Government Efficiency (DOGE)—is
building a parallel system to track and monitor
immigrants using personal data. This has sparked even
more concern among privacy watchdogs.
Is Palantir stock still a good investment after the spike?
Despite the buzz, Wall Street remains divided.
According to TipRanks data, Palantir currently holds a
“Hold” rating from analysts. Out of 18 analyst ratings in
the last three months, only 3 are Buy, while 11 are Hold
and 4 are Sell.
The average 12-month price target is $100.13—about
18% below the current trading price. That suggests
analysts think the market may be too optimistic about
Palantir’s government deal and its long-term
profitability.
Some see Palantir’s role in this federal contract as a
major breakthrough in data infrastructure. But others
worry the company is now exposed to significant
political risk. A future administration could pull back on
the contract, impose stricter regulations, or dismantle
the program altogether.
Still, Palantir's recent work with Fannie Mae on AI-driven
mortgage fraud detection shows how its tools are
expanding into both public and private sectors. Whether
that becomes a strength or a liability in the long run
depends on how the company handles its growing
influence.
What’s next for Palantir and U.S. government
surveillance?
The contract marks a major shift in how the federal
government handles data and how much it relies on
private tech firms like Palantir to manage sensitive
information. With real-time analytics, profiling tools,
and AI-assisted threat detection, this deal could define
how the state operates in the digital age.
But with that power comes scrutiny. This is more than
just a tech upgrade—it’s a test of how far AI can go
inside the state and whether the public will accept it.
Palantir may be winning in the stock market for now.
But the real story is still unfolding—and it could have
long-term implications for privacy, civil rights, and the
balance of power in America’s digital infrastructure.
+++
Trump Taps Palantir to Create Master Database on
Every American
Trump’s dystopian plan is already underway.
The Palantir logo over code on a screen. Jakub Porzycki/NurPhoto/Getty Images
The Trump administration is collecting data on all Americans, and it is enlisting the data analysis company Palantir to do it.
The New York Times reports that President Trump has
enlisted the firm, founded by far-right billionaire Peter
Thiel, to carry out his March executive order instructing
government agencies to share data with each other. The
order has increased fears that the government is putting
together a database to wield surveillance powers over
the American public.
Since then, the administration has been very quiet about
these efforts, increasing suspicion. Meanwhile, Palantir
has taken more than $113 million in government
spending since Trump took office, from both existing
contracts and new ones with the Departments of
Defense and Homeland Security. That number is
expected to grow, especially given that the firm just won
a new $795 million contract with the DOD last week.
Palantir is speaking with various other agencies across
the federal government, including the Social Security
Administration and the IRS, about buying its
technology, according to the Times. Palantir’s Foundry
tool, which analyzes and organizes data, is already
being used at the DHS, the Department of Health and
Human Services, and at least two other agencies,
allowing the White House to compile data from
different places.
The administration’s efforts to compile data began under
Elon Musk’s Department of Government Efficiency
initiative, which sought Americans’ personal data from
multiple agencies including the IRS, the SSA, Selective
Service, Medicare, and many others. In some cases,
court orders hindered these efforts, but not in all of
them.
Thiel has multiple ties to DOGE, both through Musk
and through many of his former employees working for
the effort or taking other jobs in the Trump
administration. And this data collection effort could give
Thiel, Musk, and Trump unprecedented power over
Americans, with the president being better able to
punish his critics and target immigrants.
Privacy advocates, student unions, and labor rights
organizations are among those who have sued to stop
Trump’s data collection efforts. Palantir’s involvement
also gives a powerful tech company access to this data,
and its CEO, Alex Karp, doesn’t exactly have a benign
agenda, hoping to cash in on American techno-militarism. Musk, too, has plans for government data,
using his AI, Grok, to analyze it. Will anyone be able to
stop Trump and these tech oligarchs?
+++
Trump announces private-sector $500 billion investment
in AI infrastructure
By Steve Holland
January 21, 2025, 7:42 PM PST
Jan 21 (Reuters) - U.S. President Donald Trump on
Tuesday announced a private sector investment of up to
$500 billion to fund infrastructure for artificial
intelligence, aiming to outpace rival nations in the
business-critical technology.
Trump said that ChatGPT's creator OpenAI, SoftBank (9984.T) and Oracle (ORCL.N) are planning a joint venture called Stargate,
which he said will build data centers and create more
than 100,000 jobs in the United States.
These companies, along with other equity backers of
Stargate, have committed $100 billion for immediate
deployment, with the remaining investment expected to
occur over the next four years.
SoftBank CEO Masayoshi Son, OpenAI CEO Sam
Altman and Oracle Chairman Larry Ellison joined
Trump at the White House for the launch.
The first of the project's data centers are already under
construction in Texas, Ellison said at the press
conference. Twenty will be built, half a million square
feet each, he said. The project could power AI that
analyzes electronic health records and helps doctors
care for their patients, Ellison said.
The executives gave Trump credit for the news. "We
wouldn't have decided to do this," Son told Trump,
"unless you won."
"For AGI to get built here," said Altman, referring to
more powerful technology called artificial general
intelligence, "we wouldn't be able to do this without
you, Mr. President."
It was not immediately clear whether the announcement
was an update to a previously reported venture.
U.S. President Donald Trump delivers remarks on AI infrastructure, next to Oracle co-founder Larry Ellison, SoftBank CEO Masayoshi Son and OpenAI CEO Sam Altman in the Roosevelt Room at the White House in Washington, U.S., January 21, 2025. REUTERS/Carlos Barria
In March 2024, The Information, a technology news
website, reported OpenAI and Microsoft were working
on plans for a $100 billion data center project that
would include an artificial intelligence supercomputer
also called "Stargate" set to launch in 2028.
POWER-HUNGRY DATA CENTERS
The announcement on Trump's second day in office
follows the rolling back of former President Joe Biden's
executive order on AI, which was intended to reduce the
risks that AI poses to consumers, workers and national
security.
AI requires enormous computing power, pushing
demand for specialized data centers that enable tech
companies to link thousands of chips together in
clusters.
"They have to produce a lot of electricity, and we'll
make it possible for them to get that production done
very easily at their own plants if they want," Trump
said.
As U.S. power consumption rises from AI data centers
and the electrification of buildings and transportation,
about half of the country is at increased risk of power
supply shortfalls in the next decade, the North American
Electric Reliability Corporation said in December.
As a candidate in 2016, Trump promised to push a $1
trillion infrastructure bill through Congress but did not.
He talked about the topic often during his first term as
president from 2017 to 2021, but never delivered on a
large investment, and "Infrastructure Week" became a
punchline.
Oracle shares were up 7% on an initial report of the project earlier in the day. Nvidia (NVDA.O), Arm Holdings and Dell (DELL.N)
shares also rose.
Investment in AI has surged since OpenAI launched
ChatGPT in 2022, as companies across sectors have
sought to integrate artificial intelligence into their
products and services.
+++
UK turns to AI and drones for new battlefield strategy
Jonathan Beale
Defence Correspondent
A British soldier of a gun battery attends the Allied Spirit 25 exercise in Hohenfels, Germany, 12 March 2025. EPA
The Ministry of Defence (MoD) will spend more than
£1bn to develop technology to speed up decisions on the
battlefield.
The funding will be one of the results of the
government's long-awaited strategic defence review
which is due to be published in full on Monday.
The government has committed to raising defence spending to 2.5% of GDP from April 2027, with an ambition to increase that to 3% in the next parliament.
In February, the prime minister said cuts to the foreign
aid budget would be used to fund the military boost.
Announcing the results of the review, the MoD said a
new Digital Targeting Web would better connect
soldiers on the ground with key information provided by
satellites, aircraft and drones, helping them target enemy
threats faster.
Defence Secretary John Healey said the technology
announced in the review - which will harness Artificial
Intelligence (AI) and software - also highlights lessons
being learnt from the war in Ukraine.
Ukraine is already using AI and software to speed up the
process of identifying, and then hitting, Russian military
targets.
The review had been commissioned by the newly
formed Labour government shortly after last year's
election with Healey describing it as the "first of its
kind".
The government said the findings would be published in
the first half of 2025, but did not give an exact date.
Healey made the announcement on a visit to the MoD's
cyber headquarters in Corsham, Wiltshire.
The headquarters is where the UK military co-ordinates its cyber activities, both to prevent and to carry out cyber-attacks.
Defence officials said over the last two years the UK's
military had faced more than 90,000 cyber-attacks by
potential adversaries.
Attacks have been on the rise, as has their level of
sophistication, they added.
Staff at Corsham said they had recently helped identify
and block malware sent to UK military personnel who
recently returned from working abroad.
They said the source of the malware was a "known
Russian actor".
Both Russia and China have been linked to the increase
in cyber-attacks.
Defence officials have confirmed that the UK military
has also been conducting its own offensive cyber-attacks.
Healey said it showed the nature of warfare was
changing.
"The keyboard is now a weapon of war and we are
responding to that," he said.
He said the UK needed to be the fastest-innovating
military within the Nato alliance.
As part of the strategic defence review, the UK's
military cyber operations will be overseen by a new
Cyber and Electromagnetic Command.
The MoD said the Command would also take the lead in
electronic warfare, from co-ordinating efforts to
intercept any adversaries' communications, to jamming
drones.
Healey said the extra investment being made was
possible because of the government's "historic
commitment" to increase defence spending to 2.5% of
GDP by 2027.
However, the Nato Secretary-General, Mark Rutte, is
calling on allies to increase defence spending to more than 3.5% of GDP.
+++
Ukraine’s AI-powered ‘mother drone’ sees first combat
use, minister says
by Anna Fratsyvir
May 29, 2025, 6:35 PM
FPV (first-person view) drones lie on boxes during
transfer by volunteers to the units of the Armed Forces
of Ukraine on Jan. 22, 2024, in Lviv, Ukraine.
(Stanislav Ivanov/Global Images Ukraine via Getty
Images)
Ukraine has deployed a new artificial intelligence-powered "mother drone" for the first time, marking a
major step in the country's expanding use of
autonomous battlefield technology, Digital
Transformation Minister Mykhailo Fedorov announced
on May 29.
The drone system, developed by Ukraine's defense tech
cluster Brave1, can deliver two AI-guided FPV (first-person view) strike drones up to 300 kilometers (186
miles) behind enemy lines, according to Fedorov. Once
released, the smaller drones can autonomously locate
and hit high-value targets, including aircraft, air defense
systems, and critical infrastructure — all without using
GPS.
"The system uses visual-inertial navigation with
cameras and LiDAR to guide the drones, while AI
independently identifies and selects targets," Fedorov
said.
The system, called SmartPilot, allows the carrier drone to return and be reused for missions within a 100-kilometer range. Each operation costs around $10,000
— hundreds of times cheaper than a conventional
missile strike, Fedorov said.
The development comes as Ukraine continues to ramp
up domestic drone production. On April 7, President
Volodymyr Zelensky announced that the country would
scale up production of unmanned systems "to the
maximum," including long-range, ground-based, and
fiber-optic drones, which are resistant to electronic
warfare.
Ukraine has leaned heavily on technological innovation
to offset its disadvantages in manpower and firepower
since Russia's full-scale invasion began in 2022. The
use of drones (aerial, naval, and ground-based) has
become a central feature of both sides' strategies in the
war.
Fedorov said Ukraine will continue investing in
Ukrainian systems that "change the rules of the game in
technological warfare."
+++
'The Gospel': how Israel uses AI to select bombing
targets in Gaza
Concerns over data-driven ‘factory’ that significantly
increases the number of targets for strikes in the
Palestinian territory
Harry Davies, Bethan McKernan and Dan Sabbagh in
Jerusalem
Fri 1 Dec 2023 05.03 EST
Israel’s military has made no secret of the intensity of its
bombardment of the Gaza Strip. In the early days of the
offensive, the head of its air force spoke of relentless,
“around the clock” airstrikes. His forces, he said, were
only striking military targets, but he added: “We are not
being surgical.”
There has, however, been relatively little attention paid
to the methods used by the Israel Defense Forces (IDF)
to select targets in Gaza, and to the role artificial
intelligence has played in their bombing campaign.
As Israel resumes its offensive after a seven-day
ceasefire, there are mounting concerns about the IDF’s
targeting approach in a war against Hamas that,
according to the health ministry in Hamas-run Gaza, has
so far killed more than 15,000 people in the territory.
The IDF has long burnished its reputation for technical
prowess and has previously made bold but unverifiable
claims about harnessing new technology. After the 11-day war in Gaza in May 2021, officials said Israel had
fought its “first AI war” using machine learning and
advanced computing.
The latest Israel-Hamas war has provided an
unprecedented opportunity for the IDF to use such tools
in a much wider theatre of operations and, in particular,
to deploy an AI target-creation platform called “the
Gospel”, which has significantly accelerated a lethal
production line of targets that officials have compared
to a “factory”.
The Guardian can reveal new details about the Gospel
and its central role in Israel’s war in Gaza, using
interviews with intelligence sources and little-noticed
statements made by the IDF and retired officials.
This article also draws on testimonies published by the
Israeli-Palestinian publication +972 Magazine and the
Hebrew-language outlet Local Call, which have
interviewed several current and former sources in
Israel’s intelligence community who have knowledge of
the Gospel platform.
Their comments offer a glimpse inside a secretive, AI-facilitated military intelligence unit that is playing a
significant role in Israel’s response to the Hamas
massacre in southern Israel on 7 October.
The slowly emerging picture of how Israel’s military is
harnessing AI comes against a backdrop of growing
concerns about the risks posed to civilians as advanced
militaries around the world expand the use of complex
and opaque automated systems on the battlefield.
“Other states are going to be watching and learning,”
said a former White House security official familiar
with the US military’s use of autonomous systems.
The Israel-Hamas war, they said, would be an
“important moment if the IDF is using AI in a
significant way to make targeting choices with life-and-death consequences".
Israeli soldiers during ground operations in the Gaza Strip. Photograph: IDF
From 50 targets a year to 100 a day
In early November, the IDF said “more than 12,000”
targets in Gaza had been identified by its target
administration division.
Describing the unit’s targeting process, an official said:
“We work without compromise in defining who and
what the enemy is. The operatives of Hamas are not
immune – no matter where they hide.”
The activities of the division, formed in 2019 in the
IDF’s intelligence directorate, are classified.
However, a short statement on the IDF website claimed
it was using an AI-based system called Habsora (the
Gospel, in English) in the war against Hamas to
“produce targets at a fast pace”.
The IDF said that “through the rapid and automatic
extraction of intelligence”, the Gospel produced
targeting recommendations for its researchers “with the
goal of a complete match between the recommendation
of the machine and the identification carried out by a
person”.
Multiple sources familiar with the IDF’s targeting
processes confirmed the existence of the Gospel to
+972/Local Call, saying it had been used to produce
automated recommendations for attacking targets, such
as the private homes of individuals suspected of being
Hamas or Islamic Jihad operatives.
In recent years, the target division has helped the IDF
build a database of what sources said was between
30,000 and 40,000 suspected militants. Systems such as
the Gospel, they said, had played a critical role in
building lists of individuals authorised to be
assassinated.
Aviv Kochavi, who served as the head of the IDF until
January, has said the target division is “powered by AI
capabilities” and includes hundreds of officers and
soldiers.
In an interview published before the war, he said it was
“a machine that produces vast amounts of data more
effectively than any human, and translates it into targets
for attack”.
Aviv Kochavi in his role as head of the IDF in 2019. Photograph: Oded Balilty/AP
According to Kochavi, “once this machine was
activated” in Israel’s 11-day war with Hamas in May
2021 it generated 100 targets a day. “To put that into
perspective, in the past we would produce 50 targets in
Gaza per year. Now, this machine produces 100 targets a
single day, with 50% of them being attacked.”
Precisely what forms of data are ingested into the
Gospel is not known. But experts said AI-based decision
support systems for targeting would typically analyse
large sets of information from a range of sources, such
as drone footage, intercepted communications,
surveillance data and information drawn from
monitoring the movements and behaviour patterns of
individuals and large groups.
The target division was created to address a chronic
problem for the IDF: in earlier operations in Gaza, the
air force repeatedly ran out of targets to strike. Since
senior Hamas officials disappeared into tunnels at the
start of any new offensive, sources said, systems such as
the Gospel allowed the IDF to locate and attack a much
larger pool of more junior operatives.
One official, who worked on targeting decisions in
previous Gaza operations, said the IDF had not
previously targeted the homes of junior Hamas
members for bombings. They said they believed that
had changed for the present conflict, with the houses of
suspected Hamas operatives now targeted regardless of
rank.
“That is a lot of houses,” the official told +972/Local
Call. “Hamas members who don’t really mean anything
live in homes across Gaza. So they mark the home and
bomb the house and kill everyone there.”
Targets given ‘score’ for likely civilian death toll
In the IDF’s brief statement about its target division, a
senior official said the unit “produces precise attacks on
infrastructure associated with Hamas while inflicting
great damage to the enemy and minimal harm to non-combatants".
The precision of strikes recommended by the “AI target
bank” has been emphasised in multiple reports in Israeli
media. The Yedioth Ahronoth daily newspaper reported
that the unit “makes sure as far as possible there will be
no harm to non-involved civilians”.
A former senior Israeli military source told the Guardian
that operatives use a “very accurate” measurement of
the rate of civilians evacuating a building shortly before
a strike. “We use an algorithm to evaluate how many
civilians are remaining. It gives us a green, yellow, red,
like a traffic signal.”
However, experts in AI and armed conflict who spoke to
the Guardian said they were sceptical of assertions that
AI-based systems reduced civilian harm by encouraging
more accurate targeting.
A lawyer who advises governments on AI and
compliance with humanitarian law said there was “little
empirical evidence” to support such claims. Others
pointed to the visible impact of the bombardment.
“Look at the physical landscape of Gaza,” said Richard
Moyes, a researcher who heads Article 36, a group that
campaigns to reduce harm from weapons.
“We’re seeing the widespread flattening of an urban
area with heavy explosive weapons, so to claim there’s
precision and narrowness of force being exerted is not
borne out by the facts.”
Satellite images of the northern city of Beit Hanoun in Gaza before (10 October) and after (21 October) damage caused by the war. Photograph: Maxar Technologies/Reuters
According to figures released by the IDF in November,
during the first 35 days of the war Israel attacked 15,000
targets in Gaza, a figure that is considerably higher than
previous military operations in the densely populated
coastal territory. By comparison, in the 2014 war, which
lasted 51 days, the IDF struck between 5,000 and 6,000
targets.
Multiple sources told the Guardian and +972/Local Call
that when a strike was authorised on the private homes
of individuals identified as Hamas or Islamic Jihad
operatives, target researchers knew in advance the
number of civilians expected to be killed.
Each target, they said, had a file containing a collateral
damage score that stipulated how many civilians were
likely to be killed in a strike.
One source who worked until 2021 on planning strikes
for the IDF said “the decision to strike is taken by the
on-duty unit commander”, some of whom were “more
trigger happy than others”.
The source said there had been occasions when “there
was doubt about a target” and “we killed what I thought
was a disproportionate amount of civilians”.
An Israeli military spokesperson said: “In response to
Hamas’ barbaric attacks, the IDF operates to dismantle
Hamas military and administrative capabilities. In stark
contrast to Hamas’ intentional attacks on Israeli men,
women and children, the IDF follows international law
and takes feasible precautions to mitigate civilian
harm.”
‘Mass assassination factory’
Sources familiar with how AI-based systems have been
integrated into the IDF’s operations said such tools had
significantly sped up the target creation process.
“We prepare the targets automatically and work
according to a checklist,” a source who previously
worked in the target division told +972/Local Call. “It
really is like a factory. We work quickly and there is no
time to delve deep into the target. The view is that we
are judged according to how many targets we manage to
generate.”
A separate source told the publication the Gospel had
allowed the IDF to run a “mass assassination factory” in
which the “emphasis is on quantity and not on quality”.
A human eye, they said, “will go over the targets before
each attack, but it need not spend a lot of time on them”.
For some experts who research AI and international
humanitarian law, an acceleration of this kind raises a
number of concerns.
Dr Marta Bo, a researcher at the Stockholm
International Peace Research Institute, said that even
when “humans are in the loop” there is a risk they
develop “automation bias” and “over-rely on systems
which come to have too much influence over complex
human decisions”.
Moyes, of Article 36, said that when relying on tools
such as the Gospel, a commander “is handed a list of
targets a computer has generated” and they “don’t
necessarily know how the list has been created or have
the ability to adequately interrogate and question the
targeting recommendations”.
“There is a danger,” he added, “that as humans come to
rely on these systems they become cogs in a mechanised
process and lose the ability to consider the risk of
civilian harm in a meaningful way.”
++++
Since the dawn of the Industrial Revolution, workers
have had to contend with the inimical effects of
technology on their jobs. From the power loom to the
personal computer, each wave of automation has not
only increased productivity, but also empowered the
owners and managers who dictate how these
technologies reshape the workplace. Today, workers
worldwide are haunted by the specter of artificial
intelligence.
Artificial intelligence has been a mainstay in our
popular imagination for decades. Prognostications of an
AI-driven future range from apocalyptic robot takeovers
to thriving post-work societies where people live off the
wealth produced by machines. In spite of these
daydreams, robots with full human cognition are still
well within the domain of science fiction.
When people speak of AI today, what they’re most often
referring to are machines capable of making predictions
through the identification of patterns in large datasets.
Despite that relatively rote function, many in the space
believe that inevitably AI will become autonomous or
rival human intelligence. This raises concerns that
robots will one day represent an existential threat to
humanity or at the very least take over all of our jobs.
The reality is that AI is more likely to place workers
under greater surveillance than to trigger mass
unemployment.
An overwhelming majority of workers are confident
that AI will have a direct impact on their jobs, according
to a recent survey by ADP, but they do not agree on
how. Some feel that it will help them in the workplace
while 42 percent fear that some aspects of their job will
soon be automated.
These concerns are not without merit. Grandiose
statements of oncoming job losses made by tech
executives in public forums fuel worker anxiety.
Feelings of job insecurity are compounded by reports
that a majority of US firms are planning to incorporate
AI in the workplace within the next year. In fact,
Goldman Sachs predicts that generative AI could
“substitute up to one-fourth of current work.”
Yet until now the concrete results of AI have been
mixed at best. Driverless cars have not materialized to
replace humans on the road. McDonald’s cut ties with
IBM after their new automated order taking system
failed to make fast food orders more efficient. And
Google’s new AI Overview tool – which seeks to “do
the googling for you” – keeps spitting out comical
falsehoods.
These shortcomings demonstrate that AI is not as
advanced as the tech industry would have us believe.
Why then are companies and investors so intent on
marketing it as a technology that is on the verge of
replacing humans?
There is a straightforward answer to this question: AI is hyped up by firms to attract capital from investors, and investors want to grow their profits while diminishing the power of organized labor. To put it even more succinctly, AI doomerism is just AI boosterism dressed differently.
AI development is an expensive business, and
entrepreneurs need to attract significant venture capital
to be able to keep their businesses above water. This has
spurred some firms to exaggerate or misrepresent their
AI capabilities, causing the Securities and Exchange
Commission to crack down on two companies for so-called "AI-washing."
Yet investors and Big Tech remain undeterred. Most AI firms continue to be unprofitable, yet venture capitalists are still flooding the sector with billions of dollars in the hope that the technology will one day transform the industry into a viable and innovative business.
Cloud architect Dwayne Monroe argues in The Nation
that the idea of an AI-powered economy “is attractive to
the ownership class because it holds the promise of
weakening labor’s power and increasing – via
workforce cost reduction and greater scalability –
profitability.”
The AI-will-replace-us-all-one-day frenzy is a form of
propaganda designed to demoralize workers over a
future that may never arrive. Instead, our focus should
be on where AI is actually deployed today – that is, in
the realm of worker surveillance.
Artificial intelligence represents the latest iteration of
managerial control. The form of algorithmic
management over labor may differ depending on the
industry, but it functions to make surveillance more
efficient and more intrusive.
Chevron and Starbucks are already circumventing their employees' privacy rights by using AI software to monitor workers' communications across a number of platforms and flag discontent in the workplace.
Amazon delivery drivers, meanwhile, are forced to
“consent” to the installation of AI-powered cameras.
Amazon says they are used to increase driver safety, but
the cameras are designed to financially penalize drivers
for mistakes they did not commit or for ordinary behavior
like fidgeting with the radio.
Moreover, military AI technology is being sold to
corporations to subvert and disrupt unionization efforts
before they gain momentum. Artificial intelligence is
effectively used for digital union busting, identifying
and firing labor organizers through keyboard tracking,
Zoom call spying, and alert systems tracking when a
large number of employees hold internal meetings.
All of this transforms the workplace into an electronic
panopticon where workers are constantly visible to an
unseen watcher encroaching on their autonomy, privacy,
and labor rights.
But the narrative surrounding AI does not need to be
one of despair. Workers are beginning to fight back and
take proactive steps against the invasive and harmful
nature of workplace AI. For example, the Teamsters
negotiated a contract with UPS that included strong
protections against AI surveillance. The
Communications Workers of America succeeded in
ensuring that any data collected by AI will be used for
training purposes only and not with the intent of
disciplining workers. And the Writers Guild of America
instituted guardrails on AI to ensure it does not put any
downward pressure on wages and remains within the control of the workers.
While tech executives may promise that AI will
fundamentally transform the economy, it is unlikely to
completely replace most workers given its narrow
proficiency at elaborate pattern matching. Experts tend
to overestimate the capabilities of autonomous machines
and very few occupations have ceased to exist due to
automation. Even so, AI’s ability to devalue labor and
diminish working conditions is troubling.
Like the working-class movements of the eighteenth
and nineteenth centuries, we must struggle to ensure
that technological advancements uplift workers,
preserve the dignity of human labor, and protect worker
privacy. Unlike AI, we can act on our own accord. We
can educate, advocate, and organize to make certain that
new technologies are implemented for the benefit of all,
not just the privileged few.
++++
Can AI help prevent suicide? How real-time monitoring
may be the next big step in mental health care
Suicide represents one of the most complex and
heartbreaking challenges in public health. One major
difficulty in preventing suicide is knowing when
someone is struggling.
Suicidal thoughts and behaviour can come and go
quickly, and they’re not always present when someone
sees a doctor or therapist, making them hard to detect
with standard checklists.
Today, many of us use digital devices to track our
physical health: counting steps, monitoring sleep, or
checking screen time. Researchers are now starting to
use similar tools to better understand mental health.
One method, called ecological momentary assessment
(EMA), collects real-time information about a person’s
mood, thoughts, behaviour and surroundings using a
smartphone or wearable device. It does this by
prompting the person to input information (active EMA)
or collecting it automatically using sensors (passive
EMA).
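To make the active/passive distinction concrete, here is a minimal Python sketch of what the two kinds of EMA records might look like; the field names and rating scales are illustrative assumptions, not taken from any particular study or device.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActiveEMARecord:
    """Self-reported entry: the phone prompts the person to answer."""
    timestamp: datetime
    mood: int                # hypothetical 1 (very low) to 10 (very good) scale
    suicidal_ideation: int   # hypothetical 0 (none) to 4 (severe), self-rated
    note: str = ""

@dataclass
class PassiveEMARecord:
    """Sensor-derived entry: collected automatically, no prompt needed."""
    timestamp: datetime
    hours_slept: float
    steps: int
    screen_time_minutes: int

# Example: one actively reported entry and one passively sensed entry
active = ActiveEMARecord(datetime(2025, 5, 1, 21, 0), mood=3, suicidal_ideation=2)
passive = PassiveEMARecord(datetime(2025, 5, 2, 7, 0), hours_slept=4.5,
                           steps=900, screen_time_minutes=310)
print(active, passive, sep="\n")
```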
Research has shown EMA can be safe for monitoring
suicide risk, which includes a range of experiences from
suicidal thoughts to attempts and completed suicide.
Studies with adults show that this kind of monitoring
doesn’t increase risk. Instead, it gives us a more detailed
and personal view of what someone is going through,
moment by moment. So how can this information
actually help someone at risk?
Adaptive interventions
One exciting use is the creation of adaptive
interventions: real-time, personalised responses
delivered right through a person’s phone or device. For
example, if someone’s data shows signs of distress, their
device might gently prompt them to follow a step on
their personal safety plan, which they created earlier
with a mental health professional.
Safety plans are proven tools in suicide prevention, but
they’re most helpful when people can access and use
them when they’re needed most. These digital
interventions can offer support right when it matters, in
the person’s own environment.
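As a rough sketch of how such a just-in-time prompt could be wired up, the Python below checks recent self-reported mood ratings against a simple threshold and, if distress is flagged, surfaces the first step of a pre-agreed safety plan. The threshold, window size and plan text are invented for illustration; in practice triggers would be set with a mental health professional, not hard-coded.

```python
from typing import List

# Illustrative only: real actions would come from a safety plan drawn up
# with a mental health professional.
SAFETY_PLAN_STEPS = [
    "Step 1: Try the calming exercise you chose with your clinician.",
    "Step 2: Contact the support person listed in your safety plan.",
    "Step 3: Call your local crisis line.",
]

def distress_flagged(recent_moods: List[int], threshold: int = 3) -> bool:
    """Flag distress if the last three self-reported mood ratings
    (1-10 scale) are all at or below a hypothetical threshold."""
    window = recent_moods[-3:]
    return len(window) == 3 and all(m <= threshold for m in window)

def adaptive_prompt(recent_moods: List[int]) -> str:
    """Return a safety-plan prompt when distress is flagged, else nothing."""
    return SAFETY_PLAN_STEPS[0] if distress_flagged(recent_moods) else ""

print(adaptive_prompt([6, 3, 2, 2]))  # prints the first safety-plan step
```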
There are still important questions: what kind of
changes in a person’s data should trigger an alert? When
is the best time to offer help? And what form should that
help take?
These are the kinds of questions that artificial
intelligence (AI) – and specifically machine learning –
is helping us answer.
Machine learning is already being used to build models
that can predict suicide risk by noticing subtle changes
in a person’s feelings, thoughts, or behaviour. It’s also
been used to predict suicide rates across larger
populations.
These models have performed well on the data they
were trained on. But there are still concerns. Privacy is a
big one, especially when social media or personal data
is involved.
There’s also a lack of diversity in the data used to train
these models, which means they might not work equally
well for everyone. And it’s challenging to apply models
developed in one country or setting to another.
Still, research shows that machine learning models can
predict suicide risk more accurately than traditional
tools used by clinicians. That’s why mental health
guidelines now recommend moving away from using
simple risk scores to decide who gets care.
Instead, they suggest a more flexible, person-centred
approach – one that’s built around open conversations
and planning with the person at risk.
Person viewing real-time mobile phone data. Ruth
Melia, CC BY-SA
Predictions, accuracy and trust
In my research, I looked at how AI is being used with
EMA in suicide studies. Most of the studies involved
people getting care in hospitals or mental health clinics.
In those settings, EMA was able to predict things like
suicidal thoughts after discharge.
While many studies we looked at reported how accurate
their models were, fewer looked at how often the
models made mistakes, like predicting someone is at
risk when they’re not (false positives), or missing
someone who is at risk (false negatives). To help
improve this, we developed a reporting guide to make
sure future research is clearer and more complete.
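For readers unfamiliar with these error types, the short Python sketch below (with made-up numbers) shows the kind of breakdown such reporting encourages: not just overall accuracy, but how many at-risk people a model misses and how many it wrongly flags.

```python
def report_errors(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Summarise a risk-prediction model's mistakes, not just its accuracy."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),          # share of at-risk people caught
        "false_negative_rate": fn / (tp + fn),  # at-risk people the model missed
        "false_positive_rate": fp / (fp + tn),  # not-at-risk people wrongly flagged
        "precision": tp / (tp + fp),            # flagged people who were truly at risk
    }

# Hypothetical example: 1,000 people, 50 of whom went on to report suicidal thoughts
print(report_errors(tp=35, fp=120, fn=15, tn=830))
```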
Another promising area is using AI as a support tool for
mental health professionals. By analysing large sets of
data from health services, AI could help predict how
someone is doing and which treatments might work best
for them.
But for this to work, professionals need to trust the
technology. That’s where explainable AI comes in:
systems that not only give a result but also explain how
they got there. This makes it easier for clinicians to
understand and use AI insights, much like how they use
questionnaires and other tools today.
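A toy illustration of the idea, assuming a simple logistic-style model with hand-set, hypothetical coefficients: instead of returning only a risk score, the function below also returns each feature's contribution to that score, which is the sort of output a clinician could inspect alongside the result.

```python
import math

# Toy, hand-set coefficients for illustration only; a real model would be
# fitted to clinical data and validated before any use.
FEATURES = {"recent_mood_drop": 1.2, "sleep_disruption": 0.8, "prior_attempt": 1.5}
INTERCEPT = -3.0

def predict_with_explanation(person: dict) -> tuple[float, dict]:
    """Return a risk score plus each feature's contribution to it."""
    contributions = {name: coef * person.get(name, 0.0)
                     for name, coef in FEATURES.items()}
    logit = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return probability, contributions

score, why = predict_with_explanation(
    {"recent_mood_drop": 1.0, "sleep_disruption": 1.0, "prior_attempt": 0.0})
print(f"risk score: {score:.2f}")
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```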
Suicide is a devastating global issue, but advances in AI
and real-time monitoring offer new hope. These tools
aren't a cure-all, but they may help provide the right
support at the right time, in ways we’ve never been able
to before.
+++
AI makes non-invasive mind-reading possible by
turning thoughts into text
Advance raises prospect of new ways to restore speech
in those struggling to communicate due to stroke or
motor neurone disease
Hannah Devlin Science correspondent
Mon 1 May 2023 11.00 EDT
An AI-based decoder that can translate brain activity
into a continuous stream of text has been developed, in
a breakthrough that allows a person’s thoughts to be
read non-invasively for the first time.
The decoder could reconstruct speech with uncanny
accuracy while people listened to a story – or even
silently imagined one – using only fMRI scan data.
Previous language decoding systems have required
surgical implants, and the latest advance raises the
prospect of new ways to restore speech in patients
struggling to communicate due to a stroke or motor
neurone disease.
Dr Alexander Huth, a neuroscientist who led the work at
the University of Texas at Austin, said: “We were kind
of shocked that it works as well as it does. I’ve been
working on this for 15 years … so it was shocking and
exciting when it finally did work.”
The achievement overcomes a fundamental limitation of
fMRI, which is that while the technique can map brain
activity to a specific location with incredibly high
resolution, there is an inherent time lag, which makes
tracking activity in real-time impossible.
The lag exists because fMRI scans measure the blood
flow response to brain activity, which peaks and returns
to baseline over about 10 seconds, meaning even the
most powerful scanner cannot improve on this. “It’s this
noisy, sluggish proxy for neural activity,” said Huth.
This hard limit has hampered the ability to interpret
brain activity in response to natural speech because it
gives a “mishmash of information” spread over a few
seconds.
However, the advent of large language models – the
kind of AI underpinning OpenAI’s ChatGPT – provided
a new way in. These models are able to represent, in
numbers, the semantic meaning of speech, allowing the
scientists to look at which patterns of neuronal activity
corresponded to strings of words with a particular
meaning rather than attempting to read out activity word
by word.
The learning process was intensive: three volunteers
were required to lie in a scanner for 16 hours each,
listening to podcasts. The decoder was trained to match
brain activity to meaning using a large language model,
GPT-1, a precursor to ChatGPT.
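A deliberately simplified sketch of the general idea, not the published pipeline (which used an encoding model and GPT-1 to propose and score candidate word sequences): assume a ridge regression that maps fMRI features onto a language model's sentence embeddings, then picks whichever candidate phrase has the closest embedding. All arrays, dimensions and candidate phrases below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dimensions: 500 training examples, 1,000 fMRI features (voxels),
# 128-dimensional semantic embeddings produced by some language model.
X_train = rng.normal(size=(500, 1000))   # brain responses
E_train = rng.normal(size=(500, 128))    # embeddings of the heard text

# Ridge regression from brain space to embedding space: W = (X'X + aI)^-1 X'E
alpha = 10.0
W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(1000), X_train.T @ E_train)

def decode(brain_response: np.ndarray, candidates: dict) -> str:
    """Predict an embedding from brain activity, then return the candidate
    phrase whose embedding is most similar (cosine) to the prediction."""
    predicted = brain_response @ W
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda phrase: cosine(predicted, candidates[phrase]))

# Invented candidate phrases with invented embeddings, for illustration only.
candidates = {p: rng.normal(size=128) for p in
              ["she has not started to learn to drive",
               "leave me alone",
               "the weather was cold that morning"]}
print(decode(rng.normal(size=1000), candidates))
```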
Later, the same participants were scanned listening to a
new story or imagining telling a story and the decoder
was used to generate text from brain activity alone.
About half the time, the text closely – and sometimes
precisely – matched the intended meanings of the
original words.
“Our system works at the level of ideas, semantics,
meaning,” said Huth. “This is the reason why what we
get out is not the exact words, it’s the gist.”
For instance, when a participant was played the words
“I don’t have my driver’s licence yet”, the decoder
translated them as “She has not even started to learn to
drive yet”. In another case, the words “I didn’t know
whether to scream, cry or run away. Instead, I said:
‘Leave me alone!’” were decoded as “Started to scream
and cry, and then she just said: ‘I told you to leave me
alone.’”
The participants were also asked to watch four short,
silent videos while in the scanner, and the decoder was
able to use their brain activity to accurately describe
some of the content, the paper in Nature Neuroscience
reported.
“For a non-invasive method, this is a real leap forward
compared to what’s been done before, which is typically
single words or short sentences,” Huth said.
Sometimes the decoder got the wrong end of the stick
and it struggled with certain aspects of language,
including pronouns. “It doesn’t know if it’s first-person
or third-person, male or female,” said Huth. “Why it’s
bad at this we don’t know.”
The decoder was personalised and when the model was
tested on another person the readout was unintelligible.
It was also possible for participants on whom the
decoder had been trained to thwart the system, for
example by thinking of animals or quietly imagining
another story.
Jerry Tang, a doctoral student at the University of Texas
at Austin and a co-author, said: “We take very seriously
the concerns that it could be used for bad purposes and
have worked to avoid that. We want to make sure people
only use these types of technologies when they want to
and that it helps them.”
Prof Tim Behrens, a computational neuroscientist at the
University of Oxford who was not involved in the work,
described it as “technically extremely impressive” and
said it opened up a host of experimental possibilities,
including reading thoughts from someone dreaming or
investigating how new ideas spring up from background
brain activity. “These generative models are letting you
see what’s in the brain at a new level,” he said. “It
means you can really read out something deep from the
fMRI.”
Prof Shinji Nishimoto, of Osaka University, who has
pioneered the reconstruction of visual images from
brain activity, described the paper as a “significant
advance”. “The paper showed that the brain represents
continuous language information during perception and
imagination in a compatible way,” he said. “This is a
non-trivial finding and can be a basis for the
development of brain-computer interfaces."
The team now hope to assess whether the technique
could be applied to other, more portable brain-imaging
systems, such as functional near-infrared spectroscopy
(fNIRS).
+++
AI Chatbots Secretly Ran a Mind-Control Experiment
on Reddit
And now the site is suing.
By Ashley Fike
May 5, 2025, 9:43am
Cheng Xin/Getty Images
Reddit users are pissed—and rightfully so. A group of
AI researchers from the University of Zurich just got
caught running an unauthorized psychological
experiment on r/ChangeMyView, one of the site’s
biggest debate communities, and no one who
participated had any idea it was happening.
The experiment involved AI chatbots posing as regular
users to see if they could subtly sway opinions on hot-button topics. These weren't bland comment bots
posting generic takes. They were tailored personas—one
claimed to be a male rape victim minimizing his trauma,
another said women raised by protective parents were
more vulnerable to domestic violence, and a third posed
as a Black man against Black Lives Matter. To make the
manipulation more effective, a separate bot scanned
user profiles and fed personalized arguments back to
them.
In total, the bots dropped over 1,700 comments into
Reddit threads without revealing they were AI. And the
kicker? They were surprisingly good at convincing
people. According to a draft of the study, their
comments were three to six times more persuasive than
human ones, based on Reddit’s own “delta” system
(users give a delta when their mind has been changed).
The research team didn’t disclose the experiment to the
community until after it was over, violating just about
every norm in both ethics and internet culture. In a post
from the subreddit’s moderators, the reaction was blunt:
“We think this was wrong.”
Reddit’s chief legal officer, Ben Lee, took it a step
further, saying the researchers had broken the site’s
rules, violated user trust, and committed a clear breach
of research standards. “What this University of Zurich
team did is deeply wrong on both a moral and legal
level,” Lee wrote, adding that Reddit would pursue
formal legal action.
The university has since said the study will not be
published, and its ethics committee will adopt stricter
oversight for future projects involving online
communities. But the damage has already been done.
Beyond the lawsuit, this whole debacle raises bigger
questions about how AI is creeping into everyday digital
life. A March 2025 study showed OpenAI’s GPT-4.5
could fool people into thinking they were talking to a
real person 73% of the time. And it feeds into a broader
paranoia that bots are slowly taking over online spaces
—a fear known as the “dead internet” theory.
That theory might still belong in tinfoil-hat territory, but
this experiment pushed it a little closer to reality.