(21) Trump Expands Use of AI for Immigrant Deportation
Lifelong Tracking and Surveillance Through Various Data
Concerns Over Discriminatory Stigmatization of Low-Income and Minority Groups
AI Must Not Become a Tool That Kicks Away the Ladder
Upon taking office, U.S. President Donald Trump made the "mass deportation of illegal immigrants" a top priority, and the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE) are reportedly under pressure to meet enforcement targets.
In this process, advanced surveillance technologies combined with artificial intelligence (AI) are being put to use: a comprehensive system is in operation to identify and track illegal immigrants and undocumented residents.
Technology is undoubtedly a powerful tool that can solve various societal problems and improve quality of life. AI is creating positive changes in many areas, such as improving the accuracy of medical diagnoses, enhancing energy efficiency, and predicting natural disasters.
However, this is not always the case. In situations like this one in particular, the idea of "using technologies like AI to efficiently identify humans" carries potential risks, and the magnitude of those risks can be far greater than expected.
Let's take a closer look at how AI classifies and identifies humans.
AI-Based Risk Assessment System
At a campaign rally in Michigan on August 20, 2024 (local time), then-Republican presidential candidate Donald Trump delivers a speech while holding up data on illegal immigration and crime rates. Photo: AP Yonhap News
ICE uses an AI algorithm called the "Hurricane Score," which rates an immigrant's risk level on a scale from 1 to 5. It analyzes records of legal violations, public-service usage, employment and tax history, immigration history, and related administrative data to predict the likelihood that the person will evade government supervision. Although the score is not used directly for detention or deportation decisions, it serves as an important reference in ICE agents' decision-making.
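The actual model has never been published. Purely as an illustration of the mechanism being described, here is a minimal sketch of how weighted case features might be bucketed into a 1-to-5 score; every feature name, weight, and threshold below is invented.

```python
# Hypothetical sketch only: ICE has not published the Hurricane Score model.
# Every feature name, weight, and threshold here is invented for illustration.

FEATURE_WEIGHTS = {
    "prior_violations":     0.9,   # recorded legal violations (count)
    "missed_checkins":      1.2,   # missed supervision appointments (count)
    "years_in_country":    -0.3,   # longer residence assumed to lower flight risk
    "stable_employment":   -0.8,   # 1 if employment/tax history is steady, else 0
    "public_service_usage": 0.2,   # recorded public-service contacts (count)
}

def hurricane_score(case: dict) -> int:
    """Bucket a weighted sum of case features into a 1-5 'flight risk' score."""
    raw = sum(weight * case.get(name, 0)
              for name, weight in FEATURE_WEIGHTS.items())
    # Map the raw sum onto five bands (thresholds are arbitrary here).
    for score, threshold in enumerate((0.0, 1.0, 2.0, 3.0), start=1):
        if raw < threshold:
            return score
    return 5

case = {"prior_violations": 1, "missed_checkins": 2, "years_in_country": 5,
        "stable_employment": 1, "public_service_usage": 3}
print(hurricane_score(case))  # -> 3 on this toy scale
```

The point is not the arithmetic but the design choice: whoever selects the features and weights effectively decides who counts as high-risk, which is exactly where the biases discussed below can enter.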
Various other data are also drawn on. Vast stores of federal agency records, biometric data, information collected at the border, and footage from law enforcement drones and body cameras can all be fed into AI tools for analysis. With cooperation from state and local governments, location data from traffic cameras, license plate recognition systems, and toll collection systems may be used as well.
Moreover, DHS monitors the border with AI-equipped drones and sensor towers. This system uses machine learning to distinguish people, animals, and vehicles, detecting abnormal patterns that indicate illegal crossings. There is also an app called SmartLINK, which reportedly monitors hundreds of thousands of undocumented immigrants in real time through facial recognition and location tracking.
The collected data are used to identify and track "illegal" immigrants. However, this system may go beyond merely improving administrative efficiency and become a tool that reinforces structural discrimination against certain races and social classes.
Above all, such AI systems can produce the side effect of stigmatizing socially vulnerable groups.
How AI Discriminates Against and Traps Low-Income Groups
Against a black background, the silhouette of a person holding a scale in one hand is filled with the 0s and 1s of digital code. Photo: Getty Images Bank
In 2006, the U.S. state of Indiana introduced an automated welfare eligibility screening process built on IBM technology. It minimized face-to-face services in favor of call centers and online applications, fully automated document processing and eligibility screening, scored the risk of fraudulent claims with AI algorithms, and imposed standardized questions and responses. On the surface, processing sped up and fraudulent claimants appeared to be filtered out in large numbers.
In reality, however, much of this was misclassification. Applicants with complex family or living situations struggled to fit the standardized forms, producing many unintended application errors and many people who gave up applying altogether. Applicants with mental health issues were worn down by long phone waits and complicated online procedures, and disabled and elderly applicants in particular often abandoned the process.
In Pittsburgh, meanwhile, a child abuse prediction algorithm was introduced. The intention was good: to detect and respond to child abuse quickly. Its input data included the family's welfare receipt history, the parents' criminal records, family members' mental health treatment records, school attendance and grades, and neighborhood crime and poverty rates. From these, each family's risk was scored on a scale from 1 to 20, on the expectation that focused monitoring of high-risk families would enable early detection of abuse.
However, things did not go as expected. Low-income families with extensive public-service records were automatically classified as "high-risk." Once so classified, they became targets of continuous monitoring, entering a vicious cycle in which the system detected them ever more easily. Parents using mental health services, meanwhile, began avoiding treatment for fear of future disadvantage.
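The Pittsburgh tool's actual details differ; the toy simulation below only illustrates the feedback loop just described, in which service-contact records raise the score and the resulting monitoring generates still more such records. All numbers are invented.

```python
# Toy simulation of the monitoring feedback loop described above.
# The real Pittsburgh tool is different; every value here is invented.

def risk_score(service_contacts: int) -> int:
    """Score a family 1-20, driven here purely by recorded service contacts."""
    return min(20, 1 + service_contacts)

contacts = 10  # a low-income family already has many welfare/service records
for year in range(5):
    score = risk_score(contacts)
    print(f"year {year}: contacts={contacts}, score={score}")
    if score >= 10:      # "high-risk" families get focused monitoring...
        contacts += 3    # ...which generates still more recorded contacts
# The score ratchets upward even if the family's actual behavior never changes.
```

Run it and the score climbs from 11 to the 20-point ceiling in three years, purely because monitoring itself produces the records the model consumes.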
Mathematics Can Become a Weapon of Mass Destruction (WMD)
The above cases show that AI has moved beyond mere administrative efficiency into a new stage in which it directly affects fundamental human rights and dignity. In particular, AI-based surveillance and control of immigrants risk further institutionalizing and reinforcing existing racial and social biases.
This is why there is growing attention to the fact that technology can deepen social inequality and further marginalize vulnerable groups.
Cathy O’Neil, an American mathematician and data scientist, compares the dangers of AI and algorithms to weapons of mass destruction (WMD). Playing on the acronym, she calls them "Weapons of Math Destruction."
Because an algorithm's decision-making process is a black box, its results are effectively impossible to contest; even the developers often cannot explain why a particular result came out as it did. Yet AI outputs affect millions of people simultaneously, so a systemic error can inflict damage at scale, across education, employment, housing, finance, and welfare alike.
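As a minimal sketch of that opacity, using a generic ensemble model on synthetic data rather than any agency's actual system: the model emits a decision, but there is no single human-readable rule a person could cite to contest it.

```python
# Minimal sketch of algorithmic opacity on synthetic data; this is not
# any agency's real model, just a generic off-the-shelf ensemble classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # 8 anonymous applicant features
y = (X @ rng.normal(size=8) + rng.normal(size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200).fit(X, y)

applicant = rng.normal(size=(1, 8))
print(model.predict(applicant))           # a yes/no decision for one person...
# ...but the "reason" is smeared across 200 trees and thousands of splits:
print(sum(t.tree_.node_count for t in model.estimators_))
```

Even with full access to the fitted model, the explanation for one decision is distributed across hundreds of trees, which is precisely what makes contesting an individual outcome so hard.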
AI That Lays Down Ladders Instead of Kicking Them Away
As revealed in various cases such as illegal immigrant surveillance, welfare eligibility determination, and child abuse prediction, AI and automated systems carry serious social risks beneath their efficiency and convenience. Above all, a single negative evaluation can follow a person like a stigma, inducing long-term and structural disadvantages.
Technology itself may be neutral, but combined with the human judgment embedded in its design and operation and with the limitations of its data, it can end up deepening social inequality and discrimination. AI is not a perfect or fair solution on its own: without diverse, representative training data, carefully curated, it merely reflects existing social problems and biases.
To develop AI more ethically and fairly, institutional safeguards for transparency, accountability, and the protection of socially vulnerable groups are essential. We must not fixate on efficiency alone but ensure that equity and justice are built into the design and implementation of technology. Automated decision-making systems deployed without such considerations only deepen existing inequalities and, in the long run, exacerbate social instability. Careful attention to how technology and humans interact is the surest path to a fairer, more inclusive society.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.