The Air Force Loves War Gamers Like Alleged Leaker Teixeira

The criminal complaint against Jack Teixeira is photographed Friday, April 14, 2023. The Massachusetts Air National Guardsman appeared in court in Boston, accused in the leak of highly classified military documents. (AP Photo/Jon Elswick)

The opinions expressed in this op-ed are those of the author and do not necessarily reflect the views of Military.com. If you would like to submit your own commentary, please send your article to opinions@military.com for consideration.

Dr. Emma L Briant is an internationally recognized scholar of propaganda and information warfare whose work was central in exposing the Cambridge Analytica scandal and continues to inform politicians, NGOs and industry.

With the federal government spending at least $1 billion annually on defense and civilian agency programs to neutralize 'insider threats,' it’s no wonder people are asking how secret documents posted by Jack Teixeira, a low-level 21-year-old Massachusetts Air National Guard airman, were able to circulate through the backwaters of the Internet for months before authorities even became aware of their existence.

After the massive document dumps by Chelsea Manning and Edward Snowden, new systems were put in place to prevent, or at least rapidly track, such unauthorized access to top secret files. The shiny new application touted to detect insider threats was artificial intelligence. Obviously, it didn’t work in the Teixeira case.

Today, defense contractors make millions of dollars selling AI insider threat systems meant to predict which government employees might pose a national security threat. These tech entrepreneurs make big claims about their AI’s accuracy in identifying leakers, and they assert an urgent need both for their systems and for access to ever more data.

Palantir is probably the best-known developer of such technologies. Its CEO, Alex Karp, recently claimed that AI systems are “very dangerous” but, in the context of wars like the one in Ukraine, have “fundamentally changed the world” and cannot be put “back in the box.” Palantir claims its tools for rooting out insider threats enable enterprises to “identify suspicious or abnormal employee behavior using a variety of algorithmic methods.” To a similar end, last year the Pentagon awarded a multimillion-dollar contract to Torch.AI, a Leawood, Kansas-based data infrastructure artificial intelligence company, “to support the Pentagon’s efforts to combat insider threats” under a program known as the System for Insider Threat Hindrance, or SITH. According to its CEO, Brian Weaver, “There are few situations where the quality and availability of data is more important than cyber and insider threat.” Obviously, none of it prevented Teixeira and his pals from widely sharing top secret documents.

Egg on Its Face

The Discord leak is embarrassing for the National Insider Threat Task Force, a government-wide program under the Director of National Intelligence tasked with deterring, detecting, and mitigating threats just like this one. As recently as April 10, National Security Council spokesman John F. Kirby was in the dark on key aspects of the hemorrhage, saying the NSC still did not know how much material was public, who was behind the leak or what the motive was.

“Insider Threat” is a concept with a long history. It was catalyzed, following the Chelsea Manning leaks, by President Obama’s Executive Order 13587, which established an interagency Insider Threat Task Force to develop a government-wide program. Its proponents, echoed by NATO’s Cooperative Cyber Defence Centre of Excellence, have long sought AI prediction tools to identify potential leakers based on past offenders like Snowden or Manning. But it was old-fashioned, human-powered journalism by Bellingcat’s Aric Toler and The New York Times, not AI, that swiftly identified the leaker after spotting the documents on Russian Telegram channels.

Nigel Oakes, founder of the defense contractor SCL, the parent company of Cambridge Analytica, described to me in 2017 how the Pentagon’s insider threat model was “very flawed” because, with only a limited sample of past national security leakers to study, “you can’t ever regress the data.” What’s more, if the pool of recruits is changing, the profile of future insider threats or leakers is also likely to shift, reducing any such prediction to an imperfect science.

There is a fundamental flaw in relying on a system to predict future cases that, like the Discord leak, look little like the cases the AI was trained on. Airman Teixeira evidently did not seek to blow the whistle on government programs he disagreed with, and he certainly didn’t take any meaningful steps to stay anonymous. Nor did he seem to understand the full consequences of his actions, since he used his own name and address to register the Discord server. All the FBI had to do was request Teixeira’s personal information from Discord.

Yet the AI lobby argues that the way to plug such leaks is to undermine the encryption that keeps apps and devices private and to deepen Internet surveillance. But the Teixeira case is a weak justification for further eroding the privacy of federal employees. Intrusive surveillance affects everyone, not just those with ill intent, says Tom Devine, legal director of the Government Accountability Project, a whistleblower advocacy group. Devine has argued that insider threat defense systems can be used to target individuals blowing the whistle on government waste, fraud, and abuse.

Whether or not tomorrow’s threat actors’ deceptions could be detected by eroding what remains of our privacy, the unlimited surveillance it would take to catch every threat risks undermining democracy itself. The preoccupation with government access to communication technologies may also distract from identifying the underlying causes of, and solutions to, ‘insider threats.’

Former senior CIA operations official John Sipher has claimed that the main problem is too many people having access to sensitive intelligence. He has also suggested that background checks on teenagers, who have so little life experience, are unlikely to flag problem recruits unless they have an arrest record. Whatever the truth of that, evidently no one in Teixeira’s chain of command, much less any AI sniffer, picked up on months’ worth of Teixeira’s racist and antisemitic rants, or, if they did, they failed to report them.

Youth Quake

In any event, the military depends on the constant recruitment of youngsters, including for intelligence billets. Teixeira came into intelligence through a side door as an IT technician, which gave him access to a system holding the intelligence documents. His unit, the 102nd Intelligence Wing, is part of the Air Force's Distributed Common Ground System (DCGS), which processes military intelligence, including foreign imagery from drones.

Perhaps what DOD needs to do is to examine its own recruitment process and culture.

As the military adapts to new technologies like AI, augmented reality and automation, it has increasingly sought recruits with relevant technical skills. That has prompted it to intensify efforts to recruit teenage gamers to fill roles like those needed for the DCGS to disseminate data ingested by drones; gamers have already honed skill sets such as visualizing remote operations in faraway places like Afghanistan, staring at screens for 12 hours at a time and operating peripheral devices.

According to the Washington Post, the Air Force “has arguably become the leader in fostering gaming culture.” The military recruits on platforms popular with gamers, like Discord, and through “military sponsorships of gaming leagues” that feature violent war games. While Teixeira’s role was in technology support, “trauma experienced within this program is not isolated to pilots, techs or sensor operators,” a veteran of DCGS explained to me on condition of anonymity. This veteran said “a culture change is needed” for recruits between the ages of 18 and 24, who’ve spent countless hours alone honing their war game skills, to get adequate mental health support.

One of Teixeira’s high school classmates, Kailani Reis, told the Boston Globe that Teixeira was “super quiet” and gave off “loner vibes,” while another classmate, Sarah Arnold, remembered him as quiet and keeping to himself, according to the Associated Press.

In 2019, according to the Washington Post, the Air Force sponsored a gamer tournament to find its best players among 350 contestants. The idea was to foster mental health among its young rank-and-file during the pandemic.

Capt. Oliver Parsons, the founder of Air Force Gaming, has explained that what the service needed was to create an engaging activity with a support network to help young recruits deal with the isolation brought on by the Covid-19 pandemic.

“We’re not robots. We’re normal, average people,” Parsons said, adding that if the military doesn’t make gaming culture acceptable, service members are “going to go somewhere else.”

Meanwhile, in a supreme irony, it was the Air Force itself that encouraged the young gamers to use Discord, the very platform Teixeira turned to for anti-government bonding and for sharing what he saw daily at work. In promoting the platform, however, the Air Force may not have grasped how many extremist users of 4chan also use it to network and share content, putting its young military charges at greater risk of encountering the extremist fringe.

Teixeira’s blithe attitude toward sharing top secret documents on those channels is less surprising when we consider how the military’s recruitment and training eroded important boundaries separating harmless, at-home wargaming from real-life military conflicts.

That erosion followed last year’s problematic Army recruitment ads for its 4th Psychological Operations Group, which, amazingly enough, were designed to appeal to young people drawn to conspiracy theories. Research shows that embracing conspiracy theories can lead to radicalization and violence, which in the military may be worsened by combat-induced trauma or psychological distress. After concerns arose about active-duty military members’ involvement in the January 6 siege of the U.S. Capitol, the Defense Department took steps to address insider threats from extremists, including retooling its recruitment pitches. But clearly the problems run deeper.

Meanwhile, bigger data security risks come from third-party contractors themselves, who, unlike Teixeira, Manning and Reality Winner, are rarely held accountable when things go wrong. In 2017, more than 100 gigabytes of highly sensitive data, believed to be linked to a defense contractor, was found unsecured on an Amazon Web Services server. It appears no one answered for it.

Analysts are still trying to find out why Teixeira did what he did—not to mention how to stop future leaks like his. But before the AI ‘Insider Threat’ lobby demands more intrusive surveillance, it might be worth asking what kind of culture the digital military has been fostering.
