The power and pitfalls of AI for US intelligence

In an example of the IC’s successful use of AI, after exhausting all other avenues, from human spies to signals intelligence, the US was able to locate an unidentified weapons of mass destruction research and development facility in a major Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts used algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding of not being named.

While AI can calculate, retrieve, and employ programming that performs limited rational analysis, it lacks the capacity to properly parse the more emotional or unconscious components of human intelligence, described by psychologists as system 1 thinking.

For example, AI can draft intelligence reports that resemble newspaper articles about baseball, which follow a structured, formulaic flow with repetitive content elements. However, when a task requires complex reasoning or logical arguments that justify or demonstrate conclusions, AI falls short. When the intelligence community tested the capability, the intelligence official said, the product resembled an intelligence brief but was otherwise nonsensical.

Such algorithmic processes can be layered to add stages of computational reasoning, but even then those algorithms can’t interpret context as well as humans can, especially when it comes to language, such as hate speech.

AI’s comprehension may be more analogous to that of a human toddler, said Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients, from violence to disinformation. “For example, AI can understand the basics of human language, but foundational models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin said.

“From an analytic perspective, AI has a difficult time interpreting intent,” Curwin added. “Computer science is a valuable and important field, but it is computational social scientists who are making the big leaps in enabling machines to interpret, understand, and predict behavior.”

To “build models that can replace human intuition or cognition,” Curwin explains, “researchers first need to understand how to interpret behavior and translate that behavior into something AI can learn.”

While machine learning and big data analytics can provide predictive analysis about what might or will happen, they can’t explain to analysts how or why they arrived at those conclusions. The opacity of AI reasoning and the difficulty of vetting sources, which consist of extremely large data sets, can affect the actual or perceived soundness and transparency of those conclusions.

Transparency in reasoning and sourcing is a requirement under the analytic tradecraft standards for products produced by and for the intelligence community. Analytic objectivity is also required by law, prompting calls within the US government to update such standards and laws in light of AI’s increasing prevalence.

Machine learning and the algorithms used for predictive judgments are also considered more art than science by some intelligence practitioners. That is, they are prone to biases and noise, and may rely on methodologies that are not sound and that lead to errors similar to those found in the criminal forensic sciences and arts.