Microsoft stepped into a massive controversy after working with the Ministry of Early Childhood in Argentina in 2018 to develop an AI-based algorithm that claimed to predict which girls would have a teen pregnancy with 86 percent accuracy, using only data such as age, ethnicity, country of origin, disability status, and whether the child’s home had hot water in the bathroom.

“With technology you can foresee five or six years in advance, with first name, last name, and address, which girl—future teenager—is 86 percent predestined to have an adolescent pregnancy,” said Juan Manuel Urtubey, the governor of the northern province of Salta in Argentina.

A Microsoft spokesperson described the technology in Salta as “one of the pioneering cases in the use of AI data” in state programs, but the project came under attack from feminists in Argentina after it was revealed that the Conin Foundation, an anti-abortion Argentinian nonprofit, was behind the technology.

“The idea that algorithms can predict teenage pregnancy before it happens is the perfect excuse for anti-women and anti-sexual and reproductive rights activists to declare abortion laws unnecessary,” wrote feminist scholars Paz Peña and Joana Varon at the time.

The government never revealed what it would do with the results of the pregnancy predictions. At the time, Argentina was in the midst of a national debate over legalizing abortion; the claim that teenage pregnancy could be accurately predicted played into arguments that legalization was unnecessary, and the Senate rejected a legalization bill in 2018 before abortion was finally legalized in 2020.

Argentina has a long history of government surveillance and human rights abuses, including a number of eugenics programs, and at one point in the 1970s it banned contraception.

“[The technology program] is a patriarchal contrivance,” said Ana Pérez Declercq, director of the Observatory of Violence Against Women. “It confounds socioeconomic variables to make it seem as if the girl or woman is solely to blame for her situation. It is totally lacking any concern for context. This AI system is one more example of the state’s violation of women’s rights.”

The detailed workings of the algorithm were never published, and with little regulation of data practices in place, the project shows how AI systems can be deployed and abused without any ethics oversight.