Pentagon: Artificial Intelligence Will Be Valued in Future Warfare

When Pentagon officials speak about a technology, you know it is a serious topic of public concern. The technology in the spotlight now is artificial intelligence: a senior official at the Department of Defense (DoD) has explained how AI and robots will be a key factor in deciding outcomes on future battlefields.

There is a lot to ponder when machine intelligence is discussed in the context of national security, and it has been a major topic among experts as well. To maintain national security, the bodies responsible for it must also evolve with emerging technologies.

Keeping Adversaries in Check

Michael D. Griffin explained in a report that national security depends on consistently exploring emerging technologies. “It’s clear that safety comes in many dimensions, but we need to add the future into this without losing the other key approaches that uphold the security of the country.”

Along those lines, the senior official advised that the US needs to tighten its grip on machine intelligence, adding that AI-driven attacks and cyber threats may in the future come from countries that seek to impose their beliefs on the rest of the world.

It may not be obvious which countries want to dictate to others, but it is clear that some have been developing dangerous weapons. It is important that the US stays ahead to ensure these countries do not disrupt peaceful coexistence between nations, and it can do so by leading in AI so that the technology’s potential remains under control as it grows.

Too Early to Explore AI?


“Machine intelligence is still in its infant stages, we are not looking at a grown-up figure but we can definitely see the realistic potential of AI,” said the senior Pentagon official. Some countries, such as China, are already using the technology to enhance their surveillance capabilities.

There are also reports pointing to how certain weapons manufacturers could use AI to build weapons. KAIST, the South Korean university that won the 2015 DARPA Robotics Challenge, was recently reported to be working with Hanwha, a weapons maker in Asia. Worse still, KAIST and Hanwha are said to be developing potentially destructive intelligent quadcopters and AI-enhanced missiles. This led the AI community to announce a boycott of collaboration with the two until they assure the world that their weapons will remain under “meaningful human control.”

So How Safe Are AI-Powered Weapons?

Tesla CEO Elon Musk is on record saying that he has had exposure to some of the most sophisticated forms of AI through his work to make the driverless car a reality, and his comments about autonomous weapons have always turned heads. Calling for immediate regulation, Musk has said that machine intelligence could spark an endless war, which is why laws need to be created in advance to prevent that outcome.

Another major concern about such weapons is that they could be hacked and redirected to target innocent civilians as well as the authorities, effectively turning against the very people they are meant to protect. The point is that if good people are not willing to take charge of artificial intelligence, the small ill-intentioned minority in the world might use the technology to harm humanity.


Nonetheless, most experts agree that AI has a lot to offer for security, as it could be highly instrumental in detecting crimes early. For example, a team of scientists from China recently developed an algorithm intended to infer what people are thinking. If it matures, such a system might be used to uncover terrorist plans while they are still in a suspect’s mind, and to disrupt planned attacks by revealing a suspect’s intentions.

In short, when embraced strategically, AI technologies will have great value in tomorrow’s security. At a more basic level, judges might also use AI to help separate lies from the truth in courtrooms, thereby speeding up prosecution, which is itself a component of safety and security (and possibly of warfare at the wider scale).
