Air Force Looks to Artificial Intelligence to Fight Future Wars

Advance preparation and planning are among the keys to winning a battle, and the military knows it. Although nobody knows when, there are fears that a third world war could break out to determine the next superpower if the U.S. loses its grip. To stay on top, the Air Force is looking to artificial intelligence to upgrade its aircraft, weaponry, and monitoring and surveillance systems.

That effectively means giving these machines a brain, so they can reason, make decisions, and perform at a higher level. Imagine, for instance, a jet fighter that senses its pilot is wounded, calls for help automatically, and identifies safe ground to land on without human effort.

That’s Already Possible With Machine Learning


Two weeks ago, Affectiva announced software that can read a driver's emotions to help them stay alert. The prime target of the technology, though, is to help driverless cars understand the emotions of the people on board. Tesla has also built technology through which its cars learn about potholes from sister vehicles that have already traveled the same road.

With such technology, the Air Force could build a network through which aircraft autonomously draw information about the state of the entire battlefield, learning, for example, which areas to avoid after a partner aircraft is lost there.
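The sharing pattern described above can be sketched in a few lines. This is a toy illustration only; the class and method names (`BattlefieldNetwork`, `report`, `plan_route`) and the grid zones are invented for the example and do not reflect any real Air Force system.

```python
class BattlefieldNetwork:
    """Shared store of hazard reports contributed by all connected aircraft."""

    def __init__(self):
        self.hazards = {}  # zone id -> reason the zone was flagged

    def report(self, zone, reason):
        self.hazards[zone] = reason

    def is_safe(self, zone):
        return zone not in self.hazards


class Aircraft:
    def __init__(self, callsign, network):
        self.callsign = callsign
        self.network = network

    def observe_loss(self, zone):
        # A partner aircraft was lost here; warn the whole fleet.
        self.network.report(zone, f"aircraft lost near {zone}")

    def plan_route(self, zones):
        # Keep only zones the fleet has not flagged as dangerous.
        return [z for z in zones if self.network.is_safe(z)]


net = BattlefieldNetwork()
scout = Aircraft("SCOUT-1", net)
striker = Aircraft("STRIKE-2", net)

scout.observe_loss("grid-C4")  # one aircraft reports the loss...
route = striker.plan_route(["grid-A1", "grid-C4", "grid-E7"])
print(route)                   # ...and a different aircraft avoids the zone
```

The point of the sketch is that the knowledge lives in the shared network, not in any single aircraft, so every new observation benefits the whole fleet at once.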

Data, the Renewable Oil

Data is what powers artificial intelligence systems and lets them achieve the remarkable things we see. Fortunately, the Air Force says its data capacity is more than enough to experiment with advances in AI, tapping data from satellites, aircraft, and computerized weapons.

The point is that, with the right data, experts can develop the right algorithms to make equipment clever and useful. Like renewable oil, the same data can be reused on a single network to accomplish different tasks. An autonomous jet fighter, for example, could learn what weapons the enemy holds as well as the fastest way to disarm or pin down the adversary.
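The "renewable" idea can be shown with a toy example: one set of sensor detections feeds two different tasks without being consumed by either. The field names and values here are entirely made up for illustration.

```python
# One (fictional) stream of sensor detections...
detections = [
    {"signature": "radar", "range_km": 40, "mobile": True},
    {"signature": "sam",   "range_km": 25, "mobile": False},
]

def classify_threats(data):
    # Task 1: label each detection by how dangerous it looks.
    return ["high" if d["signature"] == "sam" else "medium" for d in data]

def nearest_first(data):
    # Task 2: reuse the very same data to order targets by distance.
    return sorted(data, key=lambda d: d["range_km"])

print(classify_threats(detections))               # two tasks,
print(nearest_first(detections)[0]["signature"])  # one data source
```

Unlike oil, the data is unchanged after both tasks run, so any number of further algorithms can draw on the same stream.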


“Data is definitely a key thing in artificial intelligence,” said Gen. Stephen Wilson, the Air Force’s Vice Chief of Staff. In other words, effort needs to go into keeping that data safe and genuine.

AI’s Journey in Security Matters

Other countries are also trying to incorporate machine intelligence into their security. China in particular is using an algorithm that can highlight suspects or terrorists in CCTV surveillance footage. A U.S. team is on record as having developed similar technology, but it has yet to be deployed because of unresolved concerns about bias.

In other words, the main barrier to fully embracing machine intelligence in certain areas of security is whether the algorithms are reliable, balanced, and free from allegations of bias. In the Air Force’s case that may be less of an issue, because the systems will operate against defined targets on a defined battlefield.

AI in a Network


The Air Force is taking what could be called a strategic approach: creating a massive intelligent system that its agents can operate within and draw data from. Fighter jets, for instance, would learn from the network much as Tesla’s driverless cars master a road they have never traveled using data from a sister vehicle that has.

“The Air Force needs to think about building a network that would connect the geosynchronous orbit right to all our various platforms. At least this would help us stay above capable adversaries like China and Russia,” said Wilson.

Keeping Hackers from the Network

Of late, fears about hacking an AI-powered system or network have eased, because it turns out machines can be trained to fight hackers autonomously. The idea is simple: present fake data to the system, alert it, and it will develop ways to keep itself safe.
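A minimal sketch of that "present fake data and alert it" idea: fit a detector on normal readings, then feed it an obviously fabricated value and confirm it gets flagged. A real defensive system would be far more sophisticated; the threshold rule, class name, and all data here are invented for illustration.

```python
import statistics

class AnomalyDetector:
    """Flags values that deviate sharply from the normal data it was trained on."""

    def fit(self, normal_samples):
        self.mean = statistics.mean(normal_samples)
        self.stdev = statistics.stdev(normal_samples)

    def is_suspicious(self, value, k=3.0):
        # Flag anything more than k standard deviations from normal.
        return abs(value - self.mean) > k * self.stdev

# Train on normal traffic readings (fictional numbers).
normal_traffic = [98, 101, 99, 102, 100, 97, 103, 100]
detector = AnomalyDetector()
detector.fit(normal_traffic)

fake_injected = 500  # planted "attacker" data used to exercise the detector
print(detector.is_suspicious(fake_injected))  # True
print(detector.is_suspicious(101))            # False
```

Deliberately injecting fake records like this is how the system's defenses get exercised and tuned before a real attacker ever shows up.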

Nonetheless, human judgment will clearly still be required to make sure adversaries don’t invent new ways to trick the network. In short, any “operating learning system” the Air Force develops will work under human supervision until it proves safe enough to operate fully independently.
