The point is not merely to refute strong AI, the view that machines will catch up to and eventually exceed human intelligence. Rather, the point is to show society a positive way forward in adapting to machines, putting machines in service of rather than contrary to humanity’s higher aspirations.
“I think AI will result in there being less hours in the day that people are going to need to work,” Branson said. “You know, three-day workweeks and four-day weekends. Then we’re going to need companies trying to entertain people during those four days, and help people make sure that they’re paid a decent amount of money for much shorter work time.”
Welcome to computational argumentation. IBM’s Project Debater won a convincing victory in one debate but a human debater edged it out in another.
One of the world’s leading tech developers, Google, says it will no longer develop artificial intelligence programmes for the military once the current contract expires. This comes on the heels of internal resignations over military work along with a protest letter from other employees.
Affordable consumer technology has made surveillance cheap and commoditized AI software has made it automatic.
Those two trends merged this week, when drone manufacturer DJI announced a partnership on June 5 with Axon, the company that makes Taser weapons and police body cameras, to sell drones to local police departments around the United States. Now, not only do local police have access to drones, but footage from those flying cameras will be automatically analyzed by AI systems not disclosed to the public.
- A virtual border agent kiosk was developed to interview travelers at airports and border crossings; it can detect deception and flag suspect travelers to human security agents.
- The U.S., Canada and European Union have tested the technology, and one researcher says it has a deception detection success rate of up to 80 percent — better than human agents.
- The technology relies on sensors and biometrics, and its lie-detection capabilities are based on eye movements or changes in voice, posture and facial gestures.
Terrorists only get through 20% of the time!
Artificial intelligence programs promise to do everything, from predicting the weather to piloting autonomous cars. Now AI is being applied to video surveillance systems, promising to thwart criminal activity not by detecting crimes in progress but by identifying a crime before it happens. The goal is to prevent violence such as sexual assaults, but could such admirable intentions turn into Minority Report-style pre-crime nightmares?
Video of Uber’s self-driving car killing Elaine Herzberg is available on YouTube. It will, or at least should, produce shock waves in the culture. The Silicon Valley cult of Artificial Intelligence (AI) and the related cult of brain science are a main source of today’s cultural despair. If the brain is merely a machine that white-coated lab techs can measure and manipulate like any other machine, and if machines can be programmed to emulate the human brain, then human existence has no purpose. Our destiny is fixed in the same way that the paths of the planets and the orbits of electrons are fixed, and our free will, moral responsibility, devotion to the past and regard for the future are the random effluvia of a deterministic process.
Crose’s motivation is to expose white nationalists who use obscure, mundane, or abstract symbols — so-called dog whistles — in their posts, such as the Black Sun and certain Pepe the Frog memes. Crose’s goal is not only to expose people who use these symbols online but also to push the social media companies to clamp down on hateful rhetoric.
The tech giant Microsoft is deploying artificial intelligence to the task of protecting our planet. Brad Smith, Microsoft’s president and chief legal officer, announced on Dec. 11 that the company would be investing $50 million in its AI for Earth program over the next five years in order to “monitor, model, and manage the Earth’s natural systems.”
Sure, what’s the worst that could happen?
Mark Zuckerberg is hailing A.I.’s ability to “save lives,” but not everyone is giving this move a “like.”
“The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights.
We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”
Professor Russell states: “This short film is more than just speculation, it shows the results of integrating and miniaturising technologies that we already have.
“I’ve worked in AI for more than 35 years. Its potential to benefit humanity is enormous, even in defence.
“But allowing machines to choose to kill humans will be devastating to our security and freedom – thousands of my fellow researchers agree.
Military robots are not all bad.
Sure, there are risks and downsides of weaponised artificial intelligence (AI), but there are upsides too. Robots offer greater precision in attacks, reduced risk of collateral damage to civilians, and reduced risk of “friendly fire”.
AI weapons are not being developed as weapons of mass destruction. They are being developed as weapons of precise destruction. In the right hands, military AI facilitates ever greater precision and ever greater compliance with international humanitarian law.
An AI character was made an official resident of a busy central Tokyo district on Saturday, with the virtual newcomer resembling a chatty seven-year-old boy.
Sooner or later, these entities will be given rights. We are witnessing the birth of a new class of citizens. William Gibson’s Neuromancer was a few decades ahead of its time (he’s Canadian, you know).