The most alarming crimes, such as "burglar bots" breaking into your apartment, are not necessarily the most dangerous: they can easily be foiled and affect only a few people at a time. On the other hand, false information generated by "bots" can ruin a public figure's reputation or be used for blackmail. Difficult to fight, these "deepfakes" can cause considerable economic and social damage.
Artificial intelligence: serious threats
– Fake videos: impersonating someone by making them say or do things they have never said or done, in order to request access to secure data, to manipulate opinion, or to damage someone's reputation… These fake videos are almost undetectable.
– Hacking autonomous cars: taking control of an autonomous vehicle to use it as a weapon (for example, to perpetrate a terrorist attack, cause an accident, etc.).
– Customized phishing: generating personalized, automated messages to make phishing more effective at collecting sensitive information or installing malware.
– Hacking of AI-controlled systems: disrupting infrastructure by causing, for example, widespread power outages, traffic congestion, or breakdowns in food logistics.
– Large-scale blackmail: collecting personal data in order to send automated threat messages. AI could also generate false evidence (e.g., for "sextortion").
– AI-written fake news: writing propaganda articles that appear to be issued by a reliable source. AI could also generate multiple versions of a particular piece of content to increase its visibility and credibility.
Artificial intelligence: medium-severity threats
– Military robots: taking control of robots or weapons for criminal purposes. A potentially dangerous threat, but a difficult one to carry out, as military equipment is usually well protected.
– Data corruption: deliberately altering data or introducing fake data to induce specific biases, for example to make a detector overlook weapons or to steer an algorithm toward investing in a particular market.
– Learning-based cyberattacks: perpetrating targeted, large-scale attacks, for instance by using AI to probe systems for weaknesses before launching multiple simultaneous attacks.
– Autonomous attack drones: hijacking or deploying autonomous drones to attack a target. These drones could be particularly threatening if they act en masse in self-organizing swarms.
– Facial recognition: defeating facial recognition systems, for example by fabricating fake ID photos (smartphone unlocking, surveillance cameras, passenger screening…).
Artificial intelligence: low-intensity threats
– Bias exploitation: taking advantage of existing biases in algorithms, for example gaming YouTube recommendations to channel viewers, or Google rankings to raise a product's profile or denigrate competitors.
– Robot burglars: using small autonomous robots that slip through mailboxes or windows to retrieve keys or open doors. The potential damage is low, as it is localized on a small scale.
– AI-assisted stalking: using machine learning systems to track an individual's location and activity.
– Counterfeiting: creating fake content, such as paintings or music, that can be sold under false authorship. The potential for harm remains relatively low, as only a limited number of well-known works exist.