Good Bots, Bad Bots? The Effects of AI on Cybersecurity
As we near the close of the second decade of the new millennium, companies are forced to maintain a strong online presence. Keeping up with the times demands that everything be digital, as technology drives innovation, business, social trends, even media and pop culture. Now, conversation buzzes with talk of Artificial Intelligence (AI), which is expected to keep sweeping the grand stage across all sectors in the coming year. Machine-learning software is already deployed to aid healthcare, streamline factory production, sell tourist destinations, edit photos, and more. The world now waits to see how it evolves as a tool of friend or foe in the very hot topic of cybersecurity.
From conglomerates to teenagers, we entrust our information to the cloud, click-click-clicking away to input username, password, birth date, and more. This is necessary for daily operations, but tapping into that information is worth billions to the savvy, albeit crooked, taker. It is little wonder that businesses and governments report cybersecurity as one of their top concerns (“For US CEOs,” 2017).
As AI continues advancing as the most potent new technological tool, it inevitably intersects with this burning topic of security. Twenty-five percent of IT leaders report cybersecurity as the area of greatest interest for machine-learning implementation within their organizations. Machine learning's rapid reaction in the event of a successful attack drastically reduces the amount of data leaked (Q, 2018). AI's powerful vector and algorithm adaptation may not only counteract these events but also work to anticipate and prevent them (Meyer, 2019). New, AI-enabled security software, combined with its intelligent programmers, can mislead hopeful robbers and stay one step ahead of their dodgy games.
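To give a flavor of the idea behind machine-assisted detection, the sketch below learns a baseline from "normal" traffic and flags readings that deviate sharply from it. This is only an illustrative toy using a simple statistical threshold; the baseline numbers and the `is_anomalous` helper are hypothetical, and real security products use far richer models than this.

```python
# Illustrative sketch of baseline-and-deviation anomaly detection.
# The baseline values and threshold here are made-up assumptions.
import statistics

# Hypothetical baseline: requests per minute observed during normal operation
baseline = [18, 22, 19, 21, 20, 23, 17, 20, 19, 21]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute: float, threshold: float = 3.0) -> bool:
    """Flag any reading more than `threshold` standard deviations from baseline."""
    return abs(requests_per_minute - mean) / stdev > threshold

print(is_anomalous(21))   # typical load, not flagged
print(is_anomalous(400))  # sudden burst resembling an automated attack, flagged
```

A system built along these lines can react the moment traffic departs from its learned baseline, which is the "rapid reaction" advantage described above, though production tools replace the simple z-score with trained models.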
On the other hand, just as self-taught machines can preemptively trick hackers, hackers can leverage programs that trick their fellow machines. Inter-organizational platforms are especially vulnerable to these attacks, as communication between multiple parties opens small windows to the well-disguised intruder. With the help of AI technology, cyber thieves can watch and learn the security systems they target, then program attacks that match the machine learning of the encrypted programs (Q, 2018). Fear grows as AI phishing software becomes convincing enough to dupe even the best tech professionals, while learning to refine and expand its range of targets. There is also palpable concern about the development of superior ransomware; one such attack cost the British National Health Service £92 million in 2018 (Meyer, 2019). This worry extends beyond the corporate and personal spheres to all levels of national security and cyber warfare.
As Ben Parker would say, “With great power comes great responsibility.” Now more than ever, the technology sector must balance this dynamic, as it learns to leverage the most powerful software programs to date. Artificial Intelligence will only continue to permeate and advance in potency; as with all new tools, its positive and negative consequences now depend entirely on the human factor.