Many companies use artificial intelligence (AI) solutions to combat cyber-attacks. But how effective are these solutions today? As of 2019, AI isn't the magic solution that will remove all cyber threats, as many believe it to be.
Companies working to implement AI algorithms to automate threat detection are on the right track; however, it’s important to also understand that AI and automation are two entirely separate things.
Automation is a rule-based concept, though you may have heard it mislabeled as machine learning. AI, on the other hand, involves software that is trained to learn and adapt based on the data it receives. Software that can adapt to change, especially in a rapidly evolving cyber threat landscape, is very promising. It's also important to note, however, that AI is still at a very immature stage of its development.
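The distinction can be made concrete with a toy sketch. The detector names, thresholds, and the running-mean heuristic below are all hypothetical illustrations, not a real product's logic: the rule-based function never changes unless a human edits it, while the "learning" detector derives its baseline from the data it observes.

```python
def rule_based_alert(failed_logins: int, threshold: int = 5) -> bool:
    """Automation: a static rule that only changes when a human edits it."""
    return failed_logins > threshold


class AdaptiveDetector:
    """A toy stand-in for a learning system: flags values far above a
    running mean. Real ML models are far richer, but the key difference
    is visible here: the baseline comes from data, not hand-written rules.
    """

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def observe(self, value: float) -> None:
        # Incrementally update the running mean with each new observation.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def is_anomalous(self, value: float, factor: float = 3.0) -> bool:
        # Flag anything well above what the observed data established as normal.
        return self.count > 0 and value > factor * self.mean
```

After observing typical traffic, the adaptive detector flags outliers relative to what it has seen, whereas the static rule can only ever apply its fixed threshold.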
The promise of AI bringing cognition to the realm of software has been exciting tech enthusiasts for years. The fact remains, however, that it is still software. And we should all know by now that software (particularly web-based software) is vulnerable.
As AI matures over the next few years, we can expect to see a great many AI-enabled automation solutions. This is especially true for routine day-to-day provisioning tasks, and particularly around security operations center (SOC) operations.
We must not forget that AI technologies are also a double-edged sword, as defenders are not the only ones with access to such capabilities. Attackers who possess the same skills can tip the balance. Thus, with the commoditization of AI, we can expect to see more incidents like the infamous case in which Google's own speech recognition API was used to bypass its reCAPTCHA mechanism.
Examples such as this remind us that software is only as good as the developers who designed and wrote it. After all, data science is bound by the data fed to its algorithms. For critical applications such as those used for medical, law enforcement, and border control purposes, we need to be aware of such pitfalls and actively filter human bias from these systems.
As IT leaders and CIOs build out their AI strategies, software security is a key consideration. Security is always an important part of any product, whether it is in development, in production, or purchased from a vendor; AI is no exception.
When considering the possible applications of AI (health, automotive, robotics, etc.), software security is critically important to the development of AI applications, and it should remain a high priority throughout the application's lifecycle. And as with all products brought in from third parties, their security must be thoroughly vetted before implementation.
Imagine if someone were able to take control of your AI device or software and feed it false answers. Or picture an attacker who is able to control the input that your AI needs to process, the input the AI will act on. For example, consider an attacker who can control a car's sensor input about its surroundings. Feeding in wrong information leads to wrong decisions, which can potentially endanger lives. For this reason, the development and use of AI must be absolutely secure.
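One basic defense against the scenario above is refusing to let a model act on implausible input at all. The sketch below is a minimal, hypothetical illustration (the reading type, bounds, and jump limit are all invented for this example, not taken from any real automotive system): sensor readings are validated against physical plausibility before reaching the decision-making component.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SpeedReading:
    value_kmh: float   # reported speed of a detected object
    timestamp: float   # seconds since some reference point


class SensorValidationError(ValueError):
    """Raised when a reading fails a plausibility check."""


def validate_reading(current: SpeedReading,
                     previous: Optional[SpeedReading],
                     max_kmh: float = 300.0,
                     max_jump_kmh: float = 50.0) -> SpeedReading:
    """Reject physically implausible input instead of feeding it to the model."""
    if not (0.0 <= current.value_kmh <= max_kmh):
        raise SensorValidationError("reading outside physical bounds")
    if previous is not None and abs(current.value_kmh - previous.value_kmh) > max_jump_kmh:
        # A sudden jump between consecutive readings may indicate a
        # spoofed or faulty sensor rather than a real change in the world.
        raise SensorValidationError("implausible change between readings")
    return current
```

Validation like this does not make the model itself secure, but it narrows the window in which attacker-controlled input can drive the AI to a dangerous decision.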
Technologies such as interactive application security testing (IAST) allow software developers (including those developing web-based AI applications) to perform security testing during functional testing. IAST solutions help organizations to identify and manage security risks associated with vulnerabilities discovered in running web applications using dynamic testing methods. Through software instrumentation, IAST monitors applications to detect vulnerabilities. This technology is highly compatible with the future of AI.
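To illustrate the instrumentation idea behind IAST (this is a toy sketch, not how any real IAST product works; the sink, the taint heuristic, and the findings list are all invented for this example): a sensitive operation is wrapped so that suspicious input reaching it during ordinary functional testing is recorded.

```python
import functools

# Collected observations, standing in for an IAST agent's findings report.
findings = []


def instrumented_sink(func):
    """Decorator standing in for an IAST agent hooked into a dangerous call."""
    @functools.wraps(func)
    def wrapper(query: str):
        # Deliberately naive heuristic for illustration only: flag quote
        # characters or comment markers that suggest unparameterized SQL.
        if "'" in query or "--" in query:
            findings.append({"sink": func.__name__, "query": query})
        return func(query)
    return wrapper


@instrumented_sink
def run_query(query: str) -> str:
    # Placeholder for a real database call.
    return f"executed: {query}"


# During normal functional testing, the instrumentation observes the traffic:
run_query("SELECT * FROM users WHERE id = 42")                 # clean
run_query("SELECT * FROM users WHERE name = 'a' OR '1'='1'")   # flagged
```

The point of the sketch is that no separate security scan is needed: the monitoring runs alongside the application's existing functional tests, which is what makes the approach attractive for fast-moving AI codebases.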
As with all technology, the question comes down to how we apply it in practice. It's a positive sign that the industry is concerned about how AI can impact our lives. This should push developers and security teams to be more cautious, and to find and implement mechanisms that will help us avoid catastrophes arising from AI's decisions and actions. In the end, AI will help us improve our lives. We, in turn, must ensure that the software doing so is secure.
About the author: Boris Cipot is a senior security engineer at Synopsys. He helps companies of all shapes and sizes to create secure software. Boris joined Synopsys when Black Duck Software was acquired in 2017. He specializes in open source software security, robotics, and artificial intelligence.
Source: Infosec Island