Artificial Intelligence & Cybersecurity: Making It Work for Your Organization

Artificial intelligence (AI) is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. Just like humans, they will be imperfect, but also capable of achieving great things.

AI presents new information risks and makes some existing ones more perilous. However, it can also be used for good and must become a key part of every organization's defensive arsenal. Business and information security leaders alike must understand both the risks and the opportunities before embracing technologies that will soon become a critically important part of everyday business.

Already, AI is finding its way into many mainstream business use cases. Organizations use variations of AI to support processes in areas including customer service, human resources and bank fraud detection. However, the hype can lead to confusion and skepticism over what AI is and what it actually means for business and security. It is difficult to separate wishful thinking from reality.

Defensive opportunities provided by AI

As AI systems are adopted by organizations, they will become increasingly critical to day-to-day business operations. Some organizations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Security practitioners are always fighting to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. By protecting against many existing threats, AI can put defenders a step ahead. However, adversaries are not standing still -- as AI-enabled threats become more sophisticated, security practitioners will need to use AI-supported defenses simply to keep up.

The benefit of AI in threat response is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than a human could. Given the presence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability (a brief illustrative sketch of such a response loop appears below).

The number of ways in which defensive mechanisms can be significantly enhanced by AI provides grounds for optimism, but as with any new type of technology, it is not a miracle cure. Security practitioners should be aware of the practical challenges involved in deploying defensive AI.

Questions and considerations before deploying defensive AI

AI systems have narrow intelligence and are designed to fulfill one type of task, and they require sufficient data and inputs to complete that task. A single defensive AI system will not be able to enhance all the defensive mechanisms outlined previously -- an organization is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is required to solve the problem, or whether more conventional options would do a similar or better job. Questions to ask:
Security leaders also need to consider issues of governance around defensive AI, including:
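To make the autonomous-response capability described above more concrete, the following is a minimal, hypothetical Python sketch of a response loop in which high-confidence alerts trigger automatic containment, low-confidence alerts are routed to an analyst, and analyst verdicts are retained as feedback. The names (AnomalyAlert, ResponseEngine, isolate_host) and the 0.9 threshold are invented for illustration; this is not a description of any particular product or a prescribed implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AnomalyAlert:
    host: str
    score: float  # hypothetical model-assigned anomaly score in [0.0, 1.0]

@dataclass
class ResponseEngine:
    threshold: float = 0.9                     # act autonomously only on high-confidence alerts (assumed value)
    feedback: List[AnomalyAlert] = field(default_factory=list)

    def handle(self, alert: AnomalyAlert) -> str:
        if alert.score >= self.threshold:
            self.isolate_host(alert.host)      # machine-speed containment, no human in the loop
            return "contained"
        return "queued_for_analyst"            # low confidence: route to a human practitioner

    def isolate_host(self, host: str) -> None:
        # Placeholder for a real containment action (e.g. a firewall or EDR API call).
        print(f"[action] isolating {host} from the network")

    def record_feedback(self, alert: AnomalyAlert, was_false_positive: bool) -> None:
        # Analyst verdicts are kept so the model or threshold can be tuned later --
        # the human-oversight loop discussed below.
        if was_false_positive:
            self.feedback.append(alert)

engine = ResponseEngine()
print(engine.handle(AnomalyAlert(host="10.0.0.12", score=0.97)))   # contained
print(engine.handle(AnomalyAlert(host="10.0.0.45", score=0.40)))   # queued_for_analyst
engine.record_feedback(AnomalyAlert(host="10.0.0.12", score=0.97), was_false_positive=True)

Which actions such a control may take without human sign-off, and how analyst feedback is folded back into it, are exactly the kinds of governance questions raised above.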
AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. These practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions.

AI systems will make mistakes -- a beneficial aspect of human oversight is that human practitioners can provide feedback when things go wrong and incorporate it into the AI's decision-making process. Of course, humans make mistakes too -- organizations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems. Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organization's cyber defenses.

The time to prepare is now

The speed and scale at which AI systems "think" will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to both make and destroy a business.

The AI tools and techniques that can be used in defense are also available to malicious actors, including criminals, hacktivists and state-sponsored groups. Sooner rather than later, these adversaries will find ways to use AI to create completely new threats such as intelligent malware -- and at that point, defensive AI will not just be a "nice to have": it will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses.

— Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cybersecurity, BYOD, the cloud and social media across both the corporate and personal environments. Previously, he was Senior Vice President at Gartner.