Artificial intelligence and cybersecurity
AI and cybersecurity are increasingly connected. According to a study:
- 29% want to use AI-based cybersecurity technology to accelerate incident detection
- 27% to accelerate incident response
- 24% to help their organization better identify and communicate risk to the business
- 22% to gain a better understanding of cybersecurity situational awareness
I would have chosen the 2nd reply, accelerate incident response.
What is your thought on this? And your direct experience?
8 answers
That is a very interesting subject and there is a lot of discussion going on around it. I think the two areas which are pretty much clear are:
1. Identifying the threat.
2. Responding to the threat (prevention, self-fixing, etc.).
However, both areas are more complicated than they look and are still being worked on. If I had to choose one of your four bullet points as a priority, I would choose the first one.
But AI is for everyone. What happens when the threats are equipped with AI too? This is bigger than we think.
For some five years now, it's been clear that companies need to respond better to security alerts even as volumes have gone up. With this year's fast-spreading ransomware attacks and ever-tightening compliance requirements, response must be much faster. Adding staff is tough with the cybersecurity hiring crunch, so companies are turning to machine learning and artificial intelligence (AI) to automate tasks and better detect bad behavior.
In a cybersecurity context, AI is software that perceives its environment well enough to identify events and take action against a predefined purpose. AI is particularly good at recognizing patterns and anomalies within them, which makes it an excellent tool to detect threats.
Machine learning is often used with AI. It is software that can “learn” on its own based on human input and results of actions taken. Together with AI, machine learning can become a tool to predict outcomes based on past events.
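To make the "recognizing anomalies" point concrete: a minimal sketch of this idea, assuming hypothetical hourly counts of failed logins as the input data (the metric and the numbers are illustrative, not from the post), is a simple z-score check against historical behavior. Real products use far more sophisticated models, but the principle is the same: learn what "normal" looks like from past events, then flag deviations.

```python
from statistics import mean, stdev

# Hypothetical baseline: failed logins per hour over a quiet period.
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]

def is_anomalous(count, history, z_threshold=3.0):
    """Flag a count as anomalous if it lies more than z_threshold
    standard deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(count - mu) > z_threshold * sigma

print(is_anomalous(11, baseline))  # a typical hour -> False
print(is_anomalous(95, baseline))  # a sudden burst of failures -> True
```

The same threshold logic extends naturally to any event counter a SIEM already collects; the "machine learning" part is simply that the baseline is derived from data rather than hard-coded.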
That's why I would go for option #1, with the others following at some distance.
I agree with Primo.
I would go for the AI-based technology to improve incident detection, which would in turn automatically accelerate incident response.
Why not think bigger and go for incident prediction?
After all, AI is based on -and can only work with- stored knowledge.
Our systems get a myriad of events per minute (notifications, minor to major alarms, etc.) and these events are normally all categorised in families (power, temperature, system, network, user,...etc.).
Since they all have a timestamp, we can look at the history of events prior to an incident and, with trend analysis, set up algorithms that detect similar trends before an incident occurs, thus generating an alarm that an incident is likely in the near future.
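The idea above can be sketched roughly as follows, assuming an event log of (timestamp, family) pairs and a known pre-incident pattern learned from past incidents; all names, families, and counts here are hypothetical. The sketch compares the family mix in the current time window against the pre-incident mix using cosine similarity and raises a flag when they resemble each other.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical event log: (timestamp, family) pairs from the monitoring system.
events = [
    (datetime(2024, 1, 1, 10, 0), "power"),
    (datetime(2024, 1, 1, 10, 1), "temperature"),
    (datetime(2024, 1, 1, 10, 2), "temperature"),
    (datetime(2024, 1, 1, 10, 3), "network"),
    (datetime(2024, 1, 1, 10, 4), "temperature"),
]

def family_counts(events, start, window):
    """Count events per family inside [start, start + window)."""
    end = start + window
    return Counter(fam for ts, fam in events if start <= ts < end)

def looks_like_precursor(current, precursor, threshold=0.8):
    """Flag when the current window's family mix resembles a known
    pre-incident pattern (cosine similarity over family counts)."""
    families = set(current) | set(precursor)
    dot = sum(current[f] * precursor[f] for f in families)
    norm_c = sum(v * v for v in current.values()) ** 0.5
    norm_p = sum(v * v for v in precursor.values()) ** 0.5
    if norm_c == 0 or norm_p == 0:
        return False
    return dot / (norm_c * norm_p) >= threshold

# Family mix observed in the minutes before a past incident (assumed data).
precursor = Counter({"temperature": 3, "power": 1})
current = family_counts(events, datetime(2024, 1, 1, 10, 0), timedelta(minutes=5))
print(looks_like_precursor(current, precursor))  # True: raise a predictive alarm
```

A production system would of course learn the precursor patterns and the threshold from many past incidents rather than hard-coding them, but the timestamp-windowing and similarity comparison are the core of the trend-analysis approach described above.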
We must all remember how AI works: data collection, analysis, and probabilistic event correlation in order to obtain actionable intelligence. Computers do some things better than humans, but attaining improvements in the accuracy and probability of "intelligence" goes well beyond simple pattern recognition algorithms. Weather prediction models have mountains of data to work with, and yet they remain very inexact; what they do is exactly what everyone is talking about AI doing for cybersecurity, only with greater assumed accuracy.
What I detect in much of the chatter about AI is this:
- Serious research is being done and making substantial progress.
- Serious research is being done in data collection methods and correlation methods - again with meaningful progress.
- Incident Response and predictive models are getting a lot of attention - both need much work and improvement.
What I also read and hear about AI:
- A lot of sales-speak is being thrown around touting AI's being ready for operationalization. It isn't - not yet.
- There is an undercurrent in parts of the business community that wants to put this technology to work now because they (naively) think it will provide improved results over what they currently use (it won't), and that it will cut the cost of a human workforce while doing so (in absolute terms, yes, but not while producing greatly improved outcomes).
- The views I often find expressed give me the definite impression that there is a belief that AI is yet another potential panacea that will automate security countermeasures and greatly reduce or even eliminate security issues (not clear which ones). Again, no.
This is a vitally important area to work on, but much of the essential machinery within AI itself needs a great deal of work before it will deliver on the promises and predictions being made about it.
Consider: calling this "artificial" means there must be a "real" intelligence that it is based on as the model it emulates. Ok. That model of course is human intelligence. The processes AI emulates stem directly from our own thinking, analysis and other methods. What this means is that we must analyze how humans think, how "one thing follows or relates to another", and how to dissect the contributory factor of the various event elements. It follows, then, that since humans are going to program AI to do its work, it is most likely to be no better at any of this than humans are now.
I believe we must continue working to understand how we process information to derive intelligence from it, and find better ways and means of programming our ancilla to actualize what we learn. I also think we will need to get better at developing contingency plans to cover our mistakes while we progressively learn how to do this better.
I also think that we may just need a better model to go from.
AI will be/is being used as a tool by people trained in cybersecurity, but not to replace them. There are cybersecurity threats that people trained in cybersecurity will have to prevent that do not lend themselves to AI. One example is physical prevention of network access by employees or intruders carrying unauthorized devices such as cell phones.
While "AI" is certainly a buzzword throughout the corporate context today, it is not quite the panacea which it is touted to be (yet). AI, or more specifically, machine learning algorithms, used to enhance human-designed cyber security platforms is a useful application of the technology. However, it is still a largely human-driven process. Google's recent announcement of a "child AI" which is more efficient than any human-designed AI promises future AI applications which are designed by machines themselves, with little to no human input in the design phase. Once this is ubiquitous across applications of AI (in 3-5 years or so), we will see much more robust applications of the technology in many sectors, including cyber security.
I would go with option 1. However, options 1 and 2 are both very important and should co-exist.
While businesses may be wanting to deploy AI for cybersecurity today, it really isn't ready.
I'm a researcher in this area, directly involved with accelerating the graph, big data, and time series analytics required to understand, identify, and respond to cyber threats, both internal and external.
So far our efforts have shown great potential, leveraging AI's pattern matching combined with GPUs to tackle the massive amount of data crunching that has to happen. We have been able to greatly increase the situational awareness of our cyber operations, improve our rate of detection and predictive accuracy for attacks, and even deliver new insight into day-to-day operations through this research. It will take time for the research to make it through the pipeline to a reproducible and reliable production system, but it is on the way. I think 2-3 years is the remaining time to market.