Artificial intelligence and cybersecurity

AI and cybersecurity are increasingly intertwined. According to a study:

  • 29% want to use AI-based cybersecurity technology to accelerate incident detection
  • 27% to accelerate incident response
  • 24% to help their organization better identify and communicate risk to the business
  • 22% to gain a better understanding of cybersecurity situational awareness

I would have chosen the second option: accelerate incident response.
What are your thoughts on this? And what is your direct experience?

Artificial Intelligence
Cyber-security
Incident Management
Security
Paolo Beffagnotti
76 months ago

8 answers

2

That is a very interesting subject, and there is a lot of discussion around it. I think two areas are fairly clear:
1. Identifying the threat.
2. Responding to the threat (prevention, self-remediation, etc.).
However, both areas are more complicated than they look and are still being worked on. If I had to choose from your four bullet points as a priority, I would choose the first one.
But AI is available to everyone. What happens when the threats are equipped with AI too? This is bigger than we think.

Hitesh Mathpal
63 months ago
If you accelerate detection, you can speed up the response and limit the damage; that is the idea behind it. If threats are equipped with AI, this will be a tough game. Whose AI will be better? - Paolo 63 months ago
Of course; well said - Dr. David E. 63 months ago
Good points. AI fights AI. But I don't see that happening very soon. I agree with Paolo on detecting the threats. - Maya 63 months ago
DITTO - thanks - Dr. David E. 63 months ago
Paolo Beffagnotti Interesting outlook. I would like to think of this as a combination: detect and respond. (Sounds like anti-missile systems: detect the threat and shoot.) - Hitesh 63 months ago
Good analogy and metaphor - Dr. David E. 63 months ago
Hitesh Mathpal, you called out the perfect example; I agree with you - Paolo 63 months ago
SWOOSH! - Dr. David E. 63 months ago
Thanks David and Paolo. - Hitesh 63 months ago
You're welcome - Dr. David E. 63 months ago
1

For some five years now, it's been clear that companies need to respond better to security alerts even as volumes have gone up. With this year's fast-spreading ransomware attacks and ever-tightening compliance requirements, response must be much faster. Adding staff is tough given the cybersecurity hiring crunch, so companies are turning to machine learning and artificial intelligence (AI) to automate tasks and better detect bad behavior.
In a cybersecurity context, AI is software that perceives its environment well enough to identify events and take action against a predefined purpose. AI is particularly good at recognizing patterns and anomalies within them, which makes it an excellent tool to detect threats.
Machine learning is often used with AI. It is software that can “learn” on its own based on human input and results of actions taken. Together with AI, machine learning can become a tool to predict outcomes based on past events.
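As a concrete illustration of the pattern-and-anomaly idea, here is a minimal sketch of a statistical detector: a toy z-score check over hypothetical failed-login counts. The data and threshold are invented for illustration; this is not any specific product's method.

```python
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=3.0):
    """Return the indices whose value deviates from the series mean
    by more than `threshold` standard deviations."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly failed-login counts (made up); the spike at index 5 stands out.
failed_logins = [3, 4, 2, 5, 3, 60, 4, 3]
print(zscore_anomalies(failed_logins, threshold=2.0))  # [5]
```

Real detection products replace the z-score with baselines learned per user, host, and protocol, but the shape of the problem is the same: flag deviations from an established pattern.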
That’s why I would go for option #1, with the others trailing at some distance.

Primo Bonacina
76 months ago
I agree that adding staff is not the solution; software can learn and analyze results more quickly. As said, probably the first two options are equally valid - Paolo 76 months ago
I agree that work in this area needs to progress toward outcomes better than 50/50. I do not agree that it is ready to implement operationally yet. So I would vote to implement it and work diligently with it in development, but not otherwise. Not yet, anyway. - Ross A. 76 months ago
how much time do you think will be needed to implement this operationally? - Paolo 76 months ago
You should use off-the-shelf products, so it varies depending on the product. Check, for example, LightCyber (now bought by Palo Alto Networks): https://www.paloaltonetworks.com/products/secure-the-network/magnifier-behavioral-analytics - Primo 76 months ago
Thanks for sharing this link Primo, it is very interesting. I will dig more into it - Paolo 76 months ago
Curious and interesting - Dr. David E. 64 months ago
1

I agree with Primo.
I would go for AI-based technology to improve incident detection, which would automatically accelerate incident response.
Why not think bigger and go for incident prediction?
After all, AI is based on, and can only work with, stored knowledge.
Our systems get a myriad of events per minute (notifications, minor to major alarms, etc.), and these events are normally all categorised into families (power, temperature, system, network, user, etc.).
Since they all have a timestamp, we can look at the history of events prior to an incident and, with trend analysis, set up algorithms that detect similar trends before an incident occurs, thus generating an alarm that an incident is likely in the near future...
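The trending idea described above can be sketched as a sliding-window rate monitor: count timestamped events of one family and raise an alarm when the rate in the window exceeds a threshold. The window size and threshold here are invented placeholders for what would normally be learned per event family.

```python
from collections import deque

class TrendMonitor:
    """Alarm when the number of events inside a sliding time window
    reaches a threshold (both values are illustrative constants)."""
    def __init__(self, window_seconds, threshold):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps currently inside the window

    def record(self, timestamp):
        self.events.append(timestamp)
        # Evict timestamps that have fallen out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold  # True => raise alarm

monitor = TrendMonitor(window_seconds=60, threshold=4)
# A burst at t=0..30 trips the alarm; a slower burst at t=100..112 trips it again.
alarms = [monitor.record(t) for t in (0, 10, 20, 30, 100, 105, 110, 112)]
print(alarms)  # [False, False, False, True, False, False, False, True]
```

A production system would correlate many such monitors across event families and feed the learned trends into the alarm logic, rather than using fixed constants.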

Samuel Verheye
76 months ago
No. Artificial intelligence in this area works with information sources, pattern recognition, and probabilistic event correlation to derive outcomes. Incident prediction? The technology generally available is nowhere near ready to do that yet. Much more work, sampling, and data analysis (first-, second-, and third-order analysis) must yet be done. - Ross A. 76 months ago
Sorry, but 10 years ago I was leading the command center of a telco operator, where we installed a solution to handle events across the complete network. The system allowed us to graphically visualize all the elements and their interconnections. Thanks to the stored events, we could analyse the events preceding a major incident (one impacting customers) and create automated warnings based on similar-event detection - Samuel 76 months ago
With today's AI technology (self-learning) and far greater computing power than 10 years ago, I believe that some kind of incident prediction is not far away. - Samuel 76 months ago
Given the complex nature of attacks and the obfuscation techniques available today that were not around even 5 years ago, and the advancements in deterministic-logic programming, much of the success of ten years ago has been greatly overshadowed by what can be done now. Nonetheless, the machines will still have to learn "your" environment before becoming that effective. - Ross A. 76 months ago
Every time this topic arises, the notion of the consistently successful stock-market predictive model springs irresistibly to mind. - Ross A. 67 months ago
Past is NOT prologue - Dr. David E. 64 months ago
1

We must all remember how AI works: data collection, analysis, and probabilistic event correlation in order to obtain actionable intelligence. Computers do some things better than humans, but attaining improvements in the accuracy and probability of "intelligence" goes well beyond simple pattern-recognition algorithms. Weather prediction models have mountains of data to work with, and forecasting still remains very inexact: what they do is essentially what everyone is talking about AI doing for cybersecurity, only with greater assumed accuracy.

What I detect in much of the chatter about AI is this:

  1. Serious research is being done and making substantial progress.
  2. Serious research is being done in data collection methods and correlation methods - again with meaningful progress.
  3. Incident Response and predictive models are getting a lot of attention - both need much work and improvement.


What I also read and hear about AI:

  1. A lot of sales-speak is being thrown around touting AI as ready for operationalization. It isn't - not yet.
  2. There is an undercurrent in parts of the business community that wants to put this technology to work now because they (naively) think it will provide better results than what they currently use (it won't), and that it will cut the cost of a human workforce while doing so (in absolute terms, yes, but not while producing greatly improved outcomes).
  3. The views I often see expressed give me the definite impression that many believe AI is yet another panacea that will automate security countermeasures and greatly reduce or even eliminate security issues (it is not clear which ones). Again, no.


This is a vitally important area to work on, but many of the essential operations within AI itself need a great deal of work before it will deliver on the promises and predictions being made about it.

Consider: calling this "artificial" means there must be a "real" intelligence that it is based on as the model it emulates. OK. That model, of course, is human intelligence. The processes AI emulates stem directly from our own thinking, analysis, and other methods. What this means is that we must analyze how humans think, how "one thing follows or relates to another", and how to dissect the contributory factor of the various event elements. It follows, then, that since humans are going to program AI to do its work, it is most likely to be no better at any of this than humans are now.

I believe we must keep working to understand how we process information to derive intelligence from it, and find better ways and means of programming our ancillae to actualize what we learn. I also think we will need to get better at developing contingency plans to cover our mistakes while we progressively learn how to do this better.

I also think that we may just need a better model to go from.

Ross A. Leo
76 months ago
I agree with your thought that people believe AI (and IT in general) is a potential panacea for security, but we are still far away from that - Paolo 76 months ago
There is still the small matter of the legacy of technology disappointments over the last half-century. AI will be/is being greatly oversold and its competence overestimated, and its credibility will be harmed, setting it back greatly. This is the outcome AI-driven solutions have suffered repeatedly since the 1970s. Thus, I am understandably concerned about it today. - Ross A. 76 months ago
Well said and agree - Dr. David E. 64 months ago
Cybersecurity, privacy, and security are creating such pressing issues for hospitals, other technology projects may be waylaid and discord among IT leadership could occur if the emerging influence of security professionals is not handled properly, according to the 2019 HIMSS U.S. - Dr. David E. 63 months ago
1

AI will be/is being used as a tool by people trained in cybersecurity, but not to replace them. There are cybersecurity threats that people trained in cybersecurity will have to prevent that do not lend themselves to AI. One example is physical prevention of network access by employees or intruders carrying unauthorized devices such as cell phones.

Daniel Webster
76 months ago
Good points. Physical prevention of network access carrying unauthorized devices is crucial - Paolo 76 months ago
I agree that "augmentation", not "replacement", is what should happen. I am concerned that our profit-minded, business-oriented colleagues will blindly and naively pursue the latter rather than take the wiser course. - Ross A. 76 months ago
I have spent 10 years on hospital boards. Last I looked, 53 hospital networks had been fined by HHS for HIPAA violations, which are cybersecurity breaches. These fines have totaled more than $75 million. Using nice round numbers, that averages to $1.4 million in fines per security breach, not including any civil or criminal charges. It's wise to employ cyber pros to prevent losses. - Daniel 76 months ago
Daniel, I agree with you, it is wise to employ cyber pros to prevent losses, several companies are working in this way - Paolo 76 months ago
YES - Dr. David E. 64 months ago
1

While "AI" is certainly a buzzword throughout the corporate context today, it is not quite the panacea which it is touted to be (yet). AI, or more specifically, machine learning algorithms, used to enhance human-designed cyber security platforms is a useful application of the technology. However, it is still a largely human-driven process. Google's recent announcement of a "child AI" which is more efficient than any human-designed AI promises future AI applications which are designed by machines themselves, with little to no human input in the design phase. Once this is ubiquitous across applications of AI (in 3-5 years or so), we will see much more robust applications of the technology in many sectors, including cyber security.

Thompson Mackey, ARM
76 months ago
Agree - Dr. David E. 64 months ago
1

I would go with option 1. However, options 1 and 2 are both very important and should co-exist.

Charu Gulati
63 months ago
Not bad! - Dr. David E. 63 months ago
Agreed, Charu Gulati, these should always go together to be complete - Paolo 63 months ago
Was the NIKE website just hacked? - Dr. David E. 63 months ago
0

While businesses may want to deploy AI for cybersecurity today, it really isn't ready.
I'm a researcher in this area, directly involved in accelerating the graph, big-data, and time-series analytics required to understand, identify, and respond to cyber threats, both internal and external.
So far our efforts have shown great potential, leveraging AI's pattern matching combined with GPUs to tackle the massive amount of data crunching that has to happen. We have been able to greatly increase the situational awareness of our cyber operations, improve our rate of detection and predictive accuracy for attacks, and even deliver new insight into day-to-day operations through this research. It will take time for the research to make it through the pipeline to a reproducible and reliable production system, but it is on the way. I think the remaining time to market is 2-3 years.
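Of the three analytics mentioned above, the graph side is the easiest to illustrate in a few lines: model network flows as a graph and flag hosts whose fan-out of distinct peers exceeds a baseline. The host names, flows, and the fixed baseline below are all invented; a real system would learn a per-host baseline over time.

```python
from collections import defaultdict

def unusual_talkers(flows, max_peers=3):
    """Return hosts that contact more distinct peers than `max_peers`
    (a stand-in for a learned, per-host baseline)."""
    peers = defaultdict(set)
    for src, dst in flows:
        peers[src].add(dst)
    return sorted(h for h, p in peers.items() if len(p) > max_peers)

# Hypothetical flow records: (source host, destination).
flows = [("ws1", "db"), ("ws1", "mail"),
         ("ws2", "db"), ("ws2", "mail"), ("ws2", "dc"),
         ("ws2", "10.0.9.1"), ("ws2", "10.0.9.2")]
print(unusual_talkers(flows, max_peers=3))  # ['ws2']
```

This degree check is only one graph signal; production research of the kind described also looks at new edges, community changes, and temporal patterns, typically accelerated on GPUs for scale.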

Joe Eaton
67 months ago
I agree with your point Joe Eaton. The potential is great but we are not ready to fully deploy AI for cybersecurity today - Paolo 67 months ago
Potential = is just that ... potential - Dr. David E. 64 months ago
Very optimistic, too - Dr. David E. 63 months ago
