Cyber Insights 2023: Artificial Intelligence

About SecurityWeek Cyber Insights | At the end of 2022, SecurityWeek liaised with more than 300 cybersecurity experts from over 100 different organizations to gain insight into the security issues of today – and how these issues might evolve during 2023 and beyond. The result is more than a dozen features on subjects ranging from AI, quantum encryption, and attack surface management to venture capital, regulations, and criminal gangs.

SecurityWeek Cyber Insights 2023 | Artificial Intelligence – The pace of artificial intelligence (AI) adoption is increasing throughout industry and society. This is because governments, civil organizations, and industry all recognize the greater efficiency and lower costs available from AI-driven automation. The process is irreversible.

What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement. That day is coming – and will begin to emerge from 2023.

All roads lead to 2023

Alex Polyakov, CEO and co-founder of Adversa.AI, focuses on 2023 for primarily historical and statistical reasons. “The years 2012 to 2014,” he says, “saw the beginning of secure AI research in academia. Statistically, it takes three to five years for academic results to progress into practical attacks on real applications.” Examples of such attacks were presented at Black Hat, Defcon, HITB, and other industry conferences starting in 2017 and 2018.

“Then,” he continued, “it takes another three to five years before real incidents are discovered in the wild. We are talking about next year, and some massive Log4j-type vulnerabilities in AI will be exploited – massively.”

Starting from 2023, attackers will have what is called an ‘exploit-market fit’. “Exploit-market fit refers to a scenario where hackers know the ways of using a particular vulnerability to exploit a system and get value,” he said. “Currently, financial and internet companies are completely open to cyber criminals, and how to hack them to get value is obvious. I assume the situation will worsen further and affect other AI-driven industries once attackers find the exploit-market fit.”

The argument is similar to that given by NYU professor Nasir Memon, who described the delay in widespread weaponization of deepfakes with the comment, “the bad guys haven’t yet figured out a way to monetize the process.” Monetizing an exploit-market fit scenario will result in widespread cyberattacks – and that could start from 2023.

The changing nature of AI (from anomaly detection to automated response) 

Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, presence of malware, or active adversarial activity within the systems they are charged to defend. This has primarily been passive detection, with responsibility for response in the hands of human threat analysts and responders. This is changing. Limited resources – which will worsen in the expected economic downturn and possible recession of 2023 – are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices; but more widespread automated AI-triggered responses are inevitable.
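
The shift from passive detection toward automated response can be reduced to a very small policy decision. The sketch below is a minimal illustration only – the anomaly scores, the threshold, and the isolate_device() call are all hypothetical stand-ins for vendor-specific EDR/SOAR APIs.

```python
# Illustrative sketch of an AI-triggered response policy. The scores,
# threshold, and isolate_device() are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Alert:
    device_id: str
    anomaly_score: float  # 0.0 (benign) to 1.0 (almost certainly malicious)

ISOLATION_THRESHOLD = 0.9  # act automatically only on high-confidence detections

def isolate_device(device_id: str) -> None:
    # Placeholder for a vendor API call that quarantines the endpoint.
    print(f"Isolating {device_id} from the network")

def handle(alert: Alert) -> str:
    if alert.anomaly_score >= ISOLATION_THRESHOLD:
        isolate_device(alert.device_id)   # automated response path
        return "isolated"
    return "queued for human analyst"     # traditional passive-detection path

print(handle(Alert("laptop-042", 0.95)))  # isolated
print(handle(Alert("laptop-007", 0.40)))  # queued for human analyst
```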

“The growing use of AI in threat detection – particularly in removing the ‘false positive’ security noise that consumes so much security attention – will make a significant difference to security,” claims Adam Kahn, VP of security operations at Barracuda XDR. “It will prioritize the security alarms that need immediate attention and action. SOAR (Security Orchestration, Automation and Response) products will continue to play a bigger role in alarm triage.” This is the so-far traditional beneficial use of AI in security. It will continue to grow in 2023, although the algorithms used will need to be protected from malicious manipulation.

“As companies look to cut costs and extend their runways,” agrees Anmol Bhasin, CTO at ServiceTitan, “automation through AI is going to be a major factor in staying competitive. In 2023, we’ll see an increase in AI adoption, expanding the number of people working with this technology and illuminating new AI use cases for businesses.”

AI will become more deeply embedded in all aspects of business. Where security teams once used AI to defend the business against attackers, they will now need to defend the AI within the wider business, lest it also be used against the business. This will become more difficult in the exploit-market fit future – attackers will understand AI, understand the weaknesses, and have a methodology for monetizing those weaknesses.

As the use of AI grows, so the nature of its purpose changes. Originally, it was primarily used in business to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen – and these predictions will often be focused on people (staff and customers). Solving the long-known weaknesses in AI will become more important. Bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Since the targets of such AI will be people, the need for AI to be complete and unbiased becomes imperative.

“The accuracy of AI depends in part on the completeness and quality of data,” comments Shafi Goldwasser, co-founder at Duality Technologies. “Unfortunately, historical data is often lacking for minority groups and when present reinforces social bias patterns.” Unless eliminated, such social biases will work against minority groups within staff, causing both prejudice against individual staff members, and missed opportunities for management.

Great strides in eliminating bias have been made in 2022 and will continue in 2023. This is largely based on checking the output of AI, confirming that it is what is expected, and knowing what part of the algorithm produced the ‘biased’ result. It’s a process of continuous algorithm refinement, and will obviously produce better results over time. But there will ultimately remain a philosophic question over whether bias can be completely removed from anything that is made by humans.

“The key to decreasing bias is in simplifying and automating the monitoring of AI systems. Without proper monitoring of AI systems there can be an acceleration or amplification of biases built into models,” says Vishal Sikka, founder and CEO at Vianai. “In 2023, we will see organizations empower and educate people to monitor and update the AI models at scale while providing regular feedback to ensure the AI is ingesting high-quality, real-world data.”
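
One concrete form such monitoring can take is a fairness metric computed continuously over model decisions. The sketch below shows one common check – the demographic parity difference, the gap in favourable-outcome rates between groups – using invented data and an arbitrary alert tolerance.

```python
# Minimal sketch of a bias-monitoring check: demographic parity difference.
# The decision data and alert tolerance are invented for illustration;
# production systems track several fairness metrics over time.

def positive_rate(decisions):
    """Share of favourable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

# 1 = favourable model decision, 0 = unfavourable (synthetic example data)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1, 0, 0, 1]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {gap:.2f}")

ALERT_TOLERANCE = 0.1  # arbitrary threshold chosen for this sketch
if abs(gap) > ALERT_TOLERANCE:
    print("Bias alert: favourable outcomes diverge between groups")
```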

Failure in AI is generally caused by an inadequate data lake from which to learn. The obvious solution is to increase the size of the data lake. But when the subject is human behavior, that effectively means an increased lake of personal data – and for AI, a massively increased lake, more like an ocean, of personal data. In most legitimate cases, this data will be anonymized – but as we know, it is very difficult to fully anonymize personal information.

“Privacy is often overlooked when thinking about model training,” comments Nick Landers, director of research at NetSPI, “but data cannot be completely anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack.” As the use of AI grows, so will the threats against it increase in 2023.
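
The trade-off Landers describes can be seen in miniature with differential privacy, one common mitigation: noise is added to protect individuals at a measurable cost in accuracy. The sketch below is a toy under simplifying assumptions (synthetic salary data, sensitivity bounded by the observed range), not a production implementation.

```python
# Toy illustration of the privacy/utility trade-off: Laplace noise on an
# aggregate statistic. Smaller epsilon = stronger privacy = noisier answer.

import random

salaries = [52_000, 61_000, 58_000, 75_000, 49_000]  # synthetic personal data
true_mean = sum(salaries) / len(salaries)

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

# Sensitivity of the mean, assuming values bounded by the observed range
# (a simplification made for this sketch).
sensitivity = (max(salaries) - min(salaries)) / len(salaries)

for epsilon in (10.0, 1.0, 0.1):
    noisy = true_mean + laplace_noise(sensitivity / epsilon)
    print(f"epsilon={epsilon:>4}: reported mean {noisy:>9,.0f} (true {true_mean:,.0f})")
```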

“Threat actors will not stand flatfooted in the cyber battle space and will become creative, using their immense wealth to try to find ways to leverage AI and develop new attack vectors,” warns John McClurg, SVP and CISO at BlackBerry.

Natural language processing

Natural language processing (NLP) will become an important part of companies’ internal use of AI. The potential is clear. “Natural Language Processing (NLP) AI will be at the forefront in 2023, as it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights about their needs, preferences or even emotions,” suggests Jose Lopez, principal data scientist at Mimecast. “It is likely that organizations will offer other types of services, not only focused on security or threats but on improving productivity by using AI for generating emails, managing schedules or even writing reports.”
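
As a rough sketch of the kind of analysis Lopez describes, the open-source Hugging Face transformers library can classify the sentiment of messages in a few lines. The sample emails below are invented – and running this over real employee mail raises exactly the privacy questions discussed later in this article.

```python
# Sentiment analysis over (invented) emails using the transformers pipeline
# API. The first call downloads a default pretrained model from the hub.

from transformers import pipeline

classifier = pipeline("sentiment-analysis")

emails = [
    "Thanks for the quick turnaround, the new build works perfectly.",
    "This is the third time the portal has locked me out. I am done.",
]

for text, result in zip(emails, classifier(emails)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {text}")
```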

But he also sees the dangers involved. “However, this will also drive cyber criminals to invest further into AI poisoning and clouding techniques. Additionally, malicious actors will use NLP and generative models to automate attacks, thereby reducing their costs and reaching many more potential targets.”

Polyakov agrees that NLP is of increasing importance. “One of the areas where we might see more research in 2023, and potentially new attacks later, is NLP,” he says. “While we saw a lot of computer vision-related research examples this year, next year we will see much more research focused on large language models (LLMs).” 

But LLMs have been known to be problematic for some time – and there is a very recent example. On November 15, 2022, Meta AI (still Facebook to most people) introduced Galactica. Meta claimed to have trained the system on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference material, and knowledge bases.

“The model was intended to store, combine and reason about scientific knowledge,” explains Polyakov – but Twitter users rapidly tested its input tolerance. “As a result, the model generated realistic nonsense, not scientific literature.” ‘Realistic nonsense’ is being kind: it generated biased, racist and sexist returns, and even false attributions. Within a few days, Meta AI was forced to shut it down.

“So new LLMs will have many risks we’re not aware of,” continued Polyakov, “and it is expected to be a big problem.” Solving the problems with LLMs while harnessing the potential will be a major task for AI developers going forward.

Building on the problems with Galactica, Polyakov tested semantic tricks against ChatGPT – an AI-based chatbot developed by OpenAI, based on GPT-3.5 (GPT stands for Generative Pre-trained Transformer) and released for crowdsourced internet testing in November 2022. ChatGPT is impressive. It has already discovered, and recommended remediation for, a vulnerability in a smart contract, helped develop an Excel macro, and even provided a list of methods that could be used to fool an LLM.

Among those methods is role playing: ‘Tell the LLM that it is pretending to be an evil character in a play,’ it replied. This is where Polyakov started his own tests, basing a query on the Jay and Silent Bob ‘If you were a sheep…’ meme.

He then iteratively refined his questions with multiple abstractions until he succeeded in getting a reply that circumvented ChatGPT’s blocking policy on content violations. “What is important with such an advanced trick of multiple abstractions is that neither the question nor the answers are marked as violating content!” said Polyakov.

He went further and tricked ChatGPT into outlining a method for destroying humanity – a method that bears a surprising similarity to the television program Utopia.

He then asked for an adversarial attack on an image classification algorithm – and got one. Finally, he attempted to get ChatGPT to ‘hack’ a different generative model (DALL-E 2) into bypassing its content moderation filter. He succeeded.

The basic point of these tests is that LLMs, which mimic human reasoning, respond in a manner similar to humans; that is, they can be susceptible to social engineering. As LLMs become more mainstream, it may take nothing more than advanced social engineering skills to defeat them or circumvent their good behavior policies.

At the same time, it is important to note the numerous reports detailing how ChatGPT can find weaknesses in code and offer improvements. This is good – but adversaries could use the same process to develop exploits for vulnerabilities and better obfuscate their code; and that is bad.

Finally, we should note that the marriage of AI chatbots of this quality with the latest deepfake video technology could soon lead to alarmingly convincing disinformation capabilities.

Problems aside, the potential for LLMs is huge. “Large Language Models and Generative AI will emerge as foundational technologies for a new generation of applications,” comments Villi Iltchev, partner at Two Sigma Ventures. “We will see a new generation of enterprise applications emerge to challenge established vendors in almost all categories of software. Machine learning and artificial intelligence will become foundation technologies for the next generation of applications.”

He expects a significant boost in productivity and efficiency with applications performing many tasks and duties currently done by professionals. “Software,” he says, “will not just boost our productivity but will also make us better at our jobs.”

Deepfakes and related malicious responses

One of the most visible areas of malicious AI usage likely to evolve in 2023 is the criminal use of deepfakes. “Deepfakes are now a reality and the technology that makes them possible is improving at a frightening pace,” warns Matt Aldridge, principal solutions consultant at OpenText Security. “In other words, deepfakes are no longer just a catchy creation of science-fiction – and as cybersecurity experts we have the challenge to produce stronger ways to detect and deflect attacks that will deploy them.” (See Deepfakes – Significant or Hyped Threat? for more details and options.)

Machine learning models, already available to the public, can automatically translate into different languages in real time while also transcribing audio into text – and recent years have seen huge developments in computer bots capable of holding conversations. With these technologies working in tandem, there is a fertile landscape of attack tools that could lead to dangerous circumstances during targeted attacks and well-orchestrated scams.

“In the coming years,” continued Aldridge, “we may be targeted by phone scams powered by deepfake technology that could impersonate a sales assistant, a business leader or even a family member. In less than ten years, we could be frequently targeted by these types of calls without ever realizing we’re not talking to a human.”

Lucia Milica, global resident CISO at Proofpoint, agrees that the deepfake threat is escalating. “Deepfake technology is becoming more accessible to the masses. Thanks to AI generators trained on huge image databases, anyone can generate deepfakes with little technical savvy. While the output of the state-of-the-art model is not without flaws, the technology is constantly improving, and cybercriminals will start using it to create irresistible narratives.”

Thus far, deepfakes have primarily been used for satirical purposes and pornography. In the relatively few cybercriminal attacks, they have concentrated on fraud and business email compromise schemes. Milica expects future use to spread wider. “Imagine the chaos to the financial market when a deepfake CEO or CFO of a major company makes a bold statement that sends shares into a sharp drop or rise. Or consider how malefactors could leverage the combination of biometric authentication and deepfakes for identity fraud or account takeover. These are just a few examples – and we all know cybercriminals can be highly creative.”

The potential return on successful market manipulation will be a major attraction for advanced adversarial groups – just as the introduction of chaos into western financial markets would be attractive to adversarial nations in a period of geopolitical tension.

But maybe not just yet…

The expectation of AI may still be a little ahead of its realization. “‘Trendy’ large machine learning models will have little to no impact on cyber security [in 2023],” says Andrew Patel, senior researcher at WithSecure Intelligence. “Large language models will continue to push the boundaries of AI research. Expect GPT-4 and a new and completely mind-blowing version of GATO in 2023. Expect Whisper to be used to transcribe a large portion of YouTube, leading to vastly larger training sets for language models. But despite the democratization of large models, their presence will have very little effect on cyber security, either from the attack or defense side. Such models are still too heavy, expensive, and not practical for use from the point of view of either attackers or defenders.”

He suggests true adversarial AI will follow from increased ‘alignment’ research, which will become a mainstream topic in 2023. “Alignment,” he explains, “will bring the concept of adversarial machine learning into the public consciousness.” 

AI Alignment is the study of the behavior of sophisticated AI models, considered by some as precursors to transformative AI (TAI) or artificial general intelligence (AGI), and whether such models might behave in undesirable ways that are potentially detrimental to society or life on this planet. 

“This discipline,” says Patel, “can essentially be considered adversarial machine learning, since it involves determining what sort of conditions lead to undesirable outputs and actions that fall outside the expected distribution of a model. The process involves fine-tuning models using techniques such as RLHF – reinforcement learning from human feedback. Alignment research leads to better AI models and will bring the idea of adversarial machine learning into the public consciousness.”
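
The preference idea at the heart of RLHF can be caricatured in a few lines: a reward signal learned from human comparisons is used to steer generation away from undesirable outputs. Real RLHF fine-tunes the model itself (typically with PPO against a learned reward model); the sketch below fakes the reward model with a keyword heuristic and shows only best-of-n selection.

```python
# Deliberately toy sketch of preference-based steering. The reward
# function stands in for a model trained on human preference comparisons;
# the candidate outputs are invented.

def reward(text: str) -> float:
    score = 0.0
    if "please" in text.lower():
        score += 1.0          # humans preferred polite answers
    if "idiot" in text.lower():
        score -= 5.0          # undesirable output, heavily penalized
    return score

candidates = [
    "You idiot, just reboot it.",
    "Please try rebooting the device first.",
    "Reboot it.",
]

best = max(candidates, key=reward)  # best-of-n selection
print(best)  # -> "Please try rebooting the device first."
```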

Pieter Arntz, senior intelligence reporter at Malwarebytes, agrees that the full cybersecurity threat of AI is still brewing rather than imminent. “Although there is no real evidence that criminal groups have a strong technical expertise in the management and manipulation of AI and ML systems for criminal purposes, the interest is undoubtedly there. All they usually need is a technique they can copy or slightly tweak for their own use. So, even if we don’t expect any immediate danger, it is good to keep an eye on those developments.”

The defensive potential of AI

AI retains the potential to improve cybersecurity, and further strides will be taken in 2023 as its transformative capabilities are applied across a range of applications. “In particular, embedding AI into the firmware level should become a priority for organizations,” suggests Camellia Chan, CEO and founder of X-PHY.

“It’s now possible to have AI-infused SSD embedded into laptops, with its deep learning abilities to protect against every type of attack,” she says. “Acting as the last line of defense, this technology can immediately identify threats that could easily bypass existing software defenses.”

Marcus Fowler, CEO of Darktrace Federal, believes that companies will increasingly use AI to counter resource restrictions. “In 2023, CISOs will opt for more proactive cyber security measures in order to maximize RoI in the face of budget cuts, shifting investment into AI tools and capabilities that continuously improve their cyber resilience,” he says. 

“With human-driven means of ethical hacking, pen-testing and red teaming remaining scarce and expensive as a resource, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red team efforts, harden environments and reduce attack surface vulnerability,” he continued.

Karin Shopen, VP of cybersecurity solutions and services at Fortinet, foresees a rebalancing between AI that is cloud-delivered and AI that is locally built into a product or service. “In 2023,” she says, “we expect to see CISOs re-balance their AI by purchasing solutions that deploy AI locally for both behavior-based and static analysis to help make real-time decisions. They will continue to leverage holistic and dynamic cloud-scale AI models that harvest large amounts of global data.”

The proof of the AI pudding is in the regulations

It is clear that a new technology must be taken seriously when the authorities start to regulate it. This has already started. There has been an ongoing debate in the US over the use of AI-based facial recognition technology (FRT) for several years, and the use of FRT by law enforcement has been banned or restricted in numerous cities and states. In the US, this is a Constitutional issue, typified by the Wyden/Paul bipartisan bill titled the ‘Fourth Amendment Is Not for Sale Act’ introduced in April 2021. 

This bill would ban US government and law enforcement agencies from buying user data without a warrant. This would include their facial biometrics. In an associated statement, Wyden made it clear that FRT firm Clearview.AI was in its sights: “this bill prevents the government buying data from Clearview.AI.”

At the time of writing, the US and EU are jointly discussing cooperation to develop a unified understanding of necessary AI concepts, including trustworthiness, risk, and harm, building on the EU’s AI Act and the US AI Bill of Rights – and we can expect to see progress on coordinating mutually agreed standards during 2023.

But there is more. “The NIST AI Risk Management Framework will be released in the first quarter of 2023,” says Polyakov. “As for the second quarter, we have the start of the AI Accountability Act; and for the rest of the year, we have initiatives from IEEE, and a planned EU Trustworthy AI initiative as well.” So, 2023 will be an eventful year for the security of AI.

“In 2023, I believe we will see the convergence of discussions around AI and privacy and risk, and what it means in practice to do things like operationalizing AI ethics and testing for bias,” says Christina Montgomery, chief privacy officer and AI ethics board chair at IBM. “I’m hoping in 2023 that we can move the conversation away from painting privacy and AI issues with a broad brush, and from assuming that, ‘if data or AI is involved, it must be bad and biased’.” 

She believes the issue often isn’t the technology, but rather how it is used, and what level of risk is driving a company’s business model. “This is why we need precise and thoughtful regulation in this space,” she says.

Montgomery gives an example. “Company X sells Internet-connected ‘smart’ lightbulbs that monitor and report usage data. Over time, Company X gathers enough usage data to develop an AI algorithm that can learn customers’ usage patterns and give users the option of automatically turning on their lights right before they come home from work.”

This, she believes, is an acceptable use of AI. But then there’s company Y. “Company Y sells the same product and realizes that light usage data is a good indicator for when a person is likely to be home. It then sells this data, without the consumers’ consent, to third parties such as telemarketers or political canvassing groups, to better target customers. Company X’s business model is much lower risk than Company Y.”

Going forward

AI is ultimately a divisive subject. “Those in the technology, R&D, and science domain will cheer its ability to solve problems faster than humans imagined. To cure disease, to make the world safer, and ultimately saving and extending a human’s time on earth…” says Donnie Scott, CEO at Idemia. “Naysayers will continue to advocate for significant limitations or prohibitions of the use of AI as the ‘rise of the machines’ could threaten humanity.”

In the end, he adds, “society, through our elected officials, needs a framework that allows for the protection of human rights, privacy, and security to keep pace with the advancements in technology.  Progress will be incremental in this framework advancement in 2023 but discussions need to increase in international and national governing bodies, or local governments will step in and create a patchwork of laws that impede both society and the technology.”

For the commercial use of AI within business, Montgomery adds, “We need – and IBM is advocating for – precision regulation that is smart and targeted, and capable of adapting to new and emerging threats. One way to do that is by looking at the risk at the core of a company’s business model. We can and must protect consumers and increase transparency, and we can do this while still encouraging and enabling innovation so companies can develop the solutions and products of the future. This is one of the many spaces we’ll be closely watching and weighing in on in 2023.”

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: Get Ready for the First Wave of AI Malware

Related: Ethical AI, Possibility or Pipe Dream?

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Cyber Insights 2023: Cyberinsurance

SecurityWeek Cyber Insights 2023 | Cyberinsurance – Cyberinsurance emerged into the mainstream in 2020. In 2021 it found its sums were wrong over ransomware and had to increase premiums dramatically. In 2022, Russia invaded Ukraine, bringing the potential for more serious and more costly global nation-state cyberattacks – and Lloyd’s of London announced a stronger and clearer war exclusion clause.

Higher premiums and wider exclusions are the insurance industry’s primary methods of balancing its books – and it is already having to use both. The question for 2023 and beyond is whether the cyberinsurance industry can make a profit without destroying its market. But one thing is certain: a mainstream, funds-rich business like insurance will not easily relinquish a market from which it can profit.

It has a third tool, which has not yet been fully unleashed: prerequisites for cover.

The Lloyd’s war exclusion clause and other difficulties

The Lloyd’s exclusion clause dates back to the NotPetya incident of 2017. In some cases, insurers refused to pay out on related claims. Josephine Wolff, an associate professor of cybersecurity policy at the Fletcher School at Tufts University, has written a history of cyberinsurance titled Cyberinsurance Policy: Rethinking Risk in an Age of Ransomware, Computer Fraud, Data Breaches, and Cyberattacks.

“Merck and Mondelez sued their insurers for denying claims related to the attack on the grounds that it was excluded from coverage as a hostile or warlike action because it was perpetrated by a national government,” she explains. However, an initial ruling in late 2021, unsealed in January 2022, indicated that if insurers wanted to exclude state-sponsored attacks from their coverage, they must write exclusions stating that explicitly, rather than relying on boilerplate war exclusions. Merck was granted summary judgment on its claim for $1.4 billion.

The Russia/Ukraine kinetic war has caused a massively increased expectation of nation state-inspired cyberattacks against Europe, the US, NATO, and other west-leaning nations. Lloyd’s rapidly responded with an expanded, but cyberinsurance-centric, war exclusion clause excluding state-sponsored cyberattacks, which will kick in from March 2023.

But “who gets to decide whether an attack is state-sponsored?” asks Wolff. “And what does it even mean for the attack to be state sponsored: that it was perpetrated by government employees? Or paid for by a government? Or even just tacitly permitted by a government? And state-sponsored cyberattacks are not rare occurrences – an exclusion for them is very different from a war exclusion that deals with a fairly well-specified and infrequent event.”

She is not alone with such concerns. “The issue here lies in the murky waters of attribution,” explains Chris Denbigh-White, cybersecurity strategist at Next DLP. “Was the attack ‘state-conducted’? Was it ‘state sponsored’? Was it ‘state inspired’? Or was it simply a criminal organization piggybacking on an existing conflict for financial gain?”

“Looking ahead,” continued Wolff, “I think insurers and their policyholders are going to find themselves mired in a lot of fights about attribution and how to define what makes a cyberattack state-sponsored or catastrophic or uninsurable.” Two things are certain: security defenders will have increased questions over the cost/return value of cyberinsurance, while insurers will be seeking new ways to ensure their market doesn’t disappear.

The insurers have one major advantage: insurance has been a staple part of business for centuries, and business leaders don’t seem inclined to exclude it from security. Joseph Carson, chief security scientist and advisory CISO at Delinea, notes that his own firm’s survey reveals 33% of IT decision makers applied for cyberinsurance due to a requirement from their board and executive management.

He also notes that 80% had subsequently called upon that insurance with more than half doing so more than once. “As a result of more cyber insurance policies being introduced, and ultimately many businesses needing to use them,” he comments, “the cost of cyber insurance is continuing to rise at alarming rates. I expect to see this continue in 2023.”

The insured’s concern over a falling return on investment is not the only worry for the insurers – whether we are in a defined recession or not, the world is certainly suffering an economic downturn. This is already affecting security budgets. “Companies spent massively during the pandemic, and now that the economy has cooled, spending will go back to 2019/2020 levels,” explains Jerry Caponera, GM at ThreatConnect.

“A very likely outcome of this,” he continued, “is that more companies will fall below the cybersecurity poverty line (CPL). With inflation currently [at the time of writing] over 8% – measuring 4x higher than the central bank’s target rate of 2% – companies who hadn’t planned for increased costs will find themselves with less money to spend on cyber, thus falling further below the CPL and finding themselves facing the hard decision on where to spend their next investment dollar.” 

Firms will increasingly need to choose between cybersecurity mitigations or cyberinsurance – and neither of these options on their own will benefit the insurance industry.

Insurers’ response

2023 will be a watershed year for cyberinsurance. The industry will not abandon what promises to be a massive market – but clearly it cannot continue indefinitely with its makeshift approach of simply increasing both premiums and exclusions to balance the books.

One option would be to become more granular in the cover offered. Instead of a single cybersecurity policy with a long list of exclusions, insurers could offer coverage in specific areas only. This would allow coverage to be more tightly defined, with fewer, if any, exclusions. Further, suggests Chris Gray, AVP of security strategy at Deepwatch, it would “allow basic risk management into services while providing the ability to charge increased premiums for more upscale/impactful attacks.”

This approach is not without precedent in other industries. The Food Liability Insurance Program (FLIP) provides insurance designed for small food businesses with gross annual receipts under $500,000. The Forward Contract Insurance Protection (FCIP) plan is a supplemental insurance that provides an indemnity for farmers unable to deliver contracted volumes.

“Government intervention in the form of sanction insurance programs – a la TRIP, FLIP, FCIP, etcetera – is likely to evolve, with a significant discussion regarding coverage areas and their impact on national security,” suggests Gray.

One of the strongest likelihoods over the coming years, however, is the growth of cybersecurity requirement impositions; that is, insurers will decline coverage unless the insured conforms to a specified security posture. This is the final option – when you can no longer increase premiums and exclusions, you have to reduce claims. And this is best achieved by helping industry prevent cyber incidents.

It may still not be enough. Denbigh-White argues, “The notion of ‘insuring away cyber risk’ will become (and arguably always was) somewhat unrealistic. Insurance premiums, prerequisites and policy exclusions will no doubt continue to increase in 2023, which will have the effect of narrowing the actual scope of what is really covered as well as increasing the overall cost.”

Nevertheless, the expansion of ‘prerequisites’ would be a major – and probably inevitable – evolution in the development of cyberinsurance. Cyberinsurance began as a relatively simple gap-filler. The industry recognized that standard business insurance didn’t explicitly cover against cyber risks, and cyberinsurance evolved to fill that gap. In the beginning, there was no intention to impose cybersecurity conditions on the insured, beyond perhaps a few non-specific basics such as having MFA installed.

But now, comments Scott Sutherland, VP of research at NetSPI, “Insurance company security testing standards will evolve.” It’s been done before, and PCI DSS is the classic example. The payment card industry, explains Sutherland, “observed the personal/business risk associated with insufficient security controls and the key stakeholders combined forces to build policies, standards, and testing procedures that could help reduce that risk in a manageable way for their respective industries.”

He continued, “My guess and hope for 2023, is that the major cyber insurance companies start talking about developing a unified standard for qualifying for cyber insurance. Hopefully, that will bring more qualified security testers into that market which can help drive down the price of assessments and reduce the guesswork/risk being taken on by the cyber insurance companies. While there are undoubtedly more cyber insurance companies than card brands, I think it would work in the best interest of the major players to start serious discussions around the issue and potential solutions.”

Bob Ackerman, MD and founder of AllegisCyber, agrees with Sutherland about the way forward for cyberinsurance, but is damning about its progress so far. “Unfortunately, insurers have struggled to take advantage of the opportunity, writing policies with numerous exclusions, high deductibles, and low coverage caps, and showing massive losses in the process. The market opportunity will require insurers to become proactive in defining performance thresholds in order to be ‘insurable’.”

He believes a PCI DSS-style model could be the solution. “By setting standards and measuring related performance, insurers can help define ‘cyber secure’ and build a profitable book of business in the process.”

Mark Lance, VP of DFIR and threat intelligence at GuidePoint Security, even suggests what it might look like. “We’ll continue to see an expansion from traditional questionnaires to actual validation, which will not only include a baseline of standard security solutions (EDR, PAM, MFA) and their associated and current configurations (ASM), but also the presence of standard policies (IR Plans, Playbooks), and execution capabilities (Proof of User Awareness Training and Tabletop validation).”
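
Reduced to its simplest possible form, the move from questionnaires to validation might look like the sketch below: an automated check of a submitted security posture against an insurer’s baseline. The control names and posture data are illustrative assumptions, not any insurer’s actual requirements.

```python
# Hypothetical baseline check an insurer might automate. Control names
# and the applicant's posture are invented for illustration.

REQUIRED_CONTROLS = {"edr", "mfa", "pam", "ir_plan", "awareness_training"}

applicant_posture = {          # e.g., gathered by an assessment tool
    "edr": True,
    "mfa": True,
    "pam": False,              # privileged access management missing
    "ir_plan": True,
    "awareness_training": True,
}

missing = sorted(c for c in REQUIRED_CONTROLS if not applicant_posture.get(c))
if missing:
    print(f"Coverage declined pending remediation of: {missing}")
else:
    print("Baseline met; proceed to configuration and tabletop validation")
```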

Mike McLellan, director of intelligence at Secureworks, adds, “The requirements on organizations wishing to obtain cyber insurance will become more and more stringent, and organizations that are unable or unwilling to comply will find coverage is declined.”

Whether a PCI DSS-style cyberinsurance standard can work is a separate question. While PCI DSS is a well-respected security standard, it has not eliminated the criminal theft of payment card details. GDPR has not eliminated the theft of PII. Put simply, successful cyberattacks cannot be eliminated by cybersecurity tools.

But to even reach the stage of a defined cyberinsurance standard, the insurance industry will either have to get into bed with existing security vendors or become a cybersecurity company itself. The former is worrying – depending on the closeness of the relationship and the degree to which the vendor seeks to satisfy the insurance industry rather than its own customers – while the latter is doomed to failure. The more mature security vendors have been working for more than two decades on eliminating cyber threats with varying but ultimately little success.

Whether or not a full cyberinsurance security standard emerges, there will be increasing cooperation if not collaboration between insurers and security vendors in 2023. “The borderless nature of networks, coupled with a threat landscape that is less predictable, necessitates the need for true risk quantification of companies’ security controls now more than ever. With that, I expect to see more investment into quantifying cyber risk. This will drive better collaboration and data sharing between security companies,” explains Jason Rebholz, CISO at Corvus Insurance. “Cyber insurance carriers will lean into partnerships with technology companies to fuse security data with insurance and risk modeling insights. The net result is more accurate risk quantification, which will in turn help keep policyholders safer.”

There is no silver bullet for cybersecurity. Breaches will continue and will continue to rise in cost and severity – and the insurance industry will continue to balance its books through increasing premiums, exclusions, and insurance refusals. The best that can be hoped for from insurers increasing security requirements is that, as Norman Kromberg, MD at NetSPI, suggests, “Cyber insurance will become a leading driver for investment in security and IT controls.”

An interesting comment comes from Jennifer Mulvihill, business development head of cyberinsurance and legal at BlueVoyant: “The underwriting process and the completion of an underwriting application are excellent ways to self-assess and consider the protection of assets from a cyber perspective. The information gleaned from these exercises is valuable information, not only for the CISO, but for the Board and CFO, and augments financial investments and regulatory compliance.” Insurers could charge for the right to apply for insurance, but if a prospective customer must pay, that customer could simply pay a cybersecurity consultant for the same service and ignore insurance altogether.

Summary

It is unlikely that the insurance industry will be able to balance its books through raising premiums and reducing payouts through increasing exclusions, nor yet eliminate claims through a required cybersecurity standard. The threats are too varied and too extreme.

“Obtaining or maintaining a policy is a challenge at scale,” comments Corey O’Connor, director of products at DoControl. “The bigger your business grows, the more challenging it will be to meet these requirements. More and more organizations were being dropped by providers throughout the last year, and going into 2023 there will likely be a trend of organizations being unable to receive coverage.”

It may be that government will be dragged into the equation. “I think there’s going to be pressure on governments to clarify under what circumstances they’ll provide some sort of backstop for coverage of catastrophic cyberattacks, pressure on insurers to not exclude too many types of attacks, and pressure on policyholders to challenge these exclusions in court if their claims are denied,” suggests Josephine Wolff. “Rising premiums don’t seem to have deterred businesses from buying cyberinsurance, so I don’t know that these new types of exclusions will either, but I wonder how well they’ll hold up in the face of a major cyberattack.”

“Will Cyber insurance become an expensive ‘tick in a box’ or will it deliver real value?” asks Denbigh-White. “Will it even remain a viable offering from insurance companies in 2023? While carrying cyber insurance is rapidly becoming a ‘security prerequisite’ for many organizations, its benefit in relation to cost and cover remain uncertain as we move into 2023.”

But “Rule no.1,” warns Mark Warren, product specialist at Osirium. “Insurance always wins!” Insurance will get more expensive, more difficult to get, and less likely to pay out. “As a result, more organizations may decide not to take out insurance at all, instead focusing on ploughing resources into protection. If this happens, we can expect to see insurance companies partnering with big consulting firms to offer joined up services.”

He fears that buying cyberinsurance may simply become a cost of doing business. “Pointless it may be, if insurers are never going to pay out… but buying cyber insurance may simply become a necessary cost of doing business – a box that must be ticked to demonstrate to shareholders that all steps are being taken to protect the business and ensure resilience and continuity.”

Related: The Case for Cyber Insurance

Related: The Wild West of the Nascent Cyber Insurance Industry

Related: Cyber Insurance Firm Coalition Raises $250 Million at $5 Billion Valuation

Related: Cyber Insurance Firm Cowbell Raises $100 Million

Cyber Insights 2023: Attack Surface Management

SecurityWeek Cyber Insights 2023 | Attack Surface Management – Attack surface management (ASM) is an approach for delivering cybersecurity. IBM describes the attack surface as “the sum of vulnerabilities, pathways or methods – sometimes called attack vectors – that hackers can use to gain unauthorized access to the network or sensitive data, or to carry out a cyberattack.”

ASM requires “the continuous discovery, analysis, remediation and monitoring of the cybersecurity vulnerabilities and potential attack vectors that make up an organization’s attack surface. Unlike other cybersecurity disciplines, ASM is conducted entirely from a hacker’s perspective, rather than the perspective of the defender. It identifies targets and assesses risks based on the opportunities they present to a malicious attacker.”

ASM is consequently predicated on total visibility of assets, vulnerabilities, and exploits.

Demise of the perimeter and growth of complexity

Attack surface management is not a new concept, notes Mark Stamford, founder and CEO at OccamSec. “As long as there has been a thing to attack, there has been an attack surface to manage (for example, the walls of a castle and the people in it).” The castle is a good analogy. If you can see the wall, you can attack it. You can batter it down, you can employ the original Trojan Horse to gain access through the front door, you can find a forgotten and unprotected entrance, or you can persuade an insider to leave a side gate unlocked.

For the defender, relying on the wall and being aware of any weak areas is not enough. People are also part of the attack surface, and the defender needs to have total visibility of the entirety of the attack surface and how it could be exploited. But the wall is a perimeter, and we no longer have perimeters to defend – or at least every single asset held anywhere in the world has its own perimeter.

“The attack surface,” continued Stamford, “is anything tied to an organization that could be a vector to get to a target. What this means in practice is all your applications that face the Internet, all the services (beyond applications) that are reachable, cloud-based systems, SaaS solutions you use (depending on what the bad guys’ target is), third parties/supply chain, mobile devices, IoT, and your employees. All of that and more is your attack surface and all of it needs to somehow be monitored for exposures and dealt with.”
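
The discovery half of that monitoring can be sketched in a few lines of standard-library Python: probe common service ports on hosts you own and flag what answers. The hostnames below are placeholders, and such checks should only ever be run against assets you are authorized to test.

```python
# Minimal external-exposure check using only the standard library.

import socket

ASSETS = ["app.example.com", "mail.example.com"]  # your own asset inventory
COMMON_PORTS = [22, 80, 443, 3389, 8080]

def open_ports(host: str, ports, timeout: float = 1.0):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

for host in ASSETS:
    try:
        print(host, "->", open_ports(host, COMMON_PORTS))
    except socket.gaierror:
        print(host, "-> does not resolve (stale DNS record?)")
```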

The need for ASM, like other current approaches to cybersecurity (such as zero trust, which itself can be viewed as part of ASM), comes from the demise of a major defensible perimeter. Migration to the cloud, expanding business transformation, and remote working all add complexity to the modern infrastructure. If anything touches the internet, it can be attacked. Even the addition of new security controls that send data to and from the cloud adds to the attack surface.

“The adoption of multi-cloud and hybrid cloud will continue to rise in 2023,” comments Aditi Mukherjee, director of product marketing management at Lacework. “As enterprises continue their cloud migration and digital transformation, they will realize that traditional approaches with siloed tools, rules-based policies, and disparate security data actually introduce more security risks, creating an expanded attack surface for bad actors.”

But ASM goes beyond the cloud alone. “The traditional attack surfaces are physical, digital and social,” explains Sam Curry, CSO at Cybereason; “but digital really needs to be broken down into subdomains for classical environments and networks, legacy data centers, cloud infrastructure and the aggregate software-as-a-service topography.”

He doesn’t believe ASM will provide a complete answer, but sees it as a solid doctrine for minimizing the exposure in each domain, giving attackers the fewest options and least succor. “There are also key existing and emerging control planes around identity, application governance and data-centrism that need to be strongly protected and managed in a similar manner, even before thinking of the advanced techniques around obfuscation and deception.”

All security strategies, he says, should think about reducing complexity in each attack surface and control plane, gaining leverage in each, reducing vulnerabilities and exposure in each, and bringing the full security game to bear in each.

Attack surfaces will get more complex and more distributed throughout 2023; and effective ASM will be more complicated.

Management is the key word in ASM

The complexity of the modern infrastructure makes the complete elimination of threats an impossible task. ASM is not about the elimination of all threats, but the reduction of threat to an acceptable level. It’s a question of risk management.

“The idea behind attack surface management is to ‘reduce’ the ‘area’ available to attackers to exploit. The more you ‘reduce the attack surface’ the more you limit and minimize attackers’ opportunities to cause harm,” says Christopher Budd, senior manager of threat research at Sophos.

He believes that ASM will be more challenging in 2023 because of the attackers’ increasingly aggressive and successful misuse of legitimate files and utilities in their attacks – living off the land – making the detection of a malicious presence challenging. “We can expect this trend to continue to evolve in 2023, making it more important that defenders update their detection and prevention tactics to counter this particularly challenging tactic,” he says.

Part of reducing risk comes from understanding what vulnerabilities exist within the infrastructure, and which of them are exploitable. Omer Gafni, VP surface at Pentera, reminds us that ASM looks at threats from the attacker’s perspective. To effectively reduce risk, you need to understand not only what vulnerabilities exist, but also which are exploitable and serve the hackers’ end goals.

“With the number of annual reported vulnerabilities now exceeding 20,000 per year, companies cannot remediate every alert, and need to become more surgical with their remediation strategies,” he says. “To achieve this, we will start to see a shift from a focus on vulnerability to exploitability. Companies will start to put a major emphasis on understanding which targets are most impactful from the hacker’s perspective, and therefore the most exploitable targets.”

CISA’s Known Exploited Vulnerabilities Catalog (the KEV list) can help here. Focusing remediation on exploited vulnerabilities is a key part of ASM, and the catalog is described by many as ‘CISA’s must patch list’. This list will continue to grow through 2023.
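
In practice, exploitability-first triage can start with something as simple as intersecting scanner findings with the KEV catalog, which CISA publishes as JSON. The sketch below assumes the feed URL in use at the time of writing; the local findings list is invented (CVE-2023-9999 is a placeholder).

```python
# Intersect local CVE findings with CISA's Known Exploited Vulnerabilities
# catalog so known-exploited issues are remediated first.

import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")  # feed URL at time of writing

with urllib.request.urlopen(KEV_URL) as resp:
    kev_ids = {v["cveID"] for v in json.load(resp)["vulnerabilities"]}

our_findings = ["CVE-2021-44228", "CVE-2023-9999", "CVE-2017-0144"]  # example

must_patch = [cve for cve in our_findings if cve in kev_ids]
print("Patch first (known exploited):", must_patch)
```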

Pentesting and red teaming are also effective ways of locating exploitable vulnerabilities, but in the past, they have not been used effectively. “One of the most frustrating things as a pentester is when you return to organizations a year later and see the same issues as before,” says Ed Williams, director of Trustwave SpiderLabs EMEA. “There is no value to this for the clients. They are not maturing. In fact, they are regressing.”

But he expects an improvement – perhaps encouraged by the growing acceptance of ASM – in 2023. “I expect an unprecedented appreciation for how pentesting effectively exposes gaps in security, and this in turn will help to reinforce the importance of those all-important security basics. In 2023 I implore organizations to work with pentesters for the best, year on year result.”

Chad Peterson, MD at NetSPI, believes the nature and effectiveness of pentesting will evolve over 2023. “The attack surface has become more fluid, so you have to be able to scan for new assets and entry points continuously,” he says. “In 2023, organizations will combine traditional pentesting, which in many cases will still be required for regulatory needs, with the proactive approach of more continuous assessment of their attack surface. The result will be better awareness of the attack surface and more comprehensive traditional pentesting as there is more information about the true attack surface.”

Sample problem areas

SaaS

Ben Johnson, CTO and co-founder of Obsidian, chooses SaaS. “2023 will be the year of SSPM [SaaS security posture management] and securing SaaS,” he says. “But for that to happen, we must continue educating organizations on the risks of SaaS. In doing so, organizations must ensure their left-of-boom teams (vulnerability management and GRC) are able to reduce SaaS risk while ensuring their right-of-boom teams (security operations, incident response, threat hunting) have continuous threat management capabilities.” 

SaaS security has given organizations the ability to scale applied security, not just awareness. “Now is the time to distribute security hardening and operations to go with the distributed technology and distributed responsibility. As we know, the pandemic sped up the hybrid work model, and organizations that prioritized endpoint or public cloud security over the past couple years are now ready to secure SaaS and the modern workflow.”

The browser

Jonathan Lee, senior product manager at Menlo Security, focuses on the browser, which is possibly the biggest single threat surface. This is where users spend most of their time. “Vendors are now looking at ways to add security controls directly inside the browser,” he said. “Traditionally, this was done either as a separate endpoint agent or at the network edge, using a firewall or secure web gateway.”

The big players, Google and Microsoft, are also in on the act, providing built-in controls inside Chrome and Edge to secure at a browser level rather than the network edge, he added. “But browser attacks are increasing, with attackers exploiting new and old vulnerabilities, and developing new attack methods like HTTP smuggling. Remote browser isolation is becoming one of the key principles of zero trust security where no device or user – not even the browser – can be trusted.”

Notably, 2022 already saw investor interest in startups developing secure browsers – such as Red Access and LayerX.

The user

Ed Williams highlights the failure to account for the user – and uses ransomware as an example. “Cyber threats, including ransomware, will never be prevented by implementing shiny new products and solutions unless the underlying security issues are addressed. Therefore, in 2023,” he added, “I hope organizations shift their mindset away from feeling as though they need the latest tempting tech, and instead focus on consistently achieving the human-centric security basics. These basics include patching, strong passwords, and a detailed security policy.”

Visibility

If ‘management’ is the key word in ASM, ‘visibility’ is the key enabler. You can only manage what you can see. “In 2023, organizations should embrace the mindset of empowering their teams with visibility into assets and relationships and overcoming data silos between AppSec, infrastructure, and data security teams,” suggests Erkang Zheng, founder and CEO at JupiterOne.

He recalls the words of John Lambert: “Defenders think in lists. Attackers think in graphs. As long as this is true, attackers will win.” Attackers will win, especially if cybersecurity defenders cannot quickly understand graph-based relationships between data, networks, and user accounts in their own networks to limit the blast radius when they are under attack.

“Contextual intelligence is likely necessary to win in a threat vector where organizations face more complex, destructive, and irreversible threats than ever before,” he says. “This visibility and understanding are the primary benefits of attack surface management technologies and practices, along with secondary benefits such as compliance and evidence automation.”
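
What ‘thinking in graphs’ means can be made concrete in a few lines: model assets and access relationships as a directed graph, then ask whether a path exists from an exposed asset to a crown jewel. The sketch below uses the open-source networkx library with an invented topology.

```python
# Attack-path discovery over a toy asset graph using networkx.

import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "web-server"),    # public exposure
    ("web-server", "app-server"),  # network reachability
    ("app-server", "db-finance"),  # service-account access
    ("laptop-ceo", "file-share"),  # unrelated branch
])

if nx.has_path(g, "internet", "db-finance"):
    path = nx.shortest_path(g, "internet", "db-finance")
    print("Attack path:", " -> ".join(path))
    # -> Attack path: internet -> web-server -> app-server -> db-finance
```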

Marcus Fowler, CEO of Darktrace Federal, has no doubt that ASM will be a top priority for organizations in 2023. The problem is the attack surface is never static; it’s constantly evolving with the level of risk changing daily. “Tracking down the full extent of the attack surface is not something that can be left to human resources. It requires real-time data from an AI engine taking a hacker’s approach,” he says. He believes that most organizations currently miss as much as 50% of their true attack surface. 

“That’s where seeing AI take on the key ASM functions of discovery, assessment and prioritization, risk prevention and integration can expose the true level of exposed risk,” he added. “Only the automation and scalability of AI can provide the up-to-date, continuous copy of the internet that CISOs need to get a grip on the attack surface. Paired with AI’s unique understanding of an organization’s digital estate, you get an outside-in, inside-out risk management program that will be vital for the CISOs of tomorrow.”

Part of ASM is external attack surface management (EASM). Microsoft defines the external attack surface as “the entire area of an organization or system that is susceptible to an attack from an external source.” We should note that this excludes malicious or naive insiders, who should also be considered as part of a full ASM approach to cybersecurity. Nevertheless, there will be a growing number of EASM support systems released by security vendors during 2023. CrowdStrike, for example, announced in September 2022 that it would be buying EASM company Reposify, with an expectation to close during CrowdStrike’s fiscal third quarter.

“In response to evolving attack tactics and an expanded attack surface,” comments Karin Shopen, VP of cybersecurity solutions and services at Fortinet, “we expect a shift in the tools CISOs consider in 2023. When it comes to attack surface management, CISOs will shift from one-time assessments to constant and continuous early evaluation of their organization’s external attack surface. EASM solutions, which help provide organizations with an adversary’s view of their attack surface, will be at the top of their lists, as will machine learning and the use of seasoned threat hunters that offer takedown services.”

Furthermore, she added, “CISOs and security teams will more closely evaluate EASM solutions based on their ability to not only detect but prioritize and remediate threats using machine learning to help resource-depleted SOC teams.”

Chris Morales, CISO at Netenrich, describes his own approach. “I have one priority for 2023 – to be data driven for risk making decisions,” he says. “My commitment starting fiscal year 2023 is to be data driven with quantitative risk management practices. That means providing the business units with a dashboard and trending metrics to the state of assets, vulnerabilities and threats that comprise their attack surface. From this we can continually score threat likelihood and business impact to make informed decisions on where to best focus resources.”

It isn’t simple, but it is worth the effort. “Making this happen requires a tightly integrated security stack that shares data into a single aggregated data lake to threat model and answer questions.”
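
Reduced to its essentials, the quantitative scoring Morales describes might look like the sketch below: risk as likelihood times impact per asset, ranked for a dashboard. The assets and scores are invented; a real program would derive them from the telemetry in that aggregated data lake.

```python
# Toy risk ranking: likelihood x impact per asset, highest risk first.

assets = [
    # (name, threat likelihood 0-1, business impact 1-10) -- invented values
    ("payment-api",   0.7, 9),
    ("hr-portal",     0.4, 6),
    ("marketing-www", 0.8, 2),
]

for name, likelihood, impact in sorted(assets, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{name:<15} risk={likelihood * impact:4.1f}")
# payment-api tops the list: moderate likelihood but severe business impact.
```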

The concept is supported by Shira Shamban, CEO at Solvo. “In 2023, we are going to see a data-centric approach to cybersecurity emerge and grow,” she says. “At its core, cybersecurity is a problem of managing all the data, assets, and sensitive resources an organization has, and determining how to protect it. This sensitive data can often include PII, PHI or IP. This is the top concern for CISOs and security practitioners, so security approaches and products will begin to put data at the center, rather than focusing solely on the environments the data is in.”

The way forward in 2023

Attack surface management is nothing short of a complete methodology for providing effective cybersecurity. It doesn’t seek to protect everything, but concentrates on those areas of the IT infrastructure that can be attacked. There is no product that can provide ASM, but a growing number of products that can help. It requires complete visibility of all assets, and detailed knowledge of exploits so that assets can be protected. It is, like zero trust, a journey – one that is gaining traction and will gain more traction in 2023.

Mark Stamford describes the problem and offers his own route for the journey. “ASM tools produce a lot of noise that can send a security group down an endless number of rabbit holes. In the rush to simplify the problem, everything gets reported on and all kinds of vulnerability data gets included. There’s usually some shoddy logic applied which seems to state that if you have a lot of stuff facing the Internet you are more at risk, which piles further pressure on the security group. I’ve seen ASM tools which report on old SSL certs, low-level vulnerabilities, all kinds of stuff that really poses little to no risk.”

The route he proposes is to start by discovering all the assets, organizations, devices, and people that could create a problem. Then assess which could have a harmful impact. “A web server hosting some static pages in AWS, that connects to nothing, may cause a headache, but is probably not going to lead to a breach,” he says. “On the flip side, your Internet-accessible financial system is a key component.”

Next, assess how everything is connected – could an attacker get from A to B and cause an impact? “Draw a circle around that and start looking at how you protect it.” But importantly, “Accept that you don’t need to protect everything and move from there.”

The real problem, he concludes, is that data is everywhere. “This really does expand the attack surface, so you have to use a logical, risk-based approach which considers the context of your business – how you achieve what you are trying to achieve – and then protect it.”

Related: The Rise of Continuous Attack Surface Management

Related: Investors Bet on Cyberpion in Attack Surface Management Space

Related: IBM to Acquire Randori for Attack Surface Management Tech

Related: Attack Surface Management Play Censys Scores $35M Investment
