Cyber Insights 2023: Artificial Intelligence

About SecurityWeek Cyber Insights | At the end of 2022, SecurityWeek liaised with more than 300 cybersecurity experts from over 100 different organizations to gain insight into the security issues of today – and how these issues might evolve during 2023 and beyond. The result is more than a dozen features on subjects ranging from AI, quantum encryption, and attack surface management to venture capital, regulations, and criminal gangs.

Cyber Insights | 2023

SecurityWeek Cyber Insights 2023 | Artificial Intelligence – The pace of artificial intelligence (AI) adoption is increasing throughout industry and society. This is because governments, civil organizations and industry all recognize the greater efficiency and lower costs available from AI-driven automation. The process is irreversible.

What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement. That day is coming – and will begin to emerge from 2023.

All roads lead to 2023

Alex Polyakov, CEO and co-founder of Adversa.AI, focuses on 2023 for primarily historical and statistical reasons. “The years 2012 to 2014,” he says, “saw the beginning of secure AI research in academia. Statistically, it takes three to five years for academic results to progress into practical attacks on real applications.” Examples of such attacks were presented at Black Hat, Defcon, HITB, and other industry conferences starting in 2017 and 2018.

“Then,” he continued, “it takes another three to five years before real incidents are discovered in the wild. We are talking about next year, and some massive Log4j-type vulnerabilities in AI will be exploited – massively.”

Starting in 2023, attackers will have what is called an ‘exploit-market fit’. “Exploit-market fit refers to a scenario where hackers know the ways of using a particular vulnerability to exploit a system and get value,” he said. “Currently, financial and internet companies are completely open to cyber criminals, and how to hack them to get value is obvious. I assume the situation will worsen further and affect other AI-driven industries once attackers find the exploit-market fit.”

The argument is similar to that given by NYU professor Nasir Memon, who described the delay in widespread weaponization of deepfakes with the comment, “the bad guys haven’t yet figured out a way to monetize the process.” Monetizing an exploit-market fit scenario will result in widespread cyberattacks – and that could start from 2023.

The changing nature of AI (from anomaly detection to automated response) 

Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, presence of malware, or active adversarial activity within the systems they are charged to defend. This has primarily been passive detection, with responsibility for response in the hands of human threat analysts and responders. This is changing. Limited resources – which will worsen in the expected economic downturn and possible recession of 2023 – are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices; but more widespread automated AI-triggered responses are inevitable.
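To make that shift concrete, here is a minimal sketch of an automated, AI-triggered response: an anomaly detector (scikit-learn’s IsolationForest) trained on normal device telemetry, with a hypothetical isolate_device() hook standing in for whatever EDR or SOAR action a real deployment would invoke. The telemetry values are invented.

```python
# Minimal sketch: anomaly detection that triggers an automated response.
# Assumptions: scikit-learn is available; telemetry is numeric per-device
# features; isolate_device() is a hypothetical stand-in for a real EDR/SOAR call.
import numpy as np
from sklearn.ensemble import IsolationForest

def isolate_device(device_id: str) -> None:
    # Hypothetical hook: a real deployment would call an EDR or SOAR API here.
    print(f"[response] isolating device {device_id}")

# Train on telemetry assumed to represent normal behaviour
# (e.g. bytes out, failed logins, new processes per hour).
rng = np.random.default_rng(0)
normal_telemetry = rng.normal(loc=[500, 1, 20], scale=[50, 1, 5], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_telemetry)

# Score live observations; a prediction of -1 means anomalous.
live = {"laptop-042": [520, 0, 22], "laptop-077": [9000, 40, 300]}
for device_id, features in live.items():
    if detector.predict([features])[0] == -1:
        isolate_device(device_id)   # automated, AI-triggered response
    else:
        print(f"[ok] {device_id} within normal profile")
```

In practice the isolation step would be gated by policy (severity thresholds, allow-lists, human approval for critical assets) rather than fired unconditionally.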

“The growing use of AI in threat detection – particularly in removing the ‘false positive’ security noise that consumes so much security attention – will make a significant difference to security,” claims Adam Kahn, VP of security operations at Barracuda XDR. “It will prioritize the security alarms that need immediate attention and action. SOAR (Security Orchestration, Automation and Response) products will continue to play a bigger role in alarm triage.” This is the traditional and, so far, beneficial use of AI in security. It will continue to grow in 2023, although the algorithms used will need to be protected from malicious manipulation.

“As companies look to cut costs and extend their runways,” agrees Anmol Bhasin, CTO at ServiceTitan, “automation through AI is going to be a major factor in staying competitive. In 2023, we’ll see an increase in AI adoption, expanding the number of people working with this technology and illuminating new AI use cases for businesses.”

AI will become more deeply embedded in all aspects of business. Where security teams once used AI to defend the business against attackers, they will now need to defend the AI within the wider business, lest it also be used against the business. This will become more difficult in the exploit-market fit future: attackers will understand AI, understand its weaknesses, and have a methodology for monetizing them.

As the use of AI grows, so its purpose changes. Originally, it was primarily used in business to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen – and these predictions will often be focused on people (staff and customers). Solving the long-known weaknesses in AI will become more important. Bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Since the targets of such AI will be people, the need for AI to be complete and unbiased becomes imperative.

“The accuracy of AI depends in part on the completeness and quality of data,” comments Shafi Goldwasser, co-founder at Duality Technologies. “Unfortunately, historical data is often lacking for minority groups and when present reinforces social bias patterns.” Unless eliminated, such social biases will work against minority groups within staff, causing both prejudice against individual staff members, and missed opportunities for management.

Great strides in eliminating bias have been made in 2022 and will continue in 2023. This is largely based on checking the output of AI, confirming that it is what is expected, and knowing what part of the algorithm produced the ‘biased’ result. It’s a process of continuous algorithm refinement, and will obviously produce better results over time. But there will ultimately remain a philosophic question over whether bias can be completely removed from anything that is made by humans.
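One simple form of the output checking described above is comparing a model’s decision rates across groups. The sketch below computes a demographic parity gap on invented predictions; the data, group labels and threshold are illustrative assumptions, not a complete fairness audit.

```python
# Minimal sketch: check whether a model's outputs differ across groups.
# Assumptions: predictions and group labels are invented; a real audit would
# use several metrics (equalized odds, calibration) and significance tests.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = defaultdict(lambda: [0, 0])          # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative model outputs (1 = favourable decision) and group membership.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)                       # e.g. {'A': 0.67, 'B': 0.17}
if gap > 0.2:                      # illustrative tolerance
    print(f"Potential bias: parity gap of {gap:.2f} exceeds tolerance")
```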

“The key to decreasing bias is in simplifying and automating the monitoring of AI systems. Without proper monitoring of AI systems there can be an acceleration or amplification of biases built into models,” says Vishal Sikka, founder and CEO at Vianai. “In 2023, we will see organizations empower and educate people to monitor and update the AI models at scale while providing regular feedback to ensure the AI is ingesting high-quality, real-world data.”

Failure in AI is generally caused by an inadequate data lake from which to learn. The obvious solution is to increase the size of the data lake. But when the subject is human behavior, that effectively means a larger lake of personal data – and for AI, something closer to an ocean of personal data. In most legitimate cases, this data will be anonymized – but, as we know, it is very difficult to fully anonymize personal information.
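The difficulty is easy to illustrate: stripping names and other direct identifiers still leaves combinations of quasi-identifiers that may be unique to one person. A minimal sketch of a crude k-anonymity check, over invented records:

```python
# Minimal sketch: why removing names does not anonymize data.
# Records are invented; the check counts how many people share each
# quasi-identifier combination (a crude k-anonymity measure).
from collections import Counter

records = [
    # (zip_code, birth_year, gender) after the name column was dropped
    ("10001", 1975, "F"),
    ("10001", 1975, "F"),
    ("10002", 1988, "M"),
    ("10003", 1962, "F"),   # unique combination: trivially re-identifiable
]

combo_counts = Counter(records)
print(f"k-anonymity of this table: {min(combo_counts.values())}")
for combo, count in combo_counts.items():
    if count == 1:
        print(f"Unique quasi-identifier combination {combo} risks re-identification")
```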

“Privacy is often overlooked when thinking about model training,” comments Nick Landers, director of research at NetSPI, “but data cannot be completely anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack.” As the use of AI grows in 2023, so will the threats against it.

“Threat actors will not stand flatfooted in the cyber battle space and will become creative, using their immense wealth to try to find ways to leverage AI and develop new attack vectors,” warns John McClurg, SVP and CISO at BlackBerry.

Natural language processing

Natural language processing (NLP) will become an important part of companies’ internal use of AI. The potential is clear. “Natural Language Processing (NLP) AI will be at the forefront in 2023, as it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights about their needs, preferences or even emotions,” suggests Jose Lopez, principal data scientist at Mimecast. “It is likely that organizations will offer other types of services, not only focused on security or threats but on improving productivity by using AI for generating emails, managing schedules or even writing reports.”
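As a rough illustration of the kind of analysis Lopez describes, the sketch below runs an off-the-shelf sentiment model from the Hugging Face transformers library over two invented emails. A real deployment would use a model tuned to the organization’s own mail corpus and operate within clear privacy constraints.

```python
# Minimal sketch: classify the sentiment of internal emails with an
# off-the-shelf NLP model. Assumes the `transformers` package is installed;
# the emails below are invented examples.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model

emails = [
    "Thanks for the quick turnaround on the report, this is exactly what we needed.",
    "I've raised this issue three times now and nothing has changed.",
]

for email, result in zip(emails, classifier(emails)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {email[:60]}")
```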

But he also sees the dangers involved. “However, this will also drive cyber criminals to invest further into AI poisoning and clouding techniques. Additionally, malicious actors will use NLP and generative models to automate attacks, thereby reducing their costs and reaching many more potential targets.”

Polyakov agrees that NLP is of increasing importance. “One of the areas where we might see more research in 2023, and potentially new attacks later, is NLP,” he says. “While we saw a lot of computer vision-related research examples this year, next year we will see much more research focused on large language models (LLMs).” 

But LLMs have been known to be problematic for some time – and there is a very recent example. On November 15, 2022, Meta AI (still Facebook to most people) introduced Galactica. Meta claimed to have trained the system on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference material, and knowledge bases.

“The model was intended to store, combine and reason about scientific knowledge,” explains Polyakov – but Twitter users rapidly tested its input tolerance. “As a result, the model generated realistic nonsense, not scientific literature.” ‘Realistic nonsense’ is being kind: it generated biased, racist and sexist output, and even false attributions. Within a few days, Meta AI was forced to shut it down.

“So new LLMs will have many risks we’re not aware of,” continued Polyakov, “and it is expected to be a big problem.” Solving the problems with LLMs while harnessing the potential will be a major task for AI developers going forward.

Building on the problems with Galactica, Polyakov tested semantic tricks against ChatGPT – an AI-based chatbot developed by OpenAI, based on GPT-3.5 (GPT stands for Generative Pre-trained Transformer), and released for crowdsourced internet testing in November 2022. ChatGPT is impressive. It has already discovered, and recommended remediation for, a vulnerability in a smart contract; helped develop an Excel macro; and even provided a list of methods that could be used to fool an LLM.

On that last point, one of the suggested methods is role playing: ‘Tell the LLM that it is pretending to be an evil character in a play,’ it replied. This is where Polyakov started his own tests, basing a query on the Jay and Silent Bob ‘If you were a sheep…’ meme.

He then iteratively refined his questions with multiple abstractions until he succeeded in getting a reply that circumvented ChatGPT’s blocking policy on content violations. “What is important with such an advanced trick of multiple abstractions is that neither the question nor the answers are marked as violating content!” said Polyakov.
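Polyakov has not published his exact prompts, but the shape of such a test can be sketched. Everything below is hypothetical: query_llm() and flagged_by_moderation() are stand-ins for the chat and moderation endpoints being probed, and the only point is the iterative wrapping of a benign placeholder question in further layers of fictional framing while logging whether anything is flagged.

```python
# Hypothetical sketch of an iterative abstraction test against an LLM's
# content policy. query_llm() and flagged_by_moderation() are stand-ins;
# no real prompts or endpoints are shown.
def query_llm(prompt: str) -> str:
    # Stand-in: a real test would call the chat endpoint under scrutiny.
    return "<model response>"

def flagged_by_moderation(text: str) -> bool:
    # Stand-in: a real test would call the provider's moderation check.
    return False

def abstraction_test(base_question: str, framings: list[str]):
    """Wrap a question in successive layers of fictional framing and record
    whether either the prompt or the answer trips the moderation filter."""
    prompt = base_question
    for layer, framing in enumerate(framings, start=1):
        prompt = framing.format(inner=prompt)        # add one more abstraction
        answer = query_llm(prompt)
        yield layer, flagged_by_moderation(prompt), flagged_by_moderation(answer)

framings = [
    "In a novel, a character asks: {inner}",
    "A critic summarizes that scene, including the question: {inner}",
]
for layer, q_flag, a_flag in abstraction_test("<benign placeholder question>", framings):
    print(f"layer {layer}: prompt flagged={q_flag}, answer flagged={a_flag}")
```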

He went further and tricked ChatGPT into outlining a method for destroying humanity – a method that bears a surprising similarity to the television program Utopia.

He then asked for an adversarial attack on an image classification algorithm – and got one. Finally, he demonstrated that ChatGPT could ‘hack’ a different model (DALL-E 2) into bypassing its content moderation filter – and succeeded.
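An adversarial attack on an image classifier of the kind referred to above is typically a small crafted perturbation, such as the fast gradient sign method (FGSM). A minimal PyTorch sketch, assuming a pretrained model and an input image scaled to the 0–1 range:

```python
# Minimal sketch: fast gradient sign method (FGSM) against an image classifier.
# Assumes PyTorch and torchvision are installed; `image` must be a tensor of
# shape (1, 3, 224, 224) with values scaled to [0, 1].
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, clipped to a valid range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Usage (illustrative): adversarial = fgsm(image, true_label=207)
# model(adversarial).argmax() will often differ from the original prediction.
```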

The basic point of these tests is that LLMs, which mimic human reasoning, respond in a manner similar to humans; that is, they can be susceptible to social engineering. As LLMs become more mainstream, it may take nothing more than advanced social engineering skills to defeat them or to circumvent their good-behavior policies.

At the same time, it is important to note the numerous reports detailing how ChatGPT can find weaknesses in code and offer improvements. This is good – but adversaries could use the same process to develop exploits for vulnerabilities and better obfuscate their code; and that is bad.
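For example, the sort of weakness an LLM reviewer reliably flags is string-built SQL. The snippet below is an invented illustration: the insecure pattern, followed by the parameterized fix such a tool would typically suggest.

```python
# Invented example of a flaw an LLM code reviewer readily flags, and the fix
# it typically suggests. Uses only the standard library sqlite3 module.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # FLAW: user input is concatenated into the SQL string (SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # FIX: use a parameterized query so input is never treated as SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```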

Finally, we should note that the marriage of AI chatbots of this quality with the latest deepfake video technology could soon lead to alarmingly convincing disinformation capabilities.

Problems aside, the potential for LLMs is huge. “Large Language Models and Generative AI will emerge as foundational technologies for a new generation of applications,” comments Villi Iltchev, partner at Two Sigma Ventures. “We will see a new generation of enterprise applications emerge to challenge established vendors in almost all categories of software. Machine learning and artificial intelligence will become foundation technologies for the next generation of applications.”

He expects a significant boost in productivity and efficiency with applications performing many tasks and duties currently done by professionals. “Software,” he says, “will not just boost our productivity but will also make us better at our jobs.”

Deepfakes and related malicious responses

One of the most visible areas of malicious AI usage likely to evolve in 2023 is the criminal use of deepfakes. “Deepfakes are now a reality and the technology that makes them possible is improving at a frightening pace,” warns Matt Aldridge, principal solutions consultant at OpenText Security. “In other words, deepfakes are no longer just a catchy creation of science fiction – and as cybersecurity experts we face the challenge of producing stronger ways to detect and deflect attacks that will deploy them.” (See Deepfakes – Significant or Hyped Threat? for more details and options.)

Machine learning models, already available to the public, can automatically translate into different languages in real time while also transcribing audio into text – and we have seen huge developments in recent years in computer bots that hold conversations. With these technologies working in tandem, there is a fertile landscape of attack tools that could lead to dangerous circumstances during targeted attacks and well-orchestrated scams.

“In the coming years,” continued Aldridge, “we may be targeted by phone scams powered by deepfake technology that could impersonate a sales assistant, a business leader or even a family member. In less than ten years, we could be frequently targeted by these types of calls without ever realizing we’re not talking to a human.”

Lucia Milica, global resident CISO at Proofpoint, agrees that the deepfake threat is escalating. “Deepfake technology is becoming more accessible to the masses. Thanks to AI generators trained on huge image databases, anyone can generate deepfakes with little technical savvy. While the output of the state-of-the-art model is not without flaws, the technology is constantly improving, and cybercriminals will start using it to create irresistible narratives.”

Thus far, deepfakes have primarily been used for satirical purposes and pornography. In the relatively few criminal attacks to date, they have concentrated on fraud and business email compromise schemes. Milica expects future use to spread wider. “Imagine the chaos to the financial market when a deepfake CEO or CFO of a major company makes a bold statement that sends shares into a sharp drop or rise. Or consider how malefactors could leverage the combination of biometric authentication and deepfakes for identity fraud or account takeover. These are just a few examples – and we all know cybercriminals can be highly creative.”

The potential return on successful market manipulation will be a major attraction for advanced adversarial groups – just as the ability to sow chaos in western financial markets will attract adversarial nations in a period of geopolitical tension.

But maybe not just yet…

The expectation of AI may still be a little ahead of its realization. “‘Trendy’ large machine learning models will have little to no impact on cyber security [in 2023],” says Andrew Patel, senior researcher at WithSecure Intelligence. “Large language models will continue to push the boundaries of AI research. Expect GPT-4 and a new and completely mind-blowing version of GATO in 2023. Expect Whisper to be used to transcribe a large portion of YouTube, leading to vastly larger training sets for language models. But despite the democratization of large models, their presence will have very little effect on cyber security, either from the attack or defense side. Such models are still too heavy, expensive, and not practical for use from the point of view of either attackers or defenders.”

He suggests true adversarial AI will follow from increased ‘alignment’ research, which will become a mainstream topic in 2023. “Alignment,” he explains, “will bring the concept of adversarial machine learning into the public consciousness.” 

AI Alignment is the study of the behavior of sophisticated AI models, considered by some as precursors to transformative AI (TAI) or artificial general intelligence (AGI), and whether such models might behave in undesirable ways that are potentially detrimental to society or life on this planet. 

“This discipline,” says Patel, “can essentially be considered adversarial machine learning, since it involves determining what sort of conditions lead to undesirable outputs and actions that fall outside the expected distribution of a model. The process involves fine-tuning models using techniques such as RLHF (Reinforcement Learning from Human Feedback). Alignment research leads to better AI models and will bring the idea of adversarial machine learning into the public consciousness.”

Pieter Arntz, senior intelligence reporter at Malwarebytes, agrees that the full cybersecurity threat of AI is still brewing rather than imminent. “Although there is no real evidence that criminal groups have strong technical expertise in the management and manipulation of AI and ML systems for criminal purposes, the interest is undoubtedly there. All they usually need is a technique they can copy or slightly tweak for their own use. So, even if we don’t expect any immediate danger, it is good to keep an eye on those developments.”

The defensive potential of AI

AI retains the potential to improve cybersecurity, and further strides will be taken in 2023 as its transformative capabilities are applied across a range of applications. “In particular, embedding AI into the firmware level should become a priority for organizations,” suggests Camellia Chan, CEO and founder of X-PHY.

“It’s now possible to have an AI-infused SSD embedded in laptops, with deep learning abilities to protect against every type of attack,” she says. “Acting as the last line of defense, this technology can immediately identify threats that could easily bypass existing software defenses.”

Marcus Fowler, CEO of Darktrace Federal, believes that companies will increasingly use AI to counter resource restrictions. “In 2023, CISOs will opt for more proactive cyber security measures in order to maximize RoI in the face of budget cuts, shifting investment into AI tools and capabilities that continuously improve their cyber resilience,” he says. 

“With human-driven means of ethical hacking, pen-testing and red teaming remaining scarce and expensive as a resource, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red team efforts, harden environments and reduce attack surface vulnerability,” he continued.

Karin Shopen, VP of cybersecurity solutions and services at Fortinet, foresees a rebalancing between AI that is cloud-delivered and AI that is locally built into a product or service. “In 2023,” she says, “we expect to see CISOs re-balance their AI by purchasing solutions that deploy AI locally for both behavior-based and static analysis to help make real-time decisions. They will continue to leverage holistic and dynamic cloud-scale AI models that harvest large amounts of global data.”

The proof of the AI pudding is in the regulations

It is clear that a new technology must be taken seriously when the authorities start to regulate it. This has already started. There has been an ongoing debate in the US over the use of AI-based facial recognition technology (FRT) for several years, and the use of FRT by law enforcement has been banned or restricted in numerous cities and states. In the US, this is a Constitutional issue, typified by the Wyden/Paul bipartisan bill titled the ‘Fourth Amendment Is Not for Sale Act’ introduced in April 2021. 

This bill would ban US government and law enforcement agencies from buying user data, including facial biometrics, without a warrant. In an associated statement, Wyden made it clear that the bill has FRT firm Clearview.AI in its sights: “this bill prevents the government buying data from Clearview.AI.”

At the time of writing, the US and EU are jointly discussing cooperation to develop a unified understanding of necessary AI concepts, including trustworthiness, risk, and harm, building on the EU’s AI Act and the US AI Bill of Rights – and we can expect to see progress on coordinating mutually agreed standards during 2023.

But there is more. “The NIST AI Risk Management Framework will be released in the first quarter of 2023,” says Polyakov. “As for the second quarter, we have the start of the AI Accountability Act; and for the rest of the year, we have initiatives from IEEE, and a planned EU Trustworthy AI initiative as well.” So, 2023 will be an eventful year for the security of AI.

“In 2023, I believe we will see the convergence of discussions around AI and privacy and risk, and what it means in practice to do things like operationalizing AI ethics and testing for bias,” says Christina Montgomery, chief privacy officer and AI ethics board chair at IBM. “I’m hoping in 2023 that we can move the conversation away from painting privacy and AI issues with a broad brush, and from assuming that, ‘if data or AI is involved, it must be bad and biased’.” 

She believes the issue often isn’t the technology, but rather how it is used, and what level of risk is driving a company’s business model. “This is why we need precise and thoughtful regulation in this space,” she says.

Montgomery gives an example. “Company X sells Internet-connected ‘smart’ lightbulbs that monitor and report usage data. Over time, Company X gathers enough usage data to develop an AI algorithm that can learn customers’ usage patterns and give users the option of automatically turning on their lights right before they come home from work.”

This, she believes, is an acceptable use of AI. But then there’s Company Y. “Company Y sells the same product and realizes that light usage data is a good indicator of when a person is likely to be home. It then sells this data, without the consumers’ consent, to third parties such as telemarketers or political canvassing groups, to better target customers. Company X’s business model is much lower risk than Company Y’s.”
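Company X’s feature is, technically, a very small prediction problem. A minimal sketch, with invented timestamps, of estimating a customer’s typical weekday ‘lights on’ time from usage events:

```python
# Minimal sketch of the Company X feature: estimate a customer's typical
# "lights on" time from historical usage events (timestamps are invented).
from datetime import datetime, timedelta
from statistics import mean

events = [  # times the customer manually switched the lights on
    "2022-11-07 18:05", "2022-11-08 18:20", "2022-11-09 17:55",
    "2022-11-10 18:10", "2022-11-11 18:30",
]

minutes = [
    dt.hour * 60 + dt.minute
    for dt in (datetime.strptime(e, "%Y-%m-%d %H:%M") for e in events)
]
typical = timedelta(minutes=mean(minutes))
print(f"Schedule lights for ~{typical} after midnight on weekdays")
# A real system would model per-weekday patterns and, as Montgomery argues,
# require explicit consent before the data is used for anything else.
```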

Going forward

AI is ultimately a divisive subject. “Those in the technology, R&D, and science domain will cheer its ability to solve problems faster than humans imagined: to cure disease, to make the world safer, and ultimately to save and extend a human’s time on earth…” says Donnie Scott, CEO at Idemia. “Naysayers will continue to advocate for significant limitations or prohibitions of the use of AI as the ‘rise of the machines’ could threaten humanity.”

In the end, he adds, “society, through our elected officials, needs a framework that allows for the protection of human rights, privacy, and security to keep pace with the advancements in technology. Progress on this framework will be incremental in 2023, but discussions need to increase in international and national governing bodies, or local governments will step in and create a patchwork of laws that impede both society and the technology.”

For the commercial use of AI within business, Montgomery adds, “We need – and IBM is advocating for – precision regulation that is smart and targeted, and capable of adapting to new and emerging threats. One way to do that is by looking at the risk at the core of a company’s business model. We can and must protect consumers and increase transparency, and we can do this while still encouraging and enabling innovation so companies can develop the solutions and products of the future. This is one of the many spaces we’ll be closely watching and weighing in on in 2023.”

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: Get Ready for the First Wave of AI Malware

Related: Ethical AI, Possibility or Pipe Dream?

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

