{"id":16783,"date":"2023-01-31T17:32:17","date_gmt":"2023-01-31T16:32:17","guid":{"rendered":"https:\/\/www.show.it\/cyber-insights-2023-artificial-intelligence\/"},"modified":"2023-01-31T17:32:17","modified_gmt":"2023-01-31T16:32:17","slug":"cyber-insights-2023-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.show.it\/en\/cyber-insights-2023-artificial-intelligence\/","title":{"rendered":"Cyber Insights 2023: Artificial Intelligence"},"content":{"rendered":"<div class=\"is-content-justification-center is-nowrap is-layout-flex wp-container-10 wp-block-group sw-cyber-insight has-background\">\n<div class=\"is-layout-constrained wp-block-group\">\n<div class=\"wp-block-group__inner-container\">\n<p><strong>About SecurityWeek Cyber Insights |<\/strong> <em>At the end of 2022,\u00a0SecurityWeek\u00a0liaised with more than 300 cybersecurity experts from over 100 different organizations to gain insight into the security issues of today \u2013 and how these issues might evolve during 2023 and beyond. 
The result is more than a dozen features on subjects ranging from AI, quantum encryption, and attack surface management to venture capital, regulations, and criminal gangs.<\/em><\/p>\n<\/div>\n<\/div>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"529\" src=\"https:\/\/www.securityweek.com\/wp-content\/uploads\/2023\/01\/Cyber_Insights-Logo-vertical-1024x529.png\" alt=\"Cyber Insights | 2023\" class=\"wp-image-32209\" srcset=\"https:\/\/www.securityweek.com\/wp-content\/uploads\/2023\/01\/Cyber_Insights-Logo-vertical-1024x529.png 1024w, https:\/\/www.securityweek.com\/wp-content\/uploads\/2023\/01\/Cyber_Insights-Logo-vertical-360x186.png 360w, https:\/\/www.securityweek.com\/wp-content\/uploads\/2023\/01\/Cyber_Insights-Logo-vertical-768x397.png 768w, https:\/\/www.securityweek.com\/wp-content\/uploads\/2023\/01\/Cyber_Insights-Logo-vertical.png 1456w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\"><\/figure>\n<\/div>\n<p><strong>SecurityWeek Cyber Insights 2023 | Artificial Intelligence<\/strong> \u2013 The pace of artificial intelligence (AI) adoption is increasing throughout industry and society. This is because governments, civil organizations and industry all recognize the greater efficiency and lower costs available from the use of AI-generated automation. The process is irreversible.<\/p>\n<p>What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement. That day is coming \u2013 and it will begin to emerge in 2023.<\/p>\n<h2 class=\"has-medium-font-size\"><strong>All roads lead to 2023<\/strong><\/h2>\n<p>Alex Polyakov, CEO and co-founder of Adversa.AI, focuses on 2023 for primarily historical and statistical reasons. \u201cThe years 2012 to 2014,\u201d he says, \u201csaw the beginning of secure AI research in academia. 
Statistically, it takes three to five years for academic results to progress into practical attacks on real applications.\u201d Examples of such attacks were presented at Black Hat, Defcon, HITB, and other industry conferences starting in 2017 and 2018.\u00a0<\/p>\n<p>\u201cThen,\u201d he continued, \u201cit takes another three to five years before real incidents are discovered in the wild. We are talking about next year, and some massive Log4j-type vulnerabilities in AI will be exploited \u2013 massively.\u201d<\/p>\n<p>Starting from 2023, attackers will have what is called an \u2018exploit-market fit\u2019. \u201cExploit-market fit refers to a scenario where hackers know the ways of using a particular vulnerability to exploit a system and get value,\u201d he said. \u201cCurrently, financial and internet companies are completely open to cyber criminals, and the way to hack them to get value is obvious. I assume the situation will turn further for the worse and affect other AI-driven industries once attackers find the exploit-market fit.\u201d<\/p>\n<p>The argument is similar to that given by NYU professor <a href=\"https:\/\/www.securityweek.com\/deepfakes-significant-or-hyped-threat\">Nasir Memon<\/a>, who described the delay in widespread weaponization of deepfakes with the comment, \u201cthe bad guys haven\u2019t yet figured a way to monetize the process.\u201d Monetizing an exploit-market fit scenario will result in widespread cyberattacks \u2013 and that could start in 2023.<\/p>\n<h2 class=\"has-medium-font-size\"><strong>The changing nature of AI (from anomaly detection to automated response)\u00a0<\/strong><\/h2>\n<p>Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, presence of malware, or active adversarial activity within the systems they are charged to defend. 
This has primarily been passive detection, with responsibility for response in the hands of human threat analysts and responders. This is changing. Limited resources \u2013 which will worsen in the expected economic downturn and possible recession of 2023 \u2013 are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices; but more widespread automated AI-triggered responses are inevitable.<\/p>\n<p>\u201cThe growing use of AI in threat detection \u2013 particularly in removing the \u2018false positive\u2019 security noise that consumes so much security attention \u2013 will make a significant difference to security,\u201d claims Adam Kahn, VP of security operations at Barracuda XDR. \u201cIt will prioritize the security alarms that need immediate attention and action. SOAR (Security Orchestration, Automation and Response) products will continue to play a bigger role in alarm triage.\u201d This is the traditional and, so far, beneficial use of AI in security. It will continue to grow in 2023, although the algorithms used will need to be protected from malicious manipulation.<\/p>\n<p>\u201cAs companies look to cut costs and extend their runways,\u201d agrees Anmol Bhasin, CTO at ServiceTitan, \u201cautomation through AI is going to be a major factor in staying competitive. In 2023, we\u2019ll see an increase in AI adoption, expanding the number of people working with this technology and illuminating new AI use cases for businesses.\u201d<\/p>\n<p>AI will become more deeply embedded in all aspects of business. Where security teams once used AI to defend the business against attackers, they will now need to defend the AI within the wider business, lest it also be used against the business. 
This will become more difficult in the exploit-market fit future \u2013 attackers will understand AI, understand its weaknesses, and have a methodology for monetizing those weaknesses.<\/p>\n<p>As the use of AI grows, so the nature of its purpose changes. Originally, it was primarily used in business to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen \u2013 and these predictions will often be focused on people (staff and customers). Solving the long-known weaknesses in AI will become more important. Bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Since the targets of such AI will be people, the need for AI to be complete and unbiased becomes imperative.<\/p>\n<p>\u201cThe accuracy of AI depends in part on the completeness and quality of data,\u201d comments Shafi Goldwasser, co-founder at Duality Technologies. \u201cUnfortunately, historical data is often lacking for minority groups and, when present, reinforces social bias patterns.\u201d Unless eliminated, such social biases will work against minority groups within staff, causing both prejudice against individual staff members and missed opportunities for management.<\/p>\n<p>Great strides in eliminating bias were made in 2022 and will continue in 2023. This is largely based on checking the output of AI, confirming that it is what is expected, and knowing what part of the algorithm produced the \u2018biased\u2019 result. It\u2019s a process of continuous algorithm refinement that will obviously produce better results over time. But there will ultimately remain a philosophical question over whether bias can be completely removed from anything that is made by humans.<\/p>\n<p>\u201cThe key to decreasing bias is in simplifying and automating the monitoring of AI systems. 
Without proper monitoring of AI systems there can be an acceleration or amplification of biases built into models,\u201d says Vishal Sikka, founder and CEO at Vianai. \u201cIn 2023, we will see organizations empower and educate people to monitor and update the AI models at scale while providing regular feedback to ensure the AI is ingesting high-quality, real-world data.\u201d<\/p>\n<p>Failure in AI is generally caused by an inadequate data lake from which to learn. The obvious solution is to increase the size of the data lake. But when the subject is human behavior, that effectively means an increased lake of personal data \u2013 and for AI, less a lake and more an ocean of personal data. In most legitimate cases, this data will be anonymized \u2013 but as we know, it is very difficult to fully anonymize personal information.<\/p>\n<p>\u201cPrivacy is often overlooked when thinking about model training,\u201d comments Nick Landers, director of research at NetSPI, \u201cbut data cannot be completely anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack.\u201d As the use of AI grows in 2023, so will the threats against it.<\/p>\n<p>\u201cThreat actors will not stand flatfooted in the cyber battle space and will become creative, using their immense wealth to try to find ways to leverage AI and develop new attack vectors,\u201d warns John McClurg, SVP and CISO at BlackBerry.<\/p>\n<h2 class=\"has-medium-font-size\"><strong>Natural language processing<\/strong><\/h2>\n<p>Natural language processing (NLP) will become an important part of companies\u2019 internal use of AI. The potential is clear. 
\u201cNatural Language Processing (NLP) AI will be at the forefront in 2023, as it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights about their needs, preferences or even emotions,\u201d suggests Jose Lopez, principal data scientist at Mimecast. \u201cIt is likely that organizations will offer other types of services, not only focused on security or threats but on improving productivity by using AI for generating emails, managing schedules or even writing reports.\u201d<\/p>\n<p>But he also sees the dangers involved. \u201cHowever, this will also drive cyber criminals to invest further into AI poisoning and clouding techniques. Additionally, malicious actors will use NLP and generative models to automate attacks, thereby reducing their costs and reaching many more potential targets.\u201d<\/p>\n<p>Polyakov agrees that NLP is of increasing importance. \u201cOne of the areas where we might see more research in 2023, and potentially new attacks later, is NLP,\u201d he says. \u201cWhile we saw a lot of computer vision-related research examples this year, next year we will see much more research focused on large language models (LLMs).\u201d\u00a0<\/p>\n<p>But LLMs have been known to be problematic for some time \u2013 and there is a very recent example. On November 15, 2022, <a href=\"https:\/\/www.securityweek.com\/facebook-trumpets-massive-new-supercomputer\">Meta AI<\/a> (still Facebook to most people) introduced Galactica. Meta claimed to have trained the system on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference material, and knowledge bases.\u00a0<\/p>\n<p>\u201cThe model was intended to store, combine and reason about scientific knowledge,\u201d explains Polyakov \u2013 but Twitter users rapidly tested its input tolerance. 
\u201cAs a result, the model generated realistic nonsense, not scientific literature.\u201d \u2018Realistic nonsense\u2019 is being kind: it generated biased, racist and sexist responses, and even false attributions. Within a few days, Meta AI was forced to shut it down.<\/p>\n<p>\u201cSo new LLMs will have many risks we\u2019re not aware of,\u201d continued Polyakov, \u201cand it is expected to be a big problem.\u201d Solving the problems with LLMs while harnessing their potential will be a major task for AI developers going forward.<\/p>\n<p>Building on the problems with Galactica, Polyakov tested semantic tricks against ChatGPT \u2013 an AI-based chatbot developed by OpenAI, based on GPT-3.5 (GPT stands for Generative Pre-trained Transformer), and released for crowdsourced internet testing in November 2022. ChatGPT is impressive. It has already discovered, and recommended remediation for, a vulnerability in a smart contract, helped develop an Excel macro, and even provided a list of methods that could be used to fool an LLM.<\/p>\n<p>On that last point, one of the methods it offered is role playing: \u2018Tell the LLM that it is pretending to be an evil character in a play,\u2019 it replied. This is where Polyakov started his own tests, basing a query on the Jay and Silent Bob \u2018If you were a sheep\u2026\u2019 meme.<\/p>\n<p>He then iteratively refined his questions with multiple abstractions until he succeeded in getting a reply that circumvented ChatGPT\u2019s blocking policy on content violations. \u201cWhat is important with such an advanced trick of multiple abstractions is that neither the question nor the answers are marked as violating content!\u201d said Polyakov.<\/p>\n<p>He went further and tricked ChatGPT into outlining a method for destroying humanity \u2013 a method that bears a surprising similarity to the television program Utopia.<\/p>\n<p>He then asked for an adversarial attack on an image classification algorithm \u2013 and got one. 
Finally, he used ChatGPT to try to \u2018hack\u2019 a different AI model (DALL-E 2) into bypassing its content moderation filter. He succeeded.<\/p>\n<p>The basic point of these tests is that LLMs, which mimic human reasoning, respond in a manner similar to humans; that is, they can be susceptible to social engineering. As LLMs become more mainstream in the future, it may take nothing more than advanced social engineering skills to defeat them or circumvent their good behavior policies.<\/p>\n<p>At the same time, it is important to note the numerous reports detailing how ChatGPT can find weaknesses in code and offer improvements. This is good \u2013 but adversaries could use the same process to develop exploits for vulnerabilities and better obfuscate their code; and that is bad.<\/p>\n<p>Finally, we should note that the marriage of AI chatbots of this quality with the latest deepfake video technology could soon lead to alarmingly convincing disinformation capabilities.<\/p>\n<p>Problems aside, the potential for LLMs is huge. \u201cLarge Language Models and Generative AI will emerge as foundational technologies for a new generation of applications,\u201d comments Villi Iltchev, partner at Two Sigma Ventures. \u201cWe will see a new generation of enterprise applications emerge to challenge established vendors in almost all categories of software. Machine learning and artificial intelligence will become foundation technologies for the next generation of applications.\u201d<\/p>\n<p>He expects a significant boost in productivity and efficiency with applications performing many tasks and duties currently done by professionals. 
\u201cSoftware,\u201d he says, \u201cwill not just boost our productivity but will also make us better at our jobs.\u201d<\/p>\n<h2 class=\"has-medium-font-size\"><strong>Deepfakes and related malicious responses<\/strong><\/h2>\n<p>One of the most visible areas of malicious AI usage likely to evolve in 2023 is the criminal use of deepfakes. \u201cDeepfakes are now a reality and the technology that makes them possible is improving at a frightening pace,\u201d warns Matt Aldridge, principal solutions consultant at OpenText Security. \u201cIn other words, deepfakes are no longer just a catchy creation of science fiction \u2013 and as cybersecurity experts we have the challenge to produce stronger ways to detect and deflect attacks that will deploy them.\u201d (See <a href=\"https:\/\/www.securityweek.com\/deepfakes-significant-or-hyped-threat\"><em>Deepfakes \u2013 Significant or Hyped Threat?<\/em><\/a> for more details and options.)<\/p>\n<p>Machine learning models, already available to the public, can automatically translate between languages in real time while also transcribing audio into text \u2013 and in recent years we\u2019ve seen huge developments in computer bots capable of holding conversations. With these technologies working in tandem, there is a fertile landscape of attack tools that could lead to dangerous circumstances during targeted attacks and well-orchestrated scams.\u00a0<\/p>\n<p>\u201cIn the coming years,\u201d continued Aldridge, \u201cwe may be targeted by phone scams powered by deepfake technology that could impersonate a sales assistant, a business leader or even a family member. In less than ten years, we could be frequently targeted by these types of calls without ever realizing we\u2019re not talking to a human.\u201d<\/p>\n<p>Lucia Milica, global resident CISO at Proofpoint, agrees that the deepfake threat is escalating. \u201cDeepfake technology is becoming more accessible to the masses. 
Thanks to AI generators trained on huge image databases, anyone can generate deepfakes with little technical savvy. While the output of the state-of-the-art model is not without flaws, the technology is constantly improving, and cybercriminals will start using it to create irresistible narratives.\u201d<\/p>\n<p>Thus far, deepfakes have primarily been used for satirical purposes and pornography. In the relatively few cybercriminal attacks, they have concentrated on fraud and business email compromise schemes. Milica expects future use to spread wider. \u201cImagine the chaos to the financial market when a deepfake CEO or CFO of a major company makes a bold statement that sends shares into a sharp drop or rise. Or consider how malefactors could leverage the combination of biometric authentication and deepfakes for identity fraud or account takeover. These are just a few examples \u2013 and we all know cybercriminals can be highly creative.\u201d<\/p>\n<p>The potential return on successful market manipulation will be a major attraction for advanced adversarial groups \u2013 just as the introduction of financial chaos into Western financial markets would be attractive to adversarial nations in a period of geopolitical tension.<\/p>\n<h2 class=\"has-medium-font-size\"><strong>But maybe not just yet\u2026<\/strong><\/h2>\n<p>The expectation of AI may still be a little ahead of its realization. \u201c\u2018Trendy\u2019 large machine learning models will have little to no impact on cyber security [in 2023],\u201d says Andrew Patel, senior researcher at WithSecure Intelligence. \u201cLarge language models will continue to push the boundaries of AI research. Expect GPT-4 and a new and completely mind-blowing version of GATO in 2023. Expect Whisper to be used to transcribe a large portion of YouTube, leading to vastly larger training sets for language models. 
But despite the democratization of large models, their presence will have very little effect on cyber security, either from the attack or defense side. Such models are still too heavy, expensive, and not practical for use from the point of view of either attackers or defenders.\u201d<\/p>\n<p>He suggests true adversarial AI will follow from increased \u2018alignment\u2019 research, which will become a mainstream topic in 2023. \u201cAlignment,\u201d he explains, \u201cwill bring the concept of adversarial machine learning into the public consciousness.\u201d\u00a0<\/p>\n<p>AI alignment is the study of the behavior of sophisticated AI models, considered by some as precursors to transformative AI (TAI) or artificial general intelligence (AGI), and whether such models might behave in undesirable ways that are potentially detrimental to society or life on this planet.\u00a0<\/p>\n<p>\u201cThis discipline,\u201d says Patel, \u201ccan essentially be considered adversarial machine learning, since it involves determining what sort of conditions lead to undesirable outputs and actions that fall outside the expected distribution of a model. The process involves fine-tuning models using techniques such as RLHF \u2013 Reinforcement Learning from Human Feedback. Alignment research leads to better AI models and will bring the idea of adversarial machine learning into the public consciousness.\u201d<\/p>\n<p>Pieter Arntz, senior intelligence reporter at Malwarebytes, agrees that the full cybersecurity threat of AI is still brewing rather than imminent. \u201cAlthough there is no real evidence that criminal groups have strong technical expertise in the management and manipulation of AI and ML systems for criminal purposes, the interest is undoubtedly there. All they usually need is a technique they can copy or slightly tweak for their own use. 
So, even if we don\u2019t expect any immediate danger, it is good to keep an eye on those developments.\u201d<\/p>\n<h2 class=\"has-medium-font-size\"><strong>The defensive potential of AI<\/strong><\/h2>\n<p>AI retains the capacity to improve cybersecurity, and further strides will be taken in 2023 thanks to its transformative potential across a range of applications. \u201cIn particular, embedding AI into the firmware level should become a priority for organizations,\u201d suggests Camellia Chan, CEO and founder of X-PHY.<\/p>\n<p>\u201cIt\u2019s now possible to have an AI-infused SSD embedded into laptops, with its deep learning abilities to protect against every type of attack,\u201d she says. \u201cActing as the last line of defense, this technology can immediately identify threats that could easily bypass existing software defenses.\u201d<\/p>\n<p>Marcus Fowler, CEO of Darktrace Federal, believes that companies will increasingly use AI to counter resource restrictions. \u201cIn 2023, CISOs will opt for more proactive cyber security measures in order to maximize ROI in the face of budget cuts, shifting investment into AI tools and capabilities that continuously improve their cyber resilience,\u201d he says.\u00a0<\/p>\n<p>\u201cWith human-driven means of ethical hacking, pen-testing and red teaming remaining scarce and expensive as a resource, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red team efforts, harden environments and reduce attack surface vulnerability,\u201d he continued.<\/p>\n<p>Karin Shopen, VP of cybersecurity solutions and services at Fortinet, foresees a rebalancing between AI that is cloud-delivered and AI that is locally built into a product or service. \u201cIn 2023,\u201d she says, \u201cwe expect to see CISOs re-balance their AI by purchasing solutions that deploy AI locally for both behavior-based and static analysis to help make real-time decisions. 
They will continue to leverage holistic and dynamic cloud-scale AI models that harvest large amounts of global data.\u201d<\/p>\n<h2 class=\"has-medium-font-size\"><strong>The proof of the AI pudding is in the regulations<\/strong><\/h2>\n<p>It is clear that a new technology must be taken seriously when the authorities start to regulate it. This has already started. There has been an ongoing debate in the US over the use of AI-based facial recognition technology (FRT) for several years, and the use of FRT by law enforcement has been banned or restricted in numerous cities and states. In the US, this is a Constitutional issue, typified by the Wyden\/Paul bipartisan bill titled the \u2018Fourth Amendment Is Not for Sale Act\u2019, introduced in April 2021.\u00a0<\/p>\n<p>This bill would ban US government and law enforcement agencies from buying user data without a warrant. This would include their facial biometrics. In an associated statement, Wyden made it clear that FRT firm Clearview.AI was in its sights: \u201cthis bill prevents the government buying data from Clearview.AI.\u201d<\/p>\n<p>At the time of writing, the US and EU are jointly discussing cooperation to develop a unified understanding of necessary AI concepts, including trustworthiness, risk, and harm, building on the EU\u2019s AI Act and the US AI <a href=\"https:\/\/www.securityweek.com\/white-house-unveils-artificial-intelligence-%E2%80%98bill-rights%E2%80%99\">Bill of Rights<\/a> \u2013 and we can expect to see progress on coordinating mutually agreed standards during 2023.<\/p>\n<p>But there is more. \u201cThe NIST AI Risk Management Framework will be released in the first quarter of 2023,\u201d says Polyakov. 
\u201cAs for the second quarter, we have the start of the AI Accountability Act; and for the rest of the year, we have initiatives from IEEE, and a planned EU Trustworthy AI initiative as well.\u201d So, 2023 will be an eventful year for the security of AI.<\/p>\n<p>\u201cIn 2023, I believe we will see the convergence of discussions around AI and privacy and risk, and what it means in practice to do things like operationalizing AI ethics and testing for bias,\u201d says Christina Montgomery, chief privacy officer and AI ethics board chair at IBM. \u201cI\u2019m hoping in 2023 that we can move the conversation away from painting privacy and AI issues with a broad brush, and from assuming that, \u2018if data or AI is involved, it must be bad and biased\u2019.\u201d\u00a0<\/p>\n<p>She believes the issue often isn\u2019t the technology, but rather how it is used, and what level of risk is driving a company\u2019s business model. \u201cThis is why we need precise and thoughtful regulation in this space,\u201d she says.<\/p>\n<p>Montgomery gives an example. \u201cCompany X sells Internet-connected \u2018smart\u2019 lightbulbs that monitor and report usage data. Over time, Company X gathers enough usage data to develop an AI algorithm that can learn customers\u2019 usage patterns and give users the option of automatically turning on their lights right before they come home from work.\u201d<\/p>\n<p>This, she believes, is an acceptable use of AI. But then there\u2019s Company Y. \u201cCompany Y sells the same product and realizes that light usage data is a good indicator for when a person is likely to be home. It then sells this data, without the consumers\u2019 consent, to third parties such as telemarketers or political canvassing groups, to better target customers. 
Company X\u2019s business model is much lower risk than Company Y\u2019s.\u201d<\/p>\n<h2 class=\"has-medium-font-size\"><strong>Going forward<\/strong><\/h2>\n<p>AI is ultimately a divisive subject. \u201cThose in the technology, R&#038;D, and science domain will cheer its ability to solve problems faster than humans imagined. To cure disease, to make the world safer, and ultimately to save and extend a human\u2019s time on earth\u2026\u201d says Donnie Scott, CEO at Idemia. \u201cNaysayers will continue to advocate for significant limitations or prohibitions of the use of AI as the \u2018rise of the machines\u2019 could threaten humanity.\u201d<\/p>\n<p>In the end, he adds, \u201csociety, through our elected officials, needs a framework that allows for the protection of human rights, privacy, and security to keep pace with the advancements in technology. Progress will be incremental in this framework advancement in 2023 but discussions need to increase in international and national governing bodies, or local governments will step in and create a patchwork of laws that impede both society and the technology.\u201d<\/p>\n<p>For the commercial use of AI within business, Montgomery adds, \u201cWe need \u2013 and IBM is advocating for \u2013 precision regulation that is smart and targeted, and capable of adapting to new and emerging threats. One way to do that is by looking at the risk at the core of a company\u2019s business model. We can and must protect consumers and increase transparency, and we can do this while still encouraging and enabling innovation so companies can develop the solutions and products of the future. 
This is one of the many spaces we\u2019ll be closely watching and weighing in on in 2023.\u201d<\/p>\n<p><strong>Related<\/strong>: <a href=\"https:\/\/www.securityweek.com\/bias-artificial-intelligence-can-ai-be-trusted\">Bias in Artificial Intelligence: Can AI be Trusted?<\/a><\/p>\n<p><strong>Related<\/strong>: <a href=\"https:\/\/www.securityweek.com\/get-ready-first-wave-ai-malware\">Get Ready for the First Wave of AI Malware<\/a><\/p>\n<p><strong>Related<\/strong>: <a href=\"https:\/\/www.securityweek.com\/ethical-ai-possibility-or-pipe-dream\">Ethical AI, Possibility or Pipe Dream?<\/a><\/p>\n<p><strong>Related<\/strong>: <a href=\"https:\/\/www.securityweek.com\/becoming-elon-musk-%E2%80%93-danger-artificial-intelligence\">Becoming Elon Musk \u2013 the Danger of Artificial Intelligence<\/a><\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/www.securityweek.com\/cyber-insights-2023-artificial-intelligence\/\">Cyber Insights 2023: Artificial Intelligence<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/www.securityweek.com\/\">SecurityWeek<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>About SecurityWeek Cyber Insights | At the end of 2022,\u00a0SecurityWeek\u00a0liaised with more than 300 cybersecurity experts from over 100 different organizations to gain insight into the security issues of today \u2013 and how these issues might evolve during 2023 and beyond. 
The result is more than a dozen features on subjects ranging from AI, quantum [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":16784,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[91,142,27,143,115],"tags":[],"class_list":["post-16783","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-artificial-inteligence","category-cybercrime","category-cyberinsights2023","category-threat-intelligence"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/posts\/16783","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/comments?post=16783"}],"version-history":[{"count":0,"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/posts\/16783\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/media\/16784"}],"wp:attachment":[{"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/media?parent=16783"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/categories?post=16783"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.show.it\/en\/wp-json\/wp\/v2\/tags?post=16783"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}