On September 20, 2023, CED President Dr. Lori Esposito Murray hosted a Special Trustee Roundtable on Opportunities and Challenges of AI and its Impact on Cybersecurity, a discussion of how CEOs should implement the rapidly advancing capabilities of artificial intelligence (AI). Trustees representing a wide variety of industries and economic sectors discussed the impact of AI on their companies and sectors and shared the approaches and actions they are taking to optimize the value of the technology at these early stages and to mitigate its risks.
Rapidly advancing AI is changing the entire landscape of business. This requires that the CEO, the C-Suite, and the organization’s Board of Directors take responsibility for decisions about the role of AI in their business operations and about how to mitigate generative AI’s accelerating cybersecurity threat to those operations, even in the absence of formal regulations, which may in any event be on the horizon. More broadly, both business leaders and policymakers must consider the larger challenge of how to structure guardrails around the use of AI that allow innovation while protecting companies and the global economy from AI-related threats, including cybersecurity risks, privacy violations, deep fakes, untrustworthy data, and other harms.
Trusted Insights for What's Ahead™
- Generative AI may be distinguished from “precision AI”: precision AI has no room for error and generally offers one correct answer to a question (for instance, a self-driving car must stop at a red light). Precision AI is essential for cybersecurity.
- AI is already impacting a surprisingly wide variety of business operations across a number of industries and economic sectors.
- The expected pace of change amplifies the need for CEOs, their management teams, and their Boards seeking to use AI to increase efficiency and productivity to commit to: elevating technological expertise within their organizations; crafting governance structures to manage AI risks; developing a code of conduct and ethics for the application of AI; controlling for risks; checking for bias in AI applications; developing guidelines for vendors and platforms; and educating their employees.
- Leaders across a wide variety of industries and sectors strongly believe that government engagement with the private sector is critical to developing a framework on AI and cybersecurity. Such collaborative engagement helps ensure that companies can swiftly implement safeguards and that frameworks remain flexible enough to adapt to rapidly evolving technologies.
- Survey research suggests that approximately two in three Americans support government intervention to prevent the potential loss of jobs due to AI.
- AI both elevates the risk of cyberattacks, through malicious use of AI tools by bad actors, and offers tools to help companies address and mitigate that risk. Companies must remain vigilant about the evolving threat landscape.
- The biggest challenge facing CEOs and Board members is making sure they understand exactly how to respond to a security breach. Few companies today have the internal infrastructure ready to identify and respond to attacks within the four days required by new regulation.
- Businesses that do not have cybersecurity experts on their Boards may wish to consider establishing an advisory board to address issues relating to AI and cybersecurity.
- Future AI-enabled cyberattacks must be fought with AI-enabled technology rather than simply adding more people to cybersecurity teams. Pattern recognition and increasing automation of cybersecurity defenses will be critical to the future of cybersecurity in an AI-enhanced cyber environment.
- Given the very fast pace of change in AI, CEOs and Board Directors should adopt a three- to five-year roadmap covering automation, the use of data and AI, and security measures, including an integrated data platform.