Aspen Cyber Summit recap: How AI sophistication fractured trust across the internet
A few weeks ago, I had the opportunity to attend and speak at the Aspen Cyber Summit in New York City. For me, nothing beats the in-person experience. There’s something about talking face-to-face with industry leaders and seeing real-time reactions to the ideas, opinions, and points of view being presented, shared, and debated. In a world so dominated by digital interactions, some of which are manipulated and augmented, it’s refreshing to have real, honest conversations with people.
The conference this year touched on a number of pressing topics. Not surprisingly, the growing adoption and implementation of artificial intelligence (AI) and its impact on the cybersecurity community was front and center, along with securing the 2024 election and the Securities and Exchange Commission’s (SEC) new cyber incident disclosure rule. But as I listened to sessions and exchanges between some of the brightest in the industry, I couldn’t help but pick up on a common thread running through every discussion:
Trust, at a fundamental level, is broken across the internet, and it’s having serious ramifications for security at both the organizational and individual levels.
More recently, the sophistication and advancement of AI and deepfakes have been the driving force behind the fracturing of trust we are experiencing across the internet, and there are two angles we need to examine to understand why.
First, the accessibility of these technologies. Even five years ago, the average person wasn’t thinking about how to use AI in their personal life, whether to create an updated professional headshot or carry out a phishing attack. Back then, only the most technical users and adversaries were using it, primarily because the technology was so complicated that it required a specific skill set and level of training. But that has all changed. These technologies have become so accessible that it is easier than ever for the mass market to commit everyday fraud with capabilities that were once limited to well-financed governments.
With that, what I call the “Believability Index” has reached a tipping point, causing a ripple effect across organizations, generations, and industries. Thinking back again to five years ago, younger, technologically savvy generations considered themselves capable of easily determining what was real and what was fake on the internet. Phishing emails were written in a robotic tone and scattered with small yet noticeable typos and odd sentence structures. Doctored images had telltale characteristics that made them easy to spot as fakes. Back then, it was primarily our grandparents and less technologically savvy individuals who fell victim to these attacks and scams. Today, the technology is so advanced that it is genuinely difficult to tell what is real and what is fake anymore.
As a result, it’s no longer just our parents or grandparents who are vulnerable to online scammers; it’s all of us, myself included. Just look at the image of the Pope in a puffer jacket from earlier this year. I’ve been in the industry for more than 20 years and am acutely aware of the tactics attackers use, and even I did a double-take, admittedly taking it to be a real image at first.
We’ve moved beyond fake images, fake documents, and fake emails. Now it’s fake voicemails, fake voice notes, and fake videos. And evidence shows that we are far more inclined to believe what we are told when voice and video are introduced. It has to be real, because I can hear my colleague on the phone asking me to send over specific login information. I can hear the representative from my bank asking for my Social Security number. But the hard truth is that these are fabrications, manufactured by adversaries and threat actors using the most advanced AI technology available.
The rise of AI and deepfakes necessitates a re-evaluation of security models, because the sophistication of the technology means we are all now targets and all under attack. Trust is broken, and that fracture is only going to deepen in the coming year. Think about the upcoming US election: that’s a system already riddled with mistrust and uncertainty, and attackers are only going to exploit it further to create chaos and confusion.
We’re at a pivotal moment, one that points to a pressing need for the security model to adapt to the convergence of AI, the intent to commit fraud or manipulation, and our increasing reliance on the internet. Restoring integrity and authenticity is vital, and the security model must be integrated into the fabric of the internet itself.
There is no doubt that the world will run on AI, and it will be weaponized. As a society, we are trusting by nature. Today, attackers are exploiting gaps in authentication and identification faster than we can close them. The history of security is steeped in impersonation, and in this new era, security will need to evolve now that the capability to conduct convincing impersonations is available to the mass market.
The pressure is on: enterprises need to move quickly to prove the authenticity of their content if they are to maintain credibility and trust with their customers.
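What might proving authenticity look like in practice? One building block, and an assumption on my part rather than a prescription from the summit, is cryptographic content signing of the kind that underpins provenance standards such as C2PA. Here is a minimal Python sketch using the widely available cryptography library; the content, key handling, and messages are purely illustrative.

```python
# Minimal sketch: a publisher signs content so consumers can verify
# it is unaltered and genuinely theirs. Illustrative only; real
# deployments pair signatures with certificates and provenance metadata.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair and distributes the public key
# through a trusted channel (e.g., pinned on its website or in a cert).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Official statement: our systems were not affected."
signature = private_key.sign(content)

# Any recipient with the public key can check the signature. A tampered
# or fabricated copy of the content will fail verification.
try:
    public_key.verify(signature, content)
    print("Verified: content is authentic and unmodified.")
except InvalidSignature:
    print("Rejected: content was altered or did not come from the publisher.")
```

The design point is simple: instead of asking humans to judge believability by eye and ear, verification shifts the question to cryptography that a deepfake cannot forge.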