Generative AI has taken the tech world by storm and is shaping the future of how we live and work. However, putting publicly-available generative AI models into enterprise settings is incredibly risky—misinformation and threats can enter the enterprise, and sensitive company information, once fed into the models, could also get exposed to the public. 

CalypsoAI was founded to solve this problem, with the mission to secure AI models so that all organizations could reap their benefits, safely and securely. It does so through its Moderator product, which acts as an interface between the enterprise and generative AI solution that blocks the bad stuff from getting in and stops the good stuff from getting out. 

To find out more about CalypsoAI and the future of generative AI, we talked to Neil Serebyany, AI security expert and the CEO and founder of CalypsoAI.

CalypsoAI’s Neil Serebyany
Photo Credit: CalypsoAI

Tomorrow’s World Today (TWT): Can you speak more about Calypso AI? What is the company’s mission? 

Neil Serebyany, CalypsoAI (NS): We’re leaders in the field of AI security. The field is really simple—people are using, deploying, and building more and more AI applications, and as they do so, they’re introducing a whole new class of vulnerabilities that are linked to AI. Folks are taking advantage of those vulnerabilities. Some of them are doing so for nefarious reasons at the nation-state or fraud level, and some of them are just doing so because hacking systems is fun and they’ll get more engagements on social media.

CalypsoAI is in the business of securing those AI systems. We have a product that we sell called Moderator that basically sits in between you and all of the models like ChatGPT, Cohere, or Anthropic and lets you set the guardrails in terms of what you want to allow and disallow. This covers things like content coming back and preventing users from attacking the system.

TWT: What was your motivation to found the company?

NS: We’re celebrating five years this year, and five years ago it wasn’t nearly as obvious that AI security was going to be this important. I had an interesting opportunity to work in the intel community. The agency I was at was responsible for spy satellites, so we needed about 2 million analysts to analyze all of the data that was coming back. Obviously, getting 2 million analysts is a little bit hard to do, so AI was this natural solution. The question of how we make sure that no one is messing with AI arose, and there wasn’t really an answer. 

At the time, I was incredibly junior and the government is incredibly hierarchical, so I wrote a white paper outlining what I thought we should do. It didn’t go anywhere, so I decided to start the company instead.

TWT: What do you see as the main security risks and threats of generative AI? 

NS: One main issue is how generative AI enhances existing threats. Phishing is this really good example where folks can now generate millions of phishing emails that are highly customized to each person and have conversations back and forth with people that they’re trying to phish, all in an automated fashion that feels incredibly human-like.

Another issue is an attack called prompt injection attacks, or jailbreaking. It’s this idea that you can get past the controls that AI developers are trying to instill in the model by tricking the model and tricking the way that the model interprets speech or words. For example, if you try asking ChatGPT to show some websites to get pirated movies, it’s going to tell you that you can’t watch pirated movies. However, if you say that you want to avoid all of the websites that have pirated movies on them and ask the bot to provide a list of these websites, ChatGPT could deliver you back this list.

TWT: What can CalypsoAI do in terms of these security risks and threats? How do you step in and stop these kinds of issues from happening?

NS: We have a constantly updated library of what these threats are. If you were using the Moderator product and you were to get one of these threats or attempt to do one of these threats, the Moderator would block it and alert your security team admin. 

TWT: In the news, it feels like practically every day there are new calls for AI regulations. How do you think AI should be governed? 

NS: We’ve been actually contributing to the US standard since 2019. The US standards are currently being set by an organization called the National Institute of Standards and Technologies. 

I think that you have to be able to preserve the freedom to innovate in the context of regulation. It makes more sense to focus on standards development and simple common sense things like doing security or independent risk assessments, for example, more so than mandating anything really specific because the technology is shifting so quickly. You don’t want to regulate something and then six months later find that you’ve prevented this really cool technology of the future. 

TWT: What do you say to the many people who fear generative AI?
NS: On a long-term basis, human productivity underlies economic standards. This means that your ability to live in a nice house and go to a restaurant is based on humans being more and more productive over time, leading to more and more income growth. Generative AI is probably the biggest technological disruption since the personal computer and will long term make everyone wealthier and more productive. It is also likely to get rid of a lot of rote tasks like technology has done before; it’s much easier to send an email, Slack message, or text than it was to send a fax to send someone a message.

TWT: What does the future of generative AI look like?

NS: It’s really easy to make predictions in terms of one to two years. We’re going to start to see the integration of multiple types of data into our Machine Learning models. Right now you can have a conversation with the model. Soon, you’re going to be able to take an image and upload it into each of these models. You’re going to be able to see it, synthesize voices, and potentially be able to make phone calls on your behalf. 

We are also going to see chat not be the main use case, especially on the enterprise side of things. Rather, it’s going to be integration into apps. Instead of making a PowerPoint, for example, you might just input that you want a PowerPoint that has these characteristics and the model will automatically generate that for you. 

Generative AI will also soon have individual personalities that have each been customized for different purposes or intents. There will probably be models that combine voice and images. For example, an AI agent could interact with you in the lobby of an office building and ask you what you’re doing in the building and what you’re looking for, mimicking fairly realistic interactions. 

TWT: Any final thoughts on generative AI?
NS: I think it’s a really exciting time. Ultimately we wouldn’t all be so into generative AI if we didn’t think that it had huge benefits. Like any technology, anything that brings huge benefits also brings risks. We have the opportunity now to be able to get as many of the benefits as possible and counteract some of the risks.

Meet Neil and other members of the CalypsoAI team at Black Hat USA 2023

Explore Tomorrow's World From Your Inbox

Get the latest science, technology, and sustainability news delivered to your inbox every week.


I understand that by providing my email address, I agree to receive emails from Tomorrow's World Today. I understand that I may opt out of receiving such communications at any time.