As generative AI (GAI) platforms become more commonplace, concern over their security is growing. As with any digital product, security rests on four pillars: user responsibility, corporate accountability, government regulation, and industry standards. The first two are unreliable: users resent having to protect themselves, and corporations are reluctant to spend money on security upfront. That leaves the third, legislation produced by people who don’t know the difference between a thumb drive and a thumbtack.


That puts much of the load on industry standards, and one of the most active standards bodies is the European Telecommunications Standards Institute (ETSI). Cyber Protection Magazine’s (CPM) editors Lou Covey and Patrick Boch sat down with Scott Cadzow, chair of ETSI’s Specification Group for Securing Artificial Intelligence, to discuss the progress and problems of standardizing safe GAI.

---

Send in a voice message: https://podcasters.spotify.com/pod/show/crucialtech/message
Support this podcast: https://podcasters.spotify.com/pod/show/crucialtech/support