Randomness has always been essential not only for cryptography but also for many other applications, such as differential privacy and generative AI. Recent work on generative image and video synthesis has shown that knowing or recovering the seed of the pseudorandom sequence can leak sensitive information such as user prompts. At the same time, it has been demonstrated that major ML libraries (e.g., TensorFlow, JAX) rely on GPU-oriented PRNGs built from cryptographically insecure PRFs, which were originally designed for fast parallel generation rather than adversarial robustness. This creates an urgent need for cryptographically sound, GPU-efficient alternatives.
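As a minimal illustration of this point, the sketch below (assuming a standard JAX installation, whose default PRNG is the counter-based Threefry-2x32 PRF at the time of writing) shows that every value a model draws is a pure function of the integer seed, so recovering the seed replays the entire stream; it is not a description of any specific attack.

```python
# A minimal sketch, assuming a standard JAX installation: the whole "random"
# stream is a deterministic function of the seed, so anyone who recovers the
# seed can regenerate it exactly.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(42)            # the stream is fully determined by this seed
noise = jax.random.normal(key, (4,))    # e.g., latent noise in a diffusion sampler

# An attacker who learns (or guesses) the seed reproduces identical noise:
replayed = jax.random.normal(jax.random.PRNGKey(42), (4,))
assert jnp.array_equal(noise, replayed)
```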
Beyond ML, the growing reliance on GPUs in AI and high-performance computing motivates the development of GPU-optimised symmetric primitives more broadly. Yet this design space remains largely unexplored: many operations that are extremely cheap on GPUs have not yet received the rigorous cryptanalysis needed to build designs around them. The proposed workshop aims to bring together experts in machine learning, cryptography, cryptanalysis, and hardware to exchange knowledge, clarify requirements, and discuss strategies and challenges for designing secure, GPU-friendly PRFs, ciphers, and PRNGs, with a particular focus on their use in modern ML stacks.
TBA
Photo by Wiki user: Nicholas Hartmann.