Public Opinion on AI and AI Memory
Episode Three: Human-Centred AI Development
Unlike traditional seminars, our global seminar series brings together researchers, policymakers, and practitioners to help us bridge theory and real-world application. The goal? To explore practical ways to empower digital citizens and create humane markets that drive innovation while respecting individual rights.
We are committed to giving digital citizens a say in how things should work online.
The rapid advancement of artificial intelligence (AI) has brought forth unprecedented opportunities for innovation, efficiency, and productivity. However, as AI systems become more deeply integrated into daily life, the question of how these systems handle, store, and utilise user data, referred to below as “AI memory”, has emerged as a critical issue. The architecture of AI memory not only shapes the technical capabilities of AI but also has profound implications for privacy, trust, competition, and the very fabric of digital society.
The Risks of Centralised AI Memory
AI systems today often rely on centralised memory architectures, where user data is stored and managed by the platform provider. This default approach creates a series of interlinked risks:
- Privacy Risks: Centralised storage of consumer data increases the potential for misuse, unauthorised access, and erosion of public trust.
- Monopoly and Market Concentration: When a handful of companies control both the AI models and the memory layer, they gain disproportionate economic and strategic power, stifling competition and innovation.
- Security Vulnerabilities: Centralised memory creates single points of failure, making systems more susceptible to breaches and attacks.
- Opaque Data Custody: Users often have little visibility or control over how their data is retained, shared, or used, leading to a decline in trust and willingness to adopt AI technologies.
The Imperative for Architectural Change
The core solution to these risks lies in rethinking the architecture of AI systems. Rather than fusing cognition (the reasoning capabilities of AI) with custody (the storage and management of user data), these functions should be separated. AI models should act as stateless reasoning engines, accessing user memory only when explicitly permitted, and relinquishing that data immediately after use. This architecture ensures that:
- Users retain control over their personal data.
- Memory access is explicit, auditable, and revocable.
- AI providers reduce their liability for sensitive data.
- Regulators gain a clear audit trail for compliance and enforcement.
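To make the separation of cognition from custody concrete, here is a minimal sketch in Python. Everything in it is illustrative: the `MemoryService` class, the grant-token scheme, and the `answer` function are invented names, not part of any real specification. The point is the shape of the interaction: the reasoning step borrows user context under an explicit, revocable grant, every access is logged, and nothing is retained afterwards.

```python
# Illustrative sketch: a stateless reasoning engine borrowing context
# from a user-controlled custody layer. Names and token scheme are
# assumptions made for this example only.
import secrets
from datetime import datetime, timezone

class MemoryService:
    """User-controlled custody layer: stores data, issues grants, logs access."""
    def __init__(self):
        self._store = {}       # user_id -> context dict
        self._grants = {}      # token -> user_id
        self.audit_log = []    # every access is recorded here

    def put(self, user_id, context):
        self._store[user_id] = context

    def grant(self, user_id):
        """Issue an explicit, revocable access token."""
        token = secrets.token_hex(8)
        self._grants[token] = user_id
        return token

    def revoke(self, token):
        self._grants.pop(token, None)

    def read(self, token):
        user_id = self._grants.get(token)
        if user_id is None:
            raise PermissionError("no valid grant for this token")
        self.audit_log.append((datetime.now(timezone.utc), user_id, "read"))
        return dict(self._store[user_id])  # a copy, never a live reference

def answer(question, memory, token):
    """One stateless reasoning step: borrow context, use it, let it go."""
    context = memory.read(token)
    reply = f"Hello {context['name']}, you asked: {question}"
    del context  # relinquish the borrowed data immediately after use
    return reply
```

Because the engine keeps no reference to the context after the call returns, revoking the token is sufficient to cut off all future access, and the audit log gives regulators the trail described above.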
The Universal Memory Protocol
A practical approach to this architectural shift is the implementation of a Universal Memory Protocol (UMP). UMP enables AI systems to request user context through standardised, secure channels without retaining that information. The protocol leverages existing internet infrastructure, such as DNS for resolving memory service locations and secure HTTPS/TLS connections for data transfer.
Key features of the Universal Memory Protocol include:
- Stateless AI: AI systems do not store user context beyond the immediate task.
- User-Controlled Memory: Individuals and organisations decide where and how their data is stored, whether locally, in the cloud, or in sovereign data centres.
- Interoperability: The protocol is designed to work across different AI models and platforms, supporting both personal and team-based memory solutions.
- Auditability: Every access to user memory is logged and can be reviewed for compliance and security.
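One UMP request cycle might look roughly like the following sketch. It is a simulation, not an implementation: a plain dictionary stands in for the DNS lookup, a local function stands in for the HTTPS/TLS fetch, and the record name and endpoint format are invented for illustration.

```python
# Hedged sketch of a single Universal Memory Protocol request cycle.
# MOCK_DNS replaces a real DNS lookup; record and endpoint formats
# are assumptions made for this example.
MOCK_DNS = {
    # In a real deployment this would be a DNS record (e.g. SRV or TXT)
    # publishing the location of the user's chosen memory service.
    "_memory.alice.example.org": "https://memory.alice.example.org/v1/context",
}

def resolve_memory_service(user_domain):
    """Resolve where the user's memory service lives (here, via a mock table)."""
    record = f"_memory.{user_domain}"
    endpoint = MOCK_DNS.get(record)
    if endpoint is None:
        raise LookupError(f"no memory service published for {user_domain}")
    return endpoint

def request_context(user_domain, scope, audit_log):
    """One stateless request: resolve, fetch, log; nothing is retained."""
    endpoint = resolve_memory_service(user_domain)
    # Stand-in for an HTTPS/TLS fetch of the granted scope.
    context = {"scope": scope, "data": f"<context for {user_domain}>"}
    audit_log.append({"endpoint": endpoint, "scope": scope})
    return context
```

The design choice to resolve the service location per request, rather than cache it, is what lets individuals move their data between local, cloud, or sovereign storage without the AI system needing to know or care.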
Building Trust Through Transparency and Control
Public sentiment towards AI is shaped less by the technology’s capabilities and more by the degree of trust users have in how their data is handled. People are generally optimistic about the benefits of AI, but are deeply uneasy about indefinite data retention and opaque data sharing practices. Trust erodes sharply when users feel they cannot meaningfully challenge or understand how their information is used.
By making memory access explicit and user-controlled, AI systems can rebuild trust without sacrificing innovation. This approach aligns with the growing demand for privacy, transparency, and user empowerment in the digital age.
Regulatory and Geopolitical Considerations
The challenge of AI memory is not solely technical; it is also regulatory and geopolitical. Proprietary memory silos lock in business models and create high barriers to entry, foreclosing competition and innovation. Regulatory frameworks must move beyond focusing on firm size or specific behaviours and address the deeper architectural issues at play.
International coordination is essential. No single country can shift the global digital landscape alone. Like-minded regions, such as the EU, UK, Australia, Canada, and others, should find ways to collaborate on interoperable standards and enforceable safeguards. Failure to do so risks ceding control of digital identity and national memory to a few dominant players, effectively turning nations into digital colonies.
The Path Forward: Pilots and Practical Implementation
Transitioning to a federated, user-controlled memory ecosystem is both feasible and necessary. Pilot projects can validate the technical and operational aspects of the Universal Memory Protocol in controlled environments before scaling up. These pilots should focus on:
- Lightweight chatbot interfaces with no persistent memory.
- Single memory service instances discoverable via DNS.
- Dynamic, user-controlled memory stores.
- Collaboration between universities, public sector organisations, and private enterprises.
Such pilots will help address concerns around latency, interoperability, and regulatory oversight, paving the way for broader adoption.
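The first pilot stage, a lightweight chatbot with no persistent memory, can be sketched in a few lines. In this illustrative version the user-controlled store is just a dictionary the caller holds; in an actual pilot it would be the user's own memory service instance.

```python
# Minimal sketch of a no-persistent-memory chatbot turn: all state
# lives in a store the user supplies and keeps. The store format is
# an assumption for this example.
def chat_turn(message, user_store):
    """Handle one turn; the bot itself retains nothing between calls."""
    history = user_store.setdefault("history", [])
    reply = f"(turn {len(history) + 1}) you said: {message}"
    history.append(message)
    return reply
```

Because continuity comes entirely from the store the user passes in, the user can inspect, edit, or delete their history at any time, which is exactly the property a pilot would need to demonstrate.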
Economic and Social Implications
A distributed memory architecture not only enhances privacy and security but also fosters a competitive market for data services. Data brokers and other intermediaries can become allies in this new model, offering privacy and security as value propositions. The shift from oligopoly to competitive markets incentivises better practices and innovation, benefiting consumers and society at large.
Conclusion
The future of AI and digital governance hinges on the choices made today about data custody and memory architecture. By separating cognition from custody, implementing universal protocols, and fostering international cooperation, it is possible to build AI systems that are both powerful and trustworthy. The time to act is now, before centralised memory becomes too deeply embedded to change. The path forward is clear: empower users, enhance transparency, and create a digital ecosystem that truly serves human values.
Participate in the conversation
If we want digital life to work for people, we need an efficient playbook. That means rules that are simple, flexible, and based on how things actually work. Less patching old laws, more building digital spaces where people really matter.
Want to keep up with real solutions and straight talk about tech and policy? Episode two will show how these ideas can work in practice through an EU-registered data intermediation service.
Tune in, ask questions, and let’s make digital rules that truly reflect human values.