The book Our Next Reality: How the AI-powered Metaverse Will Reshape the World is structured as a debate between Alvin Wang Graylin and Louis Rosenberg, who each have over 30 years of experience in XR and AI. Graylin embodies the eternal optimist, leaning towards techno-utopian views, while Rosenberg voices the more skeptical perspective of cautious optimism, acknowledging the privacy hazards, control and alignment risks, and the ethical and moral dilemmas. The book is strongest when it speaks to the near-term implications of how AI will impact XR in specific contexts, but it starts to go off the rails for me when they explore the more distant-future implications of Artificial Superintelligence at the economic and political scales of society. At the same time, both sides acknowledge the positive and negative potential futures, and that neither path is guaranteed, since it will be up to the tech companies, governments, and broader society which path we ultimately go down.

What I really appreciated about the book is that both Graylin and Rosenberg reference many personal examples and anecdotes about the intersection of XR and AI from their three decades of experience working with emerging technologies. Even though the book is structured as a debate, they both agree on some fundamental premises: that the Metaverse (or rather spatial computing, XR, or mixed reality) is inevitable, and that AI has been and will continue to be a critical catalyst for its growth and evolution.

They both also wholeheartedly agree that it is only a matter of time before we achieve either Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but they differ on the implications of these technologies. Graylin believes that ASI has the potential to lead humanity into a post-labor, post-scarcity, techno-utopian future in which all of humanity has willingly ceded cultural, political, and economic control to our ASI overlords, who become perfectly rational philosopher kings yet still see humans as their ancestors through an uncharacteristically anthropomorphized emotional connection and compassionate affinity. Rosenberg dismisses this as wishful thinking that humans would be able to exert any control over ASI, or that ASI would be anything other than cold-hearted, calculating, ruthless, and unpredictably alien. Rosenberg also cautions that humanity could be headed towards cultural stagnation if the production of all art, media, music, and creative endeavors is ceded to ASI, and that unaligned and self-directed ASI could be more dangerous than nuclear weapons. Graylin acknowledges the duality of possible futures within the context of this interview, but he tends to be biased towards the more optimistic future within the actual book.

There is also a specific undercurrent of ideas and philosophies about AI woven throughout Graylin's and Rosenberg's book. Philosopher and historian Dr. Émile P. Torres, in collaboration with AI ethicist Dr. Timnit Gebru, coined the acronym "TESCREAL," which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Torres wrote an article in Truthdig elaborating on how this interconnected bundle of TESCREAL ideologies underpins many of the debates about AGI and ASI (with links included in the original quote):

At the heart of TESCREALism is a “techno-utopian” vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.