It wasn’t long ago that the concept
of having a personal relationship with computers was the stuff of
science fiction — everything from HAL 9000 to V’Ger posited a
far-out future when that would start to happen. Well, according to
Mike Pell — author of THE AGE OF SMART INFORMATION — that time is
now.

Alan: Welcome to the XR for
Business Podcast with your host, Alan Smithson. Today’s guest is
somebody absolutely spectacular: Mr. Mike Pell. He is the head of
the Microsoft Garage and the author of “The Age of Smart
Information,” a new book about how artificial intelligence and
spatial computing will transform the way we communicate forever. Find
the latest on Mike at futuristic.com and excerpts from his new book,
at theageofsmartinformation.com.

Mike, welcome to the show.

Mike: Thank you, Alan. My
pleasure to be here.

Alan: It’s so exciting. I was
gifted your book actually by a good friend of mine, John Bizzell. And
we had lunch and he said, “Oh, you haven’t read this book.” And
I guess he sent it to me on Amazon. I got it the next day, and I’ve
been voraciously reading this book ever since; I’m about halfway
through. But man, your book has really opened my eyes to how
everything around us will not only have the data available, but it’ll
be in context to our personal needs. And it’s really incredible. So
how did you– just kind of walk us through your journey of how you
went from inventing PDFs to writing books on smart information?

Mike: It’s a long story, but
I’ll try to keep it really short. You’re right, a lot of this did
sort of form when I was back in the early 90s when I was working on
Acrobat with some of my friends at Adobe. Back then, when we were
working on the very first electronic documents for interchange, it
was very apparent that people were not going to enjoy reading these
things, sitting upright and being uncomfortable. You really
needed some hardware and software that didn’t exist at that point to
enjoy the information, right? To enjoy it, whether it was a book or
documents or reports, whatever it was you were reading. And so at that
time, I started to think a lot about how the information itself —
you know, the thing that we were reading — was so dead and lifeless.
I guess it was amazing that you could now transfer it to other places,
where people around the world could see exactly what you were trying
to say. But the thoughts about how there was always more to it
started to percolate back then. And over my career, I’ve always had
the good fortune of working on the leading edge of technology. So I
was very early into 3D and interactive graphics and visualization,
and I started to do a lot of experiments with bringing information to
life. I’ve always been fascinated with communications, helping people
communicate as clearly as they can. And so that was really the start
of a lot of this, was trying to see what we can do to help people be
able to understand and communicate better by using the information,
the things that we create every day, whether that’s tweets or emails
or books or movies or music, doesn’t matter. Whatever the medium is
that you’re communicating in, there’s always so much more that can be
brought out, things that we as people understand inherently but that
are never reflected in the final form that the communication comes
in. So that’s where we started.

Alan: So let’s unpack that. So,
you know, I’m reading a PDF, then you guys probably added the ability
to have hyperlinks, and then what else can you add? Now you’re looking
at, “OK, what does the world look like when the computers are no
longer bound by the 16 by 9 rectangular shape?”

Mike: Yeah, exactly. That was
part of that original thought. You need to be able to enjoy, or
absorb whatever it is, or create whatever it is in the current
context of what you’re doing. So lots of people have worked on this
problem, but now we’re getting to the point where it’s getting easier
and easier for us to understand what you’re doing right now, where you
are, what the situation calls for. And then having that
information — whatever that artifact is that you’re looking at or
creating — reflect the best way for you to
absorb or communicate in your current context. And that means that
everything, all these objects that we create and consume will be very
flexible in the way that they can present themselves, depending on the
situation. If you’re walking down the street with an AirPod in, you’ll be
getting your information in audio. If you’re in your home and you
have a large screen available, maybe it will be presented or
projected based on that. If you’re in the office or at school and you
have access to laptops or other people’s mobile devices, being able
to share things on that. So the information itself is what’s
going to become smarter. People will have to do less work to be
able to consume it. And the computers and the systems that we create
will be doing more of the work to help us get the best information in
the right way.
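
A minimal sketch of that idea, for readers who like to see it concretely. The context fields and the selection rules below are illustrative assumptions on my part, not anything Mike specifies:

    from dataclasses import dataclass

    @dataclass
    class Context:
        moving: bool          # e.g. walking down the street
        has_earbuds: bool     # an AirPod or similar is in
        nearby_screen: str    # "none", "phone", or "large"

    def best_form(ctx: Context) -> str:
        """Pick the form the information should take in the current context."""
        if ctx.moving and ctx.has_earbuds:
            return "audio"              # narrate it while you walk
        if ctx.nearby_screen == "large":
            return "projected_visual"   # use the big display at home
        if ctx.nearby_screen == "phone":
            return "mobile_summary"     # condensed, glanceable form
        return "text"                   # safe fallback

    # Walking down the street with an earbud in -> "audio"
    print(best_form(Context(moving=True, has_earbuds=True, nearby_screen="none")))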

Alan: You know, I read a quote
yesterday and it was, “It’s no longer about what you know,
because everybody has access to Google, and Google knows everything.
So it’s no longer about what you know, it’s how you use that
knowledge to create new and novel things.” And I thought that
was interesting. And one of the quotes from your book — I’m going to
read a bunch of quotes from your book because I think people need to
hear this stuff — “In the not too distant future, our most
frequent interactions and conversations may well be with our devices
and information, rather than real people. Oh, wait, that already
happened.” That quote there just stuck with me because I was
thinking, “Oh my God, how much time do we spend looking at our
phones?” And I look at my teenage daughter — she’s 15 — she’ll
be sitting with a group of her friends, and they’re all together, all
on their phones.

Mike: Yeah, I’ve observed that a
few times myself. Well, the interesting thing, too, is people are
starting to get more comfortable talking to devices, especially in
public places. Now, I can’t tell you how many times I’ve seen people
talk to Siri, or ask Alexa for something. And they’re just getting
more and more comfortable having conversational exchanges with pieces
of hardware and software. And that’s kind of a big leap over where we
were. People even talk to their remotes now, right? Like, if you have
Comcast Xfinity, they have a remote that will find
your favorite shows just by talking to it. That’s become so easy for
some people that it’s just natural now. And that will extend into what
we do in the enterprise and certainly in school: being able to be
more conversational with our information, with the things that we
work with the most.

Alan: I got to– when I was at
CES this year, I got to try Kopin. They make glasses and stuff for–
they make the actual hardware for smart glasses. And one of the
things that they had was this thing called Whisper… something
technology, and it had microphones built into the glasses that
allowed me to command the glasses to do whatever I wanted. I could
ask them questions, whatever. But the great thing was that the person
standing right in front of me was saying the same commands and it
wasn’t triggering it. And so being able to whisper commands because,
you know, not everybody wants to sit on the train into work and be
yelling at their glasses. My wife and I have the thing where you see
people walking down the street: are they crazy, or are they on the
phone? Do they have an earbud in, or are they just talking to
themselves? So it’s one of those things where we still haven’t really
figured out what the interactions are gonna be. Is it, am I going to
look at something and wink, or am I going to wave my hand, or am I
going to talk to it? Obviously, speech is going to be a big part of
it and it’s probably gonna be all these things. How do you think
we’re going to interact with the technology as we move into glasses?

Mike: Yeah. So glasses won’t be
the only thing that we’re interacting with. You just touched on it
very well. Whether it’s your mobile device, or your laptop or desktop
at work or school, or your Xbox, or Alexa, whatever you have at
home. We’re gonna have to — as designers, as experience
designers — do a better job of figuring out how to let
you have those conversations when they’re appropriate and when
they’re needed with a device, without making it feel strange. Like
you were just saying, someone walking down the street talking to
themselves. Well, it’s hard to avoid that because there’s not someone
walking next to them, right? Like, of course you’re going to look a
little strange. But in the office, if you’re working out in an open
space — like many of us do — we have conversations all the time,
right? You may be sitting at your desk and sort of shouting across
the room or talking to someone sitting next to you. That’s all
considered very natural. But it’s because there’s a person, right?
There’s sort of an entity that you’re conversing with. And in the
same way, the agents that we have today — the Siris and Cortanas and
Alexas — those will become more personified, right? Sort of embodied
AIs that will feel like you are talking more to something that has
the characteristics of a person. So it won’t be so strange even for
people around you, because the conversation, the back and forth, and
the interaction with those agents will feel more natural.

Alan: It’s getting interesting
how the conversations with Siri and Alexa and Google Home, they’re
just– I don’t know if we’re learning how to ask the right questions,
or it’s getting smarter in knowing what we actually mean, or a little
bit of both. And you have this part of the book where you talk about
engineering clarity, how quantifying the science of how to synthesize
the moment of clarity is our quest now. Can you unpack that a bit?

Mike: Yeah, so going back to what
you just mentioned, we’re still sort of in the uncanny valley of
conversational UI, right? There’s still something a bit strained and
a bit unnatural about it. But we will cross that very quickly. But in
the getting to clarity, that’s one of the things that’s fascinated me
for a very long time is — as someone who does a lot of talks, I talk to
people all the time — being clear is sort of an art, right? There’s
not a lot of science to it. I guess you can prepare slides in a
particular way. You can get your talking points down, very articulate
and in proper order. But there is still something a bit intangible
about getting somebody to that a-ha moment, that moment of
clarity. And with all of the things that we have
at our disposal right now, machine learning, reinforcement learning,
all of the presentation technologies involved with XR, there’s no
reason why we can’t apply a lot of our engineering talent and time to
figuring out what is it exactly that gets somebody to that moment of
understanding. You know, it’s not voodoo, right? It’s not like this
black art. There is a way to do it. And with so much work and brain
science going on, coupled with AI and XR, we will be able to actually
get you there faster, which has huge implications for education,
certainly, and enterprise communication.

Alan: Well, you spoke the right
language for me. My mission in life is to inspire and educate future
leaders to think and act in a socially, economically and
environmentally sustainable way. And I believe truly that we can
completely democratize education, give everybody the best possible
education, by– my goal is by 2037. I just picked a random date in
the future.

Mike: [laughs]

Alan: I figured, it’s going to
take 15 years for us to figure out all the tech and then another two
years to scale it. But if you think about, let’s say in five years we
wear glasses, and all the processing power is moving to the cloud. I
just did an interview with the head of XR for Verizon, and they’re
building systems that will allow real-time edge computing in the
cloud, meaning I no longer have to have any computing
power on my phone or device or glasses. I can push it all into the
cloud. And as long as I’m within a 5G radius, I have the world’s most
powerful computers working for me, real time. You know, that will
unlock education and training and learning at a whole different
level. And then when you apply specific algorithms around
personalization and contextualization of that data as needed, in real
time, but also delivering more of it. So, you know, Netflix
delivers content as I watch more movies, but we don’t do that for
education at all in our education systems. They’re not really set up
to take advantage of exponential technologies. In fact, they’re set
up to not take advantage of any of those technologies. It took 20
years to get computers into schools. We don’t have 20
years to wait anymore as these technologies start to move. In the
next 10 years, the majority of jobs that will be created don’t exist yet.
And we don’t know how to start training people for jobs that don’t
exist yet. So being able to give education at scale at the time of
need and hyper contextualized is very important. And what you’re
talking about here is figuring out that interaction between us and
the devices that is natural and simplifies our life rather than
complicates it.

Mike: Yeah, that’s precisely why
we started an experiment here in the Microsoft Garage, working in
conjunction with Dr. Fabio Zambetta of RMIT University in Melbourne,
Australia. I had this idea — based on some of the work in the book
and based on Fabio’s work with reinforcement learning — of creating
what we’re calling the adaptive textbook. It’s an experiment in
figuring out how we can actually make what you described, that hyper
contextualized textbook or way of learning that adapts itself to the
way that the person learns, the right message or the right medium for
the situation, and also their history of what’s worked for them and
what hasn’t. So it’s been a very successful partnership. It’s super
fascinating to work on this because it’s going for what you describe.
How can we be the best we can for students? How can we present the
information in the best way for them to be able to make sense of it
and be able to build on that?
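
The adaptive textbook itself isn’t spelled out in implementation terms here, but the core idea, reinforcement learning that keeps choosing the medium that has worked for a particular learner, can be sketched as a toy epsilon-greedy bandit. All of the names and numbers below are assumptions for illustration, not the Garage’s actual code:

    import random

    MEDIA = ["text", "audio", "video", "vr_simulation"]

    class AdaptiveTextbook:
        """Toy epsilon-greedy bandit: favor the medium with the best track record."""

        def __init__(self, epsilon: float = 0.1):
            self.epsilon = epsilon
            self.tries = {m: 0 for m in MEDIA}
            self.score = {m: 0.0 for m in MEDIA}  # running average of outcomes

        def choose_medium(self) -> str:
            if random.random() < self.epsilon:    # occasionally explore
                return random.choice(MEDIA)
            return max(MEDIA, key=lambda m: self.score[m])

        def record_outcome(self, medium: str, quiz_result: float) -> None:
            """quiz_result in [0, 1]: how well the student did after this presentation."""
            self.tries[medium] += 1
            n = self.tries[medium]
            self.score[medium] += (quiz_result - self.score[medium]) / n

    book = AdaptiveTextbook()
    medium = book.choose_medium()
    book.record_outcome(medium, quiz_result=0.8)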

Alan: It’s a big undertaking,
right? But as these technologies start to– and I think it’s going to
start — and you’re probably already seeing it — it’s starting in
enterprise. All the big companies are starting to go, “Oh, OK,
well, we’ll experiment with it. Oh, the experiments yielded 100
percent better results. Amazing. OK, well, let’s move it
forward.” And so, I think personally it’s gonna be this road to
development using enterprise as the guinea pig, I guess, would be the
easiest way to describe it. Well, how do you see schools
adopting this new adaptive textbook?

Mike: Well, right now, it’s just
an experiment. But the ideas are out there. And certainly teachers
are grasping onto this notion that virtual reality is a great boon to
immersing kids in particular things, whether it’s physics or being
able to study, you know, things about the ocean. There’s lots of
applications for that. Augmented reality is being used in different
ways, being able to put more in front of people than what’s on the
surface level. And that’s sort of where I dug in, as far as education
and the enterprise. I tried to sort of take it a step further and
say, you know what, it’s not enough to augment a physical object or
something digital that already exists. Let’s actually inject some
intelligence and presentation capability into those digital artifacts
themselves and see how that can actually help propel all this
forward. So that’s sort of where I laid out a roadmap in the book of
how this will work, where it will go, and education and the
enterprise itself will be the headpins for this, because there are just
such fantastic applications for it; it’s so obvious to everyone how
this can help people immediately. And that’s probably why you’re
seeing a lot of the big wins in the enterprise coming in the training
space, because you can take something that’s been difficult and
costly and really make it not only better, but super successful for
those people.

Alan: What are some of the use
cases that you’re seeing that are starting to be used, not just
experimentally? Because I think that’s one of the problems with this
technology is people– companies will jump in, they’ll do a POC,
they’ll do a pilot, and then it gets stuck between that pilot phase
and then rolling it out at scale, because change is difficult. Where
are you seeing things moving forward in a real measurable way?

Mike: Yeah, yeah. I always like
to say “change is good unless it’s bad.” Yeah, there’s some
really great applications that we’re seeing in the enterprise.
Certainly if you look at the Microsoft Dynamics line of mixed reality
products that we brought out. Being able to do first line worker
assistance, right? Having someone be able to help you with a task or
be able to guide you through something is super interesting,
valuable, and something that’s not going to go away. Being able to
have a Hololens on and interact with people, get to the documentation
or training that you need on the spot is something that we’ve all
known would come along eventually and now actually exists. And we’re
doing quite well with that. Same thing with laying out and moving
equipment within the manufacturing or factory floors, being able to
use all the power of mixed reality to place objects, scale them,
align them, make sure that construction projects are going at the
pace that they should be going. Those are the applications that are
really paying off in the short term.

Alan: I think so. I think the
next phase of this is — and you touched on it in the book —
allowing people to author content. You have this kind of mantra that
you can be a consumer of content, which we all are, and you can also
be a creator of content, which I think more and more people are, with
things like TikTok and Instagram. People have become a lot more
creative, they’re starting to make things and author things, and
being able to create tools that allow anybody to make AR or VR
simply, I think will unlock the true potential of it.

One of the things you mentioned in the
book is “authoring content will be hugely affected by an
injection of services and platforms to create smart information
containers, that are capable of housing multiple representations of
the same information in a multitude of forms.” Whether you’re
wearing a Hololens or Bose AR glasses, or you’ve got a
smartphone or a computer screen, there are so many ways to consume
content. How do we make it easy for people to build once and have it
recognized by all the different mediums? That I think is a big
challenge as well.
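
One way to read “smart information container” is as a single artifact that stores, or can synthesize, several renditions of the same content and hands back whichever one the current device can play. A rough sketch under that reading, with invented field names and an invented device list:

    from dataclasses import dataclass, field

    @dataclass
    class SmartContainer:
        """One piece of information, many interchangeable renditions."""
        source_text: str                                 # canonical content
        renditions: dict = field(default_factory=dict)   # form name -> payload

        def add(self, form: str, payload) -> None:
            self.renditions[form] = payload

        def render_for(self, device: str):
            # Each device declares the forms it can play back, best first.
            preferences = {
                "hololens": ["spatial_scene", "video", "audio", "text"],
                "bose_ar":  ["audio", "text"],
                "phone":    ["video", "audio", "text"],
                "desktop":  ["video", "text"],
            }
            for form in preferences.get(device, ["text"]):
                if form in self.renditions:
                    return form, self.renditions[form]
            return "text", self.source_text              # always fall back to text

    doc = SmartContainer(source_text="episode transcript ...")
    doc.add("audio", "episode.mp3")
    print(doc.render_for("bose_ar"))   # -> ('audio', 'episode.mp3')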

Mike: Yeah, well, as discussed
in the book, the tool sets are really where all of the work is gonna
be done. We’ve relied on people to create multiple forms of content
forever, right? And we all know it’s a giant pain to do that.
Sometimes we just don’t have enough time; even if we have
the talent, we just don’t have enough time. So, for example, we could
take this podcast in the not too distant future. You can use it as
just audio only or you can get the transcript. You can have that
transcript translated into however many dozens of languages
instantly. You can have this turn into, let’s say, a visual
presentation just by the topics that we’re talking about being auto
pulled in by some AI machine learning programs. So there are things
that we know are possible, but the tool sets have just not been
updated yet. And that’s where we’re getting to right now, is being
able to have these AI assisted authoring tools that will do a lot of
the heavy lifting for you, whether it’s for creating educational
plans. So a teacher is preparing a plan on physics. They can do what
they normally do. But in the course of doing that, the tool that
they’re using will actually pull in a whole bunch of other
information and create different forms of that. So the students can
either listen to the lecture, or see the lecture, or experience it
even in a virtual reality or mixed reality environment.
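
As a back-of-the-napkin sketch of that kind of authoring pipeline, the derived forms Mike lists (a transcript, translations, an auto-built slide outline) could all hang off one recording. The function bodies below are placeholders for whatever speech-to-text, translation, and summarization services a real tool would call; none of the names refer to an actual product:

    def transcribe(audio_path: str) -> str:
        # Placeholder: call a speech-to-text service here.
        return f"transcript of {audio_path}"

    def translate(text: str, language: str) -> str:
        # Placeholder: call a machine translation service here.
        return f"[{language}] {text}"

    def outline_slides(text: str) -> list:
        # Placeholder: topic extraction / summarization over the transcript.
        return [line for line in text.splitlines() if line.strip()]

    def derive_forms(audio_path: str, languages=("es", "ja", "de")) -> dict:
        """Fan one recording out into the forms a listener might want."""
        transcript = transcribe(audio_path)
        return {
            "audio": audio_path,
            "transcript": transcript,
            "translations": {lang: translate(transcript, lang) for lang in languages},
            "slides": outline_slides(transcript),
        }

    forms = derive_forms("xr_for_business_episode.mp3")
    print(sorted(forms.keys()))   # ['audio', 'slides', 'transcript', 'translations']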

Alan: That’s really exciting.
And to your point, we actually use an AI subscription service to
transcribe all of these episodes and then we actually have a person
go through and pick out all the quotes and do that. But we’re using
like five different SaaS-based services to take the podcast interview,
to transcribe it, to turn it into a little video that’s like a
video header, and then make it available as a blog post. I’d
never thought of different languages as well. But
yeah, this is something that we’re already doing and we don’t even
think about it. It was like, how do we maximize the content?

Mike: Yeah, so the service that
we’re using — you and I are using right now — to record this
conversation can easily be adapted to generate all of those different
forms. It’s just a matter of the tool manufacturer, you know, the
service provider, putting it all together. I mean, all of the tech
exists. It’s just in different places.

Alan: Well, that is interesting
you say that because our new business model — and then we’ve been
working on this for about six months, trying to figure it out — is
actually to build a centralized hub that allows all the different
startups to tap into one central hub, standardize their product
offerings to a certain level that is commensurate with doing business
with corporates. And then that way, they can do business with
corporates without having to pitch everybody. You have one
centralized platform for learning and any technology that’s invented
as we move forward — whether it’s AI or VR, XR, or whatever it is —
can just plug right into this platform. And that way companies don’t
have to be constantly on the lookout for new technologies. They have
one platform with all the new technology always there. And it’s
almost like an Amazon model where we take a small percentage and
go from there. But the idea with that is that we couldn’t possibly
build it all ourselves. Even if it was just 360 video, if we just
focused on 360, we couldn’t possibly build all the tools necessary to
deliver that and keep it future relevant. So we’re building the
platform that other startups, other smarter people around the world
who are constantly developing new tools, can plug into. I think that
will future-proof learning for the long term.

Mike: Yeah, it’s always great to
put a platform together where you’re building on the amazing work of
other people. It certainly is something that has shown its value in
lots of different applications. Just take the XR space. I mean, look
at how many people have now banded together to try to get some
standardization, right? Whether it’s in the tooling or the playback.

Alan: OpenXR.

Mike: Yeah, I mean, that’s
definitely the way to go. It’s difficult to get this stuff to go
quickly, because when you’ve been building tools for a very long time,
you’re trying to add the next most relevant
thing for your customers. And as you describe very well, this is a
big investment, to create a better platform that’s more powerful to
be able to have things plug in to do this autogeneration. But it will
happen. It’s so obvious that that’s the missing piece for what we’ve
been trying to do.

Alan: Yeah, you’re absolutely
right. I just kept thinking how can we possibly build all the tools
that are gonna be needed for learning? It’s impossible. You break
your brain on it, you’re going, “Okay, well, this…” You
end up just crying in the corner, shaking a bit.

Mike: Yeah, well, every
entrepreneur, and any size company that has
big dreams like this and really wants to make it happen, realizes
very quickly that you’ve got to call on the community, and do it at
that level. So I’m sure that you’ll have a lot of success in pulling
people together for that reason, because it is the next big thing for
us to work on. And for me, throughout the book I’m just trying to
show people that there’s more to it than this, that things are
actually going to invert. So we will no longer be spending all of our
time on the tools, because right now you and I and everybody else
spends an inordinate amount of time on our tooling. Whether that’s
Twitter or Word or After Effects or ProTools, whatever it is you’re
using. People spend so much time in the creation phase and we can
help so much. You know, there’s so much more to be done from the tool
side to help you create even more forms. And then on the playback
side, now, the playback mechanisms — as we talked about earlier —
will get so smart they’ll realize what the best form, what the best
medium is to communicate to you at that particular time.

Alan: Yeah, and I think one of
the points that you make is making things contextually relevant, you
know, meaning when you’re looking at a space, you get the data that you
require. And then if we fast forward maybe 10 years — let’s take 10
years out, put our crazy hats on and look out 10 years — we should
be able to just think something and the information appears to us. We
shouldn’t even have to talk to it, it should just appear as we need
it in context. There’s a list of–

Mike: Which is exactly how your
brain works today, right?

Alan: Yeah.

Mike: We already know how to do
that. That’s sort of my point, too: we’ve been building
mechanisms for people to create things in the way that’s convenient
for us, the tool builders, not in the way that’s convenient for
people. And as an industry, experience designers have gotten better
and better over the last decade at trying to make things more
natural. That’s where a lot of the great work with Alexa and Siri and
Cortana and those types of agents has come from. And we will close
that gap. And I don’t think it’ll be 10 years. But you’re right. At a
certain point, you will be able to conjure whatever the information
is, whether by thinking or by speaking or by gesturing. And it won’t
be as difficult as it is today.

Alan: It is difficult today.
It’s a pain in the ass. You put a Hololens on, it’s great. And then I
have to point out something, and click it in a certain way and– as
we move to eye tracking, I think that will add a whole new element.
Being able to collect millions of people’s information about their
eye tracking, what they’re looking at, that will create this
contextual loop of information, meaning we collect information, we
study what people were actually looking for because, of course, it’s
going to make mistakes when you ask for directions to X and it gives
you directions to Y and then, of course, it gets corrected. So it’s
kind of like autonomous vehicles in a way that when one car gets in
an accident, every car learns to get better. And I tried to explain
this to my brother and he couldn’t believe me that within five years
we’ll have long-haul transport trucks on the road that just drive
themselves and don’t have anybody driving. Because I said, “look,
every time a car gets in an accident, you know, with a person
driving, nobody gets smarter, nobody gets better.” That person
just goes, “Oh shit, I got in an accident.” But when an
autonomous vehicle gets in an accident, every single car in the fleet
gets updated with new information on how to prevent that from
happening again.

Mike: Exactly. That’s the new
network effect, right? We saw it first with Moore’s Law, with
hardware and electronics. Then we saw it with social media groups
growing, and the power of social. But now, you’re right, with these
networks, the interconnected neural networks that are always learning,
the machine learning algorithms always running in the background, we
are able to get smarter all the time, to everyone’s benefit. And the same
holds true for information, whether it’s for education or in the
enterprise. That kind of stuff is super exciting because all of a
sudden something that was in isolation is now connected. And not only
is it connected, it’s getting better or more correct or more accurate
or more clear by the network effect.

Alan: Absolutely. And it’s every
day improving. I’m actually speaking with one of your colleagues, Dan
Ayoub from Microsoft Education, at the Orlando Science Center. And
we’re talking about how extended reality or XR is transforming
education and training globally. The work that you guys are doing in
the Garage, you’re probably working on things that won’t see the
light of day — or may never see the light of day — but they’re
building foundations for things that will come in five, ten years. So
what does your roadmap look like? Everybody wants to
know what we’re gonna do on the 3, 5, 10 year roadmap, 15
if you can look out that far. When will we wear glasses? How will they
work? How will it all
work together? Where do you see this going in the next five to ten?

Mike: Yeah. Well, clearly, you
know, all the trends that we see now will accelerate. So I’m a big
fan of how Ray Kurzweil studies the acceleration of our technology
trends. And so I do think everything that you notice today will be
sort of perfected within the next few years, meaning we will be able
to have multi experience or multi-person experiences that are
seamless, right? And just feel right. Whether it’s glasses, or
headsets, or earbuds, or some new type of mobile, or like whether
it’s a watch, or some kind of other wearable device. The technology,
just as it always has, will continue to get smaller, be more
embedded. I do think that we are going to get to a point very quickly
where the wearable aspect of things will become more important. So
right now, we carry our laptops, we carry our mobile devices and we
carry our headsets even, for that matter. And we will sort of
cross over to the point where it becomes a more normal and regular
part of our wardrobe, like the kinds of things that we will always
have with us. So I think that’s–

Alan: Yeah, I have a pair of
North glasses, actually, and they’re pretty cool.

Mike: Yeah, yeah. Well, just
imagine, there are just lots of people with these now. It’s like my
conversation with someone the other day about the Apple Watch. I
think it’s great, a nice piece of work, but it’s created an
interesting side effect, which is, you know how when you’re in a
meeting — or at the dinner table for that matter — and
someone pulls out their phone and they start fussing with it and
you’re sort of put off by that?

Alan: It sucks. Stop doing that.
It’s a pain in the ass. You’re disconnecting from the conversation
you’re in. Sorry.

Mike: Right, right. So the same thing that happened with
phones is now happening with watches. So people in meetings are
messing with their watch. You know, disengaging from the
conversation. And now, of course, they’re getting a gazillion text
messages. You can make phone calls from your watch. So, it’s great
tech, but it has created a social side effect that was unintended,
perhaps, and maybe misunderstood. And that will keep happening, right?
It will keep happening with glasses or, you know, your shirt or
whatever that might be, like a bracelet, whatever that device is that
we will use to create and communicate and stay in touch, if we don’t
get better at what we talked about earlier, which is the interaction
with other people. We have to make it feel like you’re interacting
with a person, so it’s more natural to the people around you. And
that’s the other part of the equation. It’s one thing to interact
naturally with your phone, watch, glasses, whatever. But it’s
another, as you talked about, to be doing that with other people
around you. And that’s where we as experience designers have to get
better at taking that into account. We have the sensors. We can tell
that there’s other people around. We can tell where you are. We can
tell if you’re moving or stationary. It’s all those factors that we
sort of pull in. But we really need to understand the context better.
And I talk about that a lot in the book, too, where it’s not enough
to be good at communicating clearly or to get you the right
information. It’s within the current context, meaning there’s other
people around that you’re bothering, or not paying attention to, or
disengaged from. And that’s a huge sort of area for us to explore and
get right as we go into this, as you’re talking about the next three
to five years. That’s where we’ve really fallen down, and we have to
figure that out.

Alan: You know, what’s really
interesting is North just pushed an update, and the glasses
recognize conversation and won’t show you — unless you click the
ring — any notifications when you’re having a
conversation with somebody.

Mike: That’s great. And there’s
two sides to that. Well, I wish they did tell me that this is not
actually the person I thought it was, right? I love that sci-fi
scenario where your glasses tell you who you’re talking to. We have
to get that balance right. It’s like, it’s good. Like, that sounds
fantastic that they realize “You know what? I should be present
when someone’s talking to me or I’m in a group of people.”
That’s fantastic. But there is always more to that. Maybe I did
actually just call that person the wrong name and maybe my glasses
could tell me that that’s not actually them.

Alan: Yeah, I think the North
glasses, they figured out– my daughter, well, she was 14, but when
I showed them to her, she said “These are really beautiful. They
get an 8 out of 10 for fashion, but a 2 out of 10 for function.”
Because the field of view is so small at 15 degrees, and she had to
wiggle them around to get them to work. And really, at the time that
she tried it, you could only check your messages and the weather.
Now, you know, you can change your Spotify and you can order
an Uber. There’s all sorts of things you can do. But the mouth of
babes is very, very important to listen to, because they got the form
factor right. I mean, they’re a pair of glasses that are light and
they sit on my head and they look great, but they have very limited
functionality.

Mike: Yeah, that’s another fun
thing that I think you’ll get to in the book, where I talk about our
desire as designers and technologists sometimes to be clearly
beautiful first, rather than being beautifully clear. As your
daughter said: come on, fashion first. It’s got to look good. You
don’t want to be wearing something that looks like a pair of ski
goggles on your head. But the way that we go about that is we have to
have the other part. It has to be functionally — as she said —
functionally there. But it does need to be appealing. It does need to
be something that we want to either wear or possess or interact with.
And there’s a balance, always.

Alan: I think Snapchat’s done a
good job with their camera glasses. The first pairs, there was a huge
demand, they got this pent up demand. And now they’ve come up with a
new pair. They’re even sleeker looking. They’re always looking for
the forward fashion. But to your point at the beginning, when it’s
just a pair of Ray-Bans that I can just buy off the shelf and they
just have these capabilities built into them. And then you can buy any
frame you want off the shelf that fits your face, not just two
designs or one design that may look great on somebody else, but maybe
not on me. I mean, I like my North glasses, but I don’t wear glasses.

Mike: Yeah. You don’t have to be
a futurist to predict that that will happen before too long. I mean,
everything’s getting faster and cheaper and smaller. Always does. And
it will continue in this particular industry. And so we will get to
that very sleek piece of eyewear, or watch, or bracelet, or necklace,
or whatever the case may be. That will happen before too long.

Alan: Have you tried the Nreal
glasses?

Mike: No, I missed those when I
was at South by [Southwest].

Alan: They’re really good, big
field of view, lightweight. They’ve offloaded the processing power to
a phone, I guess; they’re wired through USB-C. But the form factor is
like a pair of glasses. And if you’re walking down the street and you
saw somebody wearing them, you wouldn’t actually know that they were
AR glasses, very similar to the North. They’re just very incognito.
The only difference is they have a huge field of view compared to the
North glasses. And they have absolutely no apps or anything
available. So, it’s more of a developer kit, and they’re trying to get that going.
But I think that’s where Apple, and Google, and even Facebook have a
massive advantage over companies, even like Magic Leap with 3 billion
plus in funding. They don’t have the developer ecosystem. And I think
that’s where Apple is really going to shine because they’ve
introduced ARKit, and Google’s introduced ARCore. And in my
opinion, those are the training wheels to true spatial computing.

Mike: Well, let’s not forget
about Microsoft and our incredible developer community and all the
great work that’s been happening with mixed reality. And in the
Hololens development kits. Especially when you look at our focus on
the enterprise, I mean, we’ve gotten very serious about business and
applying this technology to people getting their jobs done more
efficiently. And I think that there’s huge inroads that have been
made. And so something to keep an eye on, especially for your
interest in the enterprise and what XR is doing: the Microsoft Dynamics
group is doing amazing work, delivering some very, very useful
software for the enterprise. But this brings up another interesting
point for you to think about. Imagine, whether it’s for education
or for the enterprise, a room full of people, whether it’s
a classroom of 30 kids or three hundred at a lecture, all having
whatever, watches, glasses, and they’re all basically immersed. What
is that like? You know, if you’re in a business meeting in a
conference room with six people or, you know, there’s two of you and
three or four other people are remote and everybody’s using a
different type of immersion, you know, a different type of augmented
reality or mixed reality. What does that feel like? And so that’s
where we spend some time thinking about that type of interaction. And
what’s the value prop and what is the experience of being there and
having to deal with so many people, having so much at their disposal?
What does that do to the dynamics of our normal conversation?

Alan: I think– You know, I had
the opportunity — well, I was one of the first people to buy a
Hololens — and I got to try the Hololens 2. And hands-down, an
amazing device. But what I really found intriguing about the Hololens
is that they moved it from the devices department to the cloud
compute. And when they did that, it allowed me to kind of see the
vision for Microsoft as using these devices in the future, because
it’s no longer about just building a device that is cool. This is a
tool that runs on the cloud that is bringing real enterprise value
now, and that is connected to the entire Microsoft ecosystem. And
that, I think, is really powerful because that’s one of the problems
that I see with the other headsets that, you know, you put them on
and they’re great and you’ve got like 10 demos and that’s cool. And
every single one has the same problem. People go through the demo,
they go, that’s cool. And they put it down and never put it back on.
I’ve seen it time and time again. We’ve done thousands of demos
for people and it’s the same thing: They put it on, they look around,
they go “Oh, this is cool!”, they experience it, they take
it off, and they never want to put it on again. They don’t say, “Hey,
what else can I do?” And I think we need to kind of bridge that
gap. Whereas the path that Microsoft took with the Hololens was “We
don’t really care about cool.” I mean, the first thing I saw was
these aliens blasting through the walls, which was neat and it got my
attention. But really what it’s premised on now is ROI-driven
results. And I think that’s really the exciting part about this. When
you can go into a company and say, hey, by using this technology in
this way, you will save or make this much money.

Mike: Yeah. And I’m sure you’ve
seen this firsthand every day. The people who want VR or MR or AR to
become mainstream, that are sort of fixated on “What are the
sales numbers going to be? When are we going to break through into
mainstream?” They’re sort of missing the point, that these
devices and this technology are incredibly useful in their current form
for very particular tasks. And it’s that specialized nature that is
its true advantage right now. Someone made a great analogy for me a
couple of years ago. They said we don’t even think twice when we see
someone wearing a welder’s mask, right? We know exactly what they’re
doing and why they’re wearing a welder’s mask. And in a similar way,
that’s the value of using these headsets right now, there are very
particular specialized tasks that they are incredibly useful for. And
that’s what we should be thinking about and not fretting because they
haven’t broken into 500,000,000 sales, right?

Alan: Yeah, yeah, I think that’s
more of a VC investor mentality.

Mike: Well, but it’s also the
industry itself. There are many people who get down on VR, whether
it’s the media or the people that are actually building these things,
because there are not, in some cases, breakout, you know, huge numbers
of successes yet. But the value they provide is more than paying for the
investment in these types of scenarios, and I’m sure you see this
with your clients and the people you deal with all the time in the
enterprise.

Alan: Absolutely. At the end of
the day, we started doing marketing things because at the beginning,
you’d go into a meeting and they’d say, “OK, well, who’s done it
before? How much does it cost, and what was the ROI?” You’re
like, “Nobody, a lot, and we have no idea. Still want to spend a
million bucks?” It was a really hard sell. And now when we go
and have these conversations, it’s a totally different conversation.
We go in, we do a quick demo, and then it’s about how we can drive
real measurable ROI, measuring the key performance indicators against
what you’re traditionally using. So we’ll set a benchmark and then
we’ll use VR and AR to deliver results and then we’ll compare the
two. And then based on those, you can either move forward or not. But
everybody moved forward, because the results are ridiculously
awesome.

Mike: Especially in education,
where you’re sort of focused right now, the difference is so dramatic.
And I’m sure that the KPIs will bear all this out over time. But you
can just tell immediately. If someone asks you what the numbers around
ROI are, you just ask a student to try this, and that’s what they need to know.

Alan: Absolutely. So what are
the most important things that businesses can do right now to start
leveraging the power of XR? I mean, you’re already seeing it across
many enterprises. But let’s say it’s a small or medium sized business
that wants to start getting into this. What’s the most important
thing businesses can do right now to leverage this power of the
technology?

Mike: Yeah. In the book, there’s
a chapter about surfacing the invisible, which I think is one of the
key benefits of any type of XR technology. There’s so much under the
surface that we never get to see. And so I think any sized business,
small, medium business or large enterprises can use the power of XR,
both AI and XR together, to surface these things that people don’t
normally see, whether that’s value, functionality, additional
information, levels of detail. There’s all the obvious things that
people always talk about. You know, you can use augmented reality to
show off some interesting facets of your product or service in the
sales cycle, certainly, but there’s more to it. And if you sort of
flip it over and look at how you run your business. There is so much
to be unlocked. You know, our businesses are complicated, whether
they’re small, a family owned business, or a medium sized
corporation, or a global company. We all have so much complexity and
there’s so much going on. Visualization — that’s one of
the key things that I work on — being able to visualize complex
systems and processes, to show people things that we know. We
form our own mental models and we sort of know how they work. But
this technology helps us see it, or for that matter be in it, in a way
that we’ve never been able to do before. You talked about painting in
3D earlier in the podcast, right? That is just mind-blowing to any
designer or artist, right? Being able to– It’s completely life
changing. There are very similar types of experiences in the
enterprise, when you can visualize how your process is really working
or how your sales cycle is not working or how your manufacturing
business could be so much better if it were to be this way. People do
this in Excel spreadsheets today. They do it in PowerPoint, they do
it in conversation. But rarely do you get to actually visualize and
experience with other people how this could really be different. So
you can play the what-ifs, you can simulate, you can do all kinds of
forecasting. So things that we all do in our minds today are now able
to be seen and experienced through XR. And the enterprise or small
business is a great place to do that. Certainly, like no question,
there’s easy wins on the sales and marketing side. But for me, and
possibly for you, the more interesting part is how we run our
businesses.

Alan: Absolutely. I have one
more quote I want to read from your book, and I think this sums it
up. “Through the combination of artificial intelligence, spatial
computing and human ingenuity, we have the perfect storm of elements
to create the most compelling immersive storytelling tool for the
informational aspects of our biggest global challenges. Let’s all do
our part to learn how to best utilize these new approaches and
technologies to tell these stories of hope and change before it’s too
late. And make no mistake, the hour is getting late.”

Mike: Amen.

Alan: I think we’re going to end
this conversation on a high note. The world is on fire, we know that,
we’ve done unspeakable things to our planet. We’re still kind of
worried about US versus China versus who gives a shit. We’re all on
this planet. It’s on fire. Let’s fix it as humanity and figure it out
together. There’s enough wealth. We need to use these technologies to
foster new innovations that can stave off the existential risks to
humanity.

Mike: Yeah. It’s clear to me, and
I think to anybody else, that we do need to get mobilized to tell the
story more clearly. You know, regardless of where your politics or
your beliefs may be, there are some important things that we need to
communicate more clearly than we ever have. Because once people
realize that we can do something about it, they will. And I think
that the kids they call Gen Y and Gen Z, they’re doers. They
want to fix the world, they want to really focus on this. And they
will be the ones who probably use this technology in the best
possible way to go save the planet.

Alan: Amen. I have nothing else
to say to that, my friend Mike. It has been an absolute honor
speaking with you today. If you’re still listening to this
podcast, thanks for listening this far. The book is “The Age of
Smart Information: How Artificial Intelligence and Spatial Computing
Will Transform the Way We Communicate Forever.” If you’re going
to do anything in this industry, this book is a must read and I
highly recommend it. Go get it on Amazon or you can go to
theageofsmartinformation.com. And if you want to learn more about
Mike, you can visit futuristic.com, or you can Google the Microsoft
Garage for all the cool, crazy things they’re making to make our
world better in the future. Thanks, Mike. I really appreciate you
joining me.

Mike: Thank you, Alan. Take
care.