How an end user experiences a new
mode of technology is almost as important — if not more — than how
the tech works on the inside. Because, let’s face it: it could be an
amazing bit of code, but if the human mind does not find it simple or
easy to use, it’s a non-starter. Austin McCasland is a UX prototyper
working with Google, and he and Alan chat about the finer points of
UX design for AR.

Alan: Today’s guest is Austin
McCasland, a designer and developer in immersive computing. Austin
has written multiple courses across VR and AR design and development.
He’s designed and developed Paint Space AR, named by Apple as one of
the best apps of 2017, and currently works full-time at Google as an
AR interaction designer. Austin employs a user-first synthesis of
technical understanding and UX design to create effective and useful
products in emerging technology. I’m really, really excited to
welcome Austin to the show. Austin, welcome to the XR for Business
Podcast.

Austin: Thank you for having me,
excited to be here.

Alan: If you’re looking to learn
more about Austin, you can visit his website at austastic.com. Let’s
dive right in; you work at Google as a UX designer — user
experience, for those who don’t know — walk us through what you do
on a daily basis.

Austin: I’m a UX designer, and I
do a lot of prototyping. Basically what I do is, I think about
software problems from a user-first perspective. Thinking about what
are things that people are doing that they need solved — or what are
things that they’re doing that they could do better with technology,
whether it’s a standard app, or with AR/VR — and then I basically
go through a process of iteration to come up with features for
products. And specifically, in my current role, I’m on a prototyping
team that looks at how we can leverage spatial computing across all
these different types of use cases. So, I see a range of use cases,
and explore what’s possible from a product perspective.

Alan: So, what are some of the
use cases that you’re seeing in your day-to-day business that you’re
prone to working on? Or what you’re really attracted to? What are
some of the best use cases so far?

Austin: The things that I look
out for when there’s a use case that could be particularly
well-suited — and I’ll speak mostly to AR here, although I can also
speak to VR — but in AR, when there is a problem that’s already
spatial in nature, it’s usually a pretty good indicator. You’ll see
this with try-on apps where — and I use this term more broadly to
also describe apps like IKEA and stuff — let’s say there’s this
problem of, “I need to see what something looks like in my space.”
AR is really well-suited to help with those types of problems. Or,
“what does something look like on me, in physical space?” Now,
those aren’t the only things. But any time that your users are doing
something out in the real world, or they need information about how
something would be in the real world, that’s usually a pretty strong
signal that you can lean into AR to provide some value there.

Alan: Let’s give an example. You
mentioned IKEA, and what they’re doing with their Place app. What
they basically allow you to do is take your phone, put a digital
piece of furniture in the exact size [you want], so you can see what
your couch is going to look like. But this kind of transcends all
types of visualizations. For example, I saw one where Coke did a
visualizer to show retailers what the new Coke machine would look
like in their stores. And it’s not just sending them a photo and
marking it up. It’s real-time, and that’s — I think — a really
powerful tool for sales.

Austin: Yeah, especially because
one of the key differences from standard documentation — like in that
Coke use case, for example — is that you can get a sense of scale in
a picture, but when you’re doing
something and it actually appears to be there — you can walk around
it, you can hold other things up near to it — it gives you a much
better idea of scale in general. If you watch the Google IO keynote
this year, one of the examples they did was Visual Search, where they
just put a shark on stage, and it’s as big as the shark would be.
That example follows through to these physical products as well. It
doesn’t surprise me that Coke did that. I think it’s a really great
idea.

Alan: It’s amazing; I think
we’ve only just really scratched the surface of this. From what we’re
seeing, the larger brands are all experimenting with AR — and I’m
assuming you’re seeing this on a day-to-day basis — where there’s
just a lot more experimentation being done. It’s finding those killer
app use cases, and see-what-I-see — or virtual try-ons — are really
big.

I actually wrote an article called
“Augmented Reality’s First Killer App: VTOs,” or virtual
try-ons, and everything — from sunglasses, to footwear, to
necklaces, jewelry, contact lenses, shoes — there’s so many ways to
use AR to try things on. And I think the next big one is going to be
clothing, which I know Google has made an investment in. Clothing
virtual try-ons, but more for in-store. Are you seeing brands
starting to roll this out, or is this something people are just
testing with now?

Austin: I am not as familiar
with that specific work. What I can say is that the verticals I see
having the most immediate, effective, actionable use cases are
probably fashion and beauty, for all those reasons that you just
mentioned. So, when I say “fashion,” that could be anything; from
a retail experience when you’re in a store, or it could be a brand
experience that you don’t need to be in a store to have. In beauty,
there’s these face filters. There is obviously a big opportunity
there. And you see people like Sephora have these
try-it-before-you-buy features in their apps and things like that.

Alan: Yeah, there’s a company
based in Toronto called ModiFace, and they were purchased just
recently — about a year ago — by L’Oreal, for their virtual try-on
of makeup. They had very, very accurate facial tracking, and L’Oreal
bought them for an undisclosed amount. So, it’s happening. There must
be value being created there.

The other thing you mentioned about
furniture; furniture is another big one that we’re seeing. What are
some of the UX considerations with that? Because once you’ve got this
app on your phone, and you want to see a couch, what are some of the
things that you consider when you’re designing something like that?
Or, what would you consider?

Austin: That’s a good
question… I think what you see is that it’s really easy for AR
applications to fall into two categories. One is something that is
very useful, but infrequent. Furniture shopping is one of those
examples, where I’m not shopping for furniture on a daily basis. But
when I am shopping for furniture, AR can be like very, very useful in
that circumstance. The second one being sort of the inverse of that,
which is things I do every day, but the struggle there is to get it
to be more useful than an existing app. But AR plays really well in
these sort of episodic scenarios, where I’m deciding on furniture,
for example. What I would start thinking about is the inbound funnel,
which is always super important for augmented reality, because currently,
the best AR experiences have a dedicated app. WebXR just isn’t quite
there yet. It’s getting close, but you can’t go to any browser and
just have these web-based AR experiences.

Actually, in the case of furniture, you
may be able to now, because these model viewers are basically here
— and when I say “model viewer,” I mean being able to put a 3D
model at scale into your space — let’s assume we’re IKEA. So, we have
an app that people might care to download. What I think about is,
“we’ve already gotten them into this app; now what are they
going to do? What do they want to see?” They probably want to
see if stuff fits in their space. They probably want to see if it
matches with the existing furniture that they have. I would start to
look for opportunities there. So let’s say I’m following the journey
of this hypothetical user, that wants to get a new couch that matches
with the other furniture in their living room. I would start to look
at the tools that are available both inside and outside of AR. I
consider AR to be both the computer vision input to AR, as well as
the output. I would start to look for, “are there ways that we can
assess the style of someone’s room from a computer vision
perspective, and then provide them with a better list of stuff that
will probably already look good?”
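A toy sketch of that last idea — ranking a furniture catalog against a room's detected color palette — might look like this in Python. The room colors, catalog items, and scoring function below are all invented for illustration; in a real system the room palette would come from a computer vision model rather than hard-coded values:

```python
from math import dist

def palette_similarity(room_palette, item_palette):
    """Average, over the item's colors, of the distance to the nearest
    room color. Lower scores mean the item blends in better."""
    return sum(
        min(dist(color, room_color) for room_color in room_palette)
        for color in item_palette
    ) / len(item_palette)

def rank_catalog(room_palette, catalog):
    """Sort (name, palette) catalog entries by how well they match the room."""
    return sorted(catalog, key=lambda item: palette_similarity(room_palette, item[1]))

# Hypothetical data: a beige/brown living room and three products.
room = [(210, 200, 180), (120, 90, 60)]
catalog = [
    ("neon-green couch", [(57, 255, 20)]),
    ("tan couch", [(200, 190, 170)]),
    ("walnut coffee table", [(110, 85, 55)]),
]
for name, _ in rank_catalog(room, catalog):
    print(name)  # walnut coffee table, tan couch, neon-green couch
```

Euclidean RGB distance is a crude stand-in; a production recommender would likely compare in a perceptual color space and weigh style features beyond color.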

Alan: Wow. That’s like
next-level retail.

Austin: Exactly. And I think
that all the AR frameworks that are out there — as well as the CV
frameworks that are publicly available — are super powerful, and
we’re at a stage right now where magic is becoming accessible to
every developer. One thing I always tell people — because I give a
lot of workshops and I try to get people involved in AR — it seems
really intimidating and hard. It seems like magic. Like you’re going
to need a team of super geniuses to do this incredibly difficult
task. But in fact, the APIs are so easy now, that you could get a
development team up-and-running in like a week or two, and be doing
stuff like analyzing the room for style, even if it’s just getting
the color palette. Stuff like that.

Alan: Wow. So, the barriers to
entry for businesses are plummeting. But if you’re a company starting
out — this is a good question to ask, I think — let’s say you’re
Bob’s furniture store, and you’ve got an app already, and it allows
you to search the catalog and select the things… but AR is not
built-in. What would their first steps be? “Do I use VR? Do I use
AR?” It’s a bit confusing. What do you think are the first steps for
these businesses? Because bringing together a
development team in a week is one thing, but most of these companies
have no experience in developing anything. Would you recommend they
find a developer to work with? Or work with another company? What are
your thoughts on that?

Austin: A couple of things to
pick apart there. The first is, how do they decide AR, VR, or both?
If the real world context matters to your users, or it needs to be a
thing that happens on the go, then AR is probably going to be a
stronger, more compelling fit. Just because you can’t incidentally
use VR; you need to be planning to do it as a consumer, because you
don’t carry the headsets with you. But if you need to have a really
detailed, highly-immersive experience — that is, you’re just
focusing on the virtual content — a good example would be if you’re
training someone how to service a new type of valve. That might be
better suited in VR, because it’s hands-free, and they can have this
really focused experience there.

Moving forward, “how am I empowered
to do this AR stuff, or VR?” What I would say is, figure out a
general idea of where you’d like to go, and then speak to someone
like a developer. But less to see if they can be your developer, and
more to figure out the difficulty level of what you’re proposing.
There are certain problems in computer vision that sound like, “of
course we can do that,” but they’re actually quite difficult; and
there are other problems that sound ridiculously complex, but that
actually might be fairly simple.

Alan: Can you give examples?

Austin: Uh,
yeah. Here’s a great one: if I told you that I could, like,
perfectly apply new makeup to your face — when you moved, it was
projected on you, it was exactly like you — that might sound really
hard, but that’s actually super easy. With all the new face APIs that
we have, you don’t need to spin up a big effort to make that happen.

But on the flip side, I can then tell
you; if you want to detect when someone is, let’s say, confused? Or
smiling? Or something like that? It’s possible, but you’re gonna have
to spend a lot more time developing that. And it’s just because, in
that particular circumstance, the way that the face moves when you’re
experiencing emotions is not as apparent. Same with eye tracking;
it’s very difficult to track someone’s gaze through a phone right
now.

There’s these nuances in development
that mean you probably want to talk to someone to see how hard what
you’re proposing is. That’s a good first step. But let’s say you get
their feedback: there’s two things you can do. If you’re feeling
really scrappy, you can do what I did, which is learn some
development and get it started. Unity is a tool that we use a ton,
and it’s cross-platform. So if you’re going to be doing a mostly
AR-dedicated app, I would look into that. And there’s tons of amazing
tutorials and content.

But let’s say you’re not looking to get
that scrappy. There are some — I can’t remember now off the top of
my head — but there are companies that will do AR/VR work for you.
However, one thing that I think a lot of people don’t realize is that
a lot of the core skillsets that you would want for making an AR or
VR product are — from a development perspective — game
development skills, because the people in that field understand how to
work with 3D things, how to work in 3D space. And they understand the
pipelines that are needed to get a 3D model into the world, and
they’ll be able to ramp up super quickly, even if they aren’t already
AR/VR people. If you can find an interested game developer — or a
few of them — they can probably get AR or VR stuff started very,
very quickly, because it’s just downloading one of these frameworks
like ARCore or ARKit.

Alan: You get to see a lot of
different use cases come across — I would assume — in your
research, and also just working at Google. I’m sure you see a lot of
different things at conferences and stuff. What are some of the best
use cases you’ve seen of XR for business?

Austin: I think some of the most
promising use cases right now in VR — because when we say XR, we’re
talking about AR, VR, and the weird, hazy space in between with
pass-through cameras and Magic Leap and all that — where I am seeing
there being a lot of traction, and things that I feel are really
compelling from a business perspective, a lot of them are in the B2B
space. And when I say this, it’s for headset-based experiences. This
could be for something like a Magic Leap or an Oculus or something
like that. And these B2B use cases… the thing is that, consumers
are fickle. And these headsets have an inherent cost; this barrier to
entry. These teams are trying to push the prices low, but inevitably,
if it’s difficult for me to get a consumer to even download my app?
It is much, much more difficult for me to convince them to buy a
whole device.

However, from a business perspective,
it’s really just like a return on investment thing. If I can provide
an application, and let’s say it is to train mechanics on how to
repair this thing; if I don’t have to fly them out on-site to the
actual equipment, or if I can demonstrate that they are trained
faster, or they retain information better, which makes them make
fewer mistakes, it’s really easy for the business to say, “yeah,
we’re just going to buy 10 headsets and do this.”

Alan: Yeah, we’ve seen that
quite a bit.

Austin: I think training is
powerful in VR, in these circumstances. And it’s interesting, because
at first I thought, “oh, wouldn’t it be awesome if I was a car
mechanic, and I could just automatically see where the part I need to
fix is?” But the thing is that, it’s mostly consumers that need
that. The people who are already experts in their field; they already
know what to do.

Alan: They don’t need to know
where the oil goes. [laughs]

Austin: Exactly. But one thing
that I have seen that is really interesting are asymmetric,
phone-a-friend type experiences, so that you can have — an example
could be, let’s say we’re in a blue collar situation, where someone
is out repairing something in the field, right? And they may not be
the expert, but they’re the one who’s out there, and something’s
going wrong. Being able to overlay information for those types of
people; I can see what they see, and I’m pointing things out to them.
That can be really powerful, to basically make every employee that
person has an expert. And for some of those, you can even get
away with using Google Glass and stuff like that.

Alan: So, remote assistance and
see-what-I-see. It’s like having an expert leaning over your
shoulder.

Austin: Another area where I see
a lot of B2B traction is in architecture and construction planning.
This is — again — with the headsets; there are some AR
experiences, but a lot of architecture firms are bringing clients
through their proposals in virtual reality now. Just because it’s
such a selling point, and – frankly — a competitive advantage for
your architecture firm, to be able to actually let people step foot
in their building before a single brick is laid.

Alan: Yeah, I’ve seen a number
of different use cases. One of them in particular was a hospital,
where they rendered it out in 3D, and put everybody in the VR
headset. And then the nurses, for example, got to sit at the nurse’s
station. As they were sitting there looking around, they realized
that there was a wall that was in the way of communications, and
blocked the flow of everything. And this is before they’ve even dug
ground. So, they were able to catch this line-of-sight problem really
early.

Austin: At one of my previous
companies that I worked at, we were exploring a lot of… it wasn’t
architecture, but it was in the sort of B2B use cases for spatial
computing. If they had built that wall, and then they had to tear it
down and fix it, or something difficult happened that caused someone
to get more injured, or not get help? The cost is immediately
justified. That’s why it’s so powerful; because, as a consumer,
you’re not going to save money by getting a VR headset. You might be
able to do certain things better. You might have these awesome
experiences, and those might have value to you. But in the right
circumstance — in a B2B perspective — you can actually say, “you’re
wasting money if you don’t engage in this VR content,” because
procedural non-adherence or mistakes get made. And that’s really easy
to justify, if you’re doing that B2B stuff. It’s got a lot of legs in
the B2B space.

Alan: We’ve kind of come full
circle. We’ve talked about using AR on the mobile phone devices to
showcase virtual try-ons, and then we take it up a notch to the
virtual reality glasses, or augmented reality headsets, where you can
train people in not necessarily difficult-to-train scenarios, but
allow them to have much more practice than normal because they can
repeat it, they don’t have to travel to do it.

Let’s say your training
takes six months to certify somebody as a gas fitter, for example; six
months to get them up-and-running. You can probably shorten that time
dramatically, reduce the number of errors, and then — using that
see-what-I-see or remote assistance feature — virtually eliminate
all of their potential errors. If there is an unknown, they can call
for backup. Are you seeing this being rolled out at scale, or is this
still in proof-of-concepts? Or, what are you seeing?

Austin: So, I can’t get into too
many specifics on everything that I’m seeing. But I will say that
there are many proof-of-concepts out there, and there are things that
are being rolled out and tested at scale, for these high-fidelity,
headset-based experiences. The interesting thing is when we talk
about at scale — and this is why I peg these headset-based
experiences as probably a better fit for B2B, and the AR phone
experiences as B2C — it’s because the scale that we’re talking about
when we’re talking about the headsets, is such that, maybe you have a
business model where you say, “get 10 headsets, and you can use our
software to train your employees to do this thing.” And then
essentially, as a business owner, you’re not working on the headsets,
and once the software gets to a certain point, what you’re really
doing is almost like you’re providing a service. You’re providing
this training service through your software — almost SaaS-y. The
scale there is like, “I have probably fewer clients, but they’re
whales. I’m doing bigger deals.” There’s big money to be made with
each client you win. So, doing something like that at scale, you
might only have 10 major clients, and that’s a big deal.

Alan: No kidding.

Austin: In the B2C stuff with
AR, the reason why this is powerful is because everyone already has a
phone. Now, you can actually make a product that’s targeted at pretty
much everyone in America right now, right? Because everyone has a
smartphone. And not just America. There are some places where people
have lower access to phones, but for now, we’ll just say everyone.

Alan: Well, I read a stat
yesterday that — as of last week — there were 400 million Google
ARCore-enabled devices out there. That’s a pretty good scale; 400
million devices. Then Apple announced their ARKit devices; there’s
about 600 million. So we’re at about a billion smartphones that have
AR-enabled superpowers built into them.

Austin: Exactly. And that’s a
big deal. If you can convince one tenth of those people to spend a
dollar each, then you’re doing pretty good. Or even a hundredth of
that many people, to spend any money at all. The challenge in the
consumer space is that consumers are fickle. With B2B, you can make a
really reasoned and logical return-on-investment case to your
clients. “You’re gonna save money. This is good for both of us.”
And you land fewer of those clients with more money. If you’re an SMB
person, considering developing something for consumers, there’s still
— like you’re saying — a billion devices. That’s a lot of
opportunity.

But people are more fickle, and your
go-to-market strategies, and your general approach to your product
are going to need to change to try to lure them in to use your app.
One of the things that I find is, in AR, little moments of… how do
I put this… flashiness — like a cool entrance animation, or
something beautiful or interesting happening — that can be really
powerful, and a huge draw. If you’re going into the consumer market,
understand that you might be spending some of your development time
working on things that are not your core value proposition, but which
help users feel like they’re having a high-quality, interesting
experience. Whereas on the B2B side, you don’t need to focus on that
quite as much, because you have a captive audience.

Alan: There’s some of these VR
communication platforms, where you can go in and you can work
together, and they’re not fancy. They’re not flashy. The environments
are decent, but they’re just there to get the job done. And they do a
great job at that. When we’ve built B2C apps and stuff, we realize
that consumers are fickle. They expect everything to look like it has
a million-dollar budget, because that’s what they’re seeing on a
daily basis on Facebook and LinkedIn. And Microsoft’s done a really
great job at sandbagging everybody by showing what the Hololens can
do and stuff, when it’s not even real. I mean, they just released a
video of Minecraft. Very, very well-edited. And then, when clients
come and say, “we want that,” you’re like, “well, that’s great.
That doesn’t exist. It’s all CG. I can make you a beautiful video
like that.” There’s a bit of a challenge between consumers’
expectations and what is possible to be delivered.

Austin: Yeah. I just spoke at
Google IO. The session that I hosted — with Diane Wong — was on
using augmented reality as a feature, and I think this is one of the
ways that you can circumvent that. I think for a lot of businesses,
the best way for them to leverage augmented reality is to not have an
AR app, but just have a feature in your app that’s powered by AR, and
consider augmented reality to just be another thing in your tech
stack. Just like you can have a back-end database that can supply
real-time information online. You also have AR that lets you have a
pass-through camera experience and understand the world, and put
stuff back into it.

Alan: I love that. I was
speaking to one client and they wanted us to build a virtual try-on
for their product, but they wanted a separate app. I said, “well,
are you going to be selling through your app?” They said, “no,
we’re selling through our website.” But then I said, “well, if
you’re selling through your website, then you’re going to hit a
button and ask people to: download an app; open the app; try on the
object; and then you have to go back to the website to buy it. You
just lost everybody, right across the board.”

Austin: Yeah.

Alan: Being able to be cognizant
of the consumer journey… if somebody is on your website, don’t take
them into another app to do something else. I love your idea of using
AR as a feature within a bigger app as part of the experience, not
the experience. Unless you’re Pokémon Go, in which case, go
nuts. This is the key takeaway. If you’re building something for a
consumer, make sure that you’re not just building AR for the sake of
AR; but, it serves a purpose within the greater potential of your
app.

Austin: I think that’s so
important. Actually, in our process — and my process — you always
just periodically need to be able to have a good answer to, “why is
this better to do in AR than not in AR?” And if you are unable to
answer that question? To be totally frank, the state of things is
that XR is emerging tech. The APIs are constantly shifting. There is
a smaller workforce that are experts in working on it. If you’re a
business that wants to do well, that’s somewhat risky, right? There
needs to be a payoff for that risk. If you can’t answer the question,
“why is this better in AR,” or, “why is this better in VR?”
Then there’s probably more traditional avenues that are going to be
easier for you, that are going to provide just as much value. But I
think what we’ve been talking about this whole time is, where are
those areas that are actually more compelling in a significant way to
your business with AR or VR than without it? Those are really the
sweet spots to keep a lookout for.

Alan: Got it; “where is it
better?” And we talked about some of them, where AR does make
sense; anything where you need to see something in context to the
real world. One of the ones I saw a long time ago was a paint one,
and it allows you to point your phone at the wall, pick any color you
want, and the wall would automatically change to that color. It was
kind of cool, but the other thing is, I saw another version of that
where you just took a photo, and it did that. It wasn’t real-time. Do
you really need it to be real-time? That’s another great question; do
these things have to be real-time? Or can they be from a photo and
redone? Because making it work real-time is a lot more difficult than
post-production, or doing an app that just finds a wall and sticks it
on.
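That photo-based paint app boils down to two steps: segment the wall, then recolor it while keeping the shading. A minimal sketch of the recoloring step, assuming some earlier segmentation pass has already produced a wall mask (the pixels, mask, and target color below are invented):

```python
def recolor(pixels, mask, target):
    """Repaint masked pixels to `target`, scaled by each pixel's
    brightness so shadows and highlights on the wall are preserved."""
    out = []
    for px, is_wall in zip(pixels, mask):
        if not is_wall:
            out.append(px)  # leave non-wall pixels untouched
            continue
        brightness = sum(px) / (3 * 255)  # 0.0 (black) .. 1.0 (white)
        out.append(tuple(min(255, round(c * brightness)) for c in target))
    return out

wall = [(200, 200, 200), (100, 100, 100), (50, 60, 70)]
mask = [True, True, False]  # third pixel is not part of the wall
print(recolor(wall, mask, (120, 180, 240)))  # repaint the wall light blue
```

The hard part in practice is producing that mask reliably, which is exactly why a single still photo is so much easier than doing the same thing live at 30 frames per second.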

Austin: You’re totally right. It
can often be hard to make a clear, reasoned assessment of why
real-time is better than not real-time, because it’s this feeling of,
“it’s really there, and you can really see it!” And there’s these
little things with how you move through space. But I actually
consider augmented reality to be both the in and the out. And I think
that those asynchronous operations, I would still consider to be AR.

I think one of the key things that I
have been coming to understand as I’ve been working more deeply in AR
is that you can have your camera understanding the world and
projecting stuff back out into it, but you can also have a successful
AR experience with either side, right? I can know stuff — like my
product catalog if I’m IKEA — and then all I have to do is just put
that in the world and it’s really cool. I can also just understand
stuff from my user’s perspective, and then have the output not be in
AR.

Let’s say we use that IKEA example
again, where I look around my room and it figures out my style. I
could take you into an experience where now, you’re just browsing a
list of all the things that are in your style, and there’s no AR
output. There’s this real flexibility there. I think when people
think about AR products, they’re like, “oh, what would I put
into the world?” But just look at Google Lens, for example.
Google Lens lets you get x-ray vision on stuff, so I can look at a
product and I’ll get similar products. The results often are not in
3D; they’re in 2D. That really starts to unlock what’s possible with
AR, when you think about it in that way.

Alan: Interesting, that’s a
really great way of looking at it. So, we’re coming near the end of
this podcast. What problem in the world do you want to see solved
using XR technologies?

Austin: That’s a good
question… one of the most powerful things technology does is
make us high-information people. We know stuff. The average person
knows more now than ever before. You’re a Google away from knowing
anything on any topic… except in circumstances where I need to
know information about the world around me. Like, I want to look at
someone’s shirt, and know how much that shirt costs. I have to do a
lot of steps to figure that out. And as we see the form factor of
spatial computing evolve, it would be really great if we got to a
place where we could have incidental XR experiences. And by that I
mean, I don’t have to take out my phone and go into an experience,
and I don’t have to go back to my house and put on my VR headset. If
I always have something… and if you look at where the money is
flowing, I do think that there’s evidence to support that we are
getting there.

Alan: Yep. Magic Leap just
raised another $280 million.

Austin: Yeah, exactly. Here’s
why it’s important to get in now: you want to understand the space.
People say “fail fast!” Get all of your fast failing done now,
so that when we’re able to have these incidental experiences,
where I could just be walking down the street without having to
intentionally go into anything, I can have these passive things.

Alan: I actually came up with a
crazy idea for a UX for exactly that. When you’re wearing glasses, we
keep seeing this hyper-reality version of the world where marketers
have hijacked our senses, and there’s flashy stuff everywhere. But I
came up with an idea of — this, again comes down to the user
experience — if I’m walking down the street, I don’t want all these
things flashing at me, and lines that are like, just, craziness
around me. I want to know that, “okay, that building has some sort
of AR content on it.” It’s maybe highlighted or whatever. Very
subtle, but I choose when I want to see the highlighted spatial
computing information. But it should be just a natural part of my UX;
of my day. I’m like, “oh, I if I look at that sign, there’s an
extra piece of 3D content in that sign, if I allow it.”

Austin: I love that. And I think
a great proxy is, if you look at your phone, your operating system
doesn’t give you ads. You can go into experiences like apps and stuff
that might have ads, but you’re not going to make a phone call and
have someone trying to sell you something. And I think that these
spatial computing platforms are going to be similar, because no one
really wants that. Like, no one does.

Alan: [laughs] Yep.

Austin: And I think that
advertisers are aware of this as well. You don’t want to enter into a
space of negativity for the people you’re advertising to. It’s in
their best interest to be delicate in how they handle spatial
computing advertisements, anyway. I don’t think we’ll end up in that
hyper-reality future. I really hope we don’t. [chuckles]

Alan: I really hope so, too. Oh
my god, can you imagine? There’s a video on YouTube called Hyper
Reality. It’s seven minutes of what happens if marketers are allowed
to take over our senses. It’s kind of awful.

I think these shows like Black Mirror
have really opened up people’s eyes to the possible negative
consequences of this technology. And everybody that I’ve had on the
show — and almost everybody that I talked to in this industry — is
aware of the negative consequences. But we’re all pushing towards a
more inclusive future.

Austin: Absolutely. And people
who are specialists in AR and VR are fewer in number, but that is
also changing. So many courses are popping up, and the field is by
no means oversaturated. Even compared to two years ago, there’s
never been a better time to find people with AR
and VR expertise to help execute on your vision. The number of people
who are competent in it is growing every day, through these online
learning programs and otherwise.

Alan: You know what? I keep
saying this; I’m shouting it from the rooftops. The timing is now.
We’re about to announce something at AWE this year, which I think
will actually contribute to the success of this, but I can’t talk
about it on this podcast–

Austin: Fair enough.

Alan: –I
don’t know when this one is going to air! Is there anything else you
want to leave the listeners with before we close off?

Austin: It bears repeating one
more time: in the early days of AR, everyone came out with AR apps.
Even the app that I made — Paint Space, which won some awards —
it’s nothing but AR. In general, we’ve seen that be a tricky
proposition. But from a business perspective, think about AR as a
technology that can allow your users to do things better, or can
empower them to do something they were not able to do before. So
don’t think about it as this separate thing. Think about it as
another way to engage with your users on your applications, and with
your business.