Princess Leia’s
desperate holographic plea to Obi-Wan Kenobi might have been a vision
of the far-flung future in 1977, but today, volumetric capture is
making that a reality. Using cameras and the AR cloud to map and
replicate an object in three-dimensional space, volumetric capture
has lots of practical use cases – Raj and Alan talk about a bunch of
them.

Alan: Today’s guest is Raj
Puran, director of client XR Business Development and Partnerships at
Intel. Raj is a 25-year veteran of the semiconductor and software
industry with Intel Corp. He is currently director of Business
Development and Strategic Partnerships, focusing on growth areas of
compute in virtual, augmented, and extended realities. Raj has spent
the last four years of his tenure on building business opportunities,
use cases, experiential marketing with partnerships in location-based
entertainment, museums, education and other commercial XR segments.
Prior to moving into business development, Raj held several positions
in IT systems engineering, data center engineering, information
security, network and cellular IP development, ERP business
engineering, and healthcare solutions development. Raj leads
opportunities for his customers and partners to utilize exciting and
intense compute power in the immersive technology and XR landscape,
through a collective ecosystem of compute-focused processing, storage,
sensing technology, data processing, content creation solutions, and
new innovations in the areas of wireless VR, AR, 5G, artificial
intelligence, volumetric capture, and immersive sports available from
Intel. To learn more about Intel, visit Intel.com. Raj, I’m very
excited to welcome you to the show. Thank you so much for coming on.

Raj: Alan, thanks for having me.
It’s a pleasure to speak to you again.

Alan: It’s really amazing. The
last time we saw each other was at the Mixed Reality Marketing Summit
in New York City, which was a really amazing conference. It was kind
of like an un-conference. It was in the basement of the National
Geographic exhibit, where you could walk around and see all sorts of
amazing things. And in the basement of this center were some of the
brightest minds in XR technologies, getting together to discuss
marketing capabilities. And I know you have worked on everything from
the marketing side to the education side. Tell us, what are you doing at
Intel to drive XR forward?

Raj: Yeah, I think the biggest
thing for us is we are known as a PC platform company, but I think
we’re bigger than that, obviously. We’re doing things in the area of
volumetric capture. We’re working on portable volumetric solutions,
like our RealSense sensing solutions, which allow you to create really
elaborate programs around immersive media and immersive experiences.

Alan: Let’s unpack that one
thing. What do you mean by volumetric capture?

Raj: Volumetric capture has
generally been where you place a subject or an object or a person in
a series of cameras, right? So this is basically a room; an array of
cameras is set up, and the subject is in the center point of that
array. Essentially, a singular object is captured, and then —
utilizing the point cloud and the camera data — you essentially
create a 3D object, right? That could be a 3D rendering
of said person, or object, or whatever that subject may be. And you
are able to then utilize that; whether you utilize it in a 2D
production or 3D production like VR, you can then utilize that to be
used as holograms, or virtual characters, or so forth.
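
[Editor’s note: for readers who want to see the pipeline Raj describes
in code, here is a minimal sketch — assuming Intel’s pyrealsense2 SDK
and the open3d Python package, with a single camera standing in for a
full rig — that turns one RealSense depth frame into a point cloud on
disk:]

```python
# Minimal sketch: one RealSense depth frame -> a point cloud file.
# Assumes pyrealsense2 (Intel RealSense SDK) and open3d are installed
# and a depth camera is plugged in; a capture stage would repeat this
# across many calibrated cameras and fuse the results.
import numpy as np
import pyrealsense2 as rs
import open3d as o3d

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    # The SDK deprojects the depth image into 3D vertices for us.
    pc = rs.pointcloud()
    points = pc.calculate(depth)
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    verts = verts[verts[:, 2] > 0]  # drop pixels with no depth reading

    # Save a PLY that any 3D tool or game engine can load as an asset.
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(verts.astype(np.float64))
    o3d.io.write_point_cloud("subject.ply", cloud)
finally:
    pipeline.stop()
```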

Alan: So, being able to
recreate the little Star Wars hologram thing.

Raj: Absolutely. And that’s one
of the use cases, right? So, holograms; a pretty exciting use case
for something like that. But, you can also think of it as, if you
wanted to create narratives or storylines, or if you wanted to create
a series of interactions that somebody inside of a headset or an AR
device could interact with, you would utilize volumetric capture
capability to create that character. It’s the difference between, do
you want to spend a lot of time rendering a 3D character? Now, in my
personal, humble opinion, with the technology today — 3D modeling
tools and so forth, and things like facial mapping and all that’s
going on in the media industry — rendering that 3D person, much like
we see in high-end games today, is really exciting.
And I think that’s really cool.

Alan: But it’s very expensive.

Raj: It’s very expensive. Very
expensive.

Alan: You know, I met with a
company last week that has a solution to drive the cost down, and to
put it in perspective: for triple-A game-rendered avatars, you’re
looking at something like $10,000 a second. It’s
obscene.

Raj: It’s incredibly expensive.
Now, I will say this, Alan; it’s not unlike those prices in
volumetric capture, as well. Right? I think where we’re at today is,
there’s these amazing technologies — like volumetric capture
capabilities and the ability to render those as 3D characters and 3D
objects — over time, what a company like Intel and perhaps others
are trying to do is say, “hey, how do we package solutions, and
how do we build silicon and technology that will allow anybody to
essentially create said 3D object or 3D character? Why do we need a
big studio, or a big stage to do that, when we can do that on a
personal level?” So those are things, like our RealSense
technology, that we’re working on that will one day allow everybody
sitting at their desks to have a capture of themselves, to be able to
use for things like 3D telepresence and so forth. And we’re already
working in that space. We’ve done some things.

Alan: Yeah, I was going to say
that these are things that you’ve been working on for years. The
RealSense cameras are not new, by any stretch.

Raj: No.

Alan: But they’re getting so
much faster and better. I just want to talk about, I got the chance
to see your former CEO at CES last year, talking about this
volumetric capture stage. Can you talk to this crazy volumetric
capture stage? And it was, what he showed was a fully-volumetric
scene, captured by hundreds of cameras, meaning the scene’s going
around… imagine watching the movie, but now, you can watch the
movie from the eyes of the star, or you can watch it from a window,
or you can literally move anywhere in the scene, very similar to when
you see the football players, and they pan around the whole thing, so
you can see the view of the quarterback; that kind of thing. And it
was a million dollars for 30 seconds.

Raj: [laughs] It’s very
expensive, there’s no doubt. But I think — again — these are early
innovations, which is just really exciting, because they’re so
powerful today. But I think definitely, where we’ve moved on from
individualized volumetric capture is this capture of large
footprints. Right? So, for example, you can pretty much set up in the
living room. You could set up an individualized volumetric capture,
where you’re capturing just a singular object, or a singular person.
The large-format volumetric capture that we have at our Intel Studios
in Manhattan Beach is, essentially, this giant arena that is set up
to be able to capture an entire scene, which includes the various
props and the various characters and objects and so forth, to where
you can actually capture a whole series of things in
one take, utilizing voxel technology. Right? So essentially, the
voxels are able to create depth and space between all the individual
characters, so you don’t have to individually grab characters, and
then put them into the scene. You can actually grab the scene in the
format that the cameras are set up to capture all that information,
and then once that data is fed into the point cloud, you can choose
to utilize the entire scenery, or you can extract bits and pieces
from it, to be able to create other scenes. So…
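
[Editor’s note: the large-format capture Raj describes — many
calibrated cameras feeding one shared volume, from which you can keep
the whole scene or carve out pieces — can be approximated in
miniature with open3d. The file names and identity extrinsics below
are placeholders standing in for a real rig calibration:]

```python
# Toy version of large-format capture: fuse per-camera point clouds
# into one scene, voxelize it, then crop a single region back out.
import numpy as np
import open3d as o3d

scene = o3d.geometry.PointCloud()
for path, extrinsic in [("cam0.ply", np.eye(4)),
                        ("cam1.ply", np.eye(4))]:  # placeholder poses
    cloud = o3d.io.read_point_cloud(path)
    cloud.transform(extrinsic)  # move into the shared world frame
    scene += cloud

# Voxels give every prop and performer a position in one shared volume.
voxels = o3d.geometry.VoxelGrid.create_from_point_cloud(scene,
                                                        voxel_size=0.02)

# "Extract bits and pieces": crop one performer out of the full scene.
roi = o3d.geometry.AxisAlignedBoundingBox(np.array([-1.0, -1.0, 0.0]),
                                          np.array([1.0, 1.0, 2.0]))
performer = scene.crop(roi)
o3d.io.write_point_cloud("performer.ply", performer)
```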

Alan: This is next-level
Hollywood.

Raj: Yeah. What we’re trying to
do today is to help the media and the Hollywood industry realize how
they can utilize said technology to do a new form of
storytelling: 3D immersive storytelling. So I think
we’re helping them understand and learn and be able to utilize
technology as we have with our True View capability, which is what
we’ve been using in the sports arena for quite a while. Now, that’s
available for 3D immersive filmmaking.

Alan: Let’s look at this as a
practical use case. For example, as a business, maybe a business is
not going to be making Hollywood-level videos that you can wander
around inside. But maybe they want to use this for training —
volumetric training of things. What are some of the use cases? I know
the RealSense cameras — which are fairly small, they’re the size of
a cell phone — these cameras are getting smaller, faster, better.
Are people using these for also 3D asset capture? So, for example, I
have some products that I sell on Shopify; I want to take them and
put them into 3D and utilize Shopify’s 3D platform. Is that some of
the stuff that’s being used now?

Raj: Yeah, very much so. I mean,
my wonderful colleague, Suzanne Leibrick, has done some significant
work around how you can utilize one, to two, to many of these
RealSense cameras; to be able to capture an object in 3D and then be
able to create an asset that can be used in shopping, or it can be
used in training. For example, you can utilize a RealSense camera to
grab a little bit of depth. James George and his
team over at Scatter have basically created this plugin that allows
you to use a RealSense camera and create this point cloud version of
yourself, for example, and inject that into a collaboration tool, or
a filmmaking application.

Alan: Is it real-time? Or are
they capturing and then–

Raj: Not all tools are real-time
yet, per se, right? Some of them are.

Alan: But that’s where it’s
going. That’s where the idea is, that you’re gonna be able to have
two of these RealSense cameras, volumetrically project yourself into
a business meeting, or a lecture hall, or something.
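
[Editor’s note: a rough illustration of the real-time path Alan
describes — hedged, since real telepresence tools such as Scatter’s
plugin add color, compression, and networking on top — recomputing a
live point cloud of the person in front of the camera every frame:]

```python
# Sketch of real-time "project yourself" capture: one live point
# cloud per frame. Encoding and streaming it to a meeting is left as
# a comment; this only proves out the per-frame capture loop.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default depth stream
pc = rs.pointcloud()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        points = pc.calculate(depth)
        verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
        live = verts[verts[:, 2] > 0]  # drop pixels with no reading
        # ...compress `live` and stream it to the meeting endpoint...
        print(f"{len(live)} points captured this frame")
except KeyboardInterrupt:
    pipeline.stop()
```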

Raj: That’s a great use case.
Right? And that’s the hope and the dream for us. And the capability is
certainly there, and there are drastic improvements being made at what
seems like a daily cadence. But the whole idea is that,
one day the pinhole cameras that we see on our laptops, for example,
will be able to create a depth version of us, and inject that into —
whether it be a collaboration experience, whether it be a training
scenario, whether it be Twitch streaming and things like that —
those are applications that you can use that for. That’s already
starting to happen today. And again, over time they will continue to
improve. But for the large-scale volumetric, let’s take a big
company, right? A big company that’s involved in oil and
gas, for example, may want to do a very classic scenario,
where they have perhaps consequence training.

As you know, there’s many industries
out there that have very hazardous experiences that the employees
have to go through; that they have to learn about, so that they’re
safe on the job, and they’re continuing to be safe with their fellow
employees. And a lot of times, that’s not easily conveyed in a 2D
environment. Merely looking at PowerPoint slides or even video is
great, but going through the motion of experiencing it is better. Now,
the last thing you want as a major company is to have your employees
go through a catastrophic training example in real time. Right? You
definitely don’t want that.

Alan: You don’t want them
learning on the job when–

Raj: OJT is not really a good
thing in a scenario like that. Right? On the job training.

Alan: We just got approached by
a nuclear facility, and they want to train people because, if things
go wrong… they have to be able to train for it, and they can’t,
now. They train using paper-based or electronic-based learning.

Raj: Yeah.

Alan: And this is experiential
learning for people.

Raj: So imagine if you could create the
entire rig, and virtualize that, and then have the people and so
forth all captured volumetrically. Now, what you do is
use a little bit of this large-scale volumetric — like we
have at Intel Studios — but you’d also take 3D rendering into
account. And what you come up with is a very comprehensive package
that allows you to do consequential training. Right? Or catastrophic
training.

Alan: Interesting.

Raj: Which is really exciting.
Being able to simulate hazardous environments without putting anybody
in harm’s way during a training scenario or training exercise is the
ideal situation. I’ve found one use case very compelling and very
exciting, and I’ve talked about it numerous times. There was a
division of the Coast Guard that was doing training on
search-and-rescue simulations. They had a helicopter that was on a
gimbal, and the SAR team — the Search And Rescue team — were
wearing VR headsets built into their helmets, and they could simulate
a very high-seas environment. What would you do?
How would the helicopter pitch, and so forth? I thought that was so
fascinating, because the amount of taxpayer dollars that go into
going out and doing training with the physical assets — with the
physical helicopters and the physical resources — but also the
danger that comes with–

Alan: The danger is real danger.

Raj: Absolutely. And these are
wonderful human beings that — take into account — that they’re
putting their life on the line to rescue others. Can we put them
through training without putting them through those scenarios, and
risking their lives in just the training aspect of it? I think there
are just so many ways and so many implications of what XR, VR, AR —
all these things — can do for the commercial landscape, especially
around training and simulation. That is going to be amazing. I
remember years ago — I think it was probably, I want to say about
five years ago — we had a Unity session in our office, where we did
Unity development basics, and we invited a lot of companies from the
area in Austin to come to our office, and we would run a Unity
training session around 2D/3D asset development. And at the time,
there was an oil and gas company that had sent their employees over,
one of them being their business guy, the other one being their
developer. And they brought up an interesting use case, which — back
then — was that they supplied tablets to their employees, their new
employees, to play a game. And that game was, “what happens if?”
So, you have this area where you have a major blowout; how do you
utilize the blowout preventer, and things like that, at a refinery?

They said that the level of retention
on the tablet was much better than the traditional video and
PowerPoint training. However, their one biggest concern on that
tablet was that it was so gamified that it was, “do they take
it seriously, or are they just playing a game in their minds?”
Right? The employees?

Alan: Yeah… but I think people
take their games very seriously.

Raj: Oh, they do. Without a
doubt. I think it also becomes, do you make sure that you always win?
Right? That kind of thing, which is always good in and of itself. But
I think when you start to put things in a more realistic scenario —
virtual reality, augmented reality; the key word there being reality
— when you can virtualize the reality, people have more
empathy, and a strong desire to make sure that they’re really good at
what they’re doing, so that their fellow employees’ lives are not at
stake. It becomes very serious at that point.

Alan: We’ve really been talking about
life-threatening training, but training of all sorts can leverage
this technology. It doesn’t have to be life-threatening training.
And I think one of the things is that we’re starting to move into a
time where a lot of people are retiring from the workforce, in
all sorts of aspects of it — from truck driving to
manufacturing to you name it — there’s a lot of people retiring.

Raj: There’s various things
where we need more operators. Unfortunately, in the time that we live
in, Alan, certain jobs are not appealing anymore. Right? Because some
people are looking for certain types of prestige in what they do as
an employee. But there is such an important and necessary demand for
people who have a skill and a trade and a capability; things like
crane operators and heavy equipment operators. They’re sort of few
and far between these days. And so, the demand has definitely gone
up. But how do you get folks to make a pivot into, “hey, I’m
going to go become a heavy equipment operator”? How do you
get them trained very quickly?

Alan: I got to give a shout out,
because we’re investing in a company right now called UP360, and they
do exactly that. They’ve taken heavy machinery operating — their
first module was an excavator. You go in VR and you learn how to run
it; everything, from turning on the key, to turning on the fan,
radioing, to controlling the bucket, and all of these things. And you
have to go and scoop some rocks into a bucket. When I first did it, I
knocked out some people. So it… [chuckles] it allows you to safely
make mistakes while you learn. But one of the things we’re going to
do is we’re going to run my kids — my kids are 11 and 14 — and
we’re going to run them through this training. We’re going to make
them proficient in VR and then we’re gonna take them and put them on
a real excavator.

Raj: Yeah.

Alan: And see if they can manage
it.

Raj: That’s spectacular. I mean,
you don’t want to make the mistake I made, right? Which is, I live on
a fairly large swath of land here in Texas, and I needed to move
rocks, and I needed to grade my property. So I just went down to the local
place and rented a Bobcat and had it delivered. And of course, you
know, I’ve never been on a Bobcat before, but I thought that if I
can–

Alan: How hard can it be?

Raj: I can run my zero-turn
mower; it should be pretty easy. Needless to say, I think it cost me
an additional $3,000 that I had to go pay somebody else to correct my
mistakes. Right? So, moral of the story; if you’ve never been on a
Bobcat, make sure you go through some form of training prior to that.
So, had I had a VR simulator, I probably would have saved myself a lot of money.
So, trial and error.

Alan: Yeah. Right? It’s funny,
because — I think it was Ryan from VRScout — he went and did the
crane training about two years ago. He went in and they trained him
in VR on a crane simulator, and he went through the whole training —
spent about an hour in VR — then they took him outside and put him
on a real crane. And he was able to operate the real crane within the
safety guidelines. Within one hour of training.

Raj: It is amazing what is
coming out of the use of technology in the immersive space. Immersive
tech like VR, and even gaming. We actually have very young Grand Prix
racers now… I can’t even remember the young man’s name —
he was a video gamer, and he ended up racing for Nissan at Le Mans.

Alan: I know; it’s crazy.

Raj: It’s fascinating, the
adaptation that we can make–

Alan: –from a video game to the
real world.

Raj: Yeah. Yeah. So, they called
it Virtual to Real Racing, which is what Nissan was doing with their
GT-R series, and yeah, it’s just fascinating, man; I can’t get over
[it]. I think that’s an area where we’ll start to see things like
motorsports definitely take advantage of this and utilize the
simulation environment a little bit more. I think the difference was
that people were using simulation before, but it was very expensive,
grand-scale, at-the-facility deployment. Versus today, you can go buy
a wheel and a headset, and you can set up in your living room and
suddenly, you’re getting a feel of how a Formula One car — or a
Formula E, or a Le Mans-type vehicle — would perform.

Alan: We actually have a racing
car at the office. A racing simulator.

Raj: Yeah.

Alan: Because why wouldn’t you?

Raj: Of course, I think I have
space in my lab for one now, so who knows? I’m nearing that
milestone, myself.

Alan: There’s a ton of different
technologies that Intel is working on that kind of power the back end
of this stuff. And they’re maybe not focused around the headsets and
such, but you guys even dabbled in the headsets and built a
reference design.

Raj: Yeah.

Alan: What are some of the new
technologies that Intel’s building that are going to enable immersive
computing, or spatial computing, and really unlock the value for
enterprise clients?

Raj: When we start to look at…
and this is an interesting place for us, right? As a company, we have
always created some of the key and core technologies that go into an
end device, or an end product. In the case of the previous product —
Alloy — that we did as a reference design, that was one of the few
times we’d actually created a fully-hatched device to be used as a
reference design in the VR space. And it was great. We learned a lot,
and we had some key takeaways from it. What we found was that, for us as a
business, hitting all those other touch points was far more
important. Right? Making sure that the folks that were creating the
content and the experiences were better served; making sure that the
development of cloud delivery mechanisms and cloud technology was
going to serve the community better; and also, increasing the compute
capability for really high-end renderings and so forth — and
delivering it fast — was also necessary. But I think one of the key
areas that we’re going to see advancement in, that’s gonna be super
important for the XR industry, is going to be around 5G.

Alan: I literally just wrote —
as you were talking, you said, “increased compute for rendering”
— I wrote “5G” and put a box around it.

Raj: Yeah.

Alan: And you just said 5G;
let’s unpack this. What can people expect? Because all the telcos are
investing a ton of money in building these 5G towers, bringing 5G.
We’re gonna have phones that are 100 times faster. What does that
mean?

Raj: You’re going to be looking
at a pipeline that’s so necessary for the throughput. I think one of
the things that we tend to forget — because there’s always races to
the bottom, and there’s always this notion that we need to make it
smaller and faster and so forth — in the case of XR, one of the
things that tends to get lost in small packaging — whether it’s a
small network, or a small device — is the fidelity and the intensity
with which we need to consume this content. We’re actually trying to
replace our current reality. And not replace it in the sense that…
it’s more of an augmentation. It’s, “my current reality doesn’t
serve my need to go do something. I need to be in this augmented or
this virtual reality. And in order to do that, I have to have that
capability served to my headset device.” Right? Today, the
highest fidelity is attached to the PC; we know that.
There’s devices coming out that are
going to untether — and obviously, we’ve done work to deliver that
content wirelessly from a PC to an HTC VIVE headset — but
universally, across the board, we want to be able to pick up a device
and have that content either wirelessly transmitted from our local
endpoint — such as a PC — or wirelessly delivered to our headset,
like we’re seeing with Oculus Go and Oculus Quest. Even though
there’s a gate on fidelity today, we don’t want that gate to exist
down the road.

So what we want to do is increase the
pipe, right? Increase the bandwidth and increase the capability, so
that you can pre-render and deliver things to the headset at a very
fast pace, and at a significantly wider bandwidth, so that you don’t
have those issues with fidelity any longer. Now, for us, that isn’t
just about XR; that’s across the board. That’s gaming. That’s media,
that’s data. Everything we want to be able to do, we want to be able
to do it untethered. Now, I have a full gigabit network in my home,
and I am wired to utilize it. And I’ve got expensive equipment
that I’ve put in my home to be able to give me up to a gigabit on the
wireless. But again, that pipe is still small, compared to what our
needs are going to be in the future. And as we start to grow on those
needs and there’s a dependency on those needs, to be able to make
decisions at a faster clip, to be able to see things like artificial
intelligence work on our behalf; those are technologies that are
going to require significantly larger bandwidth. And as we as humans
have a higher demand for fidelity — not just in XR, but in the
sports and the TV and the films that we consume — we want to have
all of that without having to plunk down a bunch of wires anywhere,
and to have it on the go. My favorite thing to do is, if I have to cover
Boston and New York, I love taking the train, right? Because I can
sit and work. I can see the scenes and so forth. I wish I had the
bandwidth. If you look at where we are… the one thing I
absolutely hate doing is trying to work on an aircraft. Right? It is
daunting, because you’re sitting there waiting for things to
[connect]. But if you think about it, Alan, that speed is what we
used to operate on on a daily basis.

Alan: I know! I tried to explain
to my kids the dial-up modem. [imitates dial-up modem, laughs].

Raj: So you’ve seen the
evolution of where we’ve come, from where we were, on the bandwidth
and the speed and the availability. And that’s just going to
exponentially have to get larger and faster over time.

Alan: But the crazy thing is,
unless you’ve actually had to experience that kind of slow-moving
data, we just take it for granted that things should move really fast.

Raj: Yes, of course.

Alan: Today’s kids being raised
now are handed an iPad when they’re three years old, and they expect
to have access to the world’s knowledge instantly, wherever they are,
whenever they are. And so one thing that I don’t think people really
think about is, when we move to AR glasses, or to devices that we wear
on our face for spatial computing, we have to collect as much or more
data about the environment we’re in to be able to project the data in
context to the world. And people forget that the collection of the
data is as important as the projection of the data.

Raj: I think that’s critical
today, right? Because we are still in consumption mode as we go about
our day-to-day business. Now, it’s not until we sit down somewhere
and do something that the data is bi-directional. I think for the
most part, in our day-to-day lives, it’s uni-directional; meaning
that, we consume if we demand it. As devices become part of our
fabric, in that they collect data and they send up data and process
it for us in real-time and bring some result of that data back to us,
the bi-directional nature of data and how it gets processed and
consumed is going to change significantly over time. And that will
accelerate as we get more and more into spatial computing, where we
are reliant on XR glasses and some form of augmented information
being projected. And let’s not forget, sometimes it may be that our
phone is going to be the endpoint, and then it will just render into
our glasses and so forth. And I think that’s probably nearer in the
future than the glasses doing that standalone.

Alan: Yeah, I think so as well.
Devices like the Magic Leap and the HoloLens are great devices where
most of the compute is done on the headset… well, Magic Leap’s is
actually done on a pack that’s wired in. And that pack — it’s just
an Android pack — doesn’t necessarily have to be a standalone thing.
It could be your phone. And I think nobody really knows what Apple is
working on. Maybe you do, but…

Raj: I’m certainly not privy to
such information. I wish I was that special, but I’m not [laughs].

Alan: But I’m certain they’re
working on a pair of glasses that you wear, with your phone as the
compute pack. Because why would we make something that’s completely
standalone when, for the foreseeable future, we’re still going to
have phones in our pockets? So why not leverage the power of that?
And with 5G you can use cloud compute, or edge computing, meaning you
now have the ability to have high-powered rendering farms at your
disposal through the cloud.

Raj: Yeah.

Alan: You don’t need to have the
rendering on your phone. You just need the rendering available to you.

Raj: I always loved, there
was… I can’t remember the exact commercial, but there was a
commercial that talked about moving at the speed of business.

Alan: I love it.

Raj: And I love that slogan,
“moving at the speed of business.” Because we’re going to
see an evolution in how business is conducted. Case in point: I’m
sitting here in my office, watching UPS deliver the package I ordered
yesterday from Amazon. You and I, when we were younger, Alan — we
used to go through a catalog and order something, and we used to wait
days. Days on days on days. Sometimes, we’d order that video game,
and it would take almost a month to get there.

Alan: You checked the door every
day.

Raj: Yeah. I remember, with my
own money, I bought my very own skateboard, and I was just… it took
three weeks to get to me.

Alan: Uhg. Painful!

Raj: And those three weeks were
probably the most painful three weeks ever, as a youngster who was so
into this sport of skateboarding, I just wanted to be like my buddies
who had gotten their rigs already. And it was just like, waiting
those additional three weeks was just insane. But now…

Alan: Now, if it doesn’t come
this afternoon, we’ve got problems!

Raj: I have five kids, Alan. And
it was funny, you were talking about the speed at which they consume
on their iPads. I remember when AT&T had severed a line in the local
area or something, and it was a two-hour downtime for them to quickly
get it back up and running. Those two hours, it was hilarious to
watch my kids just squirming, because it was two hours of no Wi-Fi
part of a symbiotic relationship between technology and man, which
hasn’t always been there. I think it’s gotten better over time.

Alan: I was at a talk recently,
talking about the relationship between technology and humans. We talk
about the fact that it hasn’t always been there, but it really has.
Clothing is a form of manmade technology.

Raj: Absolutely.

Alan: We take it for granted,
but would you walk out of your house naked? No. It’s the same with
your phone now. You wouldn’t walk out of your house without your
phone. It’s just, we’re adding more layers of it. And the internal
combustion engine; try going a day without one.

Raj: Yeah, and look at where
that’s going. When we look at the speed and efficiency that electric
vehicles are now providing. One of the most fascinating things to me
was to sit down with the folks from Formula E. They’re the electric
car racing series, and they were talking about how they utilize blue
algae from the ocean to create the power that drives the cells. So
they’re trying to be the most highly sustainable program to create
electric fuel for these race cars. And when you look at their entire
chain of delivery of fuel, the dependency on these fossil fuels —
other than what’s coming out naturally from the ocean — is
changing rapidly. We are in a time and a space in our existence that
is unlike anything we have ever seen. And it’s doubling at a rate
that we couldn’t have imagined years ago. Right? And so we’re seeing
a lot of these technologies, and XR is such a huge area in that
space, because it serves such a great purpose for anybody in the
design space of things like this. Right? You can virtually design and
test and put into practice some of these methods utilizing XR today.

Alan: One of the examples that
HTC VIVE was promoting when we had Alvin Wang Greylin on the show was
talking about how Bell Helicopter designed a brand-new helicopter
in six months. And that process normally takes four years.

Raj: It’s crazy. I mean, it’s
insane what you can do. I mean, you look at concept cars. I have
several friends who worked in the automotive industry, and when you
model a concept car — just to do the exterior modeling — it’s all
done through clay, traditionally. The interior modeling has been 2D
designs. There’s never been a marriage of that exterior clay model
and the interior 2D designs in a visual medium, other than building
the car itself.
Today, you can concept the entire vehicle — to the point where
you can open the door and sit inside of it — and you’re in a
headset. You’re not actually in the vehicle. I think that’s a very
telling story of where this is moving onto.

Alan: Elizabeth Baron from Ford
was on the show, and she was saying that every executive around the
world views every new car in VR before it goes to production.

Raj: I mean, it’s there. It’s
happening. It is a viable tool for the industry. Look at the building
industry. I mean, I love what somebody like BDX is doing. And they’re
right here in my backyard here in Austin. They’re a visualization
company for a vast array of homebuilders throughout the United
States. And they were like, “hey, VR is the thing, man. It’s
something that we have to embrace, because builders aren’t going to
put up every model in their portfolio of homes everywhere.”
Right? They’re going to build one. They’re going to build two. They
might do a row of homes and build three or four. But their portfolio
consists of over a dozen different models. How do you sell a home
that your customer may be on the fence about — no pun intended —
whether they want to buy that house? They’re concerned about the
build and construction and function for their family. What if right
in the sales office, you can just walk them through that house?
Right? They can’t physically do it, but they can virtually do it and get
a good look and feel of what their new family home would look like,
and be able to have an easier approach to making that decision.
That’s an invaluable tool for the sales team and for the sales force
that’s looking to close a deal, and it’s the same thing for the
building & construction industry, for major buildings. I mean,
look at the rate of change right here in Austin; the skyline never
used to look the way it looks [now]. Ten years ago, there were small
buildings, maybe five or six storeys at max. And now the
skyline is dotted with these big highrises.

Well, how do you go through and look at
a schematic, or an architectural diagram and say, “yeah, you
know what, I’m happy with what I’m seeing.” And then as the
construction goes on, there are numerous functional and cosmetic
changes that have to happen. There are so many companies working in
the AEC visualization space that allow architecture teams to convert
their renderings into 3D models that buyers can go through — to be
able to see stadiums, for example. I was just in Vegas at the
Experiential Marketing Summit, and as I was driving to it, I was
seeing the new home of the soon-to-be Las Vegas Raiders going up. And
I thought, man, how great would it be — or maybe it has already been
done, I don’t know, per se — if the owners and the administrative
staff at the team could see what their future stadium would look and
feel like, even with people in it?

Alan: What’s it gonna look like
at 50 percent occupancy versus a hundred percent?

Raj: How do you maximize your
throughput on concessions? I think one of the biggest things we’ve
seen in stadiums in the past is that a lot of people just don’t get
up out of their seats to go buy goods and services, because it’s just
kind of like, “OK, well, this is gonna be a nightmare to go
through and then come back and get to my seat.” What if you
could actually run scenarios and figure out where would be the best
placement of stores and concession stands, to maximize the dollar
input coming from fans who really do want to go get a hot dog and a
beer or something like that? Right? But they’re not willing to do
that because it’s just too inconvenient.

Alan: There’s so many ways you
could use the technology. Even — to go back to… I’m not usually
promoting products specifically, but the RealSense sensors — having
one RealSense camera over each of the vendors could give anybody the
ability to say, here’s the shortest lineup to get a hot dog.

Raj: Yeah, exactly.

Alan: “I want to get a hot
dog. Okay, well, here; go to this lineup, because this one’s only got
two people. This one’s got 50.”
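
[Editor’s note: Alan’s shortest-line idea reduces to counting bodies
in a depth image. A hypothetical sketch, with made-up ROI and
threshold values standing in for a calibrated overhead RealSense
camera:]

```python
# Hypothetical queue counter: an overhead depth camera over one
# vendor. ROI, height band, and pixels-per-person are illustrative
# values, not a tested recipe.
import numpy as np
import pyrealsense2 as rs

QUEUE_ROI = (slice(100, 380), slice(160, 480))  # rows, cols of the lineup
PERSON_MIN_M, PERSON_MAX_M = 0.8, 2.2           # distance band for people
PIXELS_PER_PERSON = 3500                        # rough blob size

def estimate_queue_length(depth_frame, depth_scale):
    depth_m = np.asanyarray(depth_frame.get_data())[QUEUE_ROI] * depth_scale
    # Pixels closer than the floor but farther than the camera housing
    # are assumed to be people standing in line.
    person_pixels = np.count_nonzero(
        (depth_m > PERSON_MIN_M) & (depth_m < PERSON_MAX_M))
    return round(person_pixels / PIXELS_PER_PERSON)

pipeline = rs.pipeline()
profile = pipeline.start()
scale = profile.get_device().first_depth_sensor().get_depth_scale()
frames = pipeline.wait_for_frames()
print("people in line:", estimate_queue_length(frames.get_depth_frame(), scale))
pipeline.stop()
```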

Raj: Can you imagine, again,
that symbiotic relationship between the technology sitting at the
concession stands and you, the user, where it’s feeding you real-time
data on your phone? It’s the inconvenience that we’re up against
now. Right? It isn’t that there aren’t enough concession stands. It’s
that there are more people, and there are more people enjoying a ball
game or an event or so forth. And so, as that continues to grow and
as people put on more events and those events continue to grow in
popularity, you’re going to get more people to your location. And
again, location-based VR is another area: the more growth you have in
these venues, the more you want to know, “if I go
there today, will I ever get a chance to participate, or will it be
too much of a burden to go and buy a souvenir or a hat or some food
or whatever it may be?” We’re starting to see that in
theaters, too, now. Right? You can now go on an app, pick your seats
before you even get to the theater, and in some theaters you can even
preorder your food and have it delivered right to your seat when you
get there. I mean, that’s insane. It’s cool. It’s exciting.

Alan: It’s pretty cool.

Raj: But it’s happening. Right?
And it’s happening at that pace. So–

Alan: What kind of world do we
live in?

Raj: Things like 5G, AI,
RealSense; all of these are going to play a factor in how we move
even further into the experiential economy.

Alan: Let me ask you a question.
My personal mission in life is to inspire and educate future leaders
to think and act in a socially, economically, and environmentally
sustainable way. And I got into VR and AR because I saw an
opportunity to educate — to democratize education — and to create
new types of education. Math and science and geography —
our school systems do a great job at teaching those things. But where
they fail is in basic success principles, such as mindfulness and
gratitude and positivity and purpose, and also things like financial
planning, management, communication skills, marketing, sales. These
are all fundamentals to successful people. And I see these
technologies as a way to hyper-accelerate that. You guys have done a
lot in education; in fact, you just won an award at the X-Awards —
the experiential marketing awards — for best mobile marketing tour.
Maybe you can talk about that? It was the Tech Learning Lab. Talk
about what you guys are doing in the education sphere, and how Intel
is bringing this technology to the students and what that looks like.

Raj: One of the great things
about my job and what I do at Intel is, we’re very purposeful about
things. Right? We’re very purposeful about unlocking capabilities in
the technology. And then sometimes, in that process of being
purposeful about unlocking capability, we run into happy little
accidents, or happy little explorations, where we’re like, “huh,
that’s really not what we thought of; we should do
something about that.” So in this particular case, I’m going to go
back a little bit. We had been approached to look at AR and VR for
our SSD technology called Optane. This is much faster than your
traditional SSD technology. And the whole idea was, can we go to a
museum and scan — or do something with photogrammetry, for
example — one of their artifacts or paintings? So we put out a
feeler to a couple of friends that we had, and it just so happened
that Smithsonian American Art Museum was one of those that responded
and said, “hey, let’s talk.” And so it started off as we
were going to do a workload analysis and proof point on 3D rendering
through the Octane, SSD, and how fast we would see the difference
between the two. Previous generation versus this new Octane SSD. OK,
put that aside. We began to walk through the museum with
then-director Betsy Brown and deputy director Rachel Allen, and also
head of digital and so forth, Sara Snyder. Anyway, the three of us —
myself photogrammetrist Greg Downing, and my producer friend Peter
Martin — the three of us were walking with the three of them, and
what we got was a very one-on-one education about the museum, and
about individual artists, and about curation, and about the process
in which the museum puts things together. It was very educational.
Suddenly it pivoted from, “hey, we need to go prove this
technology out,” to, “we now have this story, this very
important information about the museum. How do we then put that in
to… how do we make that subject matter? How do we make that the
experience that is enriching?” Right? So what we wanted to do is
say, “hey, this is more about enrichment than it is about
proving out a workload. We can do that as a byproduct.” That was
no big deal. But being able to enrich the user was far more
important.

So then we embarked on this journey of
trying to move the Smithsonian American Art into this place of
exploring XR and exploring AR and VR and so forth. And what came out
of that was a very amazing campaign around this curation called No
Spectators. As we started to explore and unpack how people utilized
the content to learn — how did they learn about the curators? How
did they learn about the museum? How did they learn about the art and
artifacts? — we found that when anybody got inside the headset, the
retention of that information had a high value and they would come
out of it and there would be a big smile on their face. “Oh, my
gosh, this is amazing. And I can’t believe it was able to come to
me.” The next step in that was, well, why don’t we take this on
the road? Why don’t we take these museum experiences? Why don’t we
find some other educational experiences — which we found with our
partners over at VictoryVR, Steve Grubbs.

Alan: He was on the on the show!

Raj: Yes, Steve’s a great guy.
Right? And Steve’s so gung-ho, man. He’s so excited about this space
and what he’s doing. I love heroic people like Steve Grubbs. Because
they go, “look, I’m doing this. I know this is important. I know
this is necessary and I’m going to go for it.” And so we
partnered with him and we also partnered with HP, which provided all
the equipment, as well as HTC on the headsets. We also had Oculus
participate, which was great. It was this wonderful, “hey, we
are Switzerland. Let’s go do something together, to really educate
teachers and students about what VR is.” Right? It’s not just
about playing video games or looking at 360 videos. It’s so much more
than that. We really partnered with Infinity Marketing, who’s been
our great agency partner on bringing crazy ideas and activations to
life. We had 16 locations we wanted to get to. I
think we went to 12 schools. We went to four affiliate museums of the
Smithsonian Institution, and we did truck stops and we brought out
hundreds of kids and teachers and principals and administrative staff
and really just opened up this truck. It was this big cargo container
truck that sort of transformed into this tech lab. And we ushered a
ton of students and teachers through it. And not only did we show them
experiential enrichment through the museum-type experiences, but we
also gave them hands-on learning tools, like being able to dissect a
frog.

I think one of the great things that
one of the teachers came and told me was, “we have to do the
frog thing because it’s so necessary for our science classes. But one,
it’s expensive to get the frogs, and two, it smells. And the kids hate
it. We hate it. But the virtual one was so much fun and so close to
the real thing that a lot of them were asking, hey, how can I just
replace this? Even if all I did was replace the frog dissection in my
science class, it would be worth it.” Right?

Alan: Yep. And there’s so much
more that can be unlocked in the school systems using this
technology. When I say we’ve only scratched the surface, it’s
literally like, there’s so many things that can be brought into a
classroom. You can bring the world into a classroom.

Raj: In this particular
instance, VR was the center point, and we did have quite a few
workstations out there. By the way, a shout out to HTC, because had
it not been for their new Base Station 2.0 capability, having the
number of headsets that we had inside of a truck wouldn’t have been
possible.

Alan: The crazy thing is, Alvin
was telling me that the new Base Stations that are coming out — Base
Stations, for those people that don’t know, are the outside sensors
pointing into the headsets to triangulate where each of the headsets
is — they’ve got these new sensors that can track up to 40 VIVE
Focus headsets simultaneously; that’s the standalone unit, so it
doesn’t require a computer. You can have up to 40 people in a
warehouse-sized space, and the area they cover is like the
size of two football fields. It’s insane.

Raj: Yeah. Could you imagine
doing training for big corporate-type environments, doing simulation
for corporate-type environments — doing warehouse training, for
example — if you want to get a team of people spun up on how to run
and operate and maintain a warehouse. This is the thing, Alan. Right?
We do these explorations in various different segments like
education, like training, like working with Bell, for example, to be
able to train their future pilots. Working with automotive to help
design vehicles and so forth. Again, there’s a symbiosis there; they
all intertwine and lead to the use and the capability to really serve
each other. It’s not as much of a segmented approach as we think it
is. It’s actually different segments that it applies to, but the
totality of what we’re looking at is something that’s all-serving,
which is why I think XR is super important in the commercial
landscape, because it can do many things.

Alan: Agreed; I couldn’t agree
more. I want to thank you for your time and agreeing to be on this
podcast; I’m sure everybody listening has been very grateful for you
taking the time to share this. We can feel your passion through the
podcast. And so, can I ask you one final question?

Raj: Sure.

Alan: What problem in the world
do you want to see solved with XR Technologies?

Raj: You know, probably the
thing that I’m most passionate about is the healthcare industry. To
be honest with you… there was a documentary that I watched — on
Netflix, I think — and it was about a man who bought a bunch of
iPods and he took them to nursing homes, where you had a lot of
elderly folks who’re either suffering from dementia or Alzheimer’s or
had traumatic brain injuries and so forth. And when these individuals
would put on the headphones and listen to music from their genre,
suddenly their brain started firing off. They were able to recall
things that they had never spoken about to their caregivers and so
forth. And what we’re seeing with VR today is that the same thing is
starting to happen. We can utilize VR to remap nerves, to remap brain
function to those nerves. We’re seeing that happen out of Brazil, for
example. We’re seeing opiate addiction being reduced through the use
of VR. And everything from surgical procedures being mapped out —
we’ve seen a high level of success on surgical procedures through our
partners at Surgical Theater. I think that’s a huge area that hasn’t quite been
unlocked yet, and that’s the thing that excites me the most.

Alan: Well, I’m sure there will
be countless scenarios in which RealSense cameras and Intel parts are
being used across all parts of healthcare as we move into spatial
computing as a complete platform for the future of computing. So,
thank you so much.

Raj: Yeah. One day at a time,
and many leaps forward as they come, right? That’s how we’ll keep
driving innovation.