In the late 19th century, Eadweard Muybridge – to win a bet – took several pictures of a horse in motion, and in the process, basically invented film. It was a brand new way to experience media, and it changed the world. Radiant Images hopes to do the same with an investment in 360 video production, and VP Michael Mansouri drops in to explain how.
Alan: Welcome to the XR for Business Podcast with your host, Alan Smithson. Today’s guest is Michael Mansouri, co-founder and vice president of Radiant Images. Michael is known as one of the industry’s most knowledgeable, inventive, and passionate technologists. Born into a family of filmmakers, he has produced and directed several high-impact documentaries, most recently for the United Nations Geneva Summit for Human Rights. His documentaries help raise awareness of human and animal rights violations around the world, to provide a voice for the voiceless. He has always been interested in the overlap of film and technology, so he co-founded Radiant Images in 2005. Mr. Mansouri’s efforts in filmmaking led to NASA and JPL’s 2018 Emmy win for Outstanding Original Interactive Program for Cassini’s Grand Finale, which was NASA’s first recognition in the film community. At Radiant, he hopes to break through the technology barriers surrounding digital innovation and provide a more meaningful impact that connects and engages humanity. You can learn more about the great work that Michael and his team are doing at radiantimages.com.
Michael, welcome to the show.
Michael: Hey, good morning, everyone. Michael Mansouri, co-founder of Radiant. Very happy to be on this podcast with you guys.
Alan: I am super excited. You know, the first time I found out about Radiant Images was at the UploadVR launch party in LA. And I was in this beautiful space and people were drinking drinks and everything. Good time. And I walked into one of these small rooms and I saw the collection of quite possibly the craziest 360 cameras I’ve ever seen. There were cameras with 20 lenses. There were ones that fit on your head like a helmet. There were little miniature ones. You guys had kind of everything. And I just– coming from somebody who started in VR using 360 cameras — you know, the GoPro rigs where we glued them all together — coming from that and then walking into this room, you took what we were doing, basic 360 capture, and took it to the next level. How did you guys get involved in that? Like, what was the first precipitating factor of going from traditional film to 360 filmmaking?
Michael: That’s a great question. Radiant’s history is in traditional production, traditional methods. How we got really excited and involved in immersive is that our background is as documentarians; we always ask questions. And we ask a lot of questions that break beyond the surface and beyond the obvious. We were always much more interested in digging deeper and deeper. And part of what we did is we started really looking at our industry: motion picture, media, entertainment, and, in fact, communication, our communication methods. How have they changed in the cycles of technology shifts that happen every 10 years? What is the new method of how we engage? And what we realized is, the average American sees between 4,000 to 10,000 pieces of content, every single day. How do we distinguish ourselves?
Alan: Say that again? What?
Michael: Yeah. It’s a fact. [chuckles] The average American sees between 4,000 to 10,000 pieces of content every single day.
Alan: Okay, we’ve got to unpack. That is ridiculous.
Michael: It’s the truth. And the reality is, when we were kids, or when we were much younger, our choices for media, for entertainment, for communication were very limited. What was it? It was newspapers, magazines, and television. How many television stations did we have? And if we were lucky, we had maybe–
Alan: When I grew up, I had to get up from the TV and click the little thing. So I think we had eight, and about seven of them were staticky.
Michael: But it’s interesting to actually look at that, because we move in such a fast-paced world. We’re living in a world that has completely changed. The script is completely flipped: the largest taxi company in the world owns no taxis, the largest media company on the planet owns no media, and the largest hotel chain owns no hotels; that’s Airbnb. So everything’s changed; digital technology has shifted a lot. But a byproduct, an outcome, is that we’re bombarded with content. It’s no longer the era when content was king. Less and less is content king; more and more it’s platforms, communication devices, how we engage. So Radiant switched its focus. We strategically decided to change our focus from just being part of what we call the status quo — the safety, the comforts — and move into something more daring. And the new daring is where the technology shift is going, where the communication devices are moving towards, and the patterns of where communication is moving. So we started looking at what’s happening in the technology cycles. And what we discovered is that there are technology cycles every 10 years. If we go back to the 80s — that I remember — we had the personal computers, PCs. In the 90s we had the laptops. In the year 2000 we had the smartphones. In 2010, we had wearable technologies like the Fitbits, the Apple Watch, so forth and so on. And if we’re predictors of the technology cycles, the pendulum is swinging towards smart displays. And that’s what we’re facing now.
What we’re seeing now is that the next technology cycle is going to go to what we consider smart displays. You see peaks of it right now, early variations of it. When you walk into a Best Buy or any other retailer, they’re selling you the Amazon Echo and the Google Home, where you ask a question and it plays a video. And soon those displays will be small enough, portable enough, that they move beyond just your home. And the communication and the operating systems will be one. So we’re moving towards what we feel the technology, as well as the communication platforms, are moving into, which we consider smart displays. These are like the Microsoft HoloLens, Magic Leap, and several other renowned companies, they’re–
Alan: Nreal, Vuzix, there’s a whole army of them. Every major company in the world is working on an XR strategy right now and communications are gonna change forever.
Michael: Yes, and that is exactly the call we made. Most people think that this is a fad, like 3D, one of those entertainment fads. It isn’t. The play is also bigger than just headsets. The play is really owning the next generation of operating systems. The big players’ mission is to get rid of the keyboard, the mouse, and computer monitors, and replace them with heads-up displays, and enable this interactivity between human beings, each other, products, devices, anything that is connected. Now, this is where it becomes incredibly powerful: we’re no longer watching things on a flat screen, disconnected from them. The reason you’re connected emotionally to things is that if there’s a coffee cup on your desk and you move left and right, you’re connected to that subject. And no other format does that yet; everything else is on a flat screen, and you’re not connected to it. So what Radiant decided to do is create technologies that people need, that are part of the operating systems, the heads-up displays: holographic videos. Those holographic videos are classified in the following order: 6DOF, volumetric, or Light Field. And Radiant is, and always will be, agnostic to technology and more focused on methods of capture. So we try to take very complicated problems and simplify them so they can scale.
Alan: So you mentioned 6DOF, and volumetric, and Light Field. So there’s kind of 3DOF, traditional 360, where you can look around — and for people listening who maybe don’t know what these acronyms mean: DOF, or Degrees Of Freedom, means you can look up, down, left, right. So something like an Oculus Go, for example, allows you to look around and be inside of a video. But you can’t move around in it. Then 6DOF means you can look around, left, right, up, down, but you can also move in those directions. You can crouch down, you can stand up high, you can move forward and backwards in translational space. And then you talk about volumetric and Light Field. Let’s unpack those a little bit.
Michael: They’re all forms of holographic video, and I think you did a great job describing it the way I usually describe it. What’s the impact of volumetric or Light Field or 6DOF? Let’s pretend we’re at a boxing match. The audience is no longer sitting next to Jay-Z and Beyonce in the premium seats. They’re now a referee inside the boxing match. So they have full agency to interact inside this new medium, this new video file. The main difference is that it allows a much stronger sense of presence than you normally get from a 360 video, which, as immersive as it can be, gives you no agency to move. You’re in a room and your interaction is really limited to just looking left, right, up, and down. Whereas in volumetric, Light Field, or any of the holographic video files — or what we call freeform videos — your audience, or you, now has full agency to navigate that location and interact with subjects, people, or anything that’s been photographed in that method.
So our main focus at Radiant now is how we capture these images so that the user — on the new generation of operating systems and devices that can navigate an entire space through spatial computing, without being locked to a desktop or looking down at a phone — can interact with these objects and move through them without the image breaking as they walk through it. Traditional flat images inside these displays become obsolete, because now you’re actually breaking through the volume; you walk through it, and a flat image has no volume.
Alan: But from a filmmaker standpoint, it’s like, how do you deal with the fact that now your audience is not looking at a rectangle? They’re in the rectangle. They’re inside the film. How do you manage that?
Michael: It’s actually pretty interesting. And it does take a different way of thinking about how we create entertainment. How do we do this now? We’ve struggled with this a lot. That’s a question we’ve contemplated a lot, because part of us really wonders: how do filmmakers operate in a medium where the user, the audience, has full agency to navigate the story? How do you direct them? What’s the purpose of direction if anyone can create stories? That’s not really storytelling. It doesn’t really tap into our core DNA. Our DNA is, we’re programmed to be told stories. We’re not hunter-gatherers of stories; we’re mostly experiential. We want to gather around a fire and be told stories from our ancestors, from the caveman days, right? We were entertained.
Alan: There’s also a big shift going on from mass consumption of content. Like you said, sitting around the fire, listening to an elder speak. But I think we’ve also kind of rounded this corner where there’s now a huge push towards creation instead of just consumption. And I think this lends itself nicely. You have things like TikTok, or whatever it’s called. People making content at scale. YouTube has unleashed an entire generation of Americans who want to be YouTube influencers.
Michael: Exactly. And so that’s something that we contemplate. We’re like, “OK. So how do these stories work? How does this format work? How do we bring cinema into this?” And we realized something early on. It’s the same as when we look back in history at radio programming and television programming. You had some producers, radio programmers, who tried to produce for television. It just didn’t work. It’s not effective. So legacy is great. It’s good to hold on to legacy. But it’s also really important to realize when you need to break from legacy and bravely move into new formats, because this is not exactly cinema. This is a new format. And we try really hard to resist classifying things. We did that early on in 360, where people used to say 360 video is this, it’s the human empathy machine, it’s this or that. We try not to do that. We try really hard not to classify things, because that’s how you end up with limited scope. You really bottle it in. What if it’s just something new? What if it’s not any of the things that we predicted, just like when we first discovered electricity? Who knew all of its potential, all the things that could be done, outside of its classification?
We have the same capacity with these new immersive videos, but we have to think about them in a different way. We can’t bring the same methods and systems from television into an interactive, fully navigable video that the user gets to repurpose and revisit from different vantage points. So there are some really exciting things that people are doing in this new medium of volumetric and Light Field interactivity inside headsets, and soon inside movie theaters.
Alan: So let’s unpack this a bit from a business standpoint. This is, you know, the XR for Business Podcast. You get to see everything from Hollywood movies to, let’s call it, Intel Studios doing their volumetric capture. You guys work with volumetric capture and Light Field capture. But how are companies using this to either market their products or train their staff? How are businesses using this technology now?
Michael: So let’s start right away with entertainment, to get it off the plate. How do entertainment companies use this? Well, a lot of the studios — some of the studios, not a lot of them. We were very fortunate to have sold our award-winning AXA stage, a volumetric and Light Field stage, to one of the world’s largest and oldest motion picture studios. One of the first technology companies embedded inside a motion picture studio.
Alan: What’s it called, your product?
Michael: Our stage is called AXA, and the AXA stage is a volumetric *and* Light Field capture system. It’s about five meters, about 16 and a half feet, across. It’s a sphere that precisely positions cameras, hundreds of cameras, looking inward at the subject so that it can be viewed from all the different vantage points and we can create volume. And that’s how we create either volumetric or Light Field through our stage.
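Radiant hasn’t published the AXA placement math, so the following is an illustration only: distributing hundreds of inward-looking cameras roughly evenly over a sphere is commonly done with a Fibonacci lattice. A minimal sketch, where the camera count, the 2.5 m radius (half the ~5 m stage diameter), and the dict layout are all assumptions for the example:

```python
import math

def fibonacci_sphere_cameras(n_cameras, radius_m=2.5):
    """Place n cameras roughly evenly on a sphere of the given radius,
    each aimed at the subject in the center (a toy stand-in for a
    volumetric stage layout)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    cams = []
    for i in range(n_cameras):
        y = 1.0 - 2.0 * (i + 0.5) / n_cameras   # height, evenly spaced in (-1, 1)
        r = math.sqrt(1.0 - y * y)               # ring radius at this height
        theta = golden * i                       # rotate each ring by the golden angle
        pos = (radius_m * r * math.cos(theta),
               radius_m * y,
               radius_m * r * math.sin(theta))
        look_at = tuple(-c / radius_m for c in pos)  # unit vector toward the center
        cams.append({"position": pos, "look_at": look_at})
    return cams

rig = fibonacci_sphere_cameras(200)
```

Each entry gives a camera position on the sphere and a unit vector aiming it at the subject; the golden-angle spacing keeps neighboring cameras from lining up into visible seams.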
Alan: Can you walk us through just quickly what the difference between volumetric and Light Field is, then?
Michael: So with volumetric, what we’re in essence doing is capturing subjects from multiple camera points, and that creates what we call a point cloud. A point cloud is a volume; it has depth properties in it. Some companies, some software solves, just take the point cloud and texturize it, and other companies take the point cloud and put a mesh on it, basically a skin, and then put texture on top of the skin. Where Light Field is different is that it’s not based on a point cloud volume. It’s based on the re-interpretation, the re-projection, of all the different light rays as seen from each camera’s perspective. So if a camera is seeing the shadows, or the highlights, on the top of my head, it will regenerate that viewpoint. So it’s more video-based than volume-based, and it generates a different type of effect. Depending on your use case, we would recommend volumetric versus Light Field, but they’re both three-dimensional, navigable video formats.
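As a toy illustration of the volumetric side Michael describes (not Radiant’s actual pipeline), the first step is typically to turn each camera’s depth and color images into colored 3D points, then merge the per-camera points into one cloud that can later be meshed (“skinned”) and textured. A sketch under a simple pinhole-camera assumption, with made-up 2×2 images:

```python
def backproject(depth, rgb, fx, fy, cx, cy):
    """Turn one camera's depth + color image into colored 3D points:
    one slice of the point cloud. depth and rgb are 2D lists indexed
    [row][col]; fx, fy, cx, cy are toy pinhole intrinsics."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:            # no depth measured at this pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append(((x, y, z), rgb[v][u]))  # position + texture color
    return points

# One toy 2x2 "camera"; a real stage fuses hundreds of views,
# registered into a common coordinate frame, then optionally meshes
# the merged cloud and textures the mesh.
depth_a = [[1.0, 1.2], [0.0, 1.1]]
rgb_a = [[(255, 0, 0)] * 2, [(0, 255, 0)] * 2]
cloud = backproject(depth_a, rgb_a, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The pixel with no depth is dropped, and each surviving pixel becomes one colored point; repeating this per camera and concatenating the results is what produces the volume.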
Alan: So what would– here, let’s– so volumetric, I guess, would be equivalent to something like Metastage or the Intel Studios stage.
Michael: Yeah. So Metastage uses Microsoft’s volumetric Hcap, holographic capture. Intel has it. And then there’s other companies that also have volumetric studios. Fraunhofer, there’s several handfuls of volumetric studios that create–
Alan: There’s 55 of them globally. [laughs]
Michael: Ah, good! Yeah, there’s an absolutely huge rush for this new way of communicating.
Alan: Yeah, and I think Verizon just bought Jaunt for their volumetric capture capabilities.
Michael: And we’ll get into why telco companies are best positioned for this, and why there’s such big interest in immersive, in volumetric, Light Field–
Alan: We’ve got to figure out some way to sell 5G to people.
Michael: Well, it’s the freeway system. You have a much bigger freeway system with very low latency and a lot of bandwidth, which means the compute doesn’t have to live in the headset. Remember the smart displays that we talked about? If you want to make a smart display, you can’t put huge GPU/CPU power on top of someone’s head. Obviously, you can’t do that. So you want to make it as lightweight and small as possible, connected to a cloud that streams the video files to it. This is why the timing is great. There has been a lot of great technology developed many, many years ago that just wasn’t timed right for the infrastructure. Imagine if someone had created cellphone technology back in the 1950s; it wouldn’t be as impactful as it is now, because it’s the infrastructure that makes it possible for us to get there. So the reality of why 5G is essential is that we have a freeway system that’s wide open, with very low latency, with connective tissue that connects a lot of devices to a lot of devices. And that’s where we get to the Internet of Things, to smart factories, Industry 4.0. This is the very fiber of connectivity that devices need in order to access big data streamed to them. So the timing couldn’t be any better. And I think for people who are looking at this technology, volumetric, Light Field, and 5G are pretty much all needed together. Very few people are going to be able to download 10 gigs or 20 gigs of a video file on their mobile phone. They want to get it streamed to them. And that’s why the telco companies are really looking at 5G as an enabler to move data through to those new smart glasses that we talked about.
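Michael’s point about 10–20 gig files is easy to sanity-check with back-of-the-envelope arithmetic. The link speeds below are illustrative assumptions, not measurements of any particular network:

```python
def transfer_time_s(size_gb, mbps):
    """Seconds to move a file of size_gb gigabytes over a link of
    mbps megabits per second. Uses decimal units (1 GB = 8,000
    megabits) and ignores protocol overhead."""
    return size_gb * 8000 / mbps

# Illustrative link speeds (assumptions for the example):
lte = transfer_time_s(20, 50)      # 20 GB over a ~50 Mbps 4G link
fiveg = transfer_time_s(20, 1000)  # the same file over a ~1 Gbps 5G link
```

At ~50 Mbps, a 20 GB volumetric file takes close to an hour to move; at ~1 Gbps it takes under three minutes, which is why streaming over 5G, rather than downloading, is treated as the enabler.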
Alan: It’s interesting that you say that, because we’re actually building our new product platform based on the thought, or the prediction, that 10 years from now everybody will wear glasses, and those glasses will run on cloud computing. So the cost of the glasses themselves will be relatively negligible. The data will be streamed at a hundred to a thousand times today’s speeds, and we’ll be able to get content to everybody, anywhere in the world, immediately. And that content will be in context to the world around you. You’ll look at an object, it will know what it is and be able to inform you of that object. So real-time, contextualized, hyper-personalized learning, anywhere you are in the world.
Michael: That’s absolutely true. It sounds like you’ve pretty much looked at one of our decks. This is how we describe the future of computing. And–
Alan: It’s hard to explain to somebody. Like, OK, you know, we’ve got these big bulky VR headsets and that’s cool. But if you look out 10 years from now, these are going to be the size and weight of a pair of normal glasses. They will have VR and AR built in. And anything you look at will be in context. Have you read “The Age of Smart Information” by Mike Pell?
Michael: No, I haven’t.
Alan: You need to read that, because what you’re talking about here is literally exactly what he’s talking about: how XR (virtual, augmented, and mixed reality) and AI will combine with 5G, with quantum computing, with edge computing, with the Internet of Things, and every device, everything you look at in the world, will have a little piece of data that’s able to talk to you in some way.
Michael: Well, yeah. And the analogy I always give is the early days of the cellphone. Cellphone technology was bound to the car; the only way you had access to it was inside of a car. Or then, when it became–
Alan: Hey, remember those giant antennas, sticking off the back of your car? [laughs]
Michael: Then you had the portable one, it was like a briefcase. You took, you walked around with it. So right now–
Alan: Then the brick. Come on, let’s talk about the brick. That thing was crazy.
Michael: Exactly. But that’s evolution. And that’s why, when people talk about any brave new technology, anyone going against the status quo, there are tons of naysayers. It’s been that way throughout history. There are people on the sidelines saying, “No-one’s going to do this. Why do this?” Because people are so happy with the safety blanket of the status quo, a comfort zone. And it makes them feel, “Yes. What we have is good. Stay in your lane. Don’t do anything new. If it’s not broken, why fix it?” And you know, it really fires us up. We are born to change. We’re born to push boundaries. And there’s really exciting stuff here, especially when you look at how cellphone technology moved. It took 20-some-odd years to get to a smart device like this, that I hold in my hand. It’s the same thing now. Can you imagine a day when you would wake up in the morning and walk out of your house without a mobile phone, even on a Sunday, your day off, even if you’re hiking and walking?
Alan: I really wish I could say yes to that. But no, I’m stuck to my phone.
Michael: Everyone is. We are so interconnected to that communication device. It’s beyond a cell phone–
Alan: [laughs] I wish we could have one day a week where we just turn off all the Wi-Fi in the world. [laughs] Every Sunday, it goes off from midnight to midnight.
Michael: I think we will. But the smart displays: how are they going to displace the smartphone? How are they going to get rid of your cell phone? Well, let’s think about it logically. If I’m wearing a device on my glasses, translucent, it makes me smarter. As soon as you walk into a room, I see your LinkedIn page, if you have your LinkedIn page turned on. So I know a little bit about who I’m talking to. How many times have you walked into a meeting and embarrassed yourself, not remembering who that person was, what they do? It makes us smarter. That’s the empowerment that we get.
Alan: It’s interesting you said that, because I just read somewhere recently that the whole idea of this is obviously to make us smarter. One of the things politicians use is an assistant beside them, walking through an event or whatever, whispering in their ear, “Oh, this is so-and-so. They do this and this. Their daughter’s name is Sally,” literally running down this stuff so they can immediately walk up and say, “Hey, Bob, how are you doing?” Now we can give that power to everybody.
Michael: Yeah. It just makes us– enhances our capability. And, you know, for the naysayers, the ones who don’t believe this will happen, who think VR was a gag and it’s all just a fad: well, the same argument was made about cellphone technology.
Alan: The Internet. “The Internet is a fad, guys! It’s not going to take off! I’m putting my bets that this Internet thing is not going to go anywhere.”
Michael: I know there was a lot of naysayers. And the same thing happened with cell phone technology. The same thing happened with computers. The first PC wasn’t an overnight hit. It didn’t happen overnight.
Alan: Yeah, it was the size of a room.
Michael: Same thing with cinema. Cinema: in 1878, we had the first motion picture ever created. It was by [Eadweard] Muybridge. He took a series of cameras — 12 cameras, placed 27 inches apart — and we had the galloping horse.
Alan: Wait a second. Didn’t you guys just recreate that?
Michael: Oh, yes. Yeah, yeah, we’re actually– We’re working with this amazing filmmaker, a documentarian, who’s looking at the father of cinema — Muybridge — and doing a documentary about him, and how Leland Stanford hired him to settle a bet: whether a horse, when it’s running, ever lifts all of its legs off the ground. And Muybridge was an incredible photographer. And he came up with this concept: “Why don’t we take twelve cameras, position them 27 inches apart, and as the horse runs through, they’ll take a series of pictures.” When they took this series of pictures, they realized pictures don’t have to be static; they could move. And we’ve always mentioned at all of our presentations that all of us, not just Radiant but anyone doing multi-camera capture, whether it’s Microsoft or Intel, are standing on the shoulders of great giants like Muybridge, whose technology, at its most basic level, is what’s being implemented. Bullet time, volumetric: it’s all the same principle of taking a moment in time and capturing it from multiple perspectives. So, yes, we’re doing this crazy documentary. We took hundreds of cameras and positioned them in a recreation of the racetrack. And the horse was now able to run much longer, and we did the experiment– I don’t want to give away too much, but we experimented a lot, not just with bullet time, but also with what happens if we capture this in volume.
Alan: So cool. I honestly– I saw a video on LinkedIn from your office; I guess it was probably the test. It was just this camera angle running down past all these hundreds of cameras in a row. It was really incredible. I don’t want to get off topic too much. Let’s get back to how companies are using this technology.
Michael: So we talked about entertainment, the lowest-hanging fruit, which was right in front of us. How does this help studios and content creators create new levels of engagement, where your audience is now a participant? In the future, when movies are made, it will be Al Pacino, Robert DeNiro, and you. You will be cast inside the movie; you’re a participant. So that’s very exciting. Outside of entertainment, we’ve been very lucky, because we’ve been working with a lot of enterprise customers on exploring how this will have an impact on the following verticals. One is manufacturing. People ask: why would volumetric, looking at parts and devices, make a difference in a manufacturing process? How does that make a difference? It does a lot. You’re not just using machine vision to inspect or count parts; you can see beyond that. You can actually see whether a part that’s been manufactured meets the tolerance of its original CAD file. Because now we can see volume in every single pixel.
So we’re seeing RGB plus depth, and we’re able to use it for analysis, for AI, to train on the difference between a part that just came off the factory line, or a piece of artwork that was just produced, versus a fake one; say, a luxury brand that’s making products and wants to make sure no one’s counterfeiting them. How does volume help with that? Well, volume helps a lot, in a lot of verticals. If you can see things in depth and detail, in high resolution, from multiple perspectives, the artificial intelligence just gets smarter. And that’s where Radiant’s new focus is: how do we develop these methods, and scale and simplify the processes? That’s something we’re very good at. We can take hundreds of cameras, like you saw in our bullet-time demonstration, and make it a one-button operation. We make it very simple, very scalable, very easy to use, so you don’t need specialty engineers. How do you deploy it into factory lines, and then, this is the most important question, how do you make it scalable? Because there are amazing companies that paved the way for all of us, like Lytro, companies that made groundbreaking breakthroughs, but what they were focused on was the very high end.
What we’re doing is built, in principle, on very low-cost, consumer-grade cameras. Synchronizing those is much, much harder, but it’s also a lot more important, because now we can install these in factories for $20,000 versus $2 million. Now we can scale it. And innovation happens when you give things to people outside of your comfort zone and let them do things beyond what you dreamed of. You can only do that if it’s not tied to big infrastructure: a huge stage, studios, servers, big computer systems, tons of resources. And that’s our main focus: purely how to take something complicated, simplify it down to its core, deploy it, and let it scale, because its cost is so accessible that people can use it for innovation. They can experiment, try new methods, without saying, “Wow, if we did this test, it’s going to cost us a couple hundred grand. So let’s not do it. We have one bullet. Let’s keep it safe. Let’s keep in our lane. Let’s not try to fix it until it’s broken.” So this allows people to try new things. And that’s what we’ve done. We’ve created a method for capture that is very automated and scalable, and it can be installed in factory lines, in so many verticals.
Alan: Alright, let’s talk about the different verticals, because you’ve mentioned manufacturing. So let’s simplify this. A product comes off the end of the line, and maybe one in every hundred gets put into a volumetric capture rig. Maybe every one does. It takes a picture and tests for tolerances. All of those images and all that 3D information get fed into AI, so the AI gets smarter and smarter as it goes. But it can also be used for the manufacturing facility itself. You can capture the entire facility volumetrically and allow a manager in a different part of the world to put on a headset, stand inside the manufacturing facility, and look around. And if you overlay the IoT data, they can now do a real-time inspection of that facility. What other ways can this be used in different parts of business?
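The tolerance test Alan sketches can be pictured as comparing a scanned point cloud against reference CAD geometry. A deliberately crude sketch, using brute-force nearest-neighbor distance on toy 2D points; real systems register the scan to the CAD model first and use meshes and spatial indexes rather than this naive loop:

```python
import math

def max_deviation(scan_points, cad_points):
    """Largest distance from any scanned point to its nearest CAD
    reference point: a crude stand-in for a tolerance check."""
    worst = 0.0
    for p in scan_points:
        nearest = min(math.dist(p, q) for q in cad_points)
        worst = max(worst, nearest)
    return worst

def passes_inspection(scan_points, cad_points, tolerance):
    """True if every scanned point lies within tolerance of the CAD model."""
    return max_deviation(scan_points, cad_points) <= tolerance

# Toy part: the CAD model says the corners of a unit square;
# the scan has one corner noticeably out of position.
cad = [(0, 0), (1, 0), (0, 1), (1, 1)]
scan = [(0, 0.01), (1, 0), (0, 1), (1.2, 1)]
ok = passes_inspection(scan, cad, tolerance=0.05)
```

Here the scanned corner near (1.2, 1) sits 0.2 units from its CAD position, so the part fails the 0.05 tolerance; feeding such deviations back into a model over many parts is what would let the AI "get smarter" about defects.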
Michael: OK, so here are some of the other businesses we’re working with. We’re also working in health and education. I can’t tell you who the institution is, but it’s a very large institution that is looking at future communications. Where they’re going, communication is nearly impossible; the distances they travel are sometimes a hundred thousand miles, even millions of miles away, and they can’t communicate. So how do they prepare themselves for emergencies, life-threatening emergencies, create simulations, and do it all through AI? And do it in a way that really makes sense. Because if I want to train someone to do a process, one of the best methods is not just to watch it from left to right, or even as free-viewpoint video. What you can do with volumetric that you can’t do with almost anything else is this: let’s say you’re solving the Rubik’s Cube. If I had to shoot it with my traditional 2D method, I would put the camera over your shoulder and get that, then get it under your hand, but it wouldn’t give you the same sense of being there. If I did it in VR, I would stand next to you and see you do it, and I’d see the room, but that’s still not good enough. But here’s what you can do in volumetric and free-viewpoint video: I can step inside your presence. I can become you. I embody you. I see your hands move as if they were mine. And there are incredible new headset manufacturers doing hand tracking, where you can put your hand right on top of that. So you can now really guide the trainee and run simulations in a much better way. And the great thing is–
Alan: The Quest is now doing hand tracking. I mean, this is a $400 headset doing hand tracking.
Michael: Well, again, remember, we talked about this earlier; the mission for all of the major tech companies is to displace the keyboard, the mouse, entire monitor systems. They want a five-year-old to have the same language skills that a 90-year-old does in a foreign country. Once you’ve broken the language barrier, the age barrier, the education barrier — once you make it simple at its core — it can scale. So training can work the same way, even for something very complicated. The simulations that we’re doing right now are life-threatening simulations, right? For the people who have to do them — how do you train them to save lives? How do you train them to handle catastrophic–
Alan: All right, so last week in Orlando, I got to try haptic gloves, which are gloves with sensors in them that allow you to pick things up and feel them volumetrically, and that also give you haptic feedback. In the experience I did, I was a medic in a military simulation. There was somebody in front of me, and I had to stop the bleeding. I looked down; they were missing their foot. There was blood spraying against the wall. I grabbed the tourniquet, put it on, turned it tight, and stopped the bleeding. Then I had to administer morphine, so I popped the cap off the needle, and they said, “OK, before you administer the morphine, put your finger on the needle.” So I put my finger on the needle, and it actually shocked me. It scared the crap out of me, because it hurt. Then I injected the needle and saved the guy’s life. And all in a matter of six minutes, in the very safe environment of a conference center, I went through the experience of saving someone’s life in VR. Honestly, I think I have a bit of PTSD from it, because it was pretty graphic. But wow, I’ve never done anything like that before. And I’m certain that if I had maybe a couple of hours of practice on that, I could go into the field and save someone’s life.
Michael: Now, imagine this. Imagine if that communication method wasn’t accessible to you. If you had to read a whole bunch of textbooks and sit in a college classroom, where one person stands in front of you, and whatever they say, you have to regurgitate and remember for the rest of your life.
Alan: It seems so obsolete!
Michael: That’s the education method we’ve inherited. It’s legacy, and we need to shed it and truly break the status quo in a lot of ways. One person stands up in a room and recites their knowledge, and whoever captures it, can regurgitate it, and holds onto it for the rest of their life now has the pedigree.
Alan: That’s the next person to stand in front of the room.
Michael: It’s actually flawed. And I’m very fortunate that I’ve worked in documentaries, including an incredible documentary about the Rubik’s Cube. The Rubik’s Cube was invented by Ernő Rubik. In 1974, he was a professor who wanted to teach students about three-dimensional volume. So he created a cube and put different colors on it. And then when he scrambled it, he couldn’t put it back together. Now, the Rubik’s Cube is one of the most complex puzzles: there are 43 quintillion wrong configurations for every one right answer. But yet you have little five-year-old kids speedcubing today, which is–
Alan: I know, it’s nuts.
Michael: Why is that? And we asked–
Alan: I saw a clip of a kid doing two at once, one in each hand. I was like, what?
Michael: You have five-year-old kids who can do this. And we went around the world asking these incredible professors why that is. They all said the same thing: without the algorithms, just taking a Rubik’s Cube and trying to solve it on your own, there are, mathematically, fewer than 500 people in the world who can do it. It’s really difficult. One in 43 quintillion. Yet you have five-year-olds doing it. The reason is what you and I just talked about. A five-year-old can watch someone else do it, rather than reading it out of a textbook. It’s tactile. It’s three-dimensional. They can see it repeated and do it themselves. Monkey see, monkey do. It taps into our core DNA. We are the trained monkeys that see others do something, and then we want to do it immediately and repeat it.
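As an aside for the curious, the “43 quintillion” figure Michael cites can be reproduced with the standard counting argument for reachable Rubik’s Cube states: 8 corner pieces can be permuted and oriented, 12 edge pieces likewise, and a parity constraint removes half the raw arrangements. A quick sketch in Python:

```python
from math import factorial

# Reachable Rubik's Cube states:
#   corners: 8! permutations x 3^7 independent orientations
#   edges:  12! permutations x 2^11 independent orientations
#   divided by 2 for the shared permutation-parity constraint
corner_states = factorial(8) * 3**7
edge_states = factorial(12) * 2**11
total = corner_states * edge_states // 2

print(f"{total:,}")  # 43,252,003,274,489,856,000 -- about 43 quintillion
```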
Alan: So, Michael, we’re getting close to the end of this. And I really hate to cut this off because I think we could have this conversation literally forever.
Michael: We can have a part 2.
Alan: We’re gonna have to have a part 2, for sure. So I ask this question of everybody. I think it’s especially important to ask you, because you get to see a lot more than most people in the world of volumetric capture. You guys are pioneering this. What is the one problem in the world that you want to see solved using XR technologies?
Michael: Wow. That’s a really good question. I wish it were just one problem in the world that we could solve. [chuckles] There are so many. Where do we start? But I think the thing that really taps into our core belief, why we feel this technology has the capability of breaking barriers and really making an impact, is that Radiant’s core philosophy comes down to three guiding words: spirited, purposeful, and human. We ask ourselves this question every day about what we’re doing and what we’re creating: Is it human? Is it spirited? Is it purposeful? What we’re doing now is incredibly human, because it creates connectivity between human beings in a world where we’re so disconnected. There’s so much information; 4,000 to 9,000 pieces of content. We’re here to break through that clutter and give participants a new language that doesn’t require new learning; it gives us equality. Spirited, because it lifts people’s spirits; it gives us hope when there is no hope. And purposeful, because we’re driven by purpose. “Cool” doesn’t cut it. “Cool” is surface-level, just the obvious, people simply wanting to be entertained. We all want more than just to be entertained. Those are our bylines, and that’s why we’re driven every day. It’s not easy for anyone trying to do cutting-edge work, trying to change people’s perspectives and get them energized to believe in something a little more challenging and new, just like the people before us did. But it gives us purpose; it gives us drive. And it’s just three words: human, purposeful, and spirited.