I am an advisor to the San Francisco chapter of the VR/AR Association. At a recent event, I learned about how PhaseSpace was using VR to train police cadets, and I arranged to visit the company headquarters in San Leandro for a demo.
I got that and much more. I saw VR demos related to games, healthcare, education, and training, all of far better quality than anything I had previously experienced. Most important, they seem to have cured the nagging problems related to latency.
PhaseSpace is a small, privately held company, but it has been around for twenty years and has an impressive list of customers, including Apple and Google.
At the Speed of a Wing Flap
According to Kan Anant, director of operations, PhaseSpace's patented active LED markers produce highly precise tracking data. The markers establish the area where the VR system interacts with its sensors, enabling immersive effects as a person, or a group, moves through a designated space, seeing virtual objects from diverse perspectives.
There are several things that set PhaseSpace apart, according to Kan, but the most important is frame rate: PhaseSpace runs at an astoundingly high 960 Hz, with a latency of less than three milliseconds, the time it takes a housefly to flap its wings once.
This eliminates many of VR's persistent problems, such as nausea and vertigo.
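The numbers above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the figures quoted here (960 Hz capture, sub-3 ms tracking latency) and the commonly cited comfort threshold of roughly 20 ms motion-to-photon latency for VR; the threshold is an assumption, not a PhaseSpace spec:

```python
# Back-of-envelope check of the latency figures quoted above.

capture_rate_hz = 960                       # PhaseSpace's quoted frame rate
frame_period_ms = 1000 / capture_rate_hz    # time between tracking frames
print(f"frame period: {frame_period_ms:.2f} ms")  # ~1.04 ms per frame

# Motion-to-photon latency under ~20 ms is a commonly cited comfort
# threshold for VR. Sub-3 ms tracking leaves most of that budget
# for rendering and display.
latency_budget_ms = 20
tracking_latency_ms = 3
remaining_ms = latency_budget_ms - tracking_latency_ms
print(f"budget left for render/display: {remaining_ms} ms")
```

In other words, at 960 Hz a fresh tracking frame arrives roughly every millisecond, which is why sub-3 ms total tracking latency is plausible and why so little of the comfort budget is consumed before rendering even begins.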
“Our markers are active markers, eliminating any chance for confusion,” Kan asserted.
The potential for shared multiuser experiences also expands and improves the social dynamics of spatial computing. Today, usually no more than four people can share an immersive headset experience at once.
Safer for Cops & Suspects
During my recent visit to PhaseSpace, I was taken through a series of demos using Oculus, Windows Mixed Reality, and Gear VR headsets. I had been through similar demos before: tours of the human brain, architectural models that let you see what different surfaces or furnishings would look like in a room, and the omnipresent alien-zapping game.
The field of view was consistently large, and I didn’t have to fiddle with the headset to hear and see what I needed to hear and see.
I had the most fun and learned the most from the police training video that I had come to see.
They put an equipment-laden vest weighing about 40 pounds over my shoulders. It felt like a Kevlar vest, and I immediately assumed the role of a police cadet who had just pulled over an extremely suspicious-looking character. I was guided through a few scenarios in which I made small mistakes that either got me killed or allowed the suspect to flee.
I came to appreciate how careful and fast cops must be to survive. More important, it showed me how much more effectively VR can teach.
It has been a tough year for immersive technology champions like me. We said goodbye to a period of irrational exuberance over technology that simply was not ready, period, and we have entered a new era of clean and sober. Still, I keep finding small new companies showing pragmatic promise for VR, AR, XR, or whatever it ends up being called. It is as world-changing as we champions have predicted; we just got the timeline wrong, by perhaps a decade or so.
Roger McNamee and Google AR Maps
NBC reported this week that Google is experimenting with AR that will show Maps users directions. If you are walking a city street, you will see arrows showing you precisely where to go. The feature works much like indoor mobile maps such as Aisle 411, which has been doing this for years inside large retail stores and malls: you hold up your phone and see arrows on the floor and a photo of the actual item at your destination.
This is something I want. I very often find myself wandering around the streets of San Francisco, having trouble aligning the direction I should be heading with the street map I am looking at on my phone. These arrows solve the problem simply. Add a photo of my destination and I would say it solves it completely.
When I mentioned this on Facebook, Roger McNamee was quick to point out that this is just one more deceptive way that companies like Google and Facebook collect data. Roger knows a great deal about this topic. A former mentor to Mark Zuckerberg, he discusses Facebook and its data abuses as a threat to democracy in Zucked, his recently released book. (I have not yet read it. It's next on my list.)
He's probably right, of course. But I feel that one way or another, I pay for what I get online: either with money or with my data. Issues like this sometimes make me want to move to a cabin in the woods with neither electricity nor Wi-Fi.
But instead, I will continue to consume this stuff as if it were all free ice cream. My guess is you will too. Besides, a moving arrow on a Google Street map will let me reach my destination in neighborhoods where, for safety's sake, I would prefer not to get lost.
AI Maps for Autonomous Cars
While Google Maps may tell us where to go on the sidewalk, TomTom, the old satellite navigation company that was big before Waze, wants to guide autonomous cars with AI-enhanced mapping technology. What confuses me is that TomTom's value-add is supposed to be HD-quality maps. I have no idea why an AI-driven car would need HD-quality visuals from an AI-driven map.
In any case, geographic data is going to become more pervasive no matter what concerns thinking people may have. Perhaps autonomous cars are also suckers for free ice cream, just like me.
Truth, Illusion, AI & Art
Who's Afraid of Virginia Woolf? was a popular play and movie in the 1960s, and its best-remembered line was, "Truth and illusion, George; you don't know the difference."
It's getting so that we are all going to have that problem. A combination of AI and immersive technologies is blurring the line between what is real and what is not. The cultural and business issues keep growing in complexity, or so it seems to me.
These thoughts came to me as I read about GauGAN, a software "smart paintbrush" from Nvidia. The chipmaker used deep learning to train it on a million images.
The idea is that everyday people can create a crude sketch on their computers and GauGAN will transform it into a work of fine art. This is a dazzling accomplishment for AI in my view.
But I am not sure how I feel about it as a human. Does it make works that were once priceless now worthless? At best, I feel ambivalent.
Got News for Me?
I am trying hard to produce more original content in this newsletter on all issues related to the business of disruptive technology, particularly AI and AR.
Do you know a company, product, issue or idea that would be useful or interesting to my 25,000 newsletter readers and blog visitors? Please email me if you can help. I would offer a free subscription, but everyone can have that anyway.