There are many kinds of chatbots. What sets the category apart is that they learn by interacting with people, and in so doing they keep getting smarter and more helpful.
For more than 60 years, proponents of Artificial Intelligence have been divided—often acrimoniously—into two camps: Autonomy vs. Augmentation. Autonomy proponents…
So much is being said about the dangers of AI. I share many concerns related to autonomous cars, massive unemployment…
I met in San Francisco last week with the three founders of PocketConfidant, a promising AI startup based in San Francisco and Nice. Two of them are executive coaches and the third is a computational neuroscientist and IT professional; together they have developed software to augment the services that coaches increasingly perform for business leaders and students all over the world. They are the first company I’ve actually interviewed for Augmenting People: Why AI Should Back Us Up, Not Push Us Out, my new book scheduled for release next year.
The company, formed earlier this year, is a Business Solution Partner of the International Coaching Federation (ICF), a self-regulating global organization that awards credentials only to those that it says meet rigorous education and practice requirements. Executive coaching is taught at an increasing number of elite institutions such as Harvard and Yale. ICF has about 55,000 members and is growing at a modest pace, while the demand for coaches is growing much more rapidly. For me, there is some irony in covering an area that sees job growth, since most of the research for my new book has me looking at appalling predictions of vocational reductions.
The growth is coming in two areas:
*Early-phase companies, where scaling suddenly accelerates and young entrepreneurs find their jobs shifting from product development to managing staff and adapting to systematic management; and
*Global enterprises, where coaching is a function of HR, which uses it to fast-track junior employees who demonstrate leadership potential but lack experience.
Coaches differ from mentors in that they never suggest anything: they simply ask questions in the classic Socratic method, and clients use critical thinking to find answers within themselves. (In fact, one of the very first AI end-user applications, Symantec’s Q&A, drew on the Socratic method when it was introduced back in 1985.)
In the US, at least, the number of certified coaches is growing more slowly than the demand, according to MarketResearch.com, so the pressure on coaches is to be available to more people, and for more time, than is humanly possible.
Elon Musk often reminds me of a quote from T.S. Eliot, my favorite poet: “Only those who go too far can possibly find out how far one can go.”
Events of the recent past demonstrate that he often goes too far. But when it comes to cars, spaceships, solar panels and Hyperloops, he is likely to be the one who demonstrates how far one can go.
Two cases in point:
1. Robotic Manufacturing. In May 2017, he boasted that the Tesla Model 3 factory would be the most automated in the world, producing 75,000 vehicles per quarter thanks to the superior productivity of robots. But at the end of Q1 2018, the factory had produced a comparatively limp 10,000. The reason, he discovered and declared in an interview, was that “humans are underrated.”
Admitting that the miscalculation was his mistake and that people simply do some things better than robots, he had Tesla pull out a major chunk of the automation and replace it with sentient beings. In short, Musk went too far. His company and his reputation are paying a significant price for it, but at the end of the day, the Tesla is probably the most significant improvement in ground transportation since the Model T Ford.
2. SpaceX Rockets. Musk has looked to the future, done some deep thinking about it, and it scares the hell out of him. But instead of walking around carrying a sign warning “The End is Near,” he has created an economic opportunity. There may come a time, not as far into the future as any of us would want, when Earth becomes uninhabitable. With that in mind, he created SpaceX, which has proven that Silicon Valley technology with private financing can make space travel faster, better and far cheaper than NASA can with government backing and traditional contractors. His goal is to build a human colony on Mars, where humans might create a habitable environment after destroying the one we have here.
On Sept. 19, he announced one giant leap for humankind: Yusaku Maezawa has chartered a ride on the SpaceX Big Falcon Rocket. The Japanese billionaire will bring along six to eight artists as guests. The flight is scheduled for 2023, but Musk’s schedules have been known to slip. To me, that matters less than the fact that Musk, more than any other individual I can think of, is exploring how far we may need to go.
There are two long-standing schools of thought on the role of Artificial Intelligence. The first is Autonomy, where AI devices and systems simply replace us. The second is Augmentation, where AI serves us as a new toolset. I favor the toolset approach and am writing Augmenting People: Why AI Should Back Us Up, Not Push Us Out to explain why you should be concerned as well.
Ultimately, decisions made today between AI augmentation and autonomy will shape the future of the human race. Over the next 10-15 years, this choice will restructure our relationships with machines, each other, the planet and the universe. Right now some of the smartest technologists in the world, working in some of the best established and most promising tech companies in the world, are inventing AI applications that range from nanobots that will swim through our blood zapping cancer cells to cars so autonomous that they are outfitted without steering wheels and with fold-out beds for a good snooze.
You’ve already read about the cars and how great they will be. In the autonomy/augmentation debate, it seems the decision has already been made: robotic cars will prevail. The government, insurance carriers, and even carmakers are just about unanimous that these vehicles are safer and far less polluting. A younger generation seems to prefer self-driving to its parents’ preference for being behind the wheel and in control.
Just about every automaker is moving toward what is called Level 5 Autonomy, the highest level on the SAE scale, at which the driving machines do it all while humans do whatever they please. Volvo is even designing a sleeping car, because passengers need do nothing but get inside and tell the car a destination.
There are mountains of data arguing the case for automobile autonomy. Millions of miles have been logged by traditional and new automakers with very few mishaps and only two fatal accidents—both blamed on distracted humans who made fatal errors.
Cases like this sound pretty compelling. Data is the protoplasm that makes AI seem, well, intelligent. The stats argue that if we remove humans from the loop, lives will be saved, pollution will be reduced, existing roadways can be retrofitted to carry more cars at higher speeds, and travel time will be cut.
Johannes Gutenberg had a clear winner when he invented movable type. His technology replaced monks with quills, enormously expanded the distribution of news and information, and motivated everyday people to learn to read. It endured for well over 400 years, until Ottmar Mergenthaler invented a faster, better and cheaper way to set an entire line of type, rather than one letter at a time as Gutenberg’s press required. Appropriately, he called it the “line-o-type.”
They still had Linotype machines in the 1970s when I got my first job for a newspaper, the Boston Herald-Traveler. But as I started my career, the Linotype machine operators were ending theirs.
This would be a trivial piece of personal information, except that now I am researching Artificial Intelligence and the Future of Work for a new book, and Linotype operators and Kurt Vonnegut keep flashing back to me from inside my analog memory bank. I’ll tell you about Vonnegut in a few paragraphs.
It’s important to mention that the Linotype Operators had their own union at a time when unions were still strong enough to protect tradespeople. And the unions collaborated, so if they went on strike, as a member of the American News Guild, I would be obliged to go on strike, as would the Teamsters who drove delivery trucks. If the strike got long and nasty, the Teamsters could shut down the entire country.
By the time I arrived at the old Herald, Linotype machines were destined for trash heaps, museum exhibits and little else. Personal computers had come along, and with them a new way of printing four entire pages of news at once.
Yet, as I started my career, the Herald had an entire floor dedicated to Linotype operators sitting at or near their machines. I passed them several times daily as I took news copy from the third floor down to the first floor, where the presses printed and bundled newspapers that were then loaded onto trucks for distribution and delivery.
As I passed through the Linotype section, I’d see these guys reading books or playing checkers or cards. They were not quite hostile, but I found them generally unfriendly.
I would eventually learn what had happened and why. A few years earlier, when word processing and cold type eliminated the need for the linotype and its operators, the unions cut a deal that avoided a strike: Every linotype operator could remain employed until retirement age. Some chose to leave but others stayed, and perhaps, would remain for decades.
It was a scenario, it seemed to me, where everyone lost. Perhaps the operators who stayed lost the most: they kept getting paid, but they lost their pride, and for that their families perhaps suffered more than if they had lost compensation.
Being the Herald copy boy was my night job. By day I was an English Journalism major at Northeastern University, where I too was learning the skills of a job whose market value would diminish as digital innovations advanced. At about this time, I was assigned to read Player Piano, written in the 1950s by Kurt Vonnegut. It is a futuristic novel set after a horrific war that reduced the world’s population: most working-class job holders went off to fight and die in huge numbers. Left behind were managers, who kept things running, and engineers, who automated the jobs formerly performed by working-class people. The machines proved far more efficient than the humans had been, so the managers kept managing things that the engineers kept refining, and no one else was really needed to produce goods and services.
I first read John Markoff‘s Machines of Loving Grace in 2015, as I started to write The Fourth Transformation, a book about AR and AI in which I predicted that today’s smartphones would be replaced by smart glasses. Markoff’s book gave me reason to pause: it convinced me that the most transformative technology of this new century would be AI, not AR. It is no coincidence that, as I start researching Augmenting People: Why AI Should Back Us Up, Not Push Us Out, I am rereading his extremely well-researched book.