by Heidi Kaye
Aware and awake.
Response by Lou Sarvas to Microsoft's announcement of its joint venture with an investment firm to advance AI:
...there's nothing to smile about. Money and billions in investment do not impress me; it's not real, it's a concept, daylight robbery. It needs dumbed-down humans willing to exchange their life force for that worthless paper we call money.
The big push by Bill Gates and his buddy Kurzweil for AI, or so-called synthetic intelligence, and the transmutation of humankind that is trying to turn people into cyborgs is not good news for humanity. It's actually a death sentence. People like this dude above, the whole Microsoft and Apple gangs, should be loaded into a rocket and shot as far into space as possible and never allowed to come back. Humanity does not need this artificial nonsense! Then again, there are many gullible humans who will embrace this nonsense... good luck with your imprisonment; long live the FOOL.
Today’s announcement that the University of Washington’s Department of Computer Science & Engineering will be elevated to a school and will bear my name is truly an honor.
UW has always felt like home to me for several reasons.
In the university library my father helped lead, as the Associate Director of Libraries from ’60 to ’82, I spent hours and hours as a kid devouring piles of books so I could follow the latest advances in science. And I spent a lot of time in the graduate computer lab as a high school senior. Of course, I didn’t belong there, but the professors looked the other way—until we wore out our welcome, as you can guess high school students would do eventually.
I still have the letter from the computer lab director, Dr. Hellmut Golde, kicking us out. A couple lines still make me laugh.
“Dear Mr. Allen,” it begins. The letter lists several reasons for kicking us out: One was that we would use all the terminals at once and for such long periods of time that the lab became too busy and noisy. The second was that some of my co-conspirators hadn’t properly checked out equipment. And the third and truly great offense still gets me.
“Earlier this week,” the letter reads, “you removed the acoustic coupler from Dr. Hunt’s office without authorization.” It’s true. Guilty as charged. Since no one was using it, we’d taken it home so we could keep working off campus. And here’s the punch line. He said we’d taken it “without leaving at least a note. Such behavior is intolerable in any environment.” And that was the nail in our coffin, I guess. I’m still embarrassed we didn’t leave a note!
With that stern letter, our free time on UW computers came to an unfortunate end.
Another reason the University of Washington is such a special place to me is that it’s where we built the Traf-O-Data machine. While Bill Gates and I handled the software side of it, the machine itself was built on campus by a UW student named Paul Gilbert, a partner Bill Gates and I recruited into our high school business venture. Paul did an amazing job turning the first 8-bit microprocessor in Seattle into a real computer.
The idea was simple enough.
We wanted to automate the traffic-measuring process, part of which required high school students to count the holes punched into a tape, one for each vehicle that drove over a black tube laid across the street. We wondered if there was a less expensive way than a minicomputer to process the tapes. I had read about the new 8008 chip from Intel and suggested we try to build a machine based on it.
Objectively speaking, Traf-O-Data was a failure as a company. Right as our business started to pick up, states began to provide their own traffic-counting services to local governments for free. As quickly as it started, our business model evaporated.
But while Traf-O-Data was technically a business failure, the understanding of microprocessors we absorbed was crucial to our future success. And the emulator I wrote to program it gave us a huge head start over anyone else writing code at the time.
If it hadn’t been for our Traf-O-Data venture, and if it hadn’t been for all that time spent on UW computers, you could argue that Microsoft might not have happened.
I hope the lesson is that there are few true dead ends in computer science. Sometimes taking a step in one direction positions you to push ahead in another one.
And relentlessly absorbing the latest in technology can help prepare you for that new path toward success.
To think that when we were building the Traf-O-Data machine, there wasn’t even a computer science department at all. Now this department is one of the best in the nation, and this next phase of expansion is expected to elevate the school into the nation’s top five computer science programs.
This impressive program trains and educates some of the world’s best and brightest. In fact, I was fortunate to be able to convince UW professor Oren Etzioni to lead the Allen Institute for Artificial Intelligence. He and his team are doing tremendous work in Fremont.
The promise of artificial intelligence and computer science generally vastly outweighs the impact it could have on some jobs. Just as the invention of the airplane hurt the railroad industry while opening a much wider door to human progress, more intelligent computer assistance will amplify human progress as it comes into being.
I envy today’s young computer science and engineering students. I really do.
They have a wonderful opportunity to put their skills and expertise to use solving the world’s biggest problems. The amount of computing power available for their projects and the facility of the programming tools they can use far exceed anything we had. Today’s smartphone is many thousands of times faster than the CDC 6400 students used back in 1972! And today’s computer programs are really only constrained by the user’s imagination, rather than by the small amounts of memory computers had back then.
There is no shortage of ambitious efforts today’s young innovators could pursue.
We truly are entering a golden age of innovation in computer science, with new techniques such as deep learning at our disposal, and collaboration opening up new ways to build innovative projects.
I look forward to watching the new Paul G. Allen School of Computer Science and Engineering continue to make profound contributions both to the field and to the world. I look ahead with anticipation to the advances that will continue to flow from the school—advances that I hope will drive technology forward and change the world for the better.
Courtesy of Shelly Palmer
Last week, I compiled a list of the 5 jobs robots will take first. Today, let’s have a go at the 5 jobs robots will take last. For this article only, let’s define “robots” as technologies, such as machine learning algorithms running on purpose-built computer platforms, that have been trained to perform tasks that currently require humans.
Almost every human job requires us to perform some combination of four basic types of tasks: manual repetitive (predictable), manual nonrepetitive (not predictable), cognitive repetitive (predictable), and cognitive nonrepetitive (not predictable).
For example, an assembly line worker performs mostly manual repetitive tasks which, depending on complexity and a cost/benefit analysis, can be automated. A CEO of a major multinational conglomerate performs mostly cognitive nonrepetitive tasks which are much harder to automate. So, the trucking and taxi industries are in for a big shakeup; c-suite corporate management, not so much.
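The cost/benefit framing above can be sketched as a toy scoring model. This is a minimal, purely illustrative sketch: the weights, job names, and task mixes are invented assumptions, not figures from the article.

```python
# Toy illustration of the manual/cognitive x repetitive/nonrepetitive framework.
# The weights are made up: repetitive tasks are assumed easier to automate,
# manual repetitive being the easiest and cognitive nonrepetitive the hardest.
AUTOMATABILITY = {
    ("manual", "repetitive"): 0.9,
    ("manual", "nonrepetitive"): 0.5,
    ("cognitive", "repetitive"): 0.7,
    ("cognitive", "nonrepetitive"): 0.1,
}

def automation_risk(task_mix):
    """task_mix maps (kind, pattern) -> fraction of the job spent on that task."""
    return sum(share * AUTOMATABILITY[task] for task, share in task_mix.items())

# Hypothetical task mixes for the two jobs the article contrasts.
assembly_line = {("manual", "repetitive"): 0.8, ("manual", "nonrepetitive"): 0.2}
ceo = {("cognitive", "nonrepetitive"): 0.7, ("cognitive", "repetitive"): 0.3}

print(automation_risk(assembly_line))  # higher score -> easier to automate
print(automation_risk(ceo))
```

Under these invented weights, the assembly-line mix scores far higher than the CEO mix, matching the article's intuition that repetitive work is automated first.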
Make no mistake: at some level, every job can (and will) be done by machine. It is not a question of if; it is just a question of when. You’re going to push back now and tell me how different humans are from machines and how long it will actually take for all of this to happen. Stop. Read Can Machines Really Learn? for a primer in machine learning. Then read AlphaGo vs. You: Not a Fair Fight to understand what is happening and why you should care about it. If you’re still not convinced, have a look at What Will You Do After White-Collar Work?. It will help put all of this in perspective.
That said, there are some jobs that will be exceptionally difficult for AI to do subjectively better than humans. This is not an arbitrary list. Each of the following jobs requires a unique combination of human intuition, reasoning, empathy and emotion, which is why it will be difficult for an AI system to train for them.
As you will see, the last jobs that robots will take share a common thread: humanity.
Unless we are trying to turn our children into little computers, we cannot let computers train our children. (“Singularity” people, I know what you’re going to say. The Kurzweilian future is now estimated to begin in the year 2045. There will have to be a minimum age law associated with human/machine integration.) I can imagine a robot kneeling beside a sobbing five-year-old (who just figured out that his mom packed PB&J instead of a bologna sandwich) and offering comfort and a shoulder to cry on, but the robot is unlikely to provide an emotionally satisfying outcome. We teach our children to be human. If we want them to grow up to be human, they will have to be trained by their own kind.
Would football be interesting if it were played by robots? Maybe. Would it be fair to put human athletes on the field of play against robots? Probably not. Using today’s regulation clubs and balls, robot golfers would consistently shoot in the high 40s to low 50s. What’s the point? As long as humans strive for athletic excellence, humans will need to play sports. What about surgically enhanced, genetically modified athletes? That’s for another article.
Politics and humanity are inextricably linked. The complex mix of subtlety and nuance required to become a successful politician is not in the current purview of AI. It’s a training set that would require a level of general intelligence that is far beyond the reach of near-term technology. Machines do not need politics; they “live” in a meritocracy. Humans live in anything but. As long as fairness and equality are important topics, humans will be the only ones on the political scene. Some of you will remind me that all politicians have the same goal: to get reelected. And therefore, politicians should be very easy to program. Nope. Sadly, politicians will be among the very last professionals to lose their jobs to AI. (They are also in a unique position to legislate their own job security.)
Judges, adjudicators, arbitrators, and people who judge baking contests or Olympic sports or any type of contests that require both objective and subjective assessments have practically robot-proof jobs. Subjective judgment requires vast general knowledge. It also requires a thorough understanding of the ramifications of your decisions and, most importantly, a precise ability to play “I know, that you know, that I know” with the parties who are directly involved, as well as the public at large. If you can make a living judging baking contests, you’ve got lifetime job security (as long as you don’t eat too many pies).
Psychologists, psychiatrists, and other mental health professionals will be among the very last jobs robots can take. Sure, we could tie a combined natural language understanding and automatic speech recognition system to a competent AI system and make a fine suicide-prevention chatbot. But there’s much more to understanding and treating mental health issues. Again, humans are better equipped to understand other humans. This is not to say that medical professionals won’t leverage AI systems to do a better job, but the ability to create a robot that could take the job of a trusted psychiatrist will be outside our technical reach until we have functioning Westworld-style robots. And even then, it will be a reach.
I have intentionally left artist, writ large, off this list. The artist is a good subject for another article. Suffice it to say, technology has already had a huge impact on the economics of the arts. And, as much as I would like to tell you otherwise, none of these jobs are anywhere near safe.
If you’re wondering where your job sits on the list of “Run for your life, the robots are coming,” you have a simple, singular mission. Learn how your job is going to be automated. Learn everything you can about what your job will evolve into and become the very best man-machine partner you can. It’s the best way to prepare yourself for the advent of AI. Lastly, don’t wait. Everyone will tell you that none of this is happening anytime soon. They are flat wrong. But even if they are right, there’s no harm in being better prepared for an inevitable future.
Named one of LinkedIn’s Top 10 Voices in Technology, Shelly Palmer is CEO of The Palmer Group, a strategic advisory, technology solutions and business development practice focused at the nexus of media and marketing with a special emphasis on machine learning and data-driven decision-making. He is Fox 5 New York's on-air tech and digital media expert, writes a weekly column for AdAge, and is a regular commentator on CNBC and CNN. Follow @shellypalmer or visit shellypalmer.com or subscribe to our daily email http://ow.ly/WsHcb
Last month, Mayo Clinic’s CIO gave the strongest endorsement so far of artificial intelligence technology at the annual HIMSS conference in Orlando, Florida.
Cris Ross, along with Tufia Haddad, a breast cancer oncologist at the Rochester, Minnesota, institution, portrayed the tangible benefits of using artificial intelligence, specifically IBM Watson Health’s AI engine.
But make no mistake.
Ross wasn’t donning rose-tinted glasses as he reviewed this emerging technology that’s set to transform myriad industries, including healthcare.
“Artificial intelligence is still pretty dumb,” Ross declared before adding, “And I don’t mean that in a really derogatory way.”
What Ross was pointing to were the current limits of AI.
He described IBM Watson Health as “some of the best computer science on the planet” but noted that AI is heavily dependent on mammoth amounts of data. Here’s how Ross captured the limitations of AI, adding that his view of the technology may result in “fist fights” (slightly edited):
The best artificial intelligence today is still driven entirely by so-called semantic models, which is understanding language and the relationship of words to each other and how they build up. So the only way that these things can work is by giving them mountains of data to plow through to try and get to statistically meaningful connections, which then can be leveraged to gain some other understanding.
So, this is like a 2-year-old child just learning to speak and to walk and how they interact with the world. When I put my hand on the stove, that’s not a good outcome. It’s not something immediately clear to a 2-year-old child.
What all this AI is lacking is an ontological model where you can describe a structure abstractly. Watson had no idea what a patient was, what a hospital is, what a doctor is, what a drug is, what the effect is on a patient, what’s the relationship between a doctor, drug, a patient and an outcome.
No clue, because with these technologies you can’t describe an abstract concept and have that abstract concept be applied….
But as long as we are still based on raw-horsepower semantic engine technology, it means that the only place where this technology is applicable is where there are sufficiently deep and rich data sets with enough narrow variations….
Those narrow variations allow the technology to look for some correlations and then arrive at some knowledge, Ross explained.
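Ross’s point, that a semantic engine mines statistically meaningful connections from data without grasping any abstract concept, can be shown in miniature. In this illustrative sketch (the tiny corpus and stop-word list are invented), simple co-occurrence counting surfaces a strong “doctor”/“patient” connection, yet the program has no idea what a doctor or a patient is:

```python
from collections import Counter
from itertools import combinations

# A "mountain of data" in miniature: the program sees only word co-occurrence
# statistics, never an abstract concept like "doctor" or "patient".
corpus = [
    "the doctor prescribed a drug to the patient",
    "the patient thanked the doctor for the drug",
    "the doctor examined the patient at the hospital",
    "the nurse brought the drug to the hospital",
]

pair_counts = Counter()
for sentence in corpus:
    # Drop a few function words, then count each unordered pair of
    # content words that appear together in a sentence.
    words = set(sentence.split()) - {"the", "a", "to", "at", "for"}
    for pair in combinations(sorted(words), 2):
        pair_counts[pair] += 1

# The strongest "connections" are just frequency, not understanding.
for pair, n in pair_counts.most_common(3):
    print(pair, n)
```

Scaled up by many orders of magnitude, this is the kind of statistical association a semantic engine leverages; the missing ontological model Ross describes is exactly what such counting can never supply.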
James Rosen, senior managing director of PricewaterhouseCoopers’ analytics group, who was moderating the AI panel at HIMSS, chimed in that as AI is developed and perfected over time, interested stakeholders should keep an eye out for “deep learning.” Deep learning is a subset of machine learning in which algorithms try to make sense of, or model, abstractions through data. These algorithms aim to function as neural networks, loosely in the way a human brain does.
The IBM representative on the panel discussion at HIMSS did not resort to “fist fights” as Mayo’s Ross essentially described the best computer science on the planet as a thoughtless toddler.
“The technology isn’t the goal. The goal is the outcome, the health that we are all trying to move towards,” said Sean Hogan, VP of IBM Healthcare. “So, if Watson is still a toddler, a young infant even, [we’re] glad that we’re choosing good parents or smart parents like Mayo and MD Anderson and some of the top institutions around the world, and we are actively trying to learn from that experience.”
Photo: HKPNC, Getty Images