World of Watson: Day 2 General Session

[ MUSIC ] Life changing. Leading. Assistive. Evolving. Future. Complicated. Vast. Wow. Amazement. Powerful. Watson’s definitely powerful. For me, the biggest one is “meaningful.” It truly is profoundly meaningful work. One of the first times I really got to
see where this was going was we had some of our representatives from Memorial
Sloan-Kettering get up and they really talked about how promising they thought this would be. One of the amazing realizations is how
much like a learned colleague it can be. IBM Watson can just go into
a medical records system, pull out all the relevant data
and do it much more quickly. The response from patients, to me,
has been absolutely overwhelming.
>> People are getting to understand the implications that this will have for every profession and every industry.
>> We're using Watson to make travel decisions. We're using Watson to make cooking decisions. [ MUSIC ]
>> With the collaboration between IBM and SoftBank, we're working to bring Watson to new markets and to teach Watson Japanese.
>> The biggest surprise that we found is the receptiveness to this new technology. In the past, it's taken years for people to get comfortable with us.
>> My own view is that every single professional on the planet can be as good as the best professional in their field with the help of a cognitive assistant.
>> It's really about scaling the greatest minds to every mind. I think that's the promise of Watson.
>> It is a technology that can go anywhere and do almost anything at this point.
>> It's a future in which that intelligence embodies itself in the physical spaces in which we work and live, and that transition is really extraordinary.
[ MUSIC ]
RHODIN: Good morning. [ APPLAUSE ] Welcome back to day two. I'm glad we actually got the thermostat turned down just a touch from yesterday afternoon. It's nice to be up here not sweating. So one of the things I found pretty exciting
over the day yesterday and actually mingling with everybody last night was listening
to all the new ideas and use cases that you all were coming up with for
the technology you saw yesterday. I have a very simple message to
my team that’s listening in here. We need to start brewing the coffee because we’ve got some seriously
exciting work to get going on. A year ago, when we first started talking
about the Watson platform and opening up to an ecosystem, we had our
first API, question and answer. Very simple, ask a question and get an answer. Today, a year later, we’ve got over 25 different
services ranging from personality insights to concept expansion, tradeoff analytics. Developers organizations and
entrepreneurs and educators can all build with these different applications now. We also have been focused on how to make
it easier to get started with Watson easier for organizations to get up and
start their cognitive journey. But for many of them what we heard
last year keeping data local, some information is increasingly sensitive
and is not something you want to put into a public cloud is more than a requirement. It’s an imperative. And fields of healthcare, financial
services, education, just to name a few. So we’re really excited to be able to announce that we’re offering a Watson hybrid cloud
offering that uses Watson Explorer as the on-premises application platform. It connects to your local data, but also connects to the Watson Developer Cloud, enabling access to cognitive capabilities on the data that's available in the public world. This is going to be a great environment for you to pull down data like PubMed, news feeds and reports, financial journals, and new healthcare publications. This is going to open up entirely
new avenues for organizations looking to get started on their cognitive journey. And speaking of journeys, over the past
48 hours, over 200 developers joined us, some of them coming from Germany, the UK,
Brazil, Hong Kong, Georgia, Canada and Japan. One of the teams called themselves Jet Lag. They set up camp in tents outside
here with some serious bandwidth, the Watson APIs, music (gotta have music), and truckloads of food, and started a hackathon. The Watson hackathon kicked into high gear
with help from our co-sponsor Newi Central and they’ve emerged with a
really impressive set of apps. The judging last night went late into the night
to take advantage of all of the capabilities that they put together to
really come up with the winners. So let’s take a look at the top three teams. First up, Fetch. It’s designed to help users dig into
topics at a deeper level in real time. It’s a tool that converts speech-to-text
and uses the Watson Alchemy APIs to identify the information for the same
content and return them to the app in real time. Second, and our runner up, tackled a problem of finding a great school in
New York City for your child. Give the app a child’s writing sample and it
will match their personality with the schools in New York City that are the best fit
using the Personality Insights API. It then visualizes the tradeoffs in considering each of the schools using the Tradeoff Analytics API and highlights potential
career paths for your child. Now, that’s a cool app. And our winner of the first World of
Watson hackathon is Like Minded. It helps you find people worth connecting to who are also near you. It generates a user's personality profile using Watson's Personality Insights API, applies Alchemy's keyword extraction to tweets and profile text, and matches you with people like you based on interest and personality fit. Congratulations to Like Minded. [ APPLAUSE ] Let me emphasize that these apps, like many others,
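The matching idea behind an app like Like Minded can be sketched in a few lines of Python: represent each user as a vector of personality-trait scores and pair people by similarity. The trait names, scores, and users below are invented for illustration; a real app would pull the scores from the Personality Insights service rather than hard-coding them.

```python
# Sketch of personality-based matching: each user is a vector of
# trait scores (hypothetical Big Five-style values), and we match
# on cosine similarity. Scores here are made up for illustration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(me, others):
    """Return the name of the most similar user."""
    return max(others, key=lambda name: cosine_similarity(me, others[name]))

# Scores for openness, conscientiousness, extraversion, agreeableness, neuroticism
profiles = {
    "alice": [0.9, 0.4, 0.7, 0.6, 0.2],
    "bob":   [0.2, 0.8, 0.3, 0.5, 0.7],
}
me = [0.85, 0.5, 0.65, 0.6, 0.25]
print(best_match(me, profiles))  # alice
```

A production matcher would also fold in the interest keywords extracted from tweets, for example by concatenating a keyword-overlap score with the trait similarity.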
were built in 48 hours using the APIs. We've come a long way in a year. For those of you inspired to give this a try, I'll remind you that our hands-on lab is located in the balcony, which is no longer a sauna, and you
can try your hand at writing some applications. Now, we like recipes around here. And this morning we’re going to take a
look at the ingredients for our work ahead. As we jump in, I want to leave you with
this: There’s one theme that I take away from the brilliant and savvy ideas of how you’re
putting Watson to work is that Watson promises to democratize information
and with it innovation. We’re moving at Watson speed. And we’re going to be on this journey together. We’ll continue to innovate and build
solutions to help people discover new things, learn about them and make better decisions. So now to get started this
morning, let’s take a look at how cognitive technology is
disrupting industries and professions. The founder of Travelocity, Kayak and
now Wayblazer will take us on this tour. Please welcome Terry Jones. JONES: Thank you, sir. Thank you very much. And good morning, ladies and gentlemen. I've spoken at several Watson events, and normally they've asked me to talk about Watson. I'm not going to do that this morning. I'm going to talk about disruption. Because I've had the opportunity
of disrupting travel twice and I hope I’m going to disrupt it again. And so the title of this talk
is lessons about disruption. One of the things I’ve learned is
that technique follows technology. I’ve been going down this IT road
for a long time, about 45 years now. And we simply don’t know today where
technology is going to take us. We think we know. But we don't know. Alexander Graham Bell thought the
telephone would be a tool for the deaf. Edison thought the phonograph
would be a dictation machine. They didn’t know where technology
would take them. When — this isn’t working very well. Let’s see here. When Al Gore invented the Internet — [ LAUGHTER ] — I don’t think he knew that it was going to
be our travel agent, our mailbox, our bank, our real estate agent or where I’d find my wife,
or that I’m — I think I’m going to switch. We’ve got another clicker here and if we
can move to that one — I think we just did. Let’s see here. Hold on one second. There we go. Or when Martin Cooper invented the cell phone. Anybody remember those bricks, you had those. When he invented the cell phone, he had
no idea that we’d have 7.3 billion of them or that 10ian fisherman off
Mombasa in a sailboat without a motor would find they had
cell reception and check market prices of fish using text based messages using the
port on that data and gain some market power. We really don’t know where
technology is going to take us next. I certainly learned that at Sabre where I
was CIO where we took the reservation system of American that had schedules and fares
and because of inexpensive terminals and inexpensive connectivity, we went
out to revolutionize travel agents. We took them from the OAG and telephone
to using terminals and suddenly they went from 40 percent marketshare
to 80 percent marketshare. In a very short period of time. Disrupting an entire industry. And then the Internet came along
and information found its freedom. Information escaped. It was being beamed to us all the time and as
customers we were empowered by that information. But remember that pretty much in those
days information only flowed one direction. From the website to us. And websites were pretty simple. That’s the first Yahoo! page. That’s the first Travelocity page. Pretty simple. And yet connectivity equals opportunity. I’d say that’s the next big thing
to think about in disruption, how much connectivity has created opportunity. Think of all the industries that
were destroyed by that connectivity. Music, maps, news, the yellow pages. Even with that simple connectivity. 18,000 travel agents out of business. So that 80 percent share
disappeared almost overnight. And then users began to create information. That was in my opinion the next big change, because information began
to flow in two directions. People started blogging. And selling. And rating. My friend Rich Barton, who started Expedia, Glassdoor and Zillow, said, "If it can be rated, it will be rated." And people started sharing experiences about buying, no longer one-on-one over the backyard fence but one-to-many on sites like TripAdvisor, which has over 130 million hotel reviews. No one goes anywhere today
without reading a review. And that changed things again. It revolutionized marketing because
social feedback builds trust. It used to be that marketing
was in charge of consideration. But now it is the user who uses
and comments and gives feedback that creates consideration of products. Two-way information changed everything. And it changed the amount of
data from a stream to a torrent. At all of these meetings, people say 90 percent of the world's data was created in the last three years, and 80 percent of it is unstructured. The last five years have been
about connecting all these people. "The next five years are going to be all the crazy things you can do now that they are connected," said Mark Zuckerberg. I agree. But I'd add one word: all the crazy things they can do now that they are connected. And they are doing them. Martin Cooper never envisioned
7.3 billion cell phones. But he also didn’t envision 7.3 billion network
nodes that are creating connected businesses. Airbnb is now the largest hotel chain. Uber is the largest limo company. Expedia is the world's largest travel
agency, all because of connectivity. So connectivity creates a lot of opportunity. And if you think about the rest of the changes
that are coming, social, of course, is one. Analytics. Cloud. Big data. All these various pieces are coming forward. And the interesting thing is
when they all come together. And that’s what excites me about Watson. That’s what got me sort of out of retirement
and got me working on a Watson-based business is because with HTML, we kind of were able to
put a pretty face on a lot of ugly systems. I think with Watson, we can take all this
interesting social, mobile, cloud analytic data and put it together in an
incredibly powerful engine that will disrupt businesses going forward. So we've gone from systems of
record to systems of engagement. And now to systems of insight. And you know, the other day I was in London and
I got a chance to speak to the research board. Some of you know the research board. The research board is a collection
of the top CIOs in the world. As I went in to make my talk (they had asked me to speak about cognitive computing), this was up on the board. It said: insights are the strategic asset. So these CIOs looking forward are
being told by their research arm that it’s all about insights going forward. And I absolutely agree. That’s what we’re looking for at Wayblazer. That’s what all of you are
finding with this technology. That’s what cognitive computing can provide,
because a lot of what went on
discussions, a lot of the booths outside are about extending human capability,
about the more capable doctor, the more capable chef, the
more capable salesperson. I think that’s great. I think that’s a terrific use for Watson. But when I look at all this unstructured data, as a B to C consumer guy, I
think of something different. I think of the paradox of choice. I think of how difficult it is for consumers
to sort through all this data we have in the physical world and
the virtual world today. It’s hard. It’s really hard. So complexity equals opportunity. The complexity that was generated through that
two-way world, that two-way communication, is a huge opportunity for us in business. How do we make it simple? How do we reduce complexity? That’s to me where cognitive
can be very powerful. Now, Travelocity and Kayak made booking and searching easy. But planning? Planning is still really hard. I think Ginni even mentioned yesterday
that the average consumer searches over 20 sites to find what they want. Now, I started my career as a travel agent,
and in those days people would come to me. They would want to go to Europe and
we would plan their trip in an hour. Today, people spend months. They spend forever looking at
websites and more websites. They don’t want to make a mistake. They try to do it themselves. It’s really hard. How do we make it simple? How do we change what’s going on and
make it easier than it was before? Because search is terrific. But search only gives you clues. Do you realize a search for "best hotels in Hawaii" returns 58 million results? Not really useful. Boy, oh boy. What we need is expert advice. And the Web has never given advice. Advice will help us move
people through the funnel. Fluid, a customer and part of the ecosystem, is doing this in retail. They're asking what you want to do, not what product you want. Why do you want to go there, not exactly what you want to buy. I'm going to Everest. I'm going to climb Mount Whitney. Here's what you need. That's a very different kind of retail question. SellPoints is doing something similar with this open, interest-driven discovery. We've done the same thing for the city of
Austin, Texas, at their convention bureau, where you can say, hey, I want to go to Austin
this fall with three buddies for a guys' trip. You can't ask that question of any other website. But you can ask it of Wayblazer. I'm heading to Austin for a guys'
trip with three buddies this fall. And instead of giving you 58 million answers, we will tell you: go to the Formula 1 race. Go to Austin City Limits. Go to a UT football game. Curated answers, but curated by a machine. Now, are those baby steps? Yeah. Those are baby steps. But that's where customers want to start. Because you don't have to change
the world to cause disruption. Many big innovations affect only one part of the model. Think of contact lenses. They revolutionized eyewear, but we bought them in the same place we bought our eyeglasses. Think of cell phones. A huge revolution, but we buy them from basically the same phone companies we bought from before. Think of Travelocity: a registered travel agent. We just sold differently. You don't have to change
every part of the model. Apple didn’t invent the MP3 player or the
cell phone, they just made them better. So baby steps can be okay. Kayak. We sold for $1.8 billion. All it did differently was offer choice. I’m on the board of a hotel auction company. Our whole thing is about auctions. We’re the only place on the Web
where you can auction a hotel room. Small model. Big business. I'm on the board of an attractions company. We bundle attractions together so
you can go to Boston and see all of the attractions in Boston for one price. You don’t have to change everything
to begin to disrupt an industry. At Wayblazer, we're doing simple things as well. Hotel chains are asking us to let you ask questions like "great honeymoon hotels for this spring, somewhere tropical or adventurous." You can't ask that question of any hotel site today and get a set of hotels that meet those criteria. We're also building concierge
applications once you’re at the hotel. What do you want to do today, here’s
what’s available at the hotel. These are baby steps because our customers
right now are only ready for baby steps. But they’re baby steps that no one
else has been able to take before. That's where we have to begin, right, with these small steps. But what comes after baby steps? After baby steps you have to look
beyond what’s immediately apparent. That’s the next lesson, where can we go next. I had the unfortunate task of
going to the executive committee at American Airlines to propose
electronic tickets. I said we don’t need tickets anymore. We can get rid of tickets. Computers are smart enough to do that. And what did they say? Our customers want tickets. What about receipts? They will never accept this idea. Leave us, go from this room. [ LAUGHTER ] And American was one of the last
carriers to implement e-tickets. And yet today 99 percent of tickets are electronic. They were not only late for that; they were late for the self-service world. But why were they late? Because in real innovation,
being comfortable isn’t good. As you implement all these
new ideas that John Kelly and the team are putting together
you should be a little uncomfortable. You should be a little scared. Because that’s what it takes to move forward. The big prizes are found when you ask a
question that challenges corporate orthodoxy. And that's what I encourage
you to do with this technology. That’s what it can let you do. And remember that no battle plan
survives contact with the enemy. Right? So you really have to have
wonderful sensors to get feedback on what people think about this new technology. At kayak, we didn’t have a phone bank
but we did get e-mails from our customers and we sent the e-mails to the engineers. You might think that’s crazy and
expensive, but our slogan was give the pain to the people who caused the pain. [ LAUGHTER ] And it worked. And you know Kayak has the number
one mobile app in the world. They have 45, 50 million customers of that app. When we started it we thought it
would be about next flight out. About hotel tonight. But it’s not true. People use that app exactly like the desktop. And if we hadn’t listened and had that feedback
loop, we wouldn’t have 45 million downloads. Particularly important with this technology,
you better build tight feedback loops because we’re all out there breaking new snow. You also have to be agile. You have to make sure that your delivery muscle
is not outmuscling your discovery muscle. Right? That happens in every business. Make sure both muscles are equally strong. Because we get really good
at delivering products over and over again and we forget about innovation. I’ll tell you a little story about Travelocity,
but it could be the story of Expedia as well. We started out as a regular travel agent. We made 10 percent commissions and then
Microsoft entered the market to compete with us. And airlines responded by reducing from
10 percent to 8 percent, and we responded by selling tours and cruises. I bought the number-three site and got AOL and its traffic, and the airlines responded by cutting from 8 percent to 5 percent, because they were afraid of big new competitors. So we reduced costs and added telephone sales, and the airlines got together on a project they called Travelocity Terminator. You may know it as Orbitz; Terminator was actually the internal name of their project. And then 9/11 happened and
our business fell 70 percent and the airlines took the
opportunity to cut commissions to 0. And we added service fees. But through it all we continued to grow
until, by the time I left, we were about a $3 billion company. How did that happen? Because we continually transformed
our business model. And I think that’s the case with this software. Again, I don’t think we know exactly
where the models will take us yet. We were also a bit lucky. I'll call it the victory of the Lilliputians, the little guys who tied down Gulliver because they worked so fast. Believe it or not, American Express and Carlson Wagonlit, the largest travel agencies, didn't go online; I ran the American Express travel site. If you're little, move fast, because the big guys may not move as fast as you will. If you're one of those big guys, you better wake up. You better adopt this technology. Because the Lilliputians can tie you down. They didn't transform, which
is why we succeeded. You know, Steve Jobs used to say: why can't I buy a concept car? I love to go to car shows. I want to buy the concept car, but it's not for sale. How come when the car comes out three years later, it looks like crap? [ LAUGHTER ] Because it went through the committee. Right? But he took phones that looked like this and turned them into phones
that look like this. How did he do that? Through clarity, terrific clarity of what he wanted to do; focus on what's coming next; and courage to never settle. Clarity, focus, and don't settle. That's what it takes to disrupt an industry,
because change, change is going to continue. As Mike said, Watson started with Q&A. It was all about Q&A, and now we’ve got
all these new services coming forward. Services that can really change
the direction that we’re going. So what are you going to do with those services? How are you going to put all these
pieces together to create more power? Don't just recreate the past. This is an opportunity to reimagine business. That's what we're trying to do with travel. You know, Chuck Hull invented 3-D printing. He did that by looking at CNC machines, which are subtractive, and asking: instead of subtractive, what about additive? He looked at an inkjet printer and created 3-D printing. He reimagined the world. And that's pretty cool, because people are building parts. They're building whole cars. Customization will be the norm, but
then people have reimagined that again and they’re building hands for kids
who never had a hand, with open source. They built an ear out of skin cells, using a 3-D printer, that can actually hear. They reimagined. We've had radio-controlled airplanes for years. But then a young man took apart a
PlayStation and invented a drone. And of course the first drones
were just recreating the past. Take a crop duster, make a
drone crop duster out of it. But now people are reimagining
what can be done with a drone. How can we help the farmer? Let's put in infrared imaging and show the farmer where the crops aren't growing. And the coolest one I've heard about: instead of having people figure out when a grocery store is out of stock, they now have a drone that flies up the aisles at night, looks, and decides what needs to be restocked. That's reimagining. So don't recreate. Reimagine. The last 25 years were all about
who could build things the cheapest. The next 25 years will be about
who can make things the smartest. Martin Luther King did not say I have a plan. He said I have a dream. So have a dream today. Have a dream about what you
and Watson can do together. What can you do to disrupt the industry? Because as John Kelly, I think,
will tell us in just a moment, tomorrow will be nothing like today. Thank you very much. [ APPLAUSE ] What if there were only one kind of dog? Then it would be easy to know everything about that breed. But in fact there are over 300 breeds of dogs, and no one can be an expert in every one. An app powered by IBM Watson will help vets tap into expertise on every breed that walks, slides or slithers through the door. IBM Watson is making medicine smarter every day. RHODIN: Thank you, Terry. [ APPLAUSE ] The exciting work of LifeLearn
and the video you just saw is one of the reimaginings Terry talked about, and an example of the exciting work you're doing in our ecosystem. We added to our family with AlchemyAPI joining our team and making their capabilities available on the Watson platform. The whole group adds to the team, and their technology handles over three and a half billion API calls a month. And along with them, they brought a few friends: 40,000 developers and customers. To share his story on how he's putting cognitive and AI technologies to work decoding our social world, please help me
welcome to the stage Francesco D’Orazio. [ APPLAUSE ] D’ORAZIO: I’m Francesco and thanks for
having me here, because this is amazing. And I'm not going to thank you for putting me after Terry Jones, who did an amazing job. So I'll try and do my best. Something's going on here; we should go back. So: we are in the business of
studying how ideas spread. And we do that by analyzing the content
that gets produced on social media platforms like Facebook, Twitter and Tumblr. We collect all this information — let me go back one slide — we collect all this information and analyze it, and generate insights on what people think, what people do, and the behaviors they engage in. To do this, we have created a new type of
social media monitoring platform that looks at social data in a slightly different way. What we’re trying to do is disrupt the
social media monitoring business by, instead of trying to track social
media conversation by keywords, what we’re trying to do is give people
the user experience that is connected to extract insight from the data. We allow you to extract social media by
analyzing the topics of the conversation or by analyzing the audience of a conversation. So instead of tracking conversation mentioning
IBM Watson, what we might want to try to do is track any conversation being published
by anybody at the conference, or by moms in the U.S. versus moms in the UK. We also help you track conversation in social media that references specific pieces of content, like a video or an online article. I'll show you three examples of how this works. So the United Nations does some pretty good
work on keyword tracking. They connect directly with Twitter and work with them to run a taxonomy of 25,000 keywords to map the areas of development and policy making that they might want to address around the world. They do this because they want to understand
what issues are crucial in which countries around the world, and based on that information they can then decide which issues to prioritize in those countries. And another example of what you
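The keyword-taxonomy approach he describes can be sketched as a simple counting exercise: tally hits from each issue area's keyword list per country, then rank. The two-issue taxonomy and sample posts below are invented for illustration; the real UN taxonomy runs to 25,000 keywords.

```python
# Sketch of mapping a keyword taxonomy onto posts by country:
# count keyword hits per issue area, then rank issues per country.
# TAXONOMY and the sample posts are illustrative stand-ins.
from collections import Counter, defaultdict

TAXONOMY = {
    "water":  {"drought", "sanitation", "well"},
    "health": {"clinic", "vaccine", "malaria"},
}

def issue_counts(posts_by_country):
    """Map country -> Counter of keyword hits per issue area."""
    counts = defaultdict(Counter)
    for country, posts in posts_by_country.items():
        for post in posts:
            words = set(post.lower().split())
            for issue, keywords in TAXONOMY.items():
                counts[country][issue] += len(words & keywords)
    return counts

posts = {
    "kenya": ["drought hits the north", "new clinic opens", "another drought year"],
}
ranked = issue_counts(posts)["kenya"].most_common()
print(ranked)  # [('water', 2), ('health', 1)]
```

Ranking the resulting counters per country is one plausible way to surface which issues to prioritize where.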
can do with this new approach to social data is to map how content spreads. So this is a video that shows how a piece
of content has been shared on Twitter hundreds of times and went viral. This is a single video that gets
posted on Twitter and gets passed on through different communities. It almost looks like a global brain lighting up as the content is passed from person to person. What we do is basically track any mention in social media, any post, that contains a link to the content we're trying to track, and it gives you beautiful maps like this one, for example. And you can see how virality means
different things for different audiences. And finally we help you study what an
audience thinks. This is a study we did for Syfy: we tracked the audiences and asked what makes the audiences of TV networks tick. In this case you're seeing which brands over-index among some of the audiences within the Syfy network. This might give you a good idea of the
audience of this specific network. So there are three fundamental challenges that we're facing now when looking at the data. The first one is that social media
data is not quantitative data. It's been analyzed as quantitative for a long time, as if you were applying something like Google Analytics to what's coming out of Twitter. It's not the same. Social media data is qualitative data on a quantitative scale, and what that means is that you need to quantify the qualities you can find in this data. The way we quantify these qualities is by adding metadata to the data we are analyzing, and that's where the work of the great people at Alchemy, whom we've been working with for the last four years, has helped us a lot. We hope to continue on this journey of collaboration by
integrating more Watson APIs into our product. So what do we do with Alchemy? The first thing they help us do is detect what languages these conversations are in; then we analyze the sentiment of the conversations; then we analyze their topics. So if a post talks about pears and apples, that tells us the conversation is about fruit. Or we extract the text that is contained in an image. So people posting memes, or
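That enrichment step — attaching language, sentiment, and topic metadata to each post — can be sketched as below. The three analyzers are deliberately naive stand-ins written for this sketch; in production each would be a call to a service such as AlchemyAPI, not these toy heuristics.

```python
# Minimal sketch of the metadata-enrichment pipeline described above:
# annotate each raw post with language, sentiment, and topic metadata
# so its qualities become quantifiable. All three analyzers below are
# toy stand-ins for real API calls.

def detect_language(text):
    # Stand-in: real systems call a language-detection service.
    return "en" if all(ord(c) < 128 for c in text) else "unknown"

def score_sentiment(text):
    # Stand-in lexicon-based sentiment score (positive minus negative hits).
    positive, negative = {"love", "great", "good"}, {"hate", "bad", "awful"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def extract_topics(text):
    # Stand-in taxonomy lookup: roll entity words up to a broader topic,
    # so "pears and apples" becomes "fruit".
    taxonomy = {"pears": "fruit", "apples": "fruit", "sedan": "automotive"}
    return sorted({taxonomy[w] for w in text.lower().split() if w in taxonomy})

def enrich(post):
    """Attach metadata to a raw post."""
    return {
        "text": post,
        "language": detect_language(post),
        "sentiment": score_sentiment(post),
        "topics": extract_topics(post),
    }

print(enrich("I love pears and apples"))
```

Once every post carries this metadata, the qualitative stream becomes something you can count, filter, and cluster.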
an image contains the text. We can, for example, recognize when a brand
is mentioned in an image but not in its caption. Or we understand the topic of an image by looking at the entities and concepts that can be associated with it. It can be a simple concept: this is a landscape, this is a sunset, this is a person, this is a dog, this is a category. But it can also be something more complex: a taxonomy that we build around specific concepts to understand what type of conversation we are analyzing. Is it something about the automotive industry, or something about healthcare? The second issue that we're facing
when it comes to social media data is that social media data is very dirty. It's very dirty and very messy, because it's human feedback and human expression, so it's full of pretty much everything you can possibly imagine and beyond. So when you're analyzing the data, you can't simply mine it, because you don't know the questions you might want to ask of it. What you have to do is surface what's interesting in that data. I'm going to give you an
example of how we do this. Once we've extracted all that metadata and collected topics and tags on all of these conversations and images, we cluster them. We cluster them, for example, to find messages that are similar to others, which helps us understand what groups of conversations we are seeing develop online, and also what's new: when a conversation appears that is completely dissimilar from everything else we've seen before, we know that something is breaking that wasn't there before. We also use this to analyze the discourse that we see developing. So this is the Food Standards Agency in the
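The cluster-and-surface idea — flag a message as "breaking" when it resembles nothing seen before — can be sketched with a bag-of-words similarity check. The tokenization, the Jaccard measure, and the threshold are illustrative assumptions for this sketch, not the production algorithm.

```python
# Sketch of novelty surfacing: compare each incoming message to what
# we've already seen using bag-of-words Jaccard similarity, and flag
# it as breaking when nothing is similar. Threshold is illustrative.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b)

def is_breaking(message, seen, threshold=0.3):
    """True if the message resembles nothing we have seen before."""
    toks = tokens(message)
    return all(jaccard(toks, tokens(s)) < threshold for s in seen)

seen = ["flu remedies red onion", "flu season cold remedies"]
print(is_breaking("flu remedies garlic", seen))     # similar to seen -> False
print(is_breaking("llama chase in arizona", seen))  # novel -> True
```

A real system would compare against cluster centroids over the enriched metadata rather than raw word sets, but the surfacing logic is the same shape.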
U.K. studying how people are discussing the flu online, and discovering through this surfacing technology that conversations about the flu tend to get connected to conversations about flu remedies. What they didn't know is that some of the most common remedies in the U.K. are actually cloves and red onion, or thyme, or, what was the other one, garlic salt. So it was quite interesting for them to discover an entire audience that was looking at the issue from a completely different perspective than the one they assumed was the main one. The third challenge that we're
facing, as you might have guessed, is that social media has gone visual. From the golden age of Instagram to the bronze age of hot dog legs on Tumblr, which you might have seen before. The one that troubles me the most is the one on the bottom left: I don't know to this day whether those are hot dogs or legs, I really don't know the answer to that. And then the dress. I hate this dress so much, especially because the night the story broke we were tracking llamas. We should have tracked this one. But anyway, what we do with Alchemy
is extracting the text from images. This lady here is saying let it go and give it to God. Sometimes we get it right and sometimes wrong: "Len graduated ghetto God," 92 percent, and there's a rap hip-hop band name, but we're getting there. We use this to extract topics and characteristics we can see in an image. This is Boris Johnson, and we're seeing if Alchemy can recognize him. The first guess is politician, then mayor, TV actor, not necessarily accurate, but what's interesting there is that it
also gives me a link to a page on the Web that shows me some more information
about this subject which means that I can now triangulate information
about what I can extract from an image but also what I can get out of
information that’s already on the Web. And what this API also gives me is behaviors. For example, in the case of this picture,
it was telling me that he was yawning. Obviously we know that he wasn't yawning; this is a normal face. He's the mayor of London and we see this face a lot. But then how can you blame an algorithm, when on the Web the algorithm can also find this? So we know that he's yawning here, but it's really difficult to make the distinction. So to wrap up my talk I would like to show
you three different dimensions that we are exploring for the future of social media monitoring now. The first one is breaking the social silo. Social media has been looked at
as data that needs to be analyzed in its own right. I think that's completely wrong: the value of social media lies in analyzing it in the context of the other data that a company is generating and providing, for example sales data. Or, for example, looking at a video and how it goes viral: you might want to know what emotionality levels a specific piece of content is eliciting, because you know that emotionality level correlates with a higher virality score, and then you can predict how a piece of content is going to do online. The second dimension is structure that drives action. Adding structure to the data is great, but it needs to be done to mirror the way
an organization works and makes decisions, because the main issue stopping big data from being adopted is a human problem: the decision-making process. It's not yet a data-driven process, and we need to make it so. So we need to provide data that actually facilitates that kind of interaction. So this is an example from a new source of
data that we’re analyzing in a moment comes from Facebook 100 percent of
Facebook data aggregated anonymized and through a British company called Data Safe. What it shows you is, for example, what
kind of like people in the audience like which kind of types of models of car. And how do these type of models of cars
correlate to other types of brands of car, but also when people are discussing
a specific car, what kind of elements of the car are they talking
about.Is it the style of the car? Is it a practicality of the car? Is it the performance of the car? So these are all things that help you
make actual decisions based on data that you’re collecting from the Web. And finally the most important thing
of all, moving from content to context. We have to stop looking at the finger and look at the moon. The context we need to explore is the ability
to patch bits and pieces of information together to give the analysts that are
looking at this data some answers. So this is a really good example: Gatorade looking at images on Instagram, at mentions of Gatorade and people having it for breakfast, alongside posts of bottles of Gatorade. It seems like this is something that a human needs to do to gather the insight. You have the breakfast, you have the food on the table in the Gatorade picture, and a profile with a name and probably an age or an inferred age, and this is information that we can patch together and provide as context to the analyst that is looking at the data. Because what we ultimately want to do is
be able to understand things like what kind of drink the friends of Beyoncé like. So in this case you can see that Beyoncé fans like ice and Metallica, and Taylor Swift fans don't like Guinness. So what we want to be able to do
is to move to a place
just start asking questions of the data. And that’s where we want to be
with the platform we’re building. Thank you very much. [ APPLAUSE ] RHODIN: Thanks. And now we’ve had a chance to hear incredible
stories about where cognitive technologies are going to take us and what people will do with them. But as we look at the future, it's interesting to look back and see how the decisions we made in the past formed and arrived at where we are today, and that might help point us to where we're going tomorrow. To help us do that, please welcome the professor and chairman of the Department of Computer Science at the University of Texas, Bruce Porter. PORTER: Based on that experience, I'd like to address what I think
is the elephant in the room. I’m sure you all know about the boom of
expert systems in the eighties and nineties, the bubble of AI companies that emerged
from that and the complete collapse of that industry in the nineties. A decade that’s appropriately
been called the AI winter. The surge of enthusiasm we see now in cognitive computing: is it just a rerun of the same bad movie? This is a very reasonable concern. But I think there are several key differences between modern times and
the era of expert systems. Of course, the technologies have
advanced significantly over these decades, but the most significant differences
lie at the core of the very purpose of artificial intelligence, the
very reason for the whole enterprise. So to explain that, I need to roll
back the clock to the beginning of AI. You might not realize that this year marks the
60th anniversary of artificial intelligence. In 1955, AI pioneer John McCarthy
identified it as the science and engineering of making intelligent machines. The very idea of building
intelligent machines was beyond bold. It was audacious. Perhaps absurd. Especially in 1955. Consider that at that time almost all computation was done by people, and their job title was, quite appropriately, computer. The first transistorized computers were created that year. Six of them were built, and that was five years before IBM's 1401 captured a third of the worldwide market. Now, against that backdrop of primitive times in computation, somehow Alan Turing envisioned intelligent
machines and he designed the imitation game as a way of assessing intelligence
by machines or by people, rather than trying to define that big term. Now, a few years later, Nobel
laureate Herb Simon said that a xhn in 20 years could be capable of
doing any work a man could do. The most striking AI researchers at
that time was unbridled enthusiasm. They believed that every aspect of human
cognition could be and would be automated as soon as somebody sat down
and wrote the algorithm for it. This included reasoning,
remembering, communicating, and to their credit the AI pioneers
put a special priority on learning, which we'll see with [INAUDIBLE]. In 1959 Arthur Samuel built a checkers program, the first big success of program learning. The program learned to play the game by observing the behavior of checkers experts and mimicking that behavior, and it became good enough to play the game at a respectable amateur level. As we'll see, machine learning has become the
go-to technique for building AI systems today. Now, armed with this success and a few
others, the early researchers believed that nothing was beyond their reach. Here’s one reason they were so enthusiastic. They believed in general intelligence. By that I mean all aspects of human
cognition, the ability to solve problems. To play chess, to converse in English, to make
plans, these were all simply manifestations of a few common underlying algorithms. They were all alike. So success at automating any one of them was
considered success at automating all of them. Early AI researchers were
psychologists at heart. They wanted to know how people work. With computers, they could build computational
models of some cognitive skill and then compare that model against human behavior. They were psychologists interested in
using computers to understand cognition. Now, the problem with historical
accounts, like the one I’ve just given, is that they show the path that was taken and
they omit all the paths that were not taken. Now, in the history of AI,
you might not have known that, from the outset, there
was a competing vision. It was called, interestingly enough,
intelligence amplification or IA. The goal of IA was to use computers to
augment or enhance human intelligence. That’s in contrast to the AI goal of
building fully autonomous independent systems that somehow replicated some
capability that was uniquely human. It was first proposed and named by
William Ross Ashby in 1956, but in 1960 Licklider pushed the idea further by coupling man and machine symbiotically, with humans setting the high-level goals and computers doing the routine work to help them achieve those goals. As head of DARPA at a critical time, Licklider's vision included the human-computer interface, computer networks, later PCs, and then the Internet. As we'll see, intelligence amplification was the forerunner of cognitive computing today; Licklider wanted to make the computer more useful. In those days AI beat IA, and the field
focused on building general intelligence, building computer programs that could solve
hard novel problems just like people do. Each early program such as checkers was
viewed as a window into human cognition, and the belief was that after 20 years or so, nature would give up all of her secrets. The 1980s were the era of expert systems: AI systems that solved hard problems
previously out of reach of computers. Hundreds, perhaps thousands
of these systems were built, with applications to interpreting oil well data, interpreting medical data, mass spectrometry data, and so on. Commonly these systems performed as well as human experts, and that was the measure of success. But each one of them was extremely narrow, an idiot savant: capable of strong performance at a very narrow task and completely ignorant, completely incapable, at everything else. As an example of how narrow these systems were, consider MYCIN, often described as a system for medical diagnosis. But to the inventors of MYCIN, it determines which of seven antibiotics to give to a patient known to be suffering from an infectious blood disease caused by bacteria. That's how narrow it is. Another early expert system, from AT&T, what I'll call the yes/no system, was an application of voice recognition to determine, for a collect call, whether the person on the phone answered yes or no to the question, will
you accept the charges? Not an easy task when you consider the variety
of accents and answers such as hell no. In the early nineties, this system correctly
handled about 95 percent of these transactions with a huge savings in operator costs. In the early nineties, the
expert system bubble burst and the whole field crumbled
ushering in the AI winter. Now, there are several reasons for this collapse. First, expert systems were hard to build. Each one required finding a subject matter
expert and somehow teasing out of him or her that knowledge that made this
person an expert and codifying it into a computer program typically
in the form of if/then rules. Second, expert systems were
so often solving the wrong problem. Consider two of the most common application domains: law and medicine. In law, expert systems offered legal
interpretation of case facts, for example, whether a particular job-related injury
was covered by workers’ compensation law. In medicine, systems like MYCIN diagnose
patients and recommended treatment options. Of course lawyers and doctors
perform those tasks routinely and the people building expert
systems never thought to ask whether they needed help at those tasks. Had they asked, doctors might have said I can do
routine diagnosis myself, where I need help is with managing patient records,
interfacing with insurance companies and keeping up with the medical literature. Because expert systems were built
from the perspective of AI, not IA, they addressed the wrong problems. AI languished for much of the
nineties and was reborn in the 2000s. Now, that's the popular view, but I think it conceals what really happened. The term AI has been retained, but the new research agenda is squarely from IA. The quest to understand human cognition is
over within this field and the goal has become to augment human intelligence
by identifying human weaknesses and providing some kind of mental prosthesis. The psychologists underestimated the difficulty of automating human strengths, so the engineers took over by targeting human weaknesses. Modern-day cognitive computing is a good example of this change. Consider Watson. It addresses the problem of finding needles in haystacks of information, which is a clear human weakness, based on principles of engineering, not psychology. Not only has the field shifted focus and priority from AI to IA, there have been major advances in technology. Now it's common to build AI systems that integrate multiple capabilities. Second, machine
learning has become the go-to technique for building AI systems today. I’m waiting for the teleprompter
to catch up with me. Okay. Thank you. Forward, please, forward. Forward. Expert systems died in part
because they were hard to build; hard to tease out the domain knowledge from subject matter experts. That's because people are able to
exhibit expertise but not to explain it. This is most clear when you
consider perception tasks. So consider AT&T’s yes/no system. Given a voice on the other end of the
phone, telephone operators are able to accurately classify that signal as a yes
or a no, but they can’t elucidate the rules that enabled them to do that classification. Somehow this ability is automatic
or intuitive to the operator. Now machine learning offers
a different approach: it discovers the rules automatically. All you do is give the learning algorithm lots of examples provided by the subject matter expert. This signal is a yes. This signal is a no. And so on, for maybe hundreds of thousands of examples. The machine learning algorithm identifies the
discriminating features and induces the patterns or rules automatically producing
an expert system for the task. That’s why machine learning
is such a powerful technology. Using this technique, Microsoft has recently
built a voice recognition system and it works in real time and performs translation across
the full lexicon of English and Chinese, an incredible advance over the
early yes/no system from AT&T. Now in the industry this is the standard approach for building these systems, for speech recognition and for question answering in Watson. So that's a quick journey through the
60-year history of artificial intelligence, the field has made tremendous strides,
especially in the last 15 years, when the goal of amplifying human
intelligence replaced the original goal of building fully autonomous
broadly intelligent agents. So let me close with a bit of
speculation about what might follow. First, it’s easy to predict we’re going to see
across-the-board improvements in technology. Natural language processing, automated
reasoning, and interaction methods will provide lots of opportunities for AI systems to learn from you and about you. A major step forward is dialogue. We're starting to see that in systems today. Current interactions are just isolated
exchanges such as simple question answer pairs which can be frustrating to users. But a major advance will be with dialogue. Another big advance is going to be discovery. Not just retrieval of information. Think Chef Watson but applied to vast
knowledge bases orders of magnitude larger than any human could get their head around. Quite likely the breakthrough that cures cancer
is going to come from a machine learning program that finds patterns across genomic data,
medical records and medical literature. But the key insight, the creative spark, the suggestion of which patterns are worth
exploring, that’s going to come from people. Because in the end, intelligence
amplification beats artificial intelligence. Thank you. [ APPLAUSE ] RHODIN: Part of our mission in Watson
Group is to equip innovators with the knowledge and skills they need to build cognitive systems. We launched an initiative in partnership with some of the most prestigious universities. Classes started last fall; by next semester we'll have over 100 universities around the world teaching classes on how to
build cognitive applications using IBM Watson. Earlier this year we hosted our
first university competition. The IBM Watson University Challenge, with the
top team receiving $100,000 in seed funding to get their idea off the ground. Let’s take a look.>> When I was in high school watching
Watson on Jeopardy!, I mean I knew I wanted to do something like that, but I had no
idea I would actually be working with Watson within just a few years actually after that. It’s been…it’s a pretty incredible experience.>> I think Watson is unique in the sense
that data scientists can finally focus on the application because Watson
is doing the heavy lifting. Coming from an academic perspective it’s so
refreshing because finally we have a long list of problems we want to work on and
we don’t have to sit down starting from scratch in running the algorithms. I think that’s how Watson is
going to transform business. Watson is going to deliver
data science as a service. I think that’s pretty revolutionary. RHODIN: I’m delighted and honored to invite
to the stage a member of the winning team from the University of Texas at Austin
and CEO of their new company Bri Connelly. [ APPLAUSE ] CONNELLY: Navigating social services is hard. 50 percent of the U.S. population
relies on some social service. That's a lot of people with a lot of questions. People who use these services have previously
had to call the 211 hotline to get answers. Answers to questions like where is the nearest
homeless shelter or how do I apply for Medicaid. At Cerebri we created a better
way to answer these questions. We partnered with a 211 call center in
Austin, Texas to create an application that uses IBM Watson technology to navigate
the complex world of social services. From healthcare to food pantries,
the 211 app provides answers to the questions that matter most. The 211 app also allows people to do things
that they previously couldn’t over the phone such as browsing service providers, getting
updates on services that they use most often, and getting updates from the call center. At the call center, management can manage the app and visualize user data, and they also have the ability to provide feedback on Watson answers, ensuring that quality is never sacrificed for efficiency. The 211 app launches in Austin this
July and we are excited to be on track to make information more accessible
to the 27 million residents of Texas. But the 211 app is just the start. Our goal is to make social service information more accessible nationwide. And the process is pretty simple. The first thing we do when a call center comes on board is take all the documents a
normal call center employee has access to. Then we clean and we format
the data and we train Watson. After that we provide custom solutions. So if that’s a mobile app, a Web app or
even a Twitter bot, we provide answers so people don't have to dial in for an answer. They can get extra features they didn't have over the phone. After coming in first place at the IBM Watson
university competition we received $100,000 in seed funding to grow our business. We also received technical
mentorship from the IBM Watson team. Since we finished the competition,
we’ve been working really closely with the United Way call center to maximize
the impact that we can have on our community. And while we’re all extremely passionate about social services we’ve never
worked in a call center before. So we rely on their domain expertise
and it has allowed us to build a product that really fits the population’s needs. People who rely on social services are
not the stereotypical face of poverty. They’re students. They’re veterans, they’re nursing assistants,
they’re in line with you at the grocery store and it’s absolutely necessary
that the relevant services and programs are made available to them. 50 percent of the low income
population has a smartphone. And it is their primary contact
with the Internet. And at Cerebri, we are able to make this
information accessible to those people by putting it in the palm of
their hand in natural language. Through IBM Watson, we are able to make
social service information more accessible to more people than ever before. The way we interact with government
and nonprofit services is changing. And we are excited to be leading the way. Thank you. [ APPLAUSE ]>> More and more data is visual. In fact, the number of MRIs has
increased by 10 percent a year and a radiologist might view a thousand images to find one tiny abnormality
in shape, contrast or movement. And because it’s so challenging a research
project is teaching IBM Watson to see. In the future, it could help clinicians
spot key patterns quickly and precisely. IBM Watson is working to make
healthcare smarter every day. KELLY: Good morning. I’m John Kelly. If you’re looking for one person here to
blame for this whole thing, it’s probably me. I was there at the beginning of Watson and
I want to tell the story of the journey. I want to take us back in time to
talk about how did we get here. I want to talk about where we are today and
I want to talk about where this is going. Because it’s been a heck of a journey. And I think the place to
start is with this chart. This is a chart showing the
growth of the world’s data. And this journey started back here. It started back before 2010 when I and
a group of my researchers were sitting in our research labs. At IBM Research we don't just look around the corner; we look over the horizon. And we were sitting back here looking at this tsunami. At this point in time we had also just launched the world's most powerful supercomputer, a one-petaflop machine delivered to a U.S. national lab. But we knew that that computer only addressed a
small portion of this blue portion of the curve. And as we looked at this
curve, we said, you know what, even if we go a thousand times
more powerful in the next decades with our supercomputers, we'd only be
addressing a small fraction of that curve. And furthermore we would
need nuclear power plants to power the computers to address this curve. Furthermore, we looked at this
curve and we said, oh my God, our supercomputers are again only working on structured data, and we were going to have an explosion in unstructured data. And we said, you know what, we as engineers, to Professor Porter's point, had better go after this problem in an entirely different way. And we started as engineers
to address that problem. We didn’t start as psychologists or psychiatrists trying to
reinvent or invent a brain. We were very focused. In a sense we did what Terry said: we stayed very focused on the goal, and we said we're going to build a computer to go after that unstructured data. And the data we chose first is this little wedge called text, or natural language. We had great natural language
processing capability. We said we’ll go after just that slice
because if we can put a dent in that, then it will lead the way to how do we deal
with multimedia image data, and how do we deal with sensors and devices and machines, or what's now called IoT. So we started back in 2007, looking at this curve and saying we had better reinvent
technology to go after that curve. Now, as Ginni told us yesterday, in order to
do that, we had to tear up everything we knew and we had to think about scaling
computing in a whole different way. And we thought a lot about the
previous eras of computing. We thought about the first era of computing
where mechanical devices scaled our ability to do arithmetic in amazing ways. We thought about the second era of computing
where we took our knowledge and embedded it in systems and caused these systems to
enhance our productivity in all sorts of ways. From banking to airline travel to other things. And we used the amazing power of
scaling semiconductors and technology to drive price performance and to go
from these rooms full of computers to today’s programmable devices
that we hold in our hand. Amazing scaling. But as we in IBM Research looked ahead at
that curve I showed you, we said guess what, we can’t scale technology fast
enough in traditional ways. Moore’s law is slowing down, and even Moore’s
law in its grandest days could not keep up with that curve that I showed you. So we said we have to scale in a different way. And we decided that we were
going to go after a whole new era of computing, an era of cognitive computing, borrowing from the concepts of how we as human beings use cognitive capabilities but not trying to replicate the human brain. And as this graphic shows,
we are just at the very, very beginning of this new era of computing. And as previous speakers said, it’s
very, very hard at the beginning of these eras to think about what's to come. When the first System/360s and 1401s were invented by IBM, it was impossible to think about the devices that we have in our hands, much less the applications. So as we sit here today, it's very, very challenging
for us to think about where it’s going. But I’d like to share some
thoughts on that in just a minute. So let me talk a little bit about where we are
in this journey and what distinguishes Watson. In 2007, this was the state of
the art in open-domain question answering, or the early days of Watson. This is precision of answers shown versus the number of attempts at answering a question. This curve, in the early days of our research, was Watson, quote/unquote; it didn't even have a name then. The best human beings at this game of Jeopardy! are shown at the top. That was the gap we were looking at. And we said, okay, we're going to
systematically go after that gap. Now, in research, we have lots
of camps of brilliant people. And many of the people who have
lived through the dark days of the AI winter said, you guys are fools, don't go after that gap; you'll spend decades and get nowhere against that attempt. But we decided to apply a
number of different techniques. And one of the most important techniques was
machine learning as Professor Porter showed. That really unlocked for us a
capability in Watson to close that gap. And we went on a journey over the next several
years to take Watson from taking hours to get to low probability answers to what
you all witnessed on Jeopardy! in 2011. Where basically
in two and a half seconds, two and a half seconds Watson could
beat two incredible human beings. So we went through then and sort of
closed out that portion of the journey with this infamous demonstration
of its capability. But that was just a launch point. By the way, side note. I always love to tell side stories about this
event, because I lived it moment for moment. Rutter and Jennings are two incredible
human beings I would argue that if we tried to replicate what they do as human
beings, we would have never gotten there. Because I personally asked each one
of them independently, how did you get so smart? How do you have so
much information in your head? And they both answered independently
the same way. They said John, I don’t know, I just read
and listen to everything and I never forget. So as an engineer, I wouldn't even know
how to start to replicate that. Then I said, well, when you’re
asked a question in Jeopardy! how do you reason to get
to the answer so quickly? In roughly two and a half seconds and both of
them independently gave me the same answer. They said, John, I don't know. The answer is just instantaneously there in front of me. In fact, I just assume that when asked any question, the answer will become apparent to me instantaneously. So again it just verified that seeking to replicate superhuman mental capability is impossible. You have to start from a different area. But it was not a predetermined fact
that we were going to win that game. In fact, if you remember those curves I showed
you a moment ago, we crossed into that sort of winner's circle of the best-ever human beings
in that game, but we had about a 50/50 chance of winning on that particular day, and
in fact I remember I had the privilege of introducing the audience
to that game show that day. And I said to the audience, look,
I don’t know if Watson is going to win today or the humans are going to win. But the one thing I can tell you is that it’s
only a matter of when these machines are going to beat these humans at this task. Now, the other thing I’ll tell you in a sense
we did ourselves a disservice by bringing Watson to this sort of domain, because forever more
people think of Watson as a simple question and answer machine and it’s much, much more. Now, my greatest nightmare occurred
with this infamous question, and people to this day ask me how did Watson
get the answer wrong to this question, category U.S. city, largest airport is named
after a World War II hero, et cetera, et cetera, and Watson answered what is Toronto. Now, most of the audience
went crazy when they saw this and said oh my God, Watson
has gone, has lost it. Watson has completely lost it. And I remember my confidence went
up when I saw these question marks. Because this said that Watson
knew it did not know. It knew with high probability that it did not
have the right answer but in final Jeopardy! it had to answer. So while everybody in the room was going crazy. I said we’re okay. We’re okay. Watson knows what it knows and
it knows what it doesn’t know. But to this day what is Toronto
is an infamous answer. So now let me move into the future. So we built this system that had
incredible text analytics capability, natural language processing and
the first really complete set of machine learning algorithms in one system. Going forward, though, we have to address
that tsunami of data that I showed before, because again we’re not trying to
replicate human cognitive capability. We’re going after the grand
challenge of that tsunami of data. So a research agenda now takes us into the
future, looking at discovery and creation, looking at segmenting and recognizing
different sorts of images and data. Looking at the vast amounts of signals
that are being generated by machines. And in a sense giving Watson the capabilities
to sense the world and learn from the world. That’s our research agenda. Let me give you one use case which
really demonstrates I think the power of machine learning. Healthcare from the get go has been one of
our focused industries, because we believe, as Terry said, we can disresult
these industries. And healthcare is ripe for disruption,
as Dr. Chris showed us yesterday. We’re working with Memorial Sloan Kettering
in the area of skin cancer or melanoma. As many of you know this is a disastrous cancer. It occurs quickly and it’s deadly. It’s absolutely deadly. And it turns out that very few
dermatologists are very highly skilled in knowing the difference between
a benign skin lesion and melanoma. And as a result of that it’s either
missed or biopsies are taken too often. Working with Dr. Halpern over at Memorial Sloan Kettering, we gave Watson 3,000 images of skin lesions. We said 200 of these are skin cancer, and we told Watson nothing else. Watson went in, used the machine learning and
the images and learned that certain colors and certain textures of edges of these
lesions are characteristic of cancer. And Watson now is capable of diagnosing images
of skin lesions at very, very high rates. You might say why don’t I just go
over to Memorial Sloan Kettering. But as Ginni said yesterday, not everyone in the
world can make it to Memorial Sloan Kettering. So we are training Watson and scaling. We’re going from 3,000 images to 100,000 images
then open sourcing collections of skin lesions to get to a million images and I guarantee
you that when Watson has 100,000 or a million images, it will be better
than dermatologists around the world. Scaling Watson, scaling data,
scaling knowledge based on the data. Another great example is again we as
humans are very good at looking at images. Again, the highest bandwidth to
our brains is through our eyeballs. We can look at complex images, the features of these people, and we could guess it's probably a family, et cetera. But really understanding deeply what's going on in an image is something that's
very difficult for a computer system. So we’re teaching Watson to
really understand context within these images, understand
emotions of people. You saw many of our tools
already that can look at text and derive psychological understanding
about what a person is saying. It turns out that through facial recognition
we can also determine an awful lot about what’s going on with a person. Think about using that in a
sales environment as an example. Or in a medical environment to
know whether this person is healthy or not. Another great example now
is machine-to-machine. We're working with Repsol, one of the
largest oil companies in the world. Oil and gas is one of the most data
intensive industries in the world. The amount of seismic data
that’s collected to look for oil, the amount of data that’s collected off of
wells and rigs, the optimization of those wells to extract the last drop of oil
out is a massive data problem. We’re working with Repsol to use our machine
learning to look at that data in whole new ways. Today geologists sit in front of screens and
look for pockets of oil on huge walls of data. Tomorrow Watson will be looking for
those sites of oil and helping us drill in more precise locations because
it's high-risk, high-investment work. We saw cognitive computing applied
to the culinary arts yesterday. Well, what if we were to
merge that with healthcare and provide not just a cognitive chef but a cognitive dietitian? A machine, a system that's capable of advising
us what to eat, what not to eat in real time. Taking two different domains and bringing them
together again for a new set of offerings. Color. I hope you all saw the mural on the other side of this wall, out in the vestibule area. It was created by one of the street artists we hired from Brooklyn, working with Watson and with psychologists, using colors and images to evoke emotion. I hope you all catch it on the other side of that wall. Again, creative capability
but linking to human emotion. And lastly, our friend Pepper the robot. Our friends at SoftBank. To me this is one of the most
exciting projects we have. Yes, robots are cute. Yes robots can do a lot of things. But really what we’re trying to do here
is change the human-machine interface in fundamental ways. It's no longer just about pretty graphics; it's about having speech, continual speech and dialogue with something. I'll never forget the first time I met Pepper. Pepper was standing next to me on a
stage and unrehearsed Pepper looks up at me and says, are you my father? I said yes, I am and don’t ever forget that. [ LAUGHTER ] But again a very exciting project that will
transform a number of different industries. Now, let me end with: where
is this going next? One approach, again back to the AI
world, would be to apply engineering and try to reproduce this amazing organ. This thing that’s couple of a quarts,
operates on about 20 watts of power, a pretty dim light bulb, we could go after that
and we certainly are looking for attributes of inspiring our computer systems and our
cognitive systems with how this works. But that’s not where we’re going. We’re going back on again and we never ever
forget that first curve I showed you, data. Our goal is to build systems, cognitive systems
to derive insights and creativity and discovery out of those massive amounts of
data that I showed you earlier. So what do we need to do to do that? Well, one thing we have to do is
apply whole new underlying technology to future cognitive systems. If we continue to build cognitive systems of
the future to chase that curve I showed you at the very beginning, using today’s technology,
we’re going to run into the same barriers. We’re going to run into power problems. The systems will be too slow because
that curve I showed you, remember, is roughly 10 to the 24th, 10 to the
25th bits of information in the world. We must change the slope of the curve. Two key technologies that we’re working on on
IBM Research are going to have a profound effect on our ability to catch that curve. One is something called synapse, pictured here. It’s a neural network of small tiny low-powered
devices contained on a very small chip. That neural network chip, while not the power of the brain, comes within orders of magnitude of it. We will string together this low-power technology to build the cognitive computers of the future. The second thing we're working
on is quantum computing. Quantum computing is probably the
only technology that has the potential to leapfrog forward and allow us the
computational capability to catch that wave. Because we will move forward
not by 10 or 100 percent, two orders of magnitude,
three orders of magnitude. When we build a quantum computer,
we will move forward by 10 to the 20th or 10 to the 30th times performance improvement,
and you say, John, I've been hearing about that for ages, too, just like AI. This past week we at IBM announced that we had built the world's first four-qubit device to compute and error-correct at quantum speed. Now, you may say, well, four,
that's a small number. Well, guess what: we only have to take a few hundred of those little four-qubit devices and put them together in the proper way,
and we’ll have a computer that will tackle that issue that I showed you earlier. So we’re going after power and we’re
going after performance and we are going to supply the technology
to catch that data curve. The future of cognitive computing,
I think, team, is enormous. Absolutely enormous. We’re going to have great
innovation around this. We’re going to keep it an open ecosystem,
but we are simply out to change the world. I thank you for your partnership. I thank you for your support of Watson and
I look forward to seeing you next year, probably five or 10 times as big as this. Thank you very much. [ APPLAUSE ]>> Good morning. What a great start to the
second day of our event. I was sitting there scribbling very quickly this morning, taking some notes as we heard from the various presenters, and I had a couple of themes I walked away with. One was disruption. We started this morning hearing Terry so eloquently describe
disruption and what’s happening. And then Francesco got up and talked about
disruption not just in travel but in social. And they showed us three things about how to go about disruption that I thought were well spoken. One, as Terry discussed, is the importance of speed and moving quickly, especially if you're small. He talked about simplicity: don't overengineer or overthink this. In fact, yesterday someone pointed out something to me which I thought was really insightful. They said you don't need to transform at the first step; you just need to get one step better than where you are today to make a true difference. And then he said don't settle. So three S's: simple, speed, don't settle. Great message. And we also heard the importance
of the board that Terry talked about, with insight as a strategic asset. And the new millennium of changing the paradigm of how we use data and information to make better decisions. And Bruce, I think my takeaway from the
history is use the past to predict the future. There’s so much we can learn from where
we have come from to where we’re going. And Dr. Kelly just shared a
little bit of the vision as to where this technology is likely to take us. Now, we hope that Watson is going to be
an integral part at helping facilitate and allowing you to realize your dreams,
as clients, as partners, as innovators, as creators, as thought leaders, we
believe we can help facilitate this. And we saw this. Bri came up and I don’t know if it was evident, but Bri is actually a student
at the University of Texas. They started Cerebri as a classroom project and over 12 weeks developed a very formidable business idea, got the support not only of the United Way but also a pilot project with the state of Texas, and now it's a real company being launched. Twelve weeks, cradle to grave,
not just concept but company. And then we saw what, in 48 hours, a tremendous number of developers from around the world could not only ideate but develop. I was amazed by the applications; the educational app was to me awe-inspiring and powerful. So hopefully you share with us
the fact that there’s so much that we should leave here at
the end of the day to go do. But the day’s not over. We’re going to go to breakout
sessions in just a few minutes here. We’ll start at 11:00. On the back of your badges, there is a
map guide which will take you through and explain the different tracks. There are three tracks in total. One will be here in the general session area. Two will be directly behind us in
tent number one and number two. There’s a map on your guide as well as
a description of each of the breakouts. Please take advantage of those sessions. Also we mentioned several times
today there’s a chance to get hands on upstairs immediately behind
us in the balcony. You can actually interact with the
various APIs and Watson services that the hackathon developers were using, and
in 12 minutes you can build your own app. It’s truly amazing how fast and how far we
have come in such a short period of time. Don't forget about the TED app. If you haven't done it, download the World of Watson 2015 app at the app store. In there you'll receive an invitation to join the TED initiative and be able to experiment with rich media and apply cognitive computing in a whole different dimension
than you could just a day ago. We have shuttles running starting
I think three o’clock today. The shuttles will be running every 15 minutes
taking you back to the other side of the river. And if you have any questions today
or need help please look for someone in a bright blue scarf or bright blue tie. This is all about the experience. We’re here to help. We hope you enjoy the rest of the day. And look forward to having
a lot more discussions about cognitive computing in the future. Thank you. [ APPLAUSE, MUSIC ]


  • Villamarzia

    The best set of talks within the 2 days long event. Thanks IBM for having me there, it's like being part of a technology historical moment.

  • Charles Fuller

    MUCH better than Day 1. Thanks for putting this together. It's great to see that the company has some direction and that things we do on a daily basis are actually impacting something useful, not just bottom-lines and share prices.

  • Brett Ludwig

    Really not an interesting video. No actual product demo or real world implementation. Just salesmen talking about how great their product will be, someday.

