Transcript: Panel Discussion on NLP and Chatbots

Below is the transcript from our panel discussion on NLP, featuring a smart group of panelists from companies focused on conversational interfaces and chatbots:

  • Andrew Burgert — Moderator
  • Christina Apatow — VP Client Solutions at Google
  • Xuedong Huang — Chief Scientist of Speech R&D, Microsoft
  • Tony Lucas — Founder,
  • Anthony Kesich — at Facebook
  • Laurel Hart — Artificial Intelligence Engineer, PullString
  • Mitchell Mason — Dialogue Service Product Manager, IBM Watson

You can also watch the video from the event on the Azumo channel.

Ok, so thank you guys for showing up. Is there still a line out there, or has everybody made it down here, do you know, Dawn? Are there still people waiting upstairs?

[Dawn]: Be right back, I can go check.

[AB]: Ok. Well thank you. Guys, do you want to come over? So I’m Andrew Burgert, I know some of you guys, ….. the development firm here in San Francisco. And we started the meetup group because we think it’s ….. we’re waiting for one more person, Christina. And we’ve been organizing these events now for four months at least, and we enjoy conversational applications, bots, and there has been interest from the community in different topics. The last one we organized was around the Microsoft bot framework, with huge success, we learned a lot, and from there there was interest in learning more about the use of NLP tools: does it make sense, which ones should we use, what tools come in handy. So then we decided to put together this panel, so thank you everyone for coming over. And I’ll, well, I’ll let each of you introduce yourself so that people know what you do, and then Christina will show up from- she’s still not here. So, maybe we start with that, and then I have questions for each of you to get going, to break the ice, and then a discussion. And, at the end, I will open up for the audience to ask any type of questions. Mitchell, do we start?

[Mitchell Mason]: Hello. So my name’s Mitchell Mason, I’m an offering manager on the Watson conversation service, which is part of the Watson engagement advisor offering, so it’s all about natural language conversations: how do you interact with an end user around really any use … so every client is going to be a little different, whether it’s tech support, health, retail, whatever it might be. How can we create a tool to easily create these types of services? That’s what conversation is really all about. I came in through acquisition a few years ago and I’ve been there for 2 years now.

[Laurel Hart]: Hello. My name is Laurel and I’m from PullString. I am our AI engineer and my specialty is NLP. We make an authoring platform for people to make chatbots without having to really dig into the technical details. So if you’re a writer or someone who is just interested in making a chatbot, you shouldn’t have to learn to program just to make something that people can interact with. If you have a technical background that’s great, certainly it is easy for technical people to use as well.

[Anthony Kesich]: Hi, I’m Anthony Kesich, I’ll be going by Tony though tonight cause there’s two of us, and I am with-

[Tony Lucas]: I thought I was going with Tony.

[AK]: Oh you’re going with Tony? Ok, well, I’m with- I’ve been with them for about, well, just shy of 2 years, year and a half, somewhere in that range right now. We have a platform to help developers build conversational NLP bots that do NLP understanding end to end instead of just one-shot single-phrase analysis, the whole idea being that it makes it easy to build NLP bots without having to become an expert yourself. And, well, that’s about that.

[TL]: I’m the other Tony. I’m the founder of which is, funnily enough, a bot platform. We focus on making templates to build bots very easily for interacting with their customers across the entire course of the customer journey. So that means things like pre-sales, customer service, post-sales, dealing with engagement, re-engagement, basically any way you might need to engage with the customer, either partially or fully automated with human escalation, we solve that. And we do NLP, integration with messaging platforms, conversation management, dialogue, the whole nine yards basically.

[Xuedong Huang]: Ok, I’m Xuedong Huang, I’m a ….. Microsoft Corporation. I actually, I …… so my group is responsible for a lot of speech and language. So this is what we develop, including LUIS, by the way. I love you guys, I flew all the way down from Seattle.

[AB]: For the people who don’t know what you do, Xuedong, are you the chief scientist?

[XH]: I happen to be the chief speech scientist, yes.

[AB]: Of Microsoft. Ok, thank you for that.

[XH]: But I never use that title.

[AB]: You never use it. But it is what it is. People speak. And you’re the head of that speech group. So then, let’s start with a quick question for each one of you. And then we’ll do more. Anthony. You start with an “A”, that’s why you’re the first one. So, if you can talk about Facebook M. And the relation between wit and Facebook M. Is it possible that that service might get to non-operator activity within a period of time, all automated without human operators? Does it make sense? Anything you want to discuss.

[AK]: Yes. So yeah, just a little bit of context. I assume everybody knows about the bot side, but Facebook M is an AI-assisted entity that we are building to live inside Messenger right now, that is, well, there to help you out. You can ask it to complete things and look things up for you, and it can do that. Right now, the way it works is the AI has a lot of suggestions and actually covers some set of what’s going on. But we still have trainers around that will oversee the AI and the NLP, make sure it doesn’t make mistakes, get too eager, throw the wrong response, that sort of thing. And then when you actually ask it to do something that the AI can’t do right now, the trainers can then step in and do it themselves, and that gets paid attention to as future training data. Anyway, now that we have that out of the way, we are slowly but surely building it out. There is always going to be a frontier of things that we can’t do with it, and I say always, but maybe 5, 10, 15 years out, you’ll look back and think that this sounds kind of naive. But, as for right now, we have been building it out with our test users, mostly throughout the bay area, and we have had a lot of growth, but in terms of actually automating everything, that’s not going to happen for quite a while. We can start with the simpler things like booking restaurants, and looking up information, and managing your calendars, giving you reminders, that’s where we’re starting in terms of understanding, but in terms of planning your wedding for you, we still definitely need humans there to help out at this point. I hope that answers your question.

[AB]: Yes, yes, bring the …. and start with this. Thank you. So, Laurel, now for you. When you look at PullString, if you can describe PullString: how do you believe the tool can be helpful relative to the other tools presenting here, and how do you think you guys differentiate what you are doing currently?

[LH]: So, I think a little bit of background on our company is important here, which is that we used to specialize in making apps for kids, and that’s really, you know, where we got started. It was this sort of blend of art and science of writing material that children would enjoy and interact with, and making a tool such that our engineers could work on the app and our writers could just write: they could make this conversational dialogue system without having to understand the intricacies of programming a dialogue system. So we’ve tried to make it such that you can program something simple, sorry, not program, write a simple bot that just has commands like saying hello, or say hello world, or something like that, but you can also do a very complex story. We recently published a Facebook experience called Jessie’s Story, where you text back and forth with this young adult woman whose life was not going that well, and she needed your help to make decisions about her life. And that’s, you know, a very complex and subtle story, that has lots of branching that, you know, can get almost impossible to track if you’re just trying to use Excel or something to write the story. And to have it so that our writers could write that thing without having to worry about all the implementation details. That’s really what we’re making a tool for.

[AB]: And can people use the tool, similar to, for example, wit, that folks can start using? Is PullString available for developers?

[LH]: It hasn’t quite launched yet, it’s expected in the next couple months I believe.

[AB]: Ok, thanks. Tony. The Tony. You’ve described a bit what converse does, what the company does, tell us a bit more, any type of metrics that you believe make sense, when- Hello, Christina.

[Christina Apatow]: Hey.. Fashionably late.

[AB]: Thank you for coming in.

[CA]: Sure.

[AB]: Do you want to introduce yourself?

[CA]: Sure. Hey, I’m Christina and I’m the VP of client solutions of

[AB]: Thank you. So I’m starting with one question for each and then we’ll open up to more questions. So, when folks start to use converse, what improvement in metrics might we see, what is the best use case for someone adopting the tools, so the folks here have an idea of when they need to look at what you are …

[TL]: Ok, so there’s the kind of metrics that they might like to see, but there’s also the metrics that they should be tracking but possibly aren’t thinking about. So the way we look at it is converse is basically used to semi or fully automate conversations. That could be things like an FAQ question, right, that we can answer so a human never has to worry about it. But it could be that you’ve got situations where maybe you’ve got a customer service agent that takes 10 minutes per ticket, that’s something we’ve seen quite often as an average. We look to go, ok, what parts of this can we automate to take that down from say 10 minutes to 3 minutes? Because it might well be that there’s information that we can gather but it still needs to go to a human, it might be that there is a way to completely automate it end to end, and if it gets stuck then escalate to a human. It might simply be that by actually asking certain questions, we can direct it to the absolute right human to go to as well. So they are all different models. So, in general, what I look at is ultimately how many tickets, or how many simultaneous tickets, or how much time is being saved, if you’re looking at that specific case. But the metrics should really also be looking at, if you’re building things on this, where customers are getting stuck in these conversations. If you build these crazy long conversations, where you go back and forward collecting data, are you reaching a point where, halfway through, users are giving up and getting fed up and want to find a way to escalate to a human, and if they are, is there a nice and easy way for them to do that? Cause that’s one of the challenges that we see with a lot of people: it’s, “Yeah we’ll just go and build this brilliant thing,” and then not actually understanding how to track the success of it.

[AB]: The human handoff is something that they can …

[TL]: The human handoff is really important, because the beauty of having the ability to do human handoff in any system is the fact that on day 1 your success criteria, your minimal viable success criteria, is 0. The bot doesn’t have to understand a thing. Because it’s not any worse than the day before. And then every day from then on, the more things it can answer, the more things you can train it on, the better it can actually get. But you need that level of human handoff, otherwise, can you imagine trying to roll out a bot on day 1 that didn’t have that success criteria of 0? I don’t think it would go very well.
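Tony’s point that handoff gives a bot a “minimal viable success criteria of 0” can be sketched in a few lines of Python. Everything here, the function names, the confidence threshold, the classifier signature, is a hypothetical illustration, not any panelist’s actual API:

```python
# Minimal sketch of a human-handoff fallback. On day 1 the classifier may
# understand nothing, and every message simply escalates to a person, so the
# bot is never worse than not having one.
CONFIDENCE_THRESHOLD = 0.7  # arbitrary illustrative cutoff

def handle_message(text, classifier, human_queue):
    """Answer automatically when confident; otherwise escalate to a human."""
    intent, confidence = classifier(text)
    if intent is not None and confidence >= CONFIDENCE_THRESHOLD:
        return "bot:" + intent        # automated answer path
    human_queue.append(text)          # a person picks this up
    return "human:escalated"

def untrained_classifier(text):
    return None, 0.0                  # the day-1 bot: understands nothing

queue = []
print(handle_message("reset my password", untrained_classifier, queue))  # human:escalated
```

As the bot is trained, more messages clear the threshold and stay on the automated path, which is the “better every day” trajectory described above.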

[AB]: Ok, thank you. So, Xuedong, hopefully I’m pronouncing your name correctly. So you founded speech at Microsoft some years ago, many years ago, so you’ve been exposed to the problems of language for some time now. In the last 15, 20 years, what are the one or two largest breakthroughs that you’ve been exposed to, that you have faced, that you want to share?

[XH]: I just want to- I don’t actually really …. So I was on the academic …, if you happen to know that school. We had a very strong computer science program. So I joined Microsoft and started speech. I was naive. I thought we could just get speech adopted by offering an API. So I don’t know if you guys know, Windows 95 had a speech API on the Windows platform, that was ’95. 20 years after that, Microsoft brought Project Oxford to market again. Now it’s renamed Cognitive Services. So this is actually a sampling of 20 years or so of AI. Within Microsoft Research we had speech, vision, language, all the assets we had to help developers. So, we have 20 API services spanning, you know, all the angles you can think of in AI. Language understanding is … one of those services. We have actually 20. So basically everything in the cloud is important. You can see 20 years ago we had a speech API on Windows, of course speech recognition technology wasn’t as good. Today it’s much better thanks to deep learning. So the second thing I want to say is deep learning really changed everything. Dramatically improved performance. I think if you only just wanna try conversational speech over the phone, I want to emphasize over the phone, because usually when you use the phone, it’s close enough to the microphone. The computers ….. Apparently. So that’s actually a major ….

[AB]: So that’s your forecast then.

[XH]: I put my name on it. I’ve been working on this for 30 years.

[AB]: Thank you.

[XH]: Two things. Cloud absolutely is the key, so Microsoft Cognitive Services is all cloud-based services. And we can improve that every day. Actually we do, behind the scenes. The second one is deep learning. It’s changing everything.

[AB]: And so would you say LUIS is the current main initiative for you guys, or is there any other that we have not been exposed to that is worth mentioning? We have been exposed to LUIS, that’s the commercial product.

[XH]: LUIS is one of the 22 services we have. Behind LUIS, of course, everything is going to change because it’s a cloud-based API. But LUIS is actually fairly simple to use. There are, I don’t think I can say that, roughly 15,000 companies using LUIS today. So it’s quite popular, because the service is less than one year old, and LUIS is not even GA yet. It’s still in beta.

[AB]: Ok, thank you for that. Hey, Mitch, coming on to you, the Watson offering has been around, and there have been some recent changes, from NLC, dialogue APIs, retrieve and rank, to a new delivery framework, product features. Can you describe, a bit, that change? Let me ask people, is someone from the audience currently using Watson APIs? Ok. No, let’s do the same for the rest. PullString, probably not. Folks using wit? Great, thank you. Converse probably not, here.

[TL]: Eh, there might be some.

[AB]: Converse? Folks using converse? LUIS? Great. Thank you. Ok. So for the recording, I think it’s just five hands, in each. More or less, not a lot, so I think there’s interest to learn more. By the way, there are two chairs, up here in the front, if people want to sit up front. So..

[MM]: Yeah, we’ve been in the virtual agent space on the enterprise side for, I don’t know, probably four or five years now. And we started out with “Hey, we have this great technology, you know, we won Jeopardy, people want to pay to use that same technology”. Very hard to use, very expensive to train, took racks and racks of servers, wasn’t cloud based, so we said ok, we need to improve that first. So we made it easier and cheaper to run, but still technically it was very hard to use, and this solution became Watson Engagement Advisor. It was not a publicly available service, it was sold to enterprises only for a very large dollar amount. And we said it’s still great, but it’s hard to use, the clients can’t train it, we need our experts, you know, they’re all PhDs, to train it. Normal people can’t use it, it’s too difficult. But technically, it was very powerful. So let’s simplify it. That eventually became Natural Language Classifier and Dialogue, which are two services on Bluemix today. But they were still separate services, doing two completely separate things. Dialogue was a very simple rule-based chat service, so anyone can write rules to create a chat, but it’s not very scalable. Natural Language Classifier did the intent classification but didn’t actually handle the dialogue side of giving a response, couldn’t really hold a conversation. It said this is what you meant, but that was it. So we saw developers combining those two together. We eventually wrote an SDK that did it for them, but it still wasn’t enough to have two tools that are mostly easy to use when you’re still going back and forth between these two places. You know, if I see an error in one, is it because of bad data from the first, or vice-versa? So now we have this new service, Watson Conversation, which was inspired by those two to be a unified tool experience, all about the simplicity.
So, in a really short summary, we focused so much on the technology, nothing on the user experience, and in the end that meant no one wants to use it, even if it’s one of the smartest out there, it’s too hard to use. So now we’re flip-reversing that to make it a little bit simpler to use while not sacrificing any of the technology.
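The two-service split Mitchell describes, an intent classifier feeding a rule-based dialogue layer, can be illustrated with a toy sketch. This is not the Watson API; the keyword “classifier” and the response table below are stand-ins for the real trained services:

```python
# Toy sketch: intent classification feeding rule-based dialogue, the pattern
# developers were hand-assembling from two separate services.

RESPONSES = {  # the "dialogue" half: rules mapping intent -> reply
    "billing": "Let me pull up your billing details.",
    "tech_support": "Sorry you're having trouble. What device is this on?",
}

def classify(text):
    """The "classifier" half, faked with keywords instead of a trained model."""
    lowered = text.lower()
    if "bill" in lowered or "charge" in lowered:
        return "billing"
    if "broken" in lowered or "error" in lowered:
        return "tech_support"
    return "unknown"

def respond(text):
    # A unified service wraps both steps, so one tool sees both kinds of errors.
    return RESPONSES.get(classify(text), "I didn't catch that. Could you rephrase?")

print(respond("Why is there an extra charge on my bill?"))  # Let me pull up your billing details.
```

Keeping both halves in one place is what makes Mitchell’s debugging question (“is the error bad training data, or a bad rule?”) answerable from a single tool.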

[AB]: Was that GA or-

[MM]: Yes, it went GA on Monday.

[AB]: Ok, great. Thank you for that. Christina, thank you for making it. Did you drive from the center?

[CA]: Yeah, there was lots of traffic. Go figure.

[AB]: It’s worse ….

[CA]: I hope so. I’ll give you a review afterwards.

[AB]: Thank you. So, I think your background is electrical engineering, different from CS. Now, how do you bring in what you’ve learned, maybe yes maybe not, academically and from experience, and how did you end up at- and tell us a bit more.

[CA]: Absolutely. Yeah, so does everybody here first of all know what is? Can I see a show of hands of who knows what it is? So, a decent amount of you. For those of you who don’t, we are a developer toolset that allows anybody to very easily create conversational interfaces. I can even brag that I literally had my little eight year old brother build bots this past week when I was in Connecticut, it’s the cutest thing ever. But yeah, I finished up my masters at Stanford in electrical engineering, focusing on communication systems, and I came across actually right before it launched in 2014, and I joined as the platform lead. So basically, when I came across this technology, it was just so clear how this was going to empower the future of all different types of scenarios. Since I was in high school I was really into, like, intelligent environments and smart homes, and this is the only type of technology that’s going to enable that in the future. And Amazon Alexa has been a really great piece of technology in paving the way for that. You want to be able to walk into your home and say preheat the oven to 400 degrees, turn on the garage light, and turn on the light in Billy’s room, or whatever, in natural language. We work with one of the largest cable companies, if you guys have, I don’t know if I can say. But the voice remotes in the future, those are going to be powered by us. So, hopefully you’ll be able to be confident in interacting with your home in a natural and really easy way.

[AB]: Great, thank you. Thanks. So now let’s look at overall questions that we can handle. There’s a lot of discussion of when we should use NLP tools, when we should not, and what could be a good use case where it makes sense to bring in some natural language. For example, in a prior event, Mikhail from the Messenger platform team explicitly said, “Hey, if you don’t know about NLP, just don’t use it. The experience might not be great.” And here we have six panelists related to these technologies, so I think we could have a good discussion around when it makes sense to explore that. And when does it not make sense?

[TL]: Well, I don’t agree with his statement, but I do agree with the premise-

[AB]: And I’m sure I’ve changed the statement

[TL]: Then I don’t agree with your statement. But I agree with the premise that NLP done poorly, on a bad platform, or with the wrong people training it, is a worse experience than not having it at all. But I think people should take advantage of NLP if it’s easy enough to understand, but they don’t want to start thinking about context, and entities, and intent, and all the other billion different terms that even we can’t agree on what’s called what. And so trying to train users on that is a real challenge. The other problem isn’t so much whether you’re using NLP or not, it’s how to use conversation. For those of you who might have seen the presentation I did at a Slack event a couple of months ago, I did a demonstration of how to build a bot for doing SSH. It was in conversation form, and it basically took like six questions and answers backwards and forwards to do anything. I was exaggerating because it was a bad idea. So it’s partly about not using conversation where conversation doesn’t fit, but it’s also about the simple things: if you’re looking for a yes or a no from a customer, then yeah, use a button, use something that allows you to let them easily know they’ve got one of two options. If you’re looking to ask them what kind of food they want for dinner, then you’ve got a few choices. You could try to do some directive thing using rich media, or you could try doing something just using language. Perhaps having both is possible. If you’re asking where they want to go on a flight, and they need to fly and pick an airport, well, good luck doing that on a carousel, you might be there for a little while. So NLP does really need to be there as a backstop. It really does come down to the use case in my opinion.

[AK]: Another place that I find it very useful to jump in and use NLP is places where you might be extracting entities from a larger body of text but not trying to actually understand everything that’s going on. Your user has already told you “this is my intent”, and NLP is great for figuring out intent, that’s a phenomenal way to do classification, but that’s the harder step. You can use it in the first step, where you already have an understanding of what the user wants to do. An example I like to use is Hello Hipmunk. This is a service that Hipmunk has where you can forward an email to them, and they’ll pull out of the email all of the information about you saying “I want to go on a vacation this weekend for this amount of time”, in a natural conversation that you were having with your friends, loved ones, whoever. And since you’ve sent it to Hello Hipmunk, they already understand “I want to book a flight”, and then they can very easily look through there and say oh hey, New York, San Francisco, 2 o’clock, Saturday, what have you, and pull out all of the information that you need and pre-populate a form, and then give you back results or prompt you for anything additional that they might need. And the reason that they can do this very solidly is they know that you want to fly, they’re not gonna accidentally screw up and try to send you a flight when really you wanted to get a bus ticket or something like that.
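The Hello Hipmunk pattern, skipping intent classification because the channel already implies the intent and doing entity extraction only, might look like the following sketch. The regexes, slot names, and example phrasing are illustrative only; a real system would use trained entity recognition, not hand-written patterns:

```python
import re

# Sketch: the channel (a flight-booking address) already tells us the intent,
# so we only extract entities ("slots") from the free-form text.
def extract_flight_slots(text):
    slots = {"intent": "book_flight"}  # known from the channel, not classified
    route = re.search(r"from ([A-Za-z ]+?) to ([A-Za-z ]+?)(?:\s+on\b|[,.]|$)", text)
    if route:
        slots["origin"] = route.group(1).strip()
        slots["destination"] = route.group(2).strip()
    when = re.search(r"\bon (\w+)", text)
    if when:
        slots["day"] = when.group(1)
    return slots

msg = "I want to fly from San Francisco to New York on Saturday."
print(extract_flight_slots(msg))
```

Because the intent is fixed up front, the extractor never mistakes a bus-ticket request for a flight; it only has to fill in the slots for the one task it knows it is doing.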

[CA]: When I think about NLP, or the use cases that really require it, I think about the most frustrating scenarios ever, which have destroyed our perception of voice interfaces, and those are in-car systems, where you only have specific voice commands that you’re able to use, and it makes users frustrated: they can’t remember them, they don’t say them correctly or whatever, and it gives them no flexibility. So, of course, when I’m looking at Messenger, you can consider having buttons and things like that, and rich media. But then you’re moving past that, to the next stage after bots and all this, and that’s gonna be voice interfaces. And you’re not gonna have that same option, so you have to have natural language understanding to be able to actually intelligently process and communicate with the end user.

[TL]: Just a very, very quick interjection on that, I agree with you about voice in cars-

[AB]: Tony, here you go. We talked about this. Behave.

[TL]: Yeah, I know. I agree with you about the cars, and I agree with you about my Echo being delivered tomorrow. I don’t see voice being a wide-ranging, being a panacea for all things beyond text. I see text as being that solution and voice being better than it in certain circumstances. Like in the car.

[AB]: There is some room for voice, definitely, there’s definitely room for text, voice, and I don’t want to get there, …..

[XH]: So, I think NLP is very broad, depending how you define what it is. I think some of the great NLP applications, I would say, are search. Google and Bing. … Another great case is translation. And really now you have Skype translating: you can talk to it, you can converse with someone on the other end without speaking the same language. And this panel is really talking about conversational bots. That’s one small area of NLP. For this one, I think there are probably also cases where you have Siri, Cortana, Amazon Echo, all the major players have a conversational agent. And there’s one that’s very popular in China, it’s called Xiaoice, that, you know, doesn’t do anything, just keeps you busy, chatting, and killing time. That makes it very popular, because people have a lot of time to kill. So, really, it depends on how you want to define what the best case is. It’s really just wide open. But in the end, if I look at all this technology, it’s very shallow. Most of the search engines are not using much NLP technology. They’re using statistics. Click signals. And even for most of the conversational chatbots today, … I don’t think they rely on much AI technology. So this is wide open. I believe, once again, deep learning is going to change everything, including NLP. So the biggest paradigm shift is coming.

[LH]: Yeah, thanks. I think one of the most important questions to ask when you’re deciding whether to use NLP or not is about your audience. What is your audience expecting? If you’re dealing with a very technical audience, then they may very well just be expecting a set of commands that they say precisely this way in order to get it to do what they want. Like a terminal. Versus a less technical, or maybe younger, audience that talks to the computer and doesn’t realize that it didn’t, until very recently, have the same ability to understand language. Yeah, and so understanding when a person, say, messages your app, are they expecting to be talking to a real human, where you’re trying to either find a way to answer their question and, if you don’t have the answer, reroute them, or are they expecting to get a list of commands that they can then execute precisely as they want? If, you know, you’re trying to order a taco and you can do that more efficiently by just clicking a couple buttons than by typing in “I want a taco with all these things” and having to go through several iterations of “Ok, what address do you wanna order that to, how many do you want, when do you want it delivered”, then it’s not worth doing an NLP interface. If that’s less efficient, don’t do it.

[MM]: Just to continue from that, I had a client call a few weeks ago, they wanted to do a mortgage application over the phone and automate it, and it was a 45 question interview, one at a time. And I said, are you sure your users want you to call them? Cause rather than people calling in, it’s going to automatically call out to other people. You know, me while I’m eating dinner, usually. And it’s gonna ask them 45 questions, one by one. That’s probably not the way you want to do it. We have other ways we can help give a better user experience. And I think that’s one of the better ways to think about it: what’s gonna improve the user experience you already have, a lot of times. Sometimes we are just taking unstructured text in a conversation and structuring it to do some kind of known process that’s already out there. One that I always really advocate for is on a website: I have 10 tabs, each with five subsections, so figuring out where I want to go when they redesign their website, you know, every other month, gets kind of frustrating, when really I should be able to log in, ask a question of what I want to do, and have it point me to it. And it’s a relatively simple use case, so I think there’s a simplicity in user experience.

[XH]: I want to support that. If you think about the paradigm shift, you know, IVR really never took off. And people actually loved the screen. So when you have speech as input and the screen as output, that’s generally actually a much preferred medium. But Amazon Echo is actually exceptional, because it doesn’t have a screen, yet a lot of people like it. It’s stand-alone. Just an on-the-side information appliance, and it’s different from the telephone, because with a telephone you have to hold it, you have to call; this one, it’s all sound, always open, always ready. Even though it doesn’t do a lot of things, that actually really is remarkable to me and enhances the experience. So I just want to actually add that on a GUI, the most frequent menus are really designed by the designer, so it’s easy to click and you’ve got the information, you can complete your task very quickly. The challenge is always the detail, it’s about customizing for your own need. If you have a natural language interface, you can pull information that’s deeper, hidden, that cannot be accommodated by your menu. Those two combined will make your user experience much more interesting. What isn’t frequent shouldn’t be on the menu, on the top, in your face, but you can never accommodate all of the things, you know, customized for different users, and if you can chat, using a bot to actually complete a task, those two combined will be extremely powerful.

[AK]: In terms of user experience though, kinda going in a tangential direction, this applies more to machine learning in general, but almost all NLP is done with machine learning these days, or mostly. One of the great things about NLP and building interfaces in that sense is it allows you to be quickly reactive to how your users actually use your app. Like, I can go in and assume how somebody’s gonna say something, and if they come in and say it in a different way, even if my app doesn’t understand it, if I’m building it through machine learning and I’m doing good logging of the interactions with my app, I can very easily go in, take what the user said, figure out as a human, “Oh, they really meant this,” tell my classifier, “hey, it means that,” and now my app has gotten smarter. I’ve brought in more training data, I’ve started seeing it, I’ve allowed it to be used in the way that the consumer, the customer, the user, has decided to use it, as opposed to the way that I’ve said, these are the ways that you are going to get in here, go for it. Because they might ask for a new feature this way, they might figure out a new way to say things, and if you’re just doing it all through training, as opposed to writing hard-coded rules, it’s very fast to adapt to new situations.
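The log, label, retrain loop Anthony describes can be sketched with a deliberately crude classifier. The bag-of-words overlap below is a stand-in for a real model; the interesting part is the loop around it: log what the bot missed, label it by hand, and fold it back in as training data:

```python
from collections import defaultdict

# Crude stand-in classifier: nearest intent by word overlap. Unmatched
# utterances get logged so a human can label them and retrain.
class RetrainableClassifier:
    def __init__(self):
        self.examples = defaultdict(list)  # intent -> list of example word sets
        self.unmatched_log = []            # utterances we failed to understand

    def train(self, text, intent):
        self.examples[intent].append(set(text.lower().split()))

    def classify(self, text):
        words = set(text.lower().split())
        best_intent, best_overlap = None, 0
        for intent, sets in self.examples.items():
            overlap = max(len(words & s) for s in sets)
            if overlap > best_overlap:
                best_intent, best_overlap = intent, overlap
        if best_intent is None:
            self.unmatched_log.append(text)  # surfaced for a human to label
        return best_intent

clf = RetrainableClassifier()
clf.train("turn on the lights", "lights_on")
print(clf.classify("make it bright in here"))  # None: not understood, logged
# A human reviews the log, decides what the user meant, and retrains:
for missed in clf.unmatched_log:
    clf.train(missed, "lights_on")
print(clf.classify("make it bright in here"))  # lights_on
```

The app adapts to how users actually phrase things, rather than only to the phrasings the developer anticipated.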

[TL]: It’s the 57 ways to say hello rule.

[AK]: Exactly.

[MM]: Just back to the original …. use cases, you kind of inspired one that we tell all the customers: when someone goes to a website, if they leave, why did they leave? You don’t really know. You don’t know why they clicked x. Could they not find what they wanted, did they find what they wanted but it was not good news, or was it good news and they were happy? With natural language, and having that interface over the top, you know exactly when they left, you know exactly what they were doing, you can really figure out if they were happy or sad. So as far as the use case, you know, you can learn a lot more about how your users are using your product and kind of their experience with it, whereas on a website, you know, you’re tracking clicks but you don’t know what they’re thinking or feeling, and you can really get that through natural language.

[AB]: Yeah, and that’s the bot and chatbot initiative that many folks have: a user explaining their intent explicitly, whether in speech or written form, helps you learn much more about what they want. What I will do is one more question for the panel, and then we will go to the audience, so if you can think of any questions we will bring them up. In terms of quality, thinking about quality around a conversational agent, a conversational app: if any of the folks from the audience ends up using any of the tools that you guys offer, how should they measure quality? How should they think about “this is a success, this is working as I wanted,” or “this might not be working, what can I improve”? There are many bots being created nowadays, hundreds, thousands, and none of them have great quality. We all know that. But there’s room to continue to improve. How should people think about a good experience and what that should mean, taking into consideration language and interaction? Anthony, you’ll start.

[AK]: I’d say at least one thing right now, and this is kind of a context-of-the-time thing: the best bots that I’ve seen recently have all been very simple and very narrow in scope. And so this allows the developers to dive down and be very focused on one very, I’m using “very” a lot, I’m turning into Trump, it allows you to dive down into a focused area of some project. Like, I believe it’s Hipmunk that has weather. And all they do is weather. Weather forecasts, very specific things, but you can ask it now, it said “Oh, there’s a weather alert” and you can say “What’s a weather alert? I have no idea what a weather alert is” and it’ll explain its jargon to you. But really, to get back to what is successful: if you see yourself building an app for a specific purpose, and you keep branching, and branching, and branching, and getting bigger, and trying to cover more and more tiny, sorry, my microphone went away from my mouth, edge cases, in a way, and this is partly to do with NLP and partly to do with bot design, you’ve probably promised more of your app than it can currently deliver with the technology that we have. So even though a lot of this is using our technology to do the best we can, we also have to learn how to set expectations. Don’t take conversational NLP as this end-all-be-all where you just need to get more and more examples and eventually, you know, it’ll be the best Siri ever. If you find yourself doing that, you probably haven’t defined your scope well, and then your app is probably just going to flop because users are gonna get frustrated that it doesn’t do what they want it to do.

[AB]: So quality is limited scope.

[LH]: Yeah, I think it’s very useful to look at some measures of dialogue system success. In this case, if you are specifically …. that has a specific task, you want to enable the user to accomplish that task with as few interactions as possible, you know, again the order form versus having to do five text messages. But that’s on the one side, a very task-oriented app. We, PullString, like I mentioned, build conversational stories, and in that case our measure of success was actually that we want people to spend more time doing this, not get in, understand what it’s about, and then leave. Our measure of success in that case was how long they kept playing it. When they reached the end of the experience, were they interested in finding out the other things that could’ve happened if they told her to chase the boy or whatever? So it’s very dependent on, when you say a conversational NLP system, whether you want to create conversation, or you want to use conversation to accomplish a task.

[AB]: And guys, you don’t see it from there, but Laurel has a necklace that is a bot, so it’s very funny and it’s cute. Thank you for that. So then engagement: quality, engagement can give us information from the bot. Any other ideas on what quality could mean in language and in a conversational environment?

[TL]: I think they’re all great points, they stole my thunder on a couple of them, but what it comes down to is that a lot of this stuff is the same problems with a new interface. So it’s looking at things like cognitive load: how much extra work is there in actually trying to figure these things out? How quickly can people get through conversations? Is there a way you can avoid asking them questions because you already gained their data in the past? People often refer to bots with phrases like “personal assistant,” and in the same sense they’re talking about a wide range of bots, but if it can also book flights and it knows where you live, then it should try to pre-guess that that’s probably where you want to fly from. If you’ve told it at some point you’re a vegan, it should remember that fact, the dietary menu, and perhaps mention it rather than re-ask you. Basically, the more that you use it, the better it gets. But yeah, you are in this strange place where if it’s task-orientated the sign of success is how quickly it’s over, and if it’s engagement-orientated it’s how quickly it’s not over.

[AB]: Ok, so then quality there might be, on one hand, persistence, knowing the user and context, and then …..

[TL]: Yeah, so one of the things that always comes up, and I’m so glad we finally got a bot we can actually order pizza with, thank you pizza. But there have been so many platforms used to build a pizza demo, and yet the ultimate pizza demo in my book is always “I’m hungry, get me pizza” and that’s it. And it knows how to take payment, it knows where you live, it knows what your favorite pizza is. That’s the ultimate demo of pizza-bot, not building out each question and having a conversation.
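That “one-shot” pizza demo boils down to filling every required slot of the order from a stored user profile and only asking for what’s missing. A minimal sketch, where the profile fields, slot names, and handler are all hypothetical:

```python
# Hypothetical stored profile, accumulated over past conversations.
profile = {
    "address": "123 Main St",
    "payment": "card-on-file",
    "favorite_pizza": "vegan margherita",
}

REQUIRED_SLOTS = ["address", "payment", "favorite_pizza"]

def handle_order(utterance, profile):
    """Fill order slots from the profile; only ask for what's missing."""
    missing = [s for s in REQUIRED_SLOTS if s not in profile]
    if missing:
        # Known-nothing user: fall back to a question per missing slot.
        return "ask", f"What is your {missing[0].replace('_', ' ')}?"
    # Everything is known: complete the order in one shot.
    return "confirm", (f"Ordering a {profile['favorite_pizza']} to "
                       f"{profile['address']}, paid with {profile['payment']}.")

# Returning user: the whole order resolves from one utterance.
action, reply = handle_order("I'm hungry, get me pizza", profile)
print(reply)

# A brand-new user with an empty profile gets asked instead.
action2, reply2 = handle_order("I'm hungry, get me pizza", {})
print(reply2)
```

The design point is that the conversation length is a function of what the bot already remembers, not a fixed script of questions.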

[AB]: Yeah. You have to get there. Thank you.

[CA]: And also, going off some of the stuff that you guys were talking about too, I think it’s important for it to be contextually relevant, so it should be taking into account what you’re talking about. You should be able to switch between different dialogue flows, just like when you’re talking to another human and they understand that you were talking about the weather, and then navigation, and then other things, switching back and forth. It should be able to take into account where you are, the user interface, your preferences as they mentioned, so all of these should be known and considered in the design process.

[AB]: Thanks.

[XH]: So, I just wanted to add, in terms of quality, I really think that intent understanding and the ability to recognize entities, those are the foundations of LUIS. On top of that, you can talk about the dialog strategy, your personality, you know, your memory. Even if your memory is poor, if you can understand the last few things, that’s understandable; older people, when you talk to them, have poor memory. But if you can’t even understand the current intent, you’ll be screwed.

[AB]: Great, thank you.

[CA]: So here’s something that works.

[AB]: Thanks, yeah. So now, open up for questions. Stefan, you.

[Stephan]: I’m Stefan, I’m the co-founder of Giant, we’re trying to do a health bot. I have a question that we discussed in our founding team about the personality of bots. What’s your perspective: on one hand, how important is personality, and on the other, how much should personality, because it takes longer, get in the way of getting things done? Or is it an important part of the experience?

[XH]: I love this question. It is a fantastic question. It’s almost like you’re asking, if you fall in love with someone, what is the reason? You don’t have a reason. You just fall in love with the person, right? So look at the Microsoft experience: we actually have two personalities. One is Cortana. It’s more, you know, getting work done. And the other one is Xiaoice. We only practice this in China. You know what happened to …. in the US, right? But Xiaoice is loved by all kinds of people in China. Very different from Cortana. So those two have different personalities. We do have a marketing message, we call them sisters. One is good at solving problems, including IQ, one is good at just keeping you chatting ….. So those are two personalities we engineered into two bots, and the two bots are for different purposes.

[AB]: Any other response?

[LH]: Yeah, so I think until very recently there was a lot of interest for brands to have basically a very neutral “you’re talking to a robot and you get your task done” or whatever you want, and then they came to realize, oh, we spent all this money on marketing and our bot’s personality is part of the marketing. For something specific like health, doctors spend a lot of time on bedside manner, and your bot should probably do the same, you know. Should I be talking to a robot about my human state of being, or should I be talking to someone who is friendly and interested in my health concerns? So I think you want to create a personality that’s exactly like the human you would be interacting with in that particular service, or maybe a little bit better.

[TL]: I think a lot of it can depend on whether it makes the engagement better; if it can, then great, and there’s a really interesting example of this. A lot of you in here will know Poncho, or have seen Poncho. Now, some people actually love Poncho and the fact that these interactions …. real character to it; some people can’t stand it, because it’s like, I just want to know what the weather is, why are you messing around and telling me you’re sleeping? And so a lot depends on the audience and who you’re trying to reach. But I do agree with the point in the same instance. For healthcare, if you think you’re talking to a bot that’s acting like an angry nurse who’s got nothing better to do than just bark questions at you, versus a nice, genial doctor that has got time, that will make a difference in helping you interact. But it really does depend on the use case.

[AK]: And just another thing to keep in mind: we all know these are bots, and they’re advertised as bots, and it’s obvious to a lot of people that they’re bots, but people still often seem to treat some forms of technology, and especially in this case conversational technology, which is going to feel intimate, as human. You know, probably a lot of us saw the story of the lady in England who began every Google search with “please Google would you find this, thank you very much,” every single search for 10 years, because she thought somebody else was on the other end of it. And if we’re going to be building bots for the world, not just for those of us who are deep in technology, personality does kind of come into it. If your bot makes it feel like, you know, “you’re not bugging me, I’m doing this for you,” that’s probably gonna help with engagement, but if it tries to chat you up a bunch, it can come across as annoying. People confuse technology for humans and want to personify it.

[AB]: Thank you, for that. Paolo, you.

[Paolo]: Yeah. So, we build a conversational productivity set of apps for Slack …. So we built our own NLP for that. Now we want to try to maybe expand the capability of our product, and we’ve looked at basically all the platforms up here. So can you spend like 30 seconds, each one of you, saying why we should pick you? Honestly, we are looking at basically all of you.

[XH]: Are you from Slack?

[Paolo]: No, no.

[AB]: What’s your company name?

[Paolo]: My company is called Kyber, that’s the product. We build on Slack right now, and they feature us.

[AB]: So, who else is looking at NLP tools? Show of hands. And who is actually using an NLP tool in their conversational application? One, you guys are …. Ok. So I think it’s a fair question: maybe a 30-second pitch on why they should use your tool. Thank you. And you, Christina, are the first one.

[CA]: Alright, sure. The easiest reason for us is because it works. Seriously.

[Paolo]: I know your story.

[AB]: Ok.

[XH]: So, LUIS is unique in the sense that you don’t need to program or have a …. cycle of learning. We know AI is a hard problem; most of the time it doesn’t work, but coverage is the key. And it actually accumulates the data you label, so, active learning: you can continually improve the system. So the workload to offer assistance is very small. You don’t need to have any computer knowledge, you can hire anyone to label the example sentences. The more you label, the better it gets.

[TL]: So I think the reality is, right now, the NLP needs of most of this audience could probably be handled quite easily by any one of us. So I don’t think it’s necessarily “oh, ours is 10% quicker than that one” or “ours is worse than that one.” It’s not, because it all comes down to individual training. I would look at the other pieces of the puzzle, whether it’s the integrated platform, whether it’s dialog management, whether it’s the level of technical knowledge you need, or the flexibility you get out of it. And I’m not saying that anyone is better; there’s difficulty in all. For example, Wit is a much more developed … Ours, we emphasize the fact that you can build a bot without needing to write code at all, but it’s flexible enough that if you want to write code you can extend it. So I think there’s plenty of room for everyone, but it depends on what’s outside of NLP.

[AK]: So the strongest advantage that Wit has right now is that we really work on doing the conversational aspect end to end, allowing you to build example stories of the interactions you want to have, and then actually following them when we can, but jumping around when needed: when the user has already given information, we don’t prompt them for it twice. But on top of that, yeah, we’re easy to use just like everybody else, in that you just tag data and it goes. And in almost all these cases, all the platforms that I’ve used up here, it’s very straightforward in that you don’t need to actually code it, you just give examples, and you don’t need an NLP expert. And as he said, all of us can offer that.

[LH]: Yeah: works, easy to use, all the good stuff. Also, what sets PullString apart is that you can build both very simple command-and-execution structures as well as very detailed, very complex dialogue systems, and have a non-technical interface into that, and that visualization is somewhat unusual for the field at this point. Depending on what your app is, that may or may not be interesting to you. But yeah, with our system you don’t need to hire a developer just to maintain your bot. A writer writes in it directly and can even do all the publishing for you. And we also, you know, allow you to use our machine learning to make the writing process simpler, but if you want full control over exactly everything it’s doing, we give you an interface to do that as well. Really, whatever degree of control you want, that’s what you can get from PullString.

[MM]: Yeah, I think, you know, we all have great tooling, so I think the main differentiator for Watson is all the other services that we offer folks. So if you want someone’s personality from a block of text, we offer that. If you want to get the tone of their sentence, we offer that. If you want to search a corpus of documents, we offer that. As well as the base conversation. So, expanding into all the other pieces of the use case you need to give a good experience, we have tools and services that you can combine easily to satisfy your users.

[CA]: I would strongly recommend trying all of them.

[AB]: Thank you. Question, yeah. Can I get your name and company name please.

[Alexis]: Alexis. {indiscernible}. I know there’s a lot of hype around bots and conversation-based interactions right now, and I got from a lot of what you said that it works well for certain interactions, and this sort of jibes with my intuition that a lot of people are imagining them for interactions where they’re not suitable. I don’t know as much about this space as I’d like, so what I’d really like to know is: what are the best examples right now where you can see this technology in the best light, doing things that it does really well? You guys mentioned a sort of spectrum between task-oriented, I’m trying to find something or get something done, and this very intriguing idea of bots that people talk to just for the fun of it. So I would love, maybe you guys agree with this, maybe you don’t, the best example that I can interact with right now of an English-language bot that does something medium task-oriented on one end, and on the other end, engaging, charming, keeping me on the phone, just something that’s nice to talk to. Because how do I know from prior experience …. Do any of you think there’s a best practice, the best that I can see right now?

[XH]: It really depends on how much effort you want to put in. If you put in a huge effort, you can use LUIS to create a …. That’s how serious you can let this be. But if you’re not serious, if you don’t have enough data to label, then focus on some simple application.

[Alexis]: Well, I’m interested in seeing the best thing someone else has already made. Not what can I make right now, but what’s the best output these tools can produce right now, specifically state-of-the-art. If it’s not from these tools, these research …. Where would I go to see how good it can be?

[XH]: So, as I said, there are about 15,000 apps created with LUIS. And they are very- a lot. And most are typical command-and-control services.

[TL]: I’ll be honest, I haven’t seen one yet that truly has amazed me, to your criteria of …. and being able to deal with things. I’m still waiting on that. They will come, but right now, scoring out of ten, the average bot I play with I wouldn’t give much more than a 3. Although I’m sure there … that I have not seen yet. So I’m sure there is something great, but I have not seen it yet.

[LH]: Yeah, I think until recently my target for a good task-oriented bot was: did I basically not notice the interaction? So, … oh yeah, that was a great interaction because, you know, the design should be 99% invisible. I strongly prefer things that are oriented towards entertainment. I’m in the group that finds Poncho entertaining. And I really like, PullString powers Hello Barbie, and that’s a conversation where the task is to have fun with a child, and it’s a very complex conversation, and I think it does a really good job of “here are all the games that we can play just using voice.” And that’s my favorite one.

[AK]: My favorite bot right now, unfortunately, you might call it a cyborg, is M. Now, unfortunately, I can’t let you play with that right now. After that, though, the things that are really good are the things that we have in our pockets right now: Siri, Cortana, Ok Google. They aren’t super conversational, but in terms of NLP understanding and giving you a little bit of funny banter when they want to, Siri pushing back at you when you’re a little rude to it, it’s a pretty good experience. They’re still building on the context and understanding when you want to refer back to something you said before; that’s probably gonna come in like a week, because you know how fast Apple works, but actually, yeah, I’m really happy with those three. I think they’re good examples to build on.

[CA]: If you haven’t tried it, on Android the highest-rated virtual assistant across platforms is called …, and it was our first product; it was launched actually 6 months before Siri was launched by Apple.

[MM]: I can’t believe you waited that long to mention Barbie. I remember reading about it in a magazine, it was like Teen Vogue or some weird magazine, and I’m reading it in my office, and people are walking by like “what the heck are you reading” and I’m like, no, it’s actually work. It’s super cool though; everybody should go check out the Barbie that talks. As far as the cooler apps that I’ve seen, Alaska Airlines actually has a really good one, built by I think Nuance, and it works well. It’s about booking a flight, you know, it’s not super impressive, I think any one of us could build something pretty similar, but they’ve had it for a long time and it works well, so I like that one. And the third one, that I’m most proud of, was an old customer from my last company, which is now on Watson. They’re called “The Project Factory” in Australia, and they built a game where you go in, someone got arrested and they’re gonna be interrogated, and they’re saying “I’ll only talk to you” as the person, and it’s just fun, you know. Sometimes it gets kind of tripped up, I’ll be honest, but it’s just an interesting use case to say, here’s a game that I’m playing in natural language.

[AB]: Great, thank you. Do you have a question?

[Audience Member]: Before I ask my question, a comment: I’m happy that the bots are really simple, because I’ve found so many apps asking you to download the whole app, which is about 100 megabytes, just to get to really simple information. Sometimes you just want to have, like on Telegram, or Facebook Messenger, or even just text message, a contact you can ask your question quickly, without having to download the whole app. So on that part, I think having bots be really simple is really good for me. Now, my question: I’m French, and my parents are Arabic, and they speak a dialect, switching dialects, so how does it go with other languages? Everything is in English and it’s good; whenever I try another language, it’s not. And as a second question, related to that: teenagers may use a different language than common English, even some words that are not in the dictionary. How do you deal with that?

[AK]: Well, that’s the great thing about a lot of these platforms: they’re actually fairly, basically language-agnostic. Like, we pretend we understand what’s going on, but really we’re just matching phrases, and since it’s all training data, if your users are teens, and your teens use it this way, and you log that data, you just put it right into your training data. The new funny word of the day is “shwazzle,” and somebody says “shwazzle” to it, I have no idea what “shwazzle” is, nobody can define “shwazzle,” but you’ve already shown that it maps to these intents or what have you, and that’s all it really needs to know. And also, getting to your download-size thing: it doesn’t matter how big a bot is, you’re querying a server. Like, I’ve got my Messenger; if there’s a huge bot on the other side, it’s nice for it to be focused, but I don’t have to download your app. Every time I query your giant server, you just send me back your response.

[AB]: Language support: LUIS has several languages, I think Watson as well. What about you guys, how many languages do you support?

[CA]: We support 15 different languages and we are currently implementing a way so that you can create it in say, in English, and then automatically or semi-automatically roll that out into all the other languages.

[AB]: Do you remember how many LUIS supports? How many languages?

[XH]: For speech we have about 20, including Arabic. We have Arabic speech recognition. I think in terms of language coverage Google is probably the broadest, in an honest assessment. Microsoft is also pretty good. LUIS will have the major ones, you know, English, French, Chinese, etcetera.

[AB]: Well, 20. And then, do you remember Watson?

[XH]: No, no. LUIS doesn’t have 20. About four, four.

[AB]: Do you remember Watson, more or less?

[MM]: Conversation specifically has 5 right now, GA on Monday though, and we have a roadmap of 20 for the next year, and we have other services that do over 20. We have a translation service; honestly, I think it’s like playing telephone, where you translate, do some magic in the middle, and translate back. You know, it’s not perfect, but in theory it works.

[AB]: Xuedong mentioned Skype as well, the translation, that happened in Skype.

[TL]: So ours is the same as the other two: it’s agnostic in its design, it just depends on training data. But talking about teenagers, some other really interesting things: we had a customer this morning that actually, an awful lot of the communication on their, they’ve got a private messaging network for teenagers, and …. and a lot of it’s done by stickers and gifs and talking to each other. So we’ve actually taught it to understand what a gif is and how to react to the different pictures. So if someone is saying yes for this … no, it actually understands “hello, are you having a good day” and those types of things, and that’s equally another way to think about it: yeah, it is a language in itself.

[AK]: On that note of language support, I believe Wit right now supports right around a dozen languages for speech and north of 30, maybe around 50, for NLP. I can’t remember the exact number, but it’s right around 50.

[AB]: Thank you. Great.

[Indiscernible]: Hi, my name is … and … West Africa, and my question is for the gentleman from Microsoft. You mentioned earlier that the most important breakthrough has to do with cloud computation for NLP. Where I’m from, there is always a problem with connectivity, and in that part of the world the most used cloud platform is mostly Google, App Engine, or one of the top Google platforms. I was curious to know: do you see, in the future, some of the computation offloaded to the mobile device, because a lot of the folks these days, especially … devices have … you know? So … computation would be done on the phone before … on the wire, or do you see most of it just being sent to the cloud, being processed on the cloud?

[XH]: That’s a great question. I have to tell you how old I am now. The … speech recognizer I created was running on an Apple II. That took 640K. I used the TMS320 DSP chip together with that. We actually had the first Chinese dictation system, and the first speech recognizer we did at Microsoft took one {mac}. One time on Windows 95. One mac. Compared to the kind of computing capability we are consuming in the cloud, there’s just no comparison. But one day, your … would be so powerful, and the technology is also at the basic. … example for you works, you do not need that kind of precision, you just need to know, you don’t mess up . . . very robust. Cover half of the brain, it’s still kind of working, just not working as well. You can have ⅓ work fine, look fine. So I believe in the future it’s always a hybrid. But the cloud is so powerful because of the computing; you have just so many supercomputers combined in the cloud that watch … you, and watch what’s going on. The cloud is really … that capability is hard to patch together with the device. But speech recognition, for example … LUIS can … it easily after training. After you actually harvest a huge amount of data, you really … you can actually download the stuff onto the client. We’re not supporting that capability yet, but that doesn’t mean it’s not something we should consider.

[CA]: Actually, we have already ported our full NLU, both speech recognition and NLU, to embedded devices earlier this year.

[AB]: Questions ok. How are we doing with time? Do we want more questions? Ok. Let’s do a couple more. Your name and company name please.

[Ameldiga]: Hi, I’m Ameldiga, I’m a computational linguist, so I want to go all technical on you. First of all, the obligatory question: do you speak emoji? And my second question: shallow parsing or actual syntactic parsing for your application?

[MM]: I’m in marketing so I honestly have no idea.

[XH]: I can answer that question. My official title is Distinguished Engineer; I’m Microsoft’s … engineer by training. LUIS is not using parsing at all. LUIS is actually using active learning, with a conditional random field model on top of the {mind} model. And that’s the model, that’s the steep part in most entities… But we do have a deep learning system that uses parsing. If you search Microsoft Deep QA, you know, on Bing or Google, that one uses very deep parsing, but that’s not the one we’re shipping. … ongoing research. I believe deep learning will change a lot of stuff, and performance will be much better than anything we have ever seen, but it’s not ready to be put in front of consumers yet.

[Ameldiga]: For deep learning, character embedding or word embedding?

[XH]: We use both. And actually, in … training, you would be amazed at what kind of data we use. We are able to embed almost several billion documents. Every document is embedded.

[AB]: Next question please.

[Raul]: Raul …. Question: on each of your platforms, I bet you’re seeing a range of really interesting companies. I’m just curious if you could share an example, either anonymous or by name, of somebody who’s got traction that you think is doing something non-obvious or interesting with your tool.

[AK]: Well, just to reiterate the one I mentioned before, Hello Hipmunk I think is a really cool thing, the ability to pull your flights out of an email. The other fun one that’s actually coming up right now, I believe it’s called Ganglia, it’s out of India, and it’s for following cricket matches. And it’s getting really strong traction, and people seem to like it, and nobody in the US builds cricket bots, so I thought it was really cool that some people were doing that.

[TL]: Well, that just made my day. So, the interesting thing … a lot of the companies that we deal with are building things that you wouldn’t necessarily call bots, but they’re using the same technology and approaching some problems the same way. There’s one Bay Area company, I’ll be very vague because it’s big enough that they’re well-known, that has a re-engagement challenge with their users: they have hundreds of thousands of users that aren’t actively engaged, people that they can … have money from, and when they’re not engaged they don’t have money from them. The current way of re-engaging is using humans to literally get … out and try to pin these people down to get them re-engaged, and they’re looking at ways they can use bots to basically streamline that into a hybrid process. And what’s particularly interesting about that model is that they’re not looking at it as a way of saving costs; they’re looking at it as a way of changing the unit economics of the engagement. And so, in doing so, they’re talking about hiring dozens and dozens of new people to fill the human pieces that remain in that role, because it’ll actually make it much more affordable for them. So, with all the talk about bots removing jobs, it’s quite interesting to see one company getting very excited about the jobs they will actually create.

[LH]: Yeah, so to go back to the Barbie bit, I talked with {dallas} about obvious and non-obvious. Making it so that you can have a two-way conversation with bots … new. Only Barbie and maybe a couple of other ones have done that. And then, within that, I have seen multiple people putting up video reviews of Barbie saying that using that toy had helped their child or cousin or whoever develop past their social anxiety. And so that was a really touching use of a conversational interface to me. And I’m seeing, I don’t know about traction per se, but another company that’s using conversation as a form of therapy, and I really hope that takes off as well.

[XH]: So for LUIS, I think the best example is the Microsoft Bot Framework, and developers using it to really create whatever they want.

[AB]: Can we do one more question?

[Argen]: Hi, my name’s Argen. I’m actually building a bot that gives you access to tour guides and gives you tours in the city. You actually touched upon a very … concept for me, which is memory, and linking it to different contexts, making the bot have context … What are the best practices right now? Is there anything … that your tools provide, or what can we do to get started on that?

[XH]: So internally we have to invest in the infrastructure. Before you can have a long conversation, you have to have the ability to index what happened: if you said something, mentioned something. And a good way to really present this whole thing to the academic … learning community is {}.

[CA]: So, there are three ways you can leverage context really easily with API.AI. First of all, we have domains, which are out-of-the-box functionality: they’re already contextually relevant and understand tons of different use cases, things like asking about the weather, wisdom, small talk, all sorts of things. And they maintain the conversation flow automatically, with no work. Then we have a feature called slot filling, so with one single intent you can basically carry a full … of your dialogue flow where you’re getting pieces of information, say for booking a flight, where you need to know where they are, where they’re going, and all that. All that context is managed internally; it’s actually pretty complex, but we’re able to do that on our side. And then we have the ability for you to completely control the flow of context. As I said before, contexts are like topics of discussion. You can have an unlimited number of topics of discussion that are currently active, and they decay after a certain lifespan, either a certain period of time or a certain number of requests. So it’s very, very easy.
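The decay behavior Christina describes, contexts as active topics that expire after a set number of requests, can be sketched roughly like this. This is an illustrative toy model, not API.AI’s actual API; the class and method names are invented:

```python
class ContextStore:
    """Toy model of contexts as topics that decay after N requests."""

    def __init__(self):
        self.contexts = {}  # context name -> remaining lifespan in requests

    def activate(self, name, lifespan=5):
        # Activate a topic of discussion with a request-count lifespan.
        self.contexts[name] = lifespan

    def on_request(self):
        # Each user request ages all active contexts; expired ones drop out.
        self.contexts = {n: life - 1
                         for n, life in self.contexts.items() if life > 1}

    def active(self):
        return set(self.contexts)


store = ContextStore()
store.activate("booking-flight", lifespan=2)
store.on_request()                       # still active after one request
assert store.active() == {"booking-flight"}
store.on_request()                       # lifespan exhausted, context decays
assert store.active() == set()
```

A time-based lifespan would work the same way, just keyed on a timestamp instead of a request counter.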

[TL]: So with Converse it’s fairly similar to what Christina just described in terms of how we work. Probably the other piece is the ability to easily switch between two sets of conversations, back and forth, like talking about the weather, then restaurants, then going back and forth; but also being able to store data directly within our platform, if you want to, against a … conversation, against the user, and against the platform in general. So if a user comes along looking for a restaurant and says they’re vegan, and then comes back later wanting to book a flight, then even though it’s an entirely separate conversation, we can actually pick up that data from before and reuse it, so they’re not having to answer that kind of question again.

[XH]: I just want to add one thing. Microsoft LUIS right now is a single-turn language understanding service. Multi-turn is actually by invitation only; it’s a work in progress. I just wanted to let you know, so you don’t get confused.

[MM]: We used to store … for you in some of the legacy APIs, but it became too resource-intensive; you know, 50 concurrent users would ruin our public servers because we were holding context for every conversation that had ever been had. So we recently redid our APIs, and that’s kind of one of the big benefits of the new conversation service: it’s all stateless, but there’s a context section where you can store really whatever you want. You can throw data in there from some other service: if you have a Salesforce database of your customers, just throw it in there, pull it out, and put it into the conversation, and now you know their name and things like that. So it’s all stored in the API call, passed back and forth, maintained through the life of the conversation. And if you want to use a SQL service on our side, you’re more than welcome to store it there; it’s really on the client to store it, though.
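The stateless pattern Mitchell describes, where the service returns a context object that the client stores and sends back on the next call, looks roughly like this. This is a schematic sketch, not the real Watson API; `send_message` and its behavior are invented for illustration:

```python
def send_message(text, context):
    """Stand-in for a stateless conversation API: the server keeps no
    state, so everything it must remember rides along in the context."""
    context = dict(context)  # copy; the caller's dict is never mutated
    if "name" in context:
        reply = f"Hello, {context['name']}!"
    elif text.startswith("I'm "):
        context["name"] = text[4:]   # remember the name for later turns
        reply = "Nice to meet you."
    else:
        reply = "Who are you?"
    return reply, context


# The client is responsible for carrying context between turns.
context = {}  # could be seeded from e.g. a CRM record, as Mitchell notes
reply, context = send_message("I'm Ada", context)
reply, context = send_message("hi again", context)
assert reply == "Hello, Ada!"
```

Because the server holds nothing between calls, any number of concurrent conversations cost it the same: the state lives entirely in the payload going back and forth.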

[AK]: When you want to do conversational bots with Wit, the way it keeps track of what’s going on is, first, you have sessions with every one of your users, so it has an idea of the history of where they’ve been. Then on top of that you have a kind of back-and-forth context exchange: when the user says something, Wit might pull some information out of their message and put it in the context, which is then sent back to your server along with a command like “oh, they want to know something about the weather, you need to tell me what the weather is.” Then your server, however it figures out what the weather is, does that and sends the result back. So it’s kind of this back-and-forth, almost secondary conversation between Wit and your server, keeping track of everything that’s been going on in the conversation, what the relevant things are, the things you want to remember for the future, and so on.
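That back-and-forth between the NLP side and your server can be sketched as a loop: the service returns either an action to fulfil or a message to send, and your server feeds the enriched context back in. This is a simplified illustration of the pattern Anthony describes, not Wit.ai’s real client; `converse` and `handle_user_message` are invented stand-ins:

```python
def converse(session_id, message, context):
    """Stand-in for the NLP side: it reads the context and either asks
    the server to fulfil an action, or produces the final reply."""
    if "forecast" not in context:
        return {"type": "action", "action": "get_weather",
                "context": {**context, "location": "SF"}}
    return {"type": "msg", "msg": f"It's {context['forecast']} in SF."}


def handle_user_message(session_id, message):
    """Your server's side of the secondary conversation."""
    context = {}
    while True:
        step = converse(session_id, message, context)
        if step["type"] == "msg":
            return step["msg"]          # final reply for the user
        if step["action"] == "get_weather":
            # Fulfil the action however you like, then hand context back.
            context = {**step["context"], "forecast": "sunny"}


assert handle_user_message("s1", "what's the weather?") == "It's sunny in SF."
```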

[LH]: Yeah, same for PullString, in that it will remember your conversation state and the whole trail that you’ve been through, and you can store specific pieces of information that you can recall later. You also have the ability to have an interjection where you pop into maybe a different conversation, and then you can write a segue to get back into the conversation you were in before, so you just have this stack of conversations that you’re popping off of. So you can store conversations that way.
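The interject-then-segue behavior Laurel describes is a classic stack. A minimal sketch, assuming nothing about PullString’s actual internals, with invented names:

```python
class ConversationStack:
    """Toy model: interject a side conversation, then pop back."""

    def __init__(self, topic):
        self.stack = [topic]

    def interject(self, topic):
        self.stack.append(topic)   # push the side conversation

    def resume(self):
        self.stack.pop()           # segue back to the previous one

    @property
    def current(self):
        return self.stack[-1]


conv = ConversationStack("restaurant-booking")
conv.interject("weather")          # user changes the subject mid-flow
assert conv.current == "weather"
conv.resume()                      # segue back to where they left off
assert conv.current == "restaurant-booking"
```

Interjections can nest arbitrarily deep; each `resume` unwinds exactly one level, which matches the “stack of conversations that you’re popping off of” description.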

[AB]: Great, well, I think that should do it for tonight. Thank you, Galvanize, for … And thank you to each of the speakers for making time tonight; I hope you can stick around for a minute or two to chat with folks. Thank you, everyone.
