11 Amazing Facts You Might Not Know About Chatbots

What is a chatbot? Chatbots come in two flavors:

  • Virtual assistants, which help you find information, remember stuff, or buy things. Think Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Assistant. These are powered by machine learning, meaning they rely on artificial intelligence to learn and figure out what you want.
  • Messaging apps, which essentially allow businesses and brands to be online 24/7 providing customer support (e.g., instant responses, quick answers, complaint resolution). Think Facebook Messenger, Kik, WeChat, and Slack. These types of chatbots are only capable of interacting with users by following pre-programmed rules.
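The rule-based flavor described above can be sketched in a few lines of Python. This is a hypothetical illustration (the keywords and replies are invented, not from any real product): the bot simply matches incoming text against pre-programmed patterns, with no machine learning involved.

```python
# Minimal sketch of a rule-based chatbot (hypothetical rules for illustration).
# It matches incoming text against pre-programmed keywords -- no machine learning.

RULES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "refund": "To request a refund, reply with your order number.",
    "human": "Connecting you to a support agent now.",
}

DEFAULT = "Sorry, I didn't understand that. Try asking about 'hours' or 'refund'."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return DEFAULT

print(reply("What are your hours?"))  # matches the "hours" rule
print(reply("I want my money back"))  # no keyword matches, falls to DEFAULT
```

Anything outside the rule table falls through to a default reply, which is exactly why rule-based bots feel limited next to their machine-learning counterparts.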

What you have seen and experienced so far is just the beginning of what is forecast to be a billion-dollar industry in less than 10 years. Many top brands, including Uber, Sephora, and CNN, have already adopted chatbots.

Still wondering what a chatbot is? Here are 11 amazing facts that might help explain what it really is and how it’s changing the world of digital technology.

Fact 1: 1.4 Billion People Use Messaging Apps

The top four messaging apps are bigger than the top four social networks, according to BI Intelligence. More than 1.4 billion people used messaging apps in 2016, according to eMarketer. By 2019, more than 25 percent of the world’s population (roughly 1.75 billion people) will be using mobile messaging apps.

Fact 2: People Are Ready to Talk to Chatbots

According to a report (Humanity in the Machine) from media and marketing services company Mindshare, 63 percent of people would consider messaging an online chatbot to communicate with a business or brand. A survey conducted by myclever Agency found that they would use chatbots to obtain “quick emergency answers.”

Fact 3: People Want to Contact Retailers via Chat

Online chat and messaging apps are the preferred way for 29 percent of people to contact retailers when making a purchase decision, according to [24]7. That means people are equally likely to contact a retailer by phone or use a chatbot and more likely to use a chatbot than to contact a retailer via email (27 percent).

Fact 4: There Are More than 30,000 Facebook Chatbots

As of September, there were 30,000 chatbots on Facebook. Those chatbots have been used by millions of people in 200 countries.

Fact 5: Most Chatbot Conversations Start With “Hi”

“Hi” and “hello” are the two most popular ways to start a conversation with a chatbot, according to Dashbot.io, a bot analytics provider. Other popular messages included a question mark, “hey,” “help,” “yes,” and a thumbs up icon.

Fact 6: Consumers Are Ready to Buy Things via Chatbots

Thirty-seven percent of Americans say they are willing to make a purchase through a chatbot, according to DigitasLBi. On average, consumers would spend more than $55 per purchase. If a chatbot were available, 33 percent of UK residents would buy basic items like clothes and food, according to myclever Agency.

Fact 7: Consumers Won’t Put Up With Bad Chatbots

One bad chatbot experience could be costly. According to the DigitasLBi report, 73 percent of Americans said they wouldn’t use a company’s chatbot after a bad experience. According to Mindshare’s report, 61 percent of people would find it more frustrating if a chatbot couldn’t solve a problem vs. a human.

Fact 8: Consumers Want Recommendations From Chatbots

Thirty-seven percent of all consumers–and 48 percent of millennials–are open to receiving recommendations or advice from chatbots, according to DigitasLBi. Breaking this down further, consumers are interested in recommendations for products from retail stores (22 percent); hotels/accommodations (20 percent); travel (18 percent); products from a pharmacy (12 percent); and fashion/style (9 percent).

Fact 9: Don’t Blur Lines Between Bots, Humans

An overwhelming majority of consumers (75 percent) said they want to know whether they are chatting with a chatbot or a human (48 percent considered chatbots pretending to be human “creepy”), according to Mindshare. The robotic and artificial nature of responses clued in 60 percent of consumers that they were interacting with a chatbot, according to DigitasLBi.

Fact 10: Chatbots Will Be Indistinguishable From Humans by 2029

Ray Kurzweil, an inventor, futurist, and engineer at Google (who has a pretty good knack for making accurate predictions) predicts that chatbots will have human-level language ability by the year 2029. “If you think you can have a meaningful conversation with a human, you’ll be able to have a meaningful conversation with an AI in 2029. But you’ll be able to have interesting conversations before that,” according to Kurzweil, as quoted in The Verge.

Fact 11: People in China Seriously Love Chatbots

Xiaoice is a ridiculously popular chatbot in China, according to Engadget. Its average session runs 23 conversational turns (conversations per session, or CPS). The average CPS for pretty much every other chatbot: 1.5 to 2.5.

Bonus fact: Pretty soon chatbots will be ridiculously popular in the U.S. as well. Are you ready for the amazing chatbot revolution?

Originally posted in: Inc.com


Source: https://chatbotsmagazine.com/11-amazing-facts-you-might-not-know-about-chatbots-7764213406e0

5 Benefits of using a Chatbot

Chatbots are gaining popularity in the marketing industry because, riding on the exploding popularity of messaging apps, they give businesses a new way to communicate with the world and, most importantly, with their customers. Adopting chatbot technology will give you a major advantage over competitors as a marketer.

Here are 5 of the many advantages of using a chatbot:

  1. Improved Customer Service: A survey shows that 83% of online shoppers need support while shopping. Your customers may require help understanding which products fit their needs and budgets, at any time of day. A chatbot can provide real-time assistance like a salesperson in a physical store, offering interactive communication in which it also asks questions to understand the real problem. Using text and voice, chatbots can present customers with rich content such as product pages, images, and videos. In short, a chatbot provides extensive, always-available customer support and proactive customer interaction.
  2. Better Engagement: It is important to keep customers engaged with your brand, which is why companies invest in social media marketing. Engaging customers through social media can increase sales by 20% to 40%. A chatbot adds to that experience because a character-driven interaction helps customers engage more deeply.
  3. Cost Savings: Implementing a fully functioning chatbot is far cheaper and faster than building a cross-platform app or hiring employees for each task. Because chatbots are automated, they let organizations handle many customers at once, simultaneously. By “employing” chatbots that complement human agents, you not only save on employee costs but also avoid the problems caused by human error.
  4. Monitoring Consumer Data and Gaining Insights: Chatbots are great tools for communicating with customers. With the feedback they collect through simple questions, you can improve your services and products, and you can also use them to track purchasing patterns and consumer behavior by monitoring user data. Monitoring user data helps a company decide “which products to market differently, which to market more and which to redevelop.”
  5. Rapid and Increasing Growth of Messenger Apps: 65% of smartphone users don’t download any new apps in a given month. Since users already have their core apps such as Facebook, WhatsApp, and Messenger, they rarely download new ones. Integrating your chatbot into one of the popular platforms your customers use daily can therefore beat building a new app, saving both money and time (take your product or service to where your consumers already spend their time).

Users are spending more time on messaging apps like Messenger, WhatsApp, Skype, WeChat, and others. Using chatbots on messaging platforms allows brands to interact with customers personally.

Messaging apps will soon leverage technologies such as chatbots to deliver smarter, better customer engagement. Used effectively, messenger apps can give a business real leverage.

In conclusion, the biggest advantages of chatbots include being able to reach a broad audience on messenger apps, as well as the ability to automate personalized messages.


Source: https://chatbotslife.com/5-benefits-of-using-a-chatbot-937b8b793826

Chatbots: What Happened?

Remember chatbots, the Next Big Thing of 2016? According to Sam Lessin, “the 2016 bot paradigm shift is going to be far more disruptive and interesting than the last decade’s move from Web to mobile apps.” And Chris Messina predicted, “you and I will be talking to brands and companies over Facebook Messenger, WhatsApp, Telegram, Slack, and elsewhere before year’s end, and will find it normal.”

This was exciting — enough so that I joined Facebook as design manager for the Messenger bot platform. It was a tough decision: I wasn’t ready to move on from my previous role at Google. But an opportunity to shape the next generation of software development…it was too intriguing to pass up.

But the predicted paradigm shift didn’t materialize. We look back at our wide-eyed optimism and laugh, chalking it up to the Silicon Valley hype cycle. Digit’s Ethan Bloch summed it up, quoted in a recent Inc article:

“I’m not even sure if we can say ‘chatbots are dead,’ because I don’t even know if they were ever alive.” After all, he said, no one can point to a chatbot that “all your friends were using.” Such a thing simply never existed.

Ouch. So what happened? Are chatbots dead?

Why all the hype?

There were legitimate reasons to get excited back then:

  • Messaging is huge. Business Insider wrote, “Messaging apps are bigger than social networks,” noting that chat had surpassed social networking in monthly active users. This makes intuitive sense: We are social creatures, and so much of our lives involve conversation. It’s why, when I co-founded a productivity startup in 2012, it ultimately (if unexpectedly) yielded a messaging app.
  • People don’t download apps. As of June 2017, 51% of US smartphone users downloaded zero apps per month; 75% downloaded two or fewer. (Note, however, that that leaves 56M people downloading at least 36 apps a year — so it’s simplistic to say nobody downloads apps.)
  • Apps are hard to build. A lot of work goes into designing and building even a simple app, be it native or web. With a bot, a lot of that complexity disappears — from user interaction to login to network traffic.
  • Messaging platforms are huge abroad. Merchants conduct business over SMS in emerging markets. In China, WeChat is a dominant platform for all sorts of products. Why not in the US, too?
  • Relationships matter. Every business wants a real relationship with its customers, and conversation is fundamental to relationships. That’s especially true for brand-driven businesses, but extends to others as well.

So why didn’t it happen?

With all those factors pointing the way, why didn’t messaging platforms take off? We can only speculate, but here are some contributing factors:

Platforms are hard.

When Apple launched the iPhone SDK in 2008, we knew what an app looked like. The iPhone’s built-in apps had established best practices, both for how a smartphone app worked and for the sorts of things it supported well. The SDK launched a year after the iPhone, so those first developers had well-understood standards from which to work — as well as mature developer tools that had grown up with the Mac SDK.

The messaging platforms that launched in 2015–2016 lacked those advantages. Early bots seemed more like preliminary steps into the brave new world than instructive examples.

So should Slack, Facebook, Google, Microsoft, Kik, and others have built their own built-in bots to lead the way? Should they have gotten more proactive with their bot funds and incubators, hiring mentors to educate participants in the Way of the Bot, or supplying engineering and design resources? Funded Strategic Bot Initiatives at high-profile partners? In my opinion yes, yes, and yes. When it comes to platforms, developers are the users; and we don’t rely on our users to understand why or how to use our products. We have to show them.

And what about WeChat in China? As it turns out, while WeChat is a messaging app with a successful platform, it’s not really a messaging platform. Much of it boils down to apps that can run inside of WeChat. As Dan Grover (then at WeChat) wrote in 2016, “The key wins for WeChat…largely came from streamlining away app installation, login, payment, and notifications, optimizations having nothing to do with the conversational metaphor in its UI.” In other words, WeChat addressed many of the opportunities listed above, but did so without restricting itself to the messaging paradigm.

Replacing apps is hard.

Bots weren’t really going to replace apps, any more than apps replaced the web. (The web still accounts for 13% of mobile time spent, a full decade after the iPhone launched.) By talking about bots in such grandiose terms, we discouraged their development:

  • To begin with, that level of enthusiasm just lacked credibility. It’s fairly obvious that one can’t replace Google Maps or Gmail with a bot.
  • Painting with such a broad brush missed the opportunity to provide guidance: Is my product well-suited to be a bot? Are bot platforms ready to support the functionality I’ll need? If not, when is it reasonable to expect they’ll get there?
  • The language of “replace” precludes more nuanced concepts like “extend” and “augment.” As described below, some of the most interesting work has been additive.

Text is hard.

It’s easy to send and receive individual text messages—a competent developer can set up a basic bot in a few minutes—but hard to conduct a conversation:

  • NLP (natural language processing) allows a chatbot to understand the messages it receives, to be more dynamic than a simple command line. Many platforms provide some NLP, but even the best is limited compared to what a half-asleep human can do. By way of example, every time Siri or Alexa understands your words but not their meaning, that’s a failure of NLP.
  • Conversations aren’t linear. Multiple topics weave around each other. Discussions restart abruptly, or take unexpected left turns. That fluidity is tough to follow algorithmically, and most approaches are brittle.
  • Phrases like invisible UI or no UI often came up around bots: The idea that we could build systems so human that they wouldn’t have user experiences. They’d just know what the user wanted and take care of it. That level of automation requires sophisticated artificial intelligence. And despite the ongoing hype around AI, we’re still a long way from anything truly humanlike.
  • There are many reasons why computing moved from text-based command lines to graphical user interfaces (GUIs) in the early 1980s. One is that it’s faster to point than it is to type. That was true with a mouse and keyboard, and even more so with a mobile device. Pressing a button or selecting from a list is much, much easier than typing out a sentence.

In other words, there are technical and UX problems that limit the efficacy of a text-based, conversational UI.
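The brittleness described above is easy to demonstrate. Here is a minimal keyword-based intent classifier, a hypothetical sketch (not any platform's actual NLP): it handles the phrasing it was built for, yet fails on a paraphrase with exactly the same meaning.

```python
# Hypothetical sketch of a keyword-based "NLP" layer, and where it breaks.
# Intent names and keywords are invented for illustration.

INTENTS = {
    "order_status": {"where", "order", "track"},
    "cancel": {"cancel", "stop"},
}

def classify(message: str):
    """Return the intent whose keywords best overlap the message, or None."""
    words = set(message.lower().replace("?", "").split())
    best, best_overlap = None, 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(classify("Where is my order?"))       # 'order_status' -- keywords match
print(classify("Has my package shipped?"))  # None -- same meaning, no keywords
```

The second query means the same thing as the first, but shares no keywords with the rule, so the bot misses it entirely. Real NLP services generalize better than this, yet they fail in the same basic way whenever a user's wording drifts outside the training data.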

But their impact was also far greater than it needed to be, because we limited our own vision of what a messaging experience could be.

Chatbot vs. Messaging App

Bots, chatbots, conversational commerce…whatever you called them, they were generally defined as messaging with a business or service. Chris Messina defined conversational commerce as “utilizing chat, messaging, or other natural language interfaces (i.e. voice) to interact with people, brands, or services and bots that heretofore have had no real place in the bidirectional, asynchronous messaging context.”

To many people, this simple definition seemed like an advantage. Developers could move fast, freed from much of the complexity of app development. Usability would improve: we all know how to message.

But developers ended up trading one type of complexity for another; and users suddenly found themselves typing out instructions long-hand instead of tapping and swiping. Here’s Dan Grover again:

Designing the UI for a given task around a purely conversational metaphor makes us surrender the full gamut of choices we’d otherwise have in representing each facet of the task in the UI and how they are arranged spatially and temporally…

So let’s take these past few years in China as “The Great Conversational UI Experiment.” Here, you have a messaging platform that…boldly and earnestly carried the “make every interaction a conversation” torch as far as it could. It added countless features to its APIs — and yet those that actually succeeded in bringing value to users were the ones that peeled back conventions of “conversational” UI. Most instructively, these successes were borne out of watching how users and brands actually used the app and seeking to optimize those cases.

So WeChat’s success relied, in part, on recognizing that messaging platforms needn’t restrict themselves purely to messaging UI. And what got me really excited was the notion of integrating the two: combining the efficiency and interactivity of a GUI with the familiarity, humanity, and permanence of a chat. Here I am in 2016, shortly after joining Facebook:

But conversation is more than just text. A face-to-face conversation layers subtle facial expressions, gestures, and tone of voice over the textual content — indeed, we can converse without uttering a single word.

Similarly, every digital interaction is a dialogue — whether it’s a simple text chat, an exchange of video and voice clips, a series of button presses, or manipulation of a chart. We can build it to be more or less explicitly conversational, but it doesn’t suddenly become unconversational when we introduce GUI.

And the US-based messaging platforms have added GUI features: Slack’s popup dialogs, Microsoft’s rich cards, Facebook’s structured templates and web-view overlay. Yet the overall, chat-centric narrative hasn’t changed. Why? I suspect it’s a combination of factors:

  • Again, there aren’t high-profile, high-quality examples to lead the way.
  • These GUI enhancements are still limiting compared to even a simple app.
  • Too much of the story is binary: Fully conversational, magical text assistants vs. rich, interactive apps. As an industry, we tend not to get excited about the middle ground.
  • The terms we chose — bot and chatbot in particular — suggest messaging, vs., say, messaging app.

Beyond the Platforms

Of course, a messaging experience can happen outside a messaging app. Freed from those constraints, one can experiment with all sorts of hybrid approaches. Some of the most innovative bot experiences aren’t actually bots:

Penny

Penny’s three-tabbed approach puts chat at the center, alongside more traditional approaches.

Just acquired by Credit Karma, Penny is a competitor to Mint with messaging front and center. It provides chatty advice and alerts, but also an account dashboard and transaction list. Users get the benefit of a friendly financial assistant, alongside the efficiency of perusing their finances the traditional way.

Quartz

In 2016 Quartz launched a bot-like news app, with an innovative bite-sized approach to content and a chat interaction that’s purely single-tap replies.

More recently they’ve launched a Facebook Messenger bot; but the app provides a more tailored experience for those who download it: a dedicated notification channel, easy access to full articles from the thread, and a UI devoid of anything beyond what’s needed for news. The separate app also gives Quartz the ability to advertise.

Trunk Club

Subscription clothing service Trunk Club (acquired by Nordstrom in 2014) has an app that mixes chat and GUI in two ways:

  • The app revolves around an ongoing chat with your stylist, as she assembles your “trunk” and you provide feedback. Traditional text messages mix with richer templates, which in turn serve as representations of and entry points into app-like experiences.
  • The chat is paired with a browse-based shopping experience, and the two are tightly integrated: after selecting merchandise in this “Discover” tab, one is prompted to message one’s stylist about it (rather than, say, purchase directly). This reinforces the chat — and its inherent stylist-customer relationship — as the backbone of the experience.

Much of this functionality is powered by Layer, a company with its own twist on messaging platforms. Rather than entice developers into their app, Layer provides them with tools and services to build messaging experiences on their own. So companies can innovate around chat without starting from scratch. Layer also mitigates the app-install hurdle by making their tools work across native apps, mobile web, and desktop web.

For businesses like Trunk Club, Layer is interesting not merely for the customer-facing experience but also for that of the stylist or other customer representative. There, Layer offers CRM functionality to streamline the human-powered side of the conversation, with quick access to customer profiles and tools to drop richer interactions into the thread as easily as text.

Marsbot

Launched in 2016, Foursquare’s Marsbot provides food and drink recommendations via chat. It takes requests but also recommends proactively, building on Foursquare’s ability to accurately detect not just what block you’re on, but in which restaurant you’ve just sat down.

Marsbot operates over SMS (and thus could work over any messaging platform). But none of today’s platforms provides access to background location, so Marsbot requires users to install an app. It sits in the background and does its thing, while all user interactions occur over SMS. That’s clever: Foursquare still has to get over the app-install hurdle, but sidesteps the engagement challenge that follows — as well as the need to build a chat experience.

Intercom

You’ve almost certainly interacted with Intercom, as a customer-service chat widget in the lower right corner of websites. By embedding this widget, developers can use Intercom to manage their customer service and even get some analytics (e.g., filtering customers for a particular device or action).

Intercom and Trunk Club are both great examples of focused domains (customer service and high-touch subscription services) that seem promising for messaging, because conversation is so central. Facebook would seem to agree: in late 2017 they launched their Customer Chat Plugin, which looks like a simple competitor to Intercom.

Interaction Models

Surveying these examples, three models of interaction and integration emerge:

  1. Chat as Layer. Messaging exists as a transient layer, always available to accompany a traditional app experience. This works well in domains like customer support where chat is a ubiquitous resource but not a centerpiece, e.g., Intercom.
  2. Chat as Pillar. Messaging is a core part of the UI, alongside more traditional GUI. Conversational things happen in conversation; less conversational things happen elsewhere. This is great for apps that want chat front-and-center, but also need top-level access to information and actions ill-suited to a linear, transcript-like approach — e.g., Penny.
  3. Chat as Backbone. Messaging is the fundamental, all-encompassing interaction. App-like flows are treated as elements in the thread, to be “popped out” and accessed as needed. Trunk Club uses this extensively to support trunk-editing. It’s also the model best supported by today’s messaging platforms (via dialogs, templates, and webviews).

Of course these aren’t the only possible models, but they cover a lot of ground. And if you’re contemplating a messaging experience in your own products, framing your decision in terms of these models may be valuable to shed light on your needs, and on which platforms may be appropriate.

Predictions

So where will we go from here? My predictions follow, with full recognition that predictions are often wrong:

Messaging platforms will remain niche

The hype is over. The general sentiment: messaging platforms weren’t a thing after all. But that doesn’t mean everyone has moved on: bots, and the consultancies and meta-platforms that support them, are very much alive, particularly in certain niches:

  • Companies continue to find fertile ground for bot-based products in conversation-heavy domains like healthcare and customer service.
  • Designing and prototyping a messaging experience requires different approaches than for an app or website. And the lines between design, prototype, and production are even blurrier for chatbots than for other platforms. From simple flow-mockup tools like BotPreview to hacker-friendly editors like Dexter to full-service shops like ChatFuel, companies rushed to fill the void, and are still evolving.
  • AI and NLP continue to improve, bringing us closer to the ability to conduct a true conversation. More importantly, more and more of that functionality is available as a service — so developers without advanced machine-learning degrees can use it.
  • Voice assistants like Alexa continue to evolve their own platforms—structurally similar to chatbots, but unique enough in their constraints and opportunities to warrant treatment as a separate category. I anticipate they’ll continue to grow, but won’t replace apps any more than chatbots did.

But the lack of widespread enthusiasm can’t help but affect the platform-makers themselves. It seems likely they’ll divest resources, and that’s a vicious circle: less platform investment means less developer interest, means less platform investment, and so on.

The current media storm around privacy and social networks plays into this as well: Companies and developers will be that much more reluctant to build a business beholden to a social platform.

Messaging experiences will continue to grow

There’s an old saying that, left to develop long enough, any tech product will turn into an email app. In today’s mobile-centric world, we might update that to a messaging app.

Messaging isn’t going away. We’ll continue to text our fingers off, and messaging services will continue to evolve. That, in turn, will influence anyone building a product for which conversation and customer relationships matter (read: just about everything). And even the least chat-like of tools provides that opportunity: Nobody would accuse Microsoft Word of being a messaging app, but what are Google Docs comments if not a chat thread?

So I anticipate that more and more products will introduce a messaging component; and that trend will provide a fertile ground for innovation. Which, in turn, provides an ongoing opportunity for companies like Layer to facilitate that innovation.

Businesses will message more

As more people message each other, it’s inevitable that more businesses will want to message customers. That could involve human-powered accounts, bots, or bespoke apps —but any way you slice it, customer relationships matter. To support this, messaging providers will want to invest in:

  • Payments. Today’s platforms already provide this, but any evolution that makes it easier (and leverages existing payment information, e.g., Apple Pay or Android Pay) will attract developers.
  • CRM. The end-user experience is just one side of the coin. It makes sense that Intercom has invested in CRM and analytics tools for its customers. And it blurs some interesting lines in the competitive landscape: Intercom now competes simultaneously with Zendesk, Salesforce, Facebook, and Mixpanel in a single product.
  • Location. Location is a staple of mobile, but somewhat absent from messaging platforms today. From recommending restaurants to ordering coffee to buying movie tickets, there’s a plethora of use cases that are well-suited to a bot but for the lack of seamless location support.

The players may change

In some ways, the messaging landscape today isn’t so different from the pre-mobile days. The same regional network effects reign: Replace Facebook with Yahoo, SMS with AIM, WhatsApp with Windows Live Messenger — and divide the user base by 10 — and you might be in 2007.

Which makes me wonder if the names will be different again in ten years. Arguably it took mobile to disrupt the last group of incumbents, and perhaps there isn’t another mobile-ish revolution coming. But who knows?


Hype is never realistic, but it’s rarely empty. As an industry, we surely overestimated the impact bots would have. And we did a disservice by equating chatbot with messaging app: things get so much more interesting when we focus on the latter.

I wouldn’t want to be raising money for a chatbot startup right now, but messaging isn’t going anywhere because conversation isn’t going anywhere. NLP and AI will continue to improve. Developers and platforms will continue to experiment with different flavors of conversational experience. And whether the hype cycle comes around again or not, it’s valuable to consider conversation, in all its forms, as part of the product toolkit.


Disclosures: While at Facebook, I worked on several of the Messenger features mentioned here. I’ve had a casual relationship with Layer over the years of its existence, though we’ve agreed to see other companies.

Source: https://chatbotslife.com/chatbots-what-happened-dcc3f91a512c

Therapy Chatbots are Transforming Psychology

Chatbots are having a significant impact on numerous fields — especially the psychology sector. Developers caution these tech tools aren’t a replacement for human interactions with experts, but it’s already clear chatbots are an always-available resource — which isn’t the case for human health practitioners.

Chatbots Can Help Teach Users Tried-and-Trusted Psychological Practices

One notable benefit of chatbots is that engineers can program them to implement techniques widely accepted as advantageous in the psychological field. For example, Woebot is a chatbot that uses cognitive behavioral therapy strategies to help users manage symptoms of anxiety and depression.

And, Woebot really works. In a study comparing people who interacted with Woebot 12 times over two weeks with a group of individuals who read a self-help book, those who used Woebot had reductions in their symptoms, but there was no apparent change in the other group.

The Woebot study shows today’s chatbots can supplement the same principles patients learn in psychologists’ offices. People who use this technology could theoretically have fewer difficult moments outside of therapy sessions because a chatbot is available 24/7 and ready to listen.

They May Encourage People to Seek Treatment Sooner

Even when individuals have access to mental health assistance in their areas, they may delay taking advantage of the services for many reasons. They might fear judgment from therapists or people they know. Or, individuals may assume everyone goes through the emotions they’re experiencing, and think their feelings aren’t severe enough to warrant treatment.

A chatbot called Wysa — which is designed to look like a cute penguin — could reduce avoidant behaviors that could cause a downturn in mental health. It features a mood tracker and can detect if you’re feeling down. In that case, Wysa prompts you to take a depression test and may recommend seeking professional help, depending on the results.

The way Wysa documents mood changes over time could also make mental shifts more obvious and promote more proactivity in getting to the root of the cause. People often don’t realize how all-encompassing and long-lasting their mental health symptoms are, but Wysa could highlight that information and make individuals realize it’s time to take decisive action.

Chatbots Could Contribute to More Diagnostic Successes

People who are leading authorities in their industries, such as motivational speakers, often discuss their struggles and successes with an audience in an effort to make their followers feel better. In contrast, psychologists must review patient histories, ask questions and make suggestions, all in an effort to reach a diagnosis. Though motivational speakers are a great source of advice, psychologists aim to treat an individual holistically based on their personal experiences.

Despite those efforts, they don’t always reach the correct conclusions, which is frustrating for patients and practitioners alike. Evidence suggests, though, that analyzing social media content could help improve success rates. Ongoing research involving artificial intelligence and social media aims to make online experiences more personalized and productive for users.

Furthermore, scientists designed an algorithm that analyzed Instagram posts and correctly diagnosed depression from the associated data 70 percent of the time, whereas doctors did so for only 42 percent of patients.

Consider that psychologists might not always ask all the right questions when diagnosing patients. Similarly, patients may not think to mention characteristics about themselves that could be clear warning signs of illness. A chatbot could evaluate the revelations a user provides over time and use them to alert mental health practitioners to patterns that could lead to more effective and appropriate treatments.

Chatbots Could Address Mental Health Professional Shortages

A definite advantage of the Internet is that it gives people access to information and services that may not be readily available in their areas. The same is potentially true of chatbots, due to the way they could help reduce the problems associated with a lack of mental health care access in rural areas. There are not enough mental health experts to care for all the people who need them, which means some individuals fall through the cracks.

Statistics indicate more than 106 million Americans live in areas federally designated as having mental health care shortages, hence the need for a solution. Also, people without nearby access to mental health professionals may not be able to easily drive to another town to get help. If they can’t, chatbots could provide them with strategies to start feeling better. Some even alert mental health professionals if a user shows suicidal tendencies.

Modern chatbots are already highly advanced. As for future versions, expect them to pick up on people’s emotions better than ever, thanks to advanced sentiment analysis algorithms in the works. They won’t replace therapists anytime soon, but these bots are already helping mental health workers do their jobs better.

Source: https://chatbotslife.com/therapy-chatbots-are-transforming-psychology-de67570236bc

The charge of the chatbots: how do you tell who’s human online?

Automated ‘voices’ that were supposed to do mundane tasks online also now spread hate speech and polarise opinion. Are they a boon or a threat?

Alan Turing’s famous test of whether machines could fool us into believing they were human – “the imitation game” – has become a mundane, daily question for all of us. We are surrounded by machine voices, and think nothing of conversing with them – though each time I hear my car tell me where to turn left I am reminded of my grandmother, who having installed a telephone late in life used to routinely say goodnight to the speaking clock.

We find ourselves locked into interminable text chats with breezy automated bank tellers and offer our mother’s maiden name to a variety of robotic speakers that sound plausibly alive. I’ve resisted the domestic spies of Apple and Amazon, but one or two friends jokingly describe the rapport they and their kids have built up with Amazon’s Alexa or Google’s Home Hub – and they are right about that: the more you tell your virtual valet, the more you disclose of wants and desires, the more speedily it can learn and commit to memory those last few fragments of your inner life you had kept to yourself.


As the line between human and digital voices blurs, our suspicions are raised: who exactly are we talking to? No online conversation or message-board spat is complete without its doubters: “Are you a bot?” Or, the contemporary door-slam: “Bot: blocked!” Those doubts will only increase. The ability of bots – a term which can describe any automated process present in a computer network – to mimic human online behaviour and language has developed sharply in the past three years. For the moment, most of us remain serenely confident that we can tell the difference between a human presence and the voices of the encoded “foot soldiers” of the internet that perform more than 50% of its tasks and contribute about 20% of all social media “conversation”. That confidence does not extend, however, to those who have devoted the last decade or so to trying to detect, and defend against, that bot invasion.

Naturally, because of the scale of the task, they must enlist bots to help them find bots. The most accessible automated Turing test is the creation of Professor Emilio Ferrara, principal investigator in machine intelligence and data science at the University of Southern California. In its infancy the bot-detector “BotOrNot?” allowed you to use many of the conventional indicators of automation – abnormal account activity, repetition, generic profiles – to determine the origin of a Twitter feed. Now called the Botometer (after the original was targeted by copycat hacks), it boasts a sophisticated algorithm based on all it has learned. It’s a neat trick. You can feed it your own – or anyone else’s – Twitter name and quickly establish how bot-like your bon mots are. On a scale where zero is human and five is machine, mine scored 0.2, putting @TimAdamsWrites on a sentient level with @JeremyCorbyn, but – disturbingly – slightly more robotic than @theresa_may.

Chatbots for health, wealth and music

WoeBot (pictured)

Designed to help those suffering from depression by facilitating quick conversations. It will even check up on you every now and then to see how you are doing. The company bills it as a “robot friend, who’s ready to listen”.

Cleo

An AI chatbot aimed at helping you to organise your finances. It connects with your bank account and can give you detailed information via Facebook Messenger about what you spent and where you spent it. 
Robot Pires

Arsenal FC invite you to talk to a cartoon-bot Roberto Pires – ask for news about the club, as well as the player’s own record, including how many goals he scored for the Gunners.
Paul McCartney

The “official Messenger bot for the music legend Paul McCartney” will react with gifs of the singer and can tell you when he’s on tour, about his latest projects, and more. However, it doesn’t respond too well to questions. When asked “How old is Paul?” the bot replied with a video of a flying baguette.
TfL TravelBot

Designed to allow you to check on how London’s transit system is running, all via Facebook Messenger. It can be asked about the status of lines, and when asked how to get from A to B will provide three links with the fastest route.
Lark

A health coach that can help users manage the symptoms of hypertension, diabetes etc. Using data gathered from the user’s connected devices, it makes data-driven nudges and recommendations to encourage healthier behaviour.
OllyBot

The Olly Murs official chatbot can answer questions about the singer, provide fans with information about his upcoming tours and offer playlists of his music. The bot replicates the celebrity’s tone by calling himself “29+3” years old and ending messages with winking emojis. 
Insomnobot-3000

Created by mattress company Casper, this chatbot allows sleepless users to message it for recommendations and suggestions that may improve their sleeping routine. 
Harry Lye
Photograph: Woebot

Speaking to me on the phone last week, Ferrara explained how in the five years since BotOrNot has been up and running, detection has become vastly more complex. “The advance in artificial intelligence and natural language processing makes the bots better each day,” he says. The incalculable data sets that Google and others have harvested from our incessant online chatter are helping to make bots sound much more like us.

The Botometer is powered by two systems. One is a “white box” that has been trained over the years to examine statistical patterns in the language, Ferrara says, “as well as the sentiment, the opinion,” of tweets. In all there are more than 1,200 weighted features that a Twitter feed is measured against to determine if it has a pulse. Alongside that, the Botometer has a “black-box model” fed with a mass of data from bots and humans, which has developed its own sets of criteria to separate man from machine. Ferrara and his team are not exactly sure what this system relies on for its judgments, but they are impressed by its accuracy.
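The “white box” Ferrara describes combines many weighted signals into a single bot-likeness score. As a toy illustration only, the sketch below scores an account on the article’s 0-to-5 scale from a handful of hypothetical features; the real Botometer uses more than 1,200 features, and these names and weights are invented for the example.

```python
# Hypothetical features and weights, standing in for the 1,200+ signals
# the real system measures. Each feature value is normalised to [0, 1].
FEATURE_WEIGHTS = {
    "tweets_per_hour": 0.4,   # very high posting rates look automated
    "duplicate_ratio": 0.35,  # share of near-identical tweets
    "default_profile": 0.25,  # generic profile picture and bio
}

def bot_score(features: dict) -> float:
    """Combine weighted feature values into a score on the article's
    scale: 0 means human-like, 5 means machine-like."""
    weighted = sum(FEATURE_WEIGHTS[name] * value
                   for name, value in features.items())
    return round(5 * weighted, 2)

# A mostly human-looking account scores near zero.
print(bot_score({"tweets_per_hour": 0.05,
                 "duplicate_ratio": 0.0,
                 "default_profile": 0.0}))  # → 0.1
```

The “black box” companion system the article mentions would instead learn its own criteria from labelled bot and human accounts, which is why Ferrara’s team cannot fully explain its judgments.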

When Ferrara started on this work, he felt he had developed his own sixth sense for sniffing out artificial intelligence on Twitter. Now he is no longer so confident. “Today it is not clear to me that I interact with as many humans as I thought I did,” he says. “We look hard at some accounts, we run them through the algorithm and it is undecided. Quite often now it is a coin toss. The language seems too good to be true.”

Not all bots aim to deceive; many perform routine operations. Bots were originally created to help automate repetitive tasks, saving companies money and time. Some bots help to refresh your Facebook feed, or keep you up to date with the weather. On social media, bots were originally coded to search for hashtags and keywords and retweet or amplify messages: “OMG have you seen this?!” They acted as cheerleaders for Justin Bieber or Star Wars or Taylor Swift. There were “vanity bots” which added numbers and fake “likes” to profiles to artificially enhance their status, and “traffic bots” designed to drive customers to a particular shopping site. There were also bots that acted as grammarians, making pedantic corrections to tweets, or simple gags like Robot J McCarthy, which sought out conversations using the word “communist” and replied with a nonsensical slogan.

At some point political bots entered the fray, mostly on Twitter, with the intent of spreading propaganda and misinformation. Originally these seem to have been the work of individual hackers, before the techniques were adopted by organised and lavishly funded groups. These bots, Ferrara suggests, proved to be a highly effective way to broadcast extremist viewpoints and spread conspiracy theories, but they were also programmed to search out such views from other, genuine accounts by liking, sharing, retweeting and following, in order to give them disproportionate prominence. It is these bots that the social media platforms have been trying to cull in the wake of investigations into the 2016 American election by Robert Mueller and others. Twitter has taken down a reported 6m bot accounts this year.

When I spoke to Ferrara he was looking at the data from the American midterm elections, examining the viral spread of fake news and the ways in which it was still being “weaponised” by battalions of automated users. “If you were an optimist you would think that the numbers look OK,” he says. “Between 10 and 11% of the users involved in conversations around the election are flagged as bots – and that is significantly less than in 2016 when it was something like 20%. The pessimistic interpretation is that our bot-detection systems are not picking up the more sophisticated bots, which look just like humans even to the eyes of the algorithms.”

The unseen global army of “bot herders”, those shadowy individuals and corporations and rogue government agencies that send their bots out into the virtual world, have a couple of advantages in this latter respect. One is that they are now able to find enormous amounts of natural-language data to develop the next generation of talkative bots. The other is that these creations can exploit our tendency to ascribe trusted human characteristics to voices even if, on a rational level, we suspect that they are artificial. That psychology is as old as electronic communication itself.

All modern chatbots trace their family tree back to the experiments by Joseph Weizenbaum with Eliza, named after Eliza Doolittle in Pygmalion for “her” ability to master received pronunciation. In 1966, Weizenbaum, a German-American professor at the Massachusetts Institute of Technology, created a prototype chatbot that searched for keywords in conversations conducted with humans typing at keyboards. The rudimentary program would pick up these words and use them in its reply. If it did not locate a useful word, it would offer a neutral response. Weizenbaum set up Eliza to mimic the tone and manner of an east coast psychotherapist, a dynamic that allowed its voice to “adopt the pose of knowing almost nothing of the real world”.
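The mechanism Weizenbaum describes — scan for a keyword, echo part of the input back, fall through to a neutral response — can be sketched in a few lines. The rules below are illustrative inventions in the spirit of Eliza, not Weizenbaum’s original script.

```python
import random
import re

# Illustrative Eliza-style rules: each pattern maps to reply templates,
# and any captured text is echoed back into the reply.
RULES = [
    (re.compile(r"\bI am (\w+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
    (re.compile(r"\balways\b", re.IGNORECASE),
     ["Can you think of a specific example?"]),
]

# Neutral fallbacks for when no keyword is found, as Weizenbaum's
# program offered.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reply(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            captured = match.group(1) if match.groups() else ""
            return random.choice(templates).format(captured.lower())
    return random.choice(FALLBACKS)

print(reply("They're always bugging us about something."))
print(reply("I am unhappy"))
```

Even this trivial matcher produces the uncanny echo effect visible in the transcript below; the full original program added ranked keywords and sentence transformations, but the principle is the same.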

The professor had assumed that his bit of trickery would be a useful way of teaching his students about the structures of language and dialogue. What he discovered however, was that several of them became hooked on conversing with Eliza and telling her their troubles, opening up to the program as if it were a therapist. When Weizenbaum’s secretary asked him to leave the room one morning because she was in the middle of a session with Eliza, he became alarmed by the general willingness to be so seduced by a machine voice. He published an academic paper which included the first notable conversation between woman and machine. It went like this.

Woman: Men are all alike.
Eliza: In what way?
W: They’re always bugging us about something or other.
E: Can you think of a specific example?
W: Well, my boyfriend made me come here.
E: Your boyfriend made you come here?
W: He says I’m depressed much of the time.
E: I am sorry to hear that you are depressed.
W: It’s true. I am unhappy.
E: Do you think coming here will help you not to be unhappy?
W: I need some help, that much seems certain.
E: What would it mean to you if you got some help?
W: Perhaps I could learn to get along with my mother.
E: Tell me more about your family.
W: My mother takes care of me.
E: Who else in your family takes care of you?
W: My father.
E: Your father?
W: You are like my father in some ways.
E: What resemblance do you see?
W: You are not very aggressive, but I think you don’t want me to notice that.
E: What makes you think I am not very aggressive?
W: You don’t argue with me…

Weizenbaum later became a fierce critic of the ways that human intelligence was becoming in thrall to artificial intelligence, which he viewed as a corporatised, reactionary force. He was stunned by how quickly the users of Eliza gave human agency to what was a relatively simple piece of code. It indicated to him that the brain had evolved to view all speech as meaningful, even if it came from a patently fake source. He worried, extremely presciently, about the implications of this: “The whole issue of the credibility [to humans] of machine output demands investigation,” he concluded in his paper. “Important decisions increasingly tend to be made in response to computer output. Eliza shows, if nothing else, how easy it is to create and maintain the illusion of understanding.”

The many progeny of Eliza have evolved into chatbots – bits of software designed to mimic human conversation. They include recent entries into the annual Loebner prize, which offers chatbot contestants the chance to fool a panel of human judges with their intelligence. The comforting principle of telling our deepest fears to a machine is also exploited in various “therapy” platforms, marketed as a genuine alternative to conventional talking cures. Each of them trades on the idea of our fundamental desire to be listened to, the impulse which shapes social media.

Lisa-Maria Neudert is part of the computational propaganda project at Oxford University, which studies the ways in which political bots have been used to spread disinformation and distort online discourse. She argues that the seductive intimacy of chatbots will prove to be the next battleground in this ongoing war.

The Oxford research team began examining the huge growth of bot activity on social media after the shooting down of the MH17 passenger plane with a Russian missile in 2014. A dizzying number of competing conspiracy theories were “seeded” and encouraged to spread by a red army of automated agents, muddying the facts of the atrocity. The more Oxford researchers looked, the more they saw how similar patterns of online activity were amplifying specific hashtags or distorting news.

“The most striking thing to me to this day is that people are really, really bad at assessing the source of information” – Emilio Ferrara

In the beginning, Neudert suggests, the bots would rely on volume. “For example,” she says, “in the Arab spring bots were flooding hashtags that activists were using underground in order to make the conversation useless.” Or, like Eliza, bots would respond to a keyword to get a marginal topic trending, and, often, into the news. This was an effective but blunt instrument. “If I tweet something saying ‘I hate Trump’,” Neudert explains, an old-style bot “would send me a message about Trump because it is responding to that keyword. But if I say ‘I love Trump’, it would send me the same message.” These bots were not smart enough to recognise intent, but that is changing. “The commercial companies that are using artificial intelligence and natural language processing right now are already building such technologies. What we are doing as a project is to try to find out if the political actors are already using them also.”

Neudert is particularly interested in the new generation of branded chatbots that push content and initiate conversations on messaging platforms. Such chatbots – which openly declare themselves to be automated – represent a new way for businesses and news services to attract your attention, giving the impression of speaking just to you. She imagines the propaganda bots will use the same technology, but without declaring themselves. “They’ll present themselves as human users participating in online conversation in comment sections, group chats, and message boards.”

At present the feasibility of a truly conversational chatbot, one that can understand the context of any conversational gambit, pick up tonal ambiguities and retain a sense of how the discussion is evolving, is still a long way off. The new generation of chatbots might be good at answering direct questions or interrupting debates, but they are ill-equipped to sustain coherence over a range of subjects.

What they may soon be capable of is maintaining short bursts of plausible dialogue with a predetermined narrative. In a recent paper in the MIT Review, Neudert suggests that in the near future such “conversational bots might seek out susceptible users and approach them over private chat channels. They’ll eloquently navigate conversations and analyse a user’s data to deliver customised propaganda.” In this scenario, and judging by what is already happening, the bots will have the capacity to “point people towards extremist viewpoints, counter arguments in a conversational manner [and] attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive.” And of course all of this will be done by a voice that engages one on one, that talks just to us.

There are a number of fast-growing companies that are beginning to offer the kind of technology that Neudert describes, as a legitimate marketing tool. Several are official partners of Facebook in order to use its Messenger service. They include the market-leading Russian-based company Chatfuel, which has enabled thousands of organisations to build Messenger chatbots, including headline acts such as the NFL and the Labour party, and a number of smaller operations such as Berlin-based Spectrm, which has created Messenger chatbots for the likes of CNN and Red Bull.

I spoke to Max Koziolek, one of the founders of Spectrm, who is (predictably) evangelical about this new way of businesses speaking “like a friend” to their users and customers.

Using a combination of natural language data and human input, Spectrm has created bots that can already converse on a narrow range of subject matter. “On a specialist subject you can now get to 85% of queries pretty fast,” Koziolek says, “and then you will have the long tail, all those surprise questions which take ages to get right. But if you are making something to answer queries about Red Bull, for example, does it really need to know who is the chancellor of Germany?”

One of the most successful chatbots Spectrm has created was a public health initiative to advise on the morning-after contraceptive pill. “It is one of those times when someone might prefer to speak to a bot than a human because they are a bit embarrassed,” Koziolek says. “They talk to the bot about what they should do about having had unprotected sex and it understands naturally 75% of queries, even if they are writing in a language which is not very clear.” Within a year of listening and learning, he is confident that capacity will have increased to nearly 100%.

Increasingly we will become used to almost every entity in our lives “talking to us as if it is a friend”, he suggests, a relationship that will require certain rules of engagement. “If you send messages after 11pm that’s bad. And also if you send too many. I wouldn’t send more than two messages a day as a publisher, for example. It’s a very intimate space. A friend is sending me relevant information and at the right time.”

Far from being a new frontier in the propaganda wars, Koziolek believes – hugely optimistically – that such direct conversation could help to clear the internet of hate speech, giving users more control over who they hear from.

Does it matter whether they know that the chat is from a machine?

“We don’t see big differences,” he says. “Sometimes our bots have a clear personality and sometimes they don’t. Bots which have a personality will always say goodnight, for example. Or ‘How are you?’”

Do those types of bots produce longer conversations?

“Different kinds of conversations. Even though you know this thing is a robot, you behave differently toward it. I would say you cannot avoid that. Even though you know it is a machine, you immediately talk to it just like it is a human.”

This blurring of those lines is less welcome to observers like Ferrara, who has had a front-row seat in the changing dialogues between human and machine. I wonder whether, having observed at such close quarters for so long, he has anecdotally felt the mood of conversations changing, whether interactions have become angrier.

He says he has. “The thing was, I was becoming increasingly concerned about all sorts of phenomena,” he says. “I worked on a variety of problems, bots was one. I also looked at radicalisation, at how Twitter was being used to recruit Isis and at how conspiracies affected people’s decision-making when it comes to public health, when it comes to vaccines and smoking. I looked at how bots and other campaigns [that] had been used to try to manipulate the stock market. There are all sorts of things that have nefarious consequences.”

What aspect of this behaviour alarmed him the most?

“The most striking thing to me to this day is that people are really, really bad at assessing the source of information,” he says. One thing his team have shown is that the rate at which people retweet information from bots is identical to that from humans. “That is concerning for all sorts of reasons.”

“This is not a problem you can solve with technology alone. You need regulation” – Emilio Ferrara

Despite the revelation of such findings, he gets frustrated that people, for political purposes, still seek to dismiss the ways in which these phenomena have changed the nature of online discourse. As if the most targeted propaganda, employed on the most unregulated of mass media, had no effect on opinion or behaviour.

One of his later projects has been to try to show how quickly messages can spread from, and be adopted by, targeted user groups. Last year, Ferrara’s team received permission to introduce a series of “good” health messages to Twitter via bots posing as humans. They quickly built up thousands of followers, revealing the ways in which a flood of messages, from apparently like-minded agents, can very quickly and effectively change the tone and attitude of online conversation.

Unfortunately, such “good” bots are vastly outnumbered by those seeking to spread discord and disinformation. Where does he place his faith in a solution?

“This is not a problem you can solve with technology alone,” he says. “You need tech, you need some system of regulation that incentivises companies to do that. It requires a lot of money. And then you need public opinion to care enough to want to do something about it.”

I suggest to him that there seems to be a grain of hope in the fact that people are reaching out in greater numbers toward trusted, fact-checked news sources: subscriptions to the New York Times and the Washington Post are up (and the Guardian and Observer just notched up a million online supporters).

“It’s true,” he says. “But then I have a chart on my screen which I am looking at as I talk to you. It gives live information on the sources of things being retweeted by different groups. Way at the top is Breitbart: 31%. Second: Fox News. Then the Gateway Pundit [a far-right news site]. Looking at this,” he says, “it is like we haven’t yet learned anything from 2016.”


Source: https://www.theguardian.com/technology/2018/nov/18/how-can-you-tell-who-is-human-online-chatbots

How to Create a Facebook Messenger Chatbot

Does your business want to do more with Facebook Messenger?

Interested in using a chatbot for customer service and marketing?

Facebook Messenger chatbots can help your followers get answers to frequently asked questions and more.


In this article, you’ll discover how to set up a Facebook Messenger chatbot for your business.

Why a Chatbot for Facebook Messenger?

Facebook now lets you install Messenger chatbots on your business page. Chatbots allow you to have an automated conversation with people who click on your Facebook Messenger to start a dialogue.

A series of menus or keywords guides customers to the next steps, saving time and eliminating frivolous requests that don’t lead to sales. It’s an easy way to allow people to interact with your business to buy tickets for an event, get directions, see a menu, set up an appointment, or ask a common question.


An automated responder lets you reply immediately to users with instructions on what to do next or with information about your business.

The chatbot uses keywords that users type in the chat line and guesses what they may be looking for. For example, if you own a restaurant that has vegan options on the menu, you might program the word “vegan” into the bot. Then when users type in that word, the return message will include vegan options from the menu or point out the menu section that features these dishes.

You can create an artificial intelligence bot with triggers that you define. Your bot can respond with blocks of text that help whittle down the pathways users take to get answers.
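The keyword-trigger logic described above — match a word like “vegan” in the user’s message and return the associated reply, falling back to a default prompt otherwise — can be sketched as follows. The keywords and reply text here are illustrative, not part of Facebook’s platform.

```python
# Illustrative keyword-to-reply triggers, like the restaurant "vegan"
# example in the article. A real Messenger bot would return these from
# a webhook handler; only the matching logic is shown here.
TRIGGERS = {
    "vegan": "Our vegan options: lentil curry and a grilled vegetable "
             "wrap. See the 'Plant-Based' section of the menu.",
    "hours": "We're open 11am-10pm, seven days a week.",
}

DEFAULT = "Thanks for your message! Reply MENU, HOURS, or BOOKING."

def respond(message: str) -> str:
    """Return the reply for the first keyword found in the message,
    or a default prompt that guides the user to the next step."""
    text = message.lower()
    for keyword, answer in TRIGGERS.items():
        if keyword in text:
            return answer
    return DEFAULT

print(respond("Do you have vegan dishes?"))
```

Menu-style bots like this handle the common questions instantly and leave anything unmatched for a human to pick up, which is what makes them practical for small business pages.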

Source: https://www.socialmediaexaminer.com/how-to-create-facebook-messenger-chatbot/