Menlo Park, CA— Flying in the face of Sarah Connor and Elon Musk’s worst fears, artificial intelligence is saving lives worldwide. Facebook recently made news by expanding its suicide prevention efforts to include AI-powered intervention. The company rolled out a proof of concept in the US back in March, and thanks to early successes, the program has already gone global.


For years, the company has funded a community operations team that scans user posts and alerts local law enforcement to intervene when individuals at high risk for self-harm are identified. With AI added to the approach, both text-based posts and live video can now be scanned for flags that point to suicidal intent. In the past month alone, the social network claims that more than 100 interventions have taken place worldwide thanks to the AI engine and operations team. The new technology is also shortening intervention times, flagging concerning content earlier and bringing in support twice as quickly as user-reported cases.
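To make that workflow concrete, here is a minimal, hypothetical sketch of the kind of pipeline described above: a text classifier assigns each post a risk score, and anything above a threshold is routed to a priority queue for human review. Everything in it (the toy training data, the threshold, the model choice, and the function names) is an illustrative assumption, not a description of Facebook's actual system.

```python
import heapq
from dataclasses import dataclass, field

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: a real system would use a large, expert-labeled
# corpus developed with suicide-prevention partners.
train_texts = ["placeholder post labeled as concerning",
               "placeholder post labeled as not concerning"]
train_labels = [1, 0]  # 1 = concerning, 0 = not concerning

# Simple bag-of-words classifier standing in for a far more capable model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

RISK_THRESHOLD = 0.5  # placeholder cut-off for escalating to human review


@dataclass(order=True)
class ReviewItem:
    priority: float                      # negative risk score, so highest risk pops first
    post_id: str = field(compare=False)
    text: str = field(compare=False)


review_queue = []  # a heap ordered by priority


def triage(post_id, text):
    """Score a post and, if it crosses the threshold, queue it for human review."""
    risk = model.predict_proba([text])[0][1]  # probability of the "concerning" class
    if risk >= RISK_THRESHOLD:
        heapq.heappush(review_queue, ReviewItem(-risk, post_id, text))


def next_case():
    """Hand the highest-risk queued post to the community operations team."""
    return heapq.heappop(review_queue) if review_queue else None
```

Even in this sketch, the AI only flags and prioritizes; a human reviewer still makes the final call, mirroring the division of labor between the AI engine and the community operations team described above.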


Despite this system’s high-tech nature, the human element of these prevention efforts is more crucial than ever. Over the past decade, Facebook has partnered with dozens of suicide-prevention organizations to truly understand the warning signs of suicide and develop better ways to intervene. The company is seeking to better protect its communities and to leverage its significant role in billions of people’s lives for the greater good.


Why This Matters—


In this instance, the header above doesn't seem to do this topic justice. Of course, using technology to save countless human lives is of immeasurable importance. But what can we learn from Facebook’s approach?


First, it’s worth noting that even a young tech giant realizes the importance of real human insight. Rather than relying on AI for everything, Facebook drew on partnerships with more than 80 suicide prevention organizations to hone its approach and establish the logic behind the digital tools.


Second, it’s clear that artificial intelligence is finally reaching a new level of sophistication. It’s beginning to grasp, or at least register, human intent. This AI engine is doing more than crunching ones and zeroes; it’s interpreting both text and video to understand what’s happening. What diagnostic doors will technology like this open in mental health and healthcare as a whole?


About the Author:

Drew Beck has spent his entire career in healthcare — from direct patient care as an EMT in college to countless roles in pharma sales and global marketing for leading life science companies including Eli Lilly & Co. and GlaxoSmithKline. He is currently a leader on the Syneos Health Insights & Innovation team, a group charged with leveraging deep expertise in virtual collaboration, behavioral science, trends-based innovation, custom research, and global marketing insights.