We have a truth problem, and the internet is driving it. Anyone with a computer and an internet connection can spread whatever drivel he or she wishes. Social media amplifies it, and before you know it, the fake news has spread like a wildfire in the California hills.

It’s not a new problem. Consider the 19th-century adage that “a lie can travel halfway around the world before the truth has a chance to put its boots on.” Fittingly, the line itself has been falsely attributed to Mark Twain.

The notion of fake news (the claim that what we are being told isn’t true) is slippery to begin with. The truth is not always obvious, and people interested in thwarting their critics can use the term as a club. But I think most reasonable people can agree that we need to stop the spread of blatant misinformation: content designed to mislead, or to provoke an almost preprogrammed reaction depending on the reader’s politics.

We certainly saw our fair share of this in the 2016 election cycle and have since learned that Russia had teams of people and bots spreading misinformation on social media, crafted to provoke particular reactions and exploit people’s emotional triggers.

The internet is the greatest distribution platform in the history of the world, and enormous social media platforms such as Facebook and Twitter can broadcast information to millions of people. This allows us to share news, true or not, with lightning speed, and it has made widely spreading false or grossly inaccurate information a trivial act. Certain people have taken advantage.

Facebook in particular has come under fire for being a vehicle for spreading falsehoods. While we probably don’t want to leave it to Facebook to be the final judge on what’s true or not, the company has a responsibility to at least try to remove blatantly false information from its platform.

A July 2018 article in The New York Times (“What Stays on Facebook and What Goes? The Social Network Cannot Answer” by Farhad Manjoo) describes the struggles the company was facing two years after the U.S. presidential election: “But it’s been two years since an American presidential campaign in which the company was a primary vector for misinformation and state-sponsored political interference—and Facebook still seems paralyzed over how to respond.”

During a session at SXSW in March, “The Misinformation Age: Can AI Solve Fake News?,” a group of experts tried to tackle this problem. As session speaker Mark MacCarthy put it, this is more than a social media problem; social media is simply shining a light on much broader societal issues.

“The hope that we can find a purely technical solution to a complex social problem will be in vain. We need to do as much as we can in that area, and there are lots of tools that can help us do it, but we shouldn’t fool ourselves that this is a technical problem that [can be solved] by a technical solution. It’s a social problem, and all of the actors in this complex ecosystem are going to have to do what they can together to try to resolve this kind of issue,” MacCarthy explained.

Certainly, there have been attempts to filter falsehoods using technology, but it’s hard to do well: you don’t want to end up censoring legitimate stories in the name of keeping certain types of content out of the information stream. Furthermore, while intelligent algorithms can help distinguish real news from falsehoods, a human still has to define what’s real and what’s not, and that can create problems of its own. Finally, algorithms are rigid attempts to solve a problem, and deciding what counts as fake news is not a clear binary choice.
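That rigidity is easy to see even in a toy example. The sketch below is purely hypothetical (it reflects no platform’s actual system): it flags headlines containing a few hard-coded suspect phrases, and in doing so shows how a rule-based filter can both miss fabricated stories written in a neutral tone and wrongly flag legitimate coverage.

```python
# Hypothetical, illustrative only: a naive keyword-based "fake news" filter.
# The phrase list is invented for this sketch, not taken from any real system.
SUSPECT_PHRASES = {"miracle cure", "doctors hate", "shocking truth"}

def looks_fake(headline: str) -> bool:
    """Flag a headline if it contains any hard-coded suspect phrase."""
    text = headline.lower()
    return any(phrase in text for phrase in SUSPECT_PHRASES)

# A clickbait-style headline is caught...
print(looks_fake("The shocking truth about this miracle cure"))   # True
# ...but a fabricated story written in a sober tone slips through...
print(looks_fake("Senator announces surprise retirement"))        # False
# ...and a legitimate fact-check quoting the clickbait gets flagged.
print(looks_fake("Fact check: the 'doctors hate this trick' ad is a scam"))  # True
```

The failure modes here mirror the larger point: the rule has no notion of truth, only of surface patterns, so a human judgment about what is “real” is baked into the phrase list itself.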

While there may never be a clear-cut way to stop misleading news, the various parties have to keep working to solve the issue without giving in to censorship. It’s not an easy problem, but it’s one that requires our constant attention and effort, because what we have now clearly isn’t working well.