
Tuesday, November 29, 2016

FAKE NEWS Donald Trump and the media’s ‘epic fail’

There were so many uncertainties in the lead up to the 2016 US election result, but one thing is clear: journalism, just like the pollsters, failed.

Stuck in their offices in Washington and New York, American reporters failed to understand what drove Donald Trump’s popularity. They largely failed to ask the right questions, to provide the context and to properly scrutinise the extraordinary campaign run by the new president-elect.
Instead, buoyed by his clickable soundbites, the media gave Trump’s shocking misogynistic and racist imperialism a platform, then failed to take his growing support seriously. As Margaret Sullivan, media correspondent for the Washington Post wrote this morning: “Make no mistake. This is an epic fail.”

Death of print journalism

But the problem, worryingly, goes far deeper than newsrooms’ failure to fulfil their democratic duty to properly monitor Trump. That failure could at least, with a lot of self-examination, be fixed for next time around. The real issue is much harder to fix.
At the heart of divided societies in both the US and Britain is a newspaper industry in precipitous decline. The true “epic fail” is of the journalism industry as a whole: that the sector has been unable to find an alternative commercial model to the one that has sustained it for so long. As print readers migrate online, few newspapers have been able to persuade them to continue contributing to the cost of producing news; and neither have they been able to convince advertisers that their online versions are as worthy of investment as their print products.

Trump and Brexit as entertainment

One of the most egregious failings of the media in the US election was their chase of audience share at the expense of substantial reporting. Struggling with ever-declining advertising revenues, newspapers chased the stories that brought them the clicks.
As happened in the UK in the run up to Brexit, lots of American media outlets treated Trump as entertainment – his soundbites, as shocking as they were, provided fantastic content on social media. For months, many newspapers allowed Trump to get away with making blatantly untrue statements – and elevated those untruths to their front pages.

Mass job losses

Also worth mentioning here is that the failure to find a new commercial model has led to a shocking reduction in journalists. As reported by media commentator Roy Greenslade earlier this year, the number of people employed in the US newspaper industry has fallen by almost 60% since the dawn of the internet age – from nearly 458,000 in 1990 to about 183,000 in 2016. It is a similar picture in the UK.
This loss has been felt most seriously among regional papers which have either cut their newsrooms right back to the bone or shut up shop altogether. With the firing of so many regional reporters, a crucial understanding of both countries has disappeared.
Yes, the big media platforms flew thousands of journalists all over the US to interview supporters pitching up to rallies, but these reporters rarely got out of the campaign bubbles. They had not lived among the many communities that voted for Trump.

The real-life Biff Tannen

Locked up in their ivory towers, these journalists failed to believe that Trump would make it all the way to the White House because they did not know and understand the people that got him there. They reported on the phenomenon of the presidential race, Trump’s outlandish statements and Hillary’s failings, but they largely did not report on the frustrations that built such fervent support for the real-life Biff Tannen.
As for Trump, he holds journalists in contempt; regularly sneering at them throughout his campaign and pointedly describing them as “horrible people.” He decided from the outset to find an alternative way to his supporters – with great success, thanks to the very platforms that have been cutting into the revenues of newspapers. On Twitter, Trump has over 13 million followers; the New York Times’ total circulation is about 2 million.

Defending democracy

As we reflect on what went so horribly wrong, for the sake of democracy it is crucial that we also ask whether we can make journalism great again. We have to hope the newspaper industry finds a new way to sustain its existence. But we must also look to new models – of which not-for-profit news organisations like the Bureau of Investigative Journalism are one.
Philanthropically-funded journalism is not a silver bullet. It cannot plug all the gaps left by the storm that has battered traditional media for the past decade. But it is a force for good and it has to become a valued part of the news landscape.
The Bureau, with a small team of journalists but with the privilege of time to properly report our stories, has continually held power to account through our near seven years of existence. We have investigated the influence of money in politics, we have forced Washington to become more transparent about its covert drone war, we have reported on immigration in Europe, we have revealed many failings of our own government, we have shown how powerful PR companies have manipulated the news, and we have documented social failings in housing, in care and in police investigations.

Holding power to account

We have also striven to plug some of the blatant gaps in our industry. In 2017 we will launch a project that will deliver data journalism to our struggling local press. And with the National Council for the Training of Journalists we are also encouraging a more diverse industry by launching a Diversity Fellowship.
Every member of the Bureau understands the important part investigative journalism must play in the years to come. Now more than ever, we need strong, independent, fearless and deep reporting that holds power to account. We need journalists who have the time and the resources to properly investigate the stories that matter. On a day when many people around the world are feeling fearful of what lies ahead, we pledge to at least do that.
Donald Trump and the media’s ‘epic fail’ November 9, 2016 Rachel Oldroyd The Bureau of Investigative Journalism

Chris Arnade, an independent journalist who has spent the past four years traveling the US to document the opioid crisis, was one of the few who weren't surprised. After traveling tens of thousands of miles in working-class communities along the Rust Belt and elsewhere, he found one constant.
"Wherever I saw strong addiction and strong drug use," Arnade told Business Insider, he saw support for Trump.
Official voting data has suggested a similar correlation. Since the November 8 election, Shannon Monnat, a rural sociologist and demographer at Pennsylvania State University, has dug into the results. She found that counties that voted more heavily for Trump than expected were closely correlated with counties that experienced high rates of death caused by drugs, alcohol, and suicide.

The revenge of the 'Oxy electorate' helped fuel Trump's election upset Harrison Jacobs Nov. 23, 2016

The Spectacle of Pain and the Prostitution of Journalism AUGUST 28, 2016

The CEO of one of the most popular social media websites, Reddit, is facing heavy criticism from users after he admitted to secretly altering comments on a pro-Donald Trump message board.
Steve Huffman admitted to changing the content of the posts, which were directed at him, to instead show the names of the moderators of the subreddit r/The_Donald, a community within Reddit where users share images, news stories and other material related to the president-elect.
Huffman said the posts were abusive to him and he only changed them as a joke, not to censor them. The posts were later restored to their original form.
Reddit is a massive message board broken up into smaller communities called subreddits in which users can discuss and share information about a variety of topics. The pro-Trump subreddit has more than 300,000 active subscribers. The pro-Hillary Clinton subreddit, in contrast, has around 33,000 subscribers.
Huffman's clash with r/The_Donald subscribers came after the website earlier this week banned a community called r/Pizzagate, which was made up of people who believe Hillary Clinton and her close associates are running a child sex ring out of a pizza shop in Washington, D.C.
The Pizzagate conspiracy centers on emails that were stolen and made public by WikiLeaks in which Clinton Campaign Chairman John Podesta and others discuss pizza. According to the theory, talk of pizza is code for pedophilia. There has been no solid evidence backing up any of the accusations against Clinton or her associates.
Reddit said it banned the community because members repeatedly posted personally identifiable information of people they accused of being involved in the pedophile ring, which is against the site’s terms of service.
James Alefantis, the manager of pizzeria Comet Ping Pong, said he has been receiving hundreds of death threats since the conspiracy began gaining traction, and asked for help on social media to stop the harassment.
In response, Huffman, who goes by the handle u/Spez on Reddit, altered comments critical of himself as a way of coping with the stress of dealing with the fallout from Pizzagate.
“This was a case of me trolling the trolls for a bit,” he later added.
“The CEO of a major media company edited the comments of Trump supporters because he did not like what they had to say,” one Reddit user, who goes by the handle Velostodon, wrote in a prominent thread on r/The_Donald. “This calls into question the integrity of the website. Not in a ‘muh free speech’ sense, but in a legal sense.”
A petition against Huffman, which so far has garnered just over 1,200 signatures, claims the Reddit community “has lost their trust and faith” in him “due to his inability to conduct himself properly as a CEO.”

Reuters has developed a tool capable of automatically detecting and verifying breaking news on Twitter, in an attempt to report on events more rapidly and accurately. The tool, known as Reuters News Tracer, has been developed over the past two years, but Reuters only made it public this week, in interviews with the Columbia Journalism Review and Nieman Lab.
News Tracer analyzes tweets in real-time, filtering out spam and grouping similar tweets into “clusters” based on similar words. The tool then classifies the clusters into topics and generates short summaries about each one. Tweets with the words “bomb” or “explosion,” for example, might be grouped under a terrorist attack cluster.
The idea, according to Reg Chua, Reuters’ executive editor of data and innovation, is to help automate the news gathering process. “A large part of our DNA is built on the notion of being first, so we wanted to figure out how to build systems that would give us an edge on tracking this stuff at speed and at scale,” Chua tells Nieman Lab. “You can throw a million humans at this stuff, but it wouldn’t solve the problem.”
The algorithm also helps verify breaking news by assigning a credibility score to each cluster, based on a range of factors: the location and identity of the person tweeting, how the tweet spreads, and whether the information is being confirmed or debunked on Twitter. As Nieman Lab notes, “Reuters essentially taught its algorithm to think like a reporter.”
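The pipeline described above — grouping similar tweets into clusters and scoring each cluster's credibility — can be sketched in a toy form. Everything below (the `Tweet` class, the word-overlap threshold, the scoring weights and keyword lists) is a hypothetical illustration of the general approach, not Reuters' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    author_verified: bool = False  # stand-in for "identity of the person tweeting"

def tokens(text):
    """Lowercased word set, stripped of trailing punctuation; short words dropped."""
    return {w.strip(".,!?") for w in text.lower().split() if len(w.strip(".,!?")) > 2}

def cluster_tweets(tweets, threshold=0.3):
    """Greedy clustering: a tweet joins the first cluster whose first member
    shares enough vocabulary (Jaccard similarity), else starts a new cluster."""
    clusters = []
    for t in tweets:
        for c in clusters:
            rep, cur = tokens(c[0].text), tokens(t.text)
            if len(rep & cur) / len(rep | cur) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

def credibility(cluster, confirm_words=("confirmed",), debunk_words=("hoax", "fake")):
    """Toy credibility score in [0, 1]: verified authors and confirming
    language raise it, debunking language lowers it."""
    score = 0.5
    for t in cluster:
        low = t.text.lower()
        if t.author_verified:
            score += 0.1
        if any(w in low for w in confirm_words):
            score += 0.1
        if any(w in low for w in debunk_words):
            score -= 0.2
    return max(0.0, min(1.0, score))
```

In this sketch, two tweets about the same explosion would fall into one cluster whose score rises as verified accounts repeat and confirm the report, while an unrelated spam tweet forms its own low-signal cluster — a crude analogue of the filtering, clustering and verification stages Reuters describes.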
Other newsrooms have explored different forms of automation. Last year, the Associated Press began using algorithms to write full stories, with technology from Automated Insights. The French newspaper Le Monde is also working on a browser extension that will automatically flag fake or misleading news — something that Facebook has been notoriously reluctant to address.

Donald Trump could be banned from Twitter, company says, but President-elect's Facebook is probably safe Andrew Griffin 1 December 2016


WASHINGTON, DC – Russia's leak of emails it hacked from the Democratic National Committee and Clinton campaign chairman John Podesta during the US presidential campaign came as a shock to FireEye CEO Kevin Mandia.
It takes a lot to surprise the seasoned Mandia, whose incident response firm Mandiant was acquired by FireEye nearly three years ago and who has been investigating and studying Russian nation-state breaches since the 1990s. In an interview at FireEye's Cyber Defense Summit here today, Mandia said the recent Russian state-sponsored attacks and leaking of information were a gamechanger in cyber espionage tradecraft.

"The doxing shocked me. I'm fascinated by it," he said. It's part of a major shift in Russia's nation-state hacking machine, according to Mandia.

Of the around two dozen breaches FireEye currently is investigating, Russian state hackers are behind many of them; in the "double digits," Mandia said. Even more chilling than the relative volume of attacks, however, is how dramatically Russia has changed its cyber espionage modus operandi over the past two years.
Mandia said the big shift began in the fall of 2014. "Suddenly, they [Russian state actors] didn't go away when we responded" to their attacks, he said. Historically, the attackers would disappear as soon as they were found: "The Russian rules of engagement were when we started a new investigation, they evaporated [and] just went away."
The Russian cyber espionage groups also began hacking universities, but not necessarily for the usual government research secrets they traditionally had been hunting. "They were [now] stealing [from] professors who had published … anti-Russian, anti-Putin sentiments. We'd seen the Chinese do that, but had never seen Russia doing that," Mandia said.
"The scale and scope were starting to change. Then I thought maybe their anti-forensics had gotten sloppier because now we could observe that they were not going away," he said. Rather than their usual counter-forensics cleanup, the Russians now merely left behind their digital footprints from their cyber espionage campaigns.
"They used to have a working directory and would remove it when they were done. But they just stopped doing that," Mandia said. That's either because they're no longer as disciplined in their campaigns, he said, or "they've just chosen to be more noticeable."
There are no easy solutions for response to this new MO of Russia's hacking machine, either, he said. "They're damn good at hacking," Mandia said.
The Obama administration's Executive Order signed in 2015 gives the US the power to freeze assets of attackers who disrupt US critical infrastructure, or steal trade secrets from US businesses or profit from theft of personal information.
It's unclear for now whether President-Elect Donald Trump will preserve Obama's cybersecurity EOs and policies. Mandia said he doesn't expect them to be scrapped. "No one wants to be hacked. Whether you're a Democrat or a Republican, you don't want people stealing your email. I can't imagine this is an issue that’s divided" politically, he said.
Trump's cybersecurity platform published during the campaign calls for developing "offensive" capabilities in cybersecurity. "Develop the offensive cyber capabilities we need to deter attacks by both state and non-state actors and, if necessary, to respond appropriately," according to his statement.
Some security experts say it's unclear if that leaves the door open for private organizations to hack back. Mandia opposes businesses hacking back at their online adversaries: "It's very dangerous. You will not have the intended consequences if you have anyone in the private industry do anything on offense, unless they were deputized by the government," he said.
Mandia is a fan of the oft-criticized pact by President Obama and Chinese President Xi Jinping not to conduct cyberspying attacks for economic gain. The agreement specifically applies to the theft of trade secrets and stops short of banning traditional espionage via hacking. Cyber espionage has been a notoriously prolific strategy for China, with the US among its top targets, although Chinese officials deny such hacking activity.
While some security experts say the US-China agreement has not slowed China's hacking for IP theft, Mandia said his firm saw a dramatic decrease in the wake of the pact. FireEye saw the number of such attacks drop from 80 to four within one month after the pact. "Whoever runs China's cyber espionage: they have disciplined troops. They stick to the rules of engagement," Mandia said.
He said he can't see how the Trump administration would scrap the pact with China. "It has had impact in such an incisive way, I don't know why they would change it."
The New 'Wave'
Mandia said cyber espionage and cyberattacks have now entered a new, less predictable phase. "More emboldened nations are doing more emboldened things" hacking-wise, such as Iran, he said.
"Every day, Iran is hacking and there are no repercussions. They are getting operational experience and getting better at it," he said.


Grady Summers, CTO of FireEye, said his firm is seeing more coordination and destruction in all types of cyberattacks. Ransomware attacks are moving from targeting a machine or two to thousands of machines. "They're establishing a foothold, going lateral and going destructive and encrypting en masse," Summers said. That allows attackers to encrypt thousands of machines, and do more damage and gain more leverage.

Mandia: Russian State Hackers Changed The Game Kelly Jackson Higgins 12/1/2016

Russia’s government didn’t just hack and leak documents from U.S. political groups during the presidential campaign: It used social media as a weapon to influence perceptions about the election, according to cybersecurity company FireEye Inc.
Material stolen by Russia’s intelligence services was feverishly promoted by online personas and numerous fake accounts through links to leaked material and misleading narratives, according to an analysis of thousands of postings, links and documents by FireEye, which tracks Russian and Chinese hackers breaking into U.S. systems. The operation was a new and belligerent escalation by Moscow in the cyber domain, company Chairman David DeWalt said.
“The dawning of Russia as a cyber power is at a whole other level than it ever was before,” DeWalt said in an interview in Washington. “We’ve seen what I believe is the most historical event maybe in American democracy history in terms of the Russian campaign.”

Russia Weaponized Social Media in U.S. Election, FireEye Says Chris Strohm December 1, 2016

Hossein Derakhshan was born in Iran in 1975. In 2001, he went to study in Toronto, Canada. There, he started blogging under the pen name Hoder and translated a guide on how to blog in Farsi, initiating a Persian blogging surge. In 2004, Iran was among the top five countries with the most bloggers worldwide.
These bloggers were living dangerously, however. Derakhshan was imprisoned by the Iranian government in 2008 and kept in prison for six years.
Since 2015, Derakhshan, now living in Teheran, has been writing essays and giving talks about the impact of social media and the decline of online political exchanges.
DW: Mr. Derakhshan, you spent six years offline while you were imprisoned. The internet changed tremendously during that period. What did you notice when you were released?
Hossein Derakhshan: I observed a shift from an internet which was very decentralized, diverse, link-based, connected, curious, outward-looking and text-centered to a space which is image-centered, which is more about entertainment than discussions or debates or thinking. It is quite centralized and it's dominated by social networks. It's much less diverse. It is more about entertainment now. It has less serious content - including politics.
What happened to all the bloggers you were working with?
Many of the former bloggers and activists are co-opted in that new space; they actually seem to have forgotten that politics played a big role in their life at some point. Now they are happy with the little things they see on social networks. But I think there are less and less people who are interested in serious news and serious discussions.
[Eds.: Many Iranian activists and bloggers were imprisoned - like Derakhshan - and have not been released.]
When the internet emerged in the 90s, it created a new window of opportunity for serious discussion. After 20 years, this window is now closing, and the internet is becoming like television. It is actually forming a new kind of television which is internet-based and personalized and it is available on all kinds of devices.
I am just very sad that everyone is into videos now rather than reading, because video is a limited medium for conveying complex messages. It simplifies everything and it is the most convenient environment for the rise of demagogues and populists.
Do you believe social networks aren't useful at all? After all, noteworthy political movements rallied through social networks over the last years…
Social networks may be useful to have emotional reactions to certain news or certain events. Maybe even to bring people to the streets in some cases. But as an Egyptian activist has brilliantly put it, this has not helped the Egyptian revolution. Social networks were successful in bringing out people and giving them a negative voice about the status quo, but that did not lead to anything positive. They started to fight each other.
Once the Mubarak regime was gone, the same social networks became the best device to foment internal rivalries and differences. So I think that as much as they can help people to organize against something, they cannot unite people in doing something positive, because they can not replace leadership.
How did you see this development in Iran?
Like everywhere else in the world, there are less people interested in reading more than a few paragraphs and you can see that in Iran as well, through the very very low circulation of books and newspapers and the sales for anything that is printed. It is going down very rapidly.
At the same time, there is a huge rise in the number of people using some sort of social network, for instance Instagram is extremely popular in Iran. There is also this messaging application called Telegram which is extremely popular in Iran. I think more than 25 million people in Iran are using it now. This is completely different from how things were 10 years ago when there was a lot more text and reading and serious discussion going on.
So much of the stuff on the web has been abandoned, many people used to have their own domain names, but they let them expire. It has become a graveyard. In the best cases, you have some tombs saying there used to be blogs here.
Don't you use any social networks at all?
I use Twitter because relatively speaking it is much more open. It supports hyperlinks and it can be very informative and useful to link to other places, to introduce and share ideas and articles. But generally speaking, there is a trend towards videos. Even Twitter as a text-based platform is going towards video and they are encouraging more live videos.
What should happen to save the internet you knew and appreciated?
We have to do certain things to disrupt this algorithm-based kind of gate keeping, which is creating comfort zones for everyone. I don't necessarily know what the solutions are because I think it is a very general problem and it can probably only be solved by big social structures like states. On an individual level, we can't do much but be unhappy about how things are changing. Still, we can push governments to intervene if they value representative democracy and informed citizenship and informed political participation.

Iran's 'blogfather': Why social media is killing the free web 28.09.2016 Julia Hitz (interview)


What is the most important source of news and therefore the most powerful media organisation in the world today?
Well, there is a good argument that the answer is not a newspaper or broadcasting organisation but a social network, Facebook.
After all, it has 1.6 billion users and is becoming an ever more important place for them to share news. More than 40% of the population of the United States say they get news on Facebook - and for many it is where they go to share and comment on stories.
Stories like this - "Pope Francis Shocks World, Endorses Donald Trump for President", "Barack Obama Admits He Was Born in Kenya", or "Trump said in 1998 'If I were to run, I'd run as a Republican. They're the dumbest group of voters in the country'."
What all of those stories had in common was that they were completely made up. That did not stop them being shared by millions of Facebook users.
Whoever created this torrent of untruth probably had two motives - to cause mischief and to make a large amount of cash through the adverts that are the lifeblood of Facebook and the businesses which live off what it describes as its ecosystem.
But they also succeeded in unleashing a debate about fake news and whether the internet, far from spreading enlightenment as its creators once hoped, was leaving us worse informed.
At first Facebook's founder Mark Zuckerberg appeared unwilling to engage with that debate, dismissing the idea that fake news could have swung the presidential election as "crazy".
But it soon became clear that this position was untenable and that even inside Facebook there was plenty of agonising going on over its role in fakery.
Some days later Mr Zuckerberg took to - you've guessed it - Facebook to share some good news. "We've been working on this problem a long time and take this responsibility seriously," he wrote.

Facebook, fake news and the meaning of truth 27 November 2016

Facebook Live is a fake: 6 million views for a light bulb VALENTINA RUGGIU November 4, 2016


Fake news, disinformation, the digital media war the press finds itself ill-prepared to fight D.B. Hebbard November 28, 2016

President Obama is really, really worried about the spread of fake news in places like Facebook.
In a new profile of Obama in The New Yorker, David Remnick describes a scene where the president “talked almost obsessively” about a BuzzFeed article that explained how a small Macedonian town was pumping out fake news on Facebook for profit.
Obama told Remnick that the new media ecosystem “means everything is true and nothing is true … An explanation of climate change from a Nobel Prize-winning physicist looks exactly the same on your Facebook page as the denial of climate change by somebody on the Koch brothers’ payroll. And the capacity to disseminate misinformation, wild conspiracy theories, to paint the opposition in wildly negative light without any rebuttal—that has accelerated in ways that much more sharply polarize the electorate and make it very difficult to have a common conversation.”
Obama characterized this as different from how we engaged with democracy and politics in the past.
“Ideally, in a democracy, everybody would agree that climate change is the consequence of man-made behavior, because that’s what ninety-nine per cent of scientists tell us,” he told The New Yorker. “And then we would have a debate about how to fix it. That’s how, in the seventies, eighties, and nineties, you had Republicans supporting the Clean Air Act and you had a market-based fix for acid rain rather than a command-and-control approach. So you’d argue about means, but there was a baseline of facts that we could all work off of. And now we just don’t have that.”
We can’t agree on the basic facts.
This isn’t the first time the president has riffed on this idea. In a speech on Thursday, he talked about how damaging the spread of deliberate misinformation can be on Facebook.
“If we are not serious about facts and what’s true and what’s not, and particularly in an age of social media, where so many people are getting their information in sound bites and snippets off their phones, if we can’t discriminate between serious arguments and propaganda, then we have problems,” Obama said.
"Personally, I think the idea that fake news on Facebook — it's a very small amount of the content — influenced the election in any way is a pretty crazy idea," Facebook CEO Mark Zuckerberg said at a recent press conference.
But Obama’s worry about the effect fake news can have has some data backing it up. A recent study by BuzzFeed showed that in the lead-up to the election, the top fake-news stories on Facebook outperformed legitimate news stories shared by some of the most popular media companies.

OBAMA: The new media landscape 'means everything is true and nothing is true' Nathan McAlone Nov. 18, 2016



Pope Francis has condemned disinformation as "probably the greatest damage that the media can do".
His comments come amid warnings of a "fake news" crisis in online media following last month's US elections.
Social media platforms and search engines were widely criticised in the poll's aftermath for failing to prevent the spread of fabricated stories.
The Pope himself fell victim to a fake news story, which falsely reported his endorsement of Donald Trump.
In a frank interview with Belgian Catholic weekly Tertio, the pontiff said the media's obsession with scandal was akin to "coprophilia", an abnormal interest in excrement.
This preyed on people's "tendency towards the sickness of coprophagia", the eating of excrement, he added, extending the analogy to apply it to the public's consumption of such coverage.
People could not be expected to make "a serious judgment" about any situation if the media provided "only a part of the truth, and not the rest", he said.
The issue of balance has been at the heart of the debate about changes in the way people access information in today's world.
Beyond the rapid and far-reaching spread of fake news, analysts have also criticised social media platforms like Facebook for enabling an "echo chamber" to be created, in which people are far less likely to be exposed to both sides of an argument.

So much for taking America’s “fake news” problem seriously.

Ever since Donald Trump was elected president, there’s been an abundance of hand-wringing over the “fake news” that supposedly is rampant on social media.

Yet missing has been any kind of serious searching among the mainstream media about whether it could learn any lessons from this election—and whether reporters and editors are holding themselves accountable to their supposed values of objectivity and rigorous reporting. 

And a new “study” presents Exhibit A as to why the mainstream media should reconsider its own practices.

The Daily Signal is the multimedia news organization of The Heritage Foundation.

The Southern Poverty Law Center—an organization that calls the Family Research Council an “extremist group” because of its socially conservative views on LGBT matters—reported Nov. 29 that “in the 10 days following the election, there were almost 900 reports of harassment and intimidation from across the nation.”

“Many harassers invoked Trump’s name during assaults,” the report continued, “making it clear that the outbreak of hate stemmed in large part from his electoral success.”

Cue the widespread coverage:

“Nationwide, there have been more than 867 incidents of ‘hateful harassment’ in the first days following the election, the Southern Poverty Law Center says,” reported CNN.

“In the 10 days following the November election, SPLC said it collected 867 hate-related incidents on its website and through the media from almost every state,” wrote the Associated Press.

NBC News headlined its piece on the study “Southern Poverty Law Center Reports ‘Outbreak of Hate’ After Election.”

The Washington Post’s headline blared, “Civil rights group documents nearly 900 hate incidents after presidential election.”

There’s just one issue: The Southern Poverty Law Center didn’t confirm these “nearly 900” incidents actually happened.

“The 867 hate incidents described here come from two sources—submissions to the #ReportHate page on the SPLC website and media accounts,” the SPLC report states. “We have excluded incidents that authorities have determined to be hoaxes; however, it was not possible to confirm the veracity of all reports.”

In other words, who has any idea if these incidents actually happened or not?

Yet, the fact that there was no verification of these incidents didn’t stop the media from covering this “study.”

And let’s not pretend there’s little to no chance that a Trump opponent would make up a hate crime story.

Just consider this reported hate incident in November: “The men used a racial slur, made a reference to lynching, and warned him this is Donald ‘Trump country now,’ according to the report he gave police,” reported the Boston Herald.

Yet the man wasn’t telling the truth. The Herald reported that Kevin Molis, police chief of Malden, Massachusetts, said “it has been determined that the story was completely fabricated.”

“‘The alleged victim admitted that he had made up the entire story,’ saying he wanted to ‘raise awareness about things that are going on around the country,’” the newspaper added, continuing to quote Molis.

So maybe 867 hate crimes happened in the first 10 days after the election. Or maybe 5,000 did. Or maybe five did.

Maybe 10,000 did—and most of them were directed at Trump supporters, not opponents. (Let’s not forget the man beaten in Chicago while someone said, “You voted Trump.”) Who knows?

The SPLC should realize that playing around with facts is no laughing matter.

In 2012, a gunman entered the headquarters of the Family Research Council “with the intent to kill as many employees as possible, he told officers after the incident,” reported Politico. The 29-year-old man, identified as Floyd Lee Corkins II, did shoot and wound a security guard. His motivation?

“Family Research Council (FRC) officials released video of federal investigators questioning convicted domestic terrorist Floyd Lee Corkins II, who explained that he attacked the group’s headquarters because the Southern Poverty Law Center (SPLC) identified them as a ‘hate group’ due to their traditional marriage views,” the Washington Examiner reported.

Ultimately, regardless of what the Southern Poverty Law Center does, the media shouldn’t be giving a platform to faux studies like this.

But maybe it’s not surprising, given attitudes like President Barack Obama’s. In an interview with Rolling Stone magazine published Tuesday, the president griped about the reach of Fox News Channel—and then complimented Rolling Stone: “Good journalism continues to this day. There’s great work done in Rolling Stone.”

Yes, that Rolling Stone—the news outlet that published the completely discredited University of Virginia gang rape story. In early November, “jurors awarded a University of Virginia administrator $3 million … for her portrayal in a now-discredited Rolling Stone magazine article about the school’s handling of a brutal gang rape [at] a fraternity house,” the Associated Press reported.

It’s tough to hold the media accountable when even the president seems willing to brush aside true instances of fake news.


Fake news is big news these days. There’s an emotional debate over the explosion of information on the internet -- and on social media sites in particular -- that’s provably false or intentionally misleading. As content of dubious authenticity swirls on platforms like Facebook, Twitter and Google, many in the media worry consumers may lose trust in stories that are actually true. Maybe most uncomfortable are the social media companies, Facebook especially. They make millions in ad revenue by distributing information, but the last thing they want is the responsibility that comes with being a publisher, like ensuring stories are accurate.

1. Why is fake news in the news?

Some Hillary Clinton supporters say a flood of fake items may have swayed the results of the election in favor of President-Elect Donald Trump. They’re not alone. The "impresario of a Facebook fake-news empire," Paul Horner, told the Washington Post, "I think Trump is in the White House because of me." BuzzFeed found that of the 20 fake election stories that were most shared, commented-on and reacted-to on Facebook, 17 were pro-Trump or anti-Clinton.

2. What were some of the biggest fake election stories?

That Pope Francis endorsed Trump. That an FBI agent suspected of leaking Clinton’s e-mails was found dead. That a protester admitted being paid $3,500 to disrupt a Trump rally. That Trump once called Republicans "the dumbest group of voters in the country."

3. Did it really influence the election’s outcome?

It’s hard to say. What we do know is that this is the first election in which the majority of U.S. adults got their news from social media. And if they were getting it on Facebook, that news came to them in a very personalized, filtered fashion -- the algorithm serving up what they would want to see. When people are fed the news they want to be fed, they may have blind spots, and not come into contact with other information that challenges their assumptions.

4. Who’s producing fake news?

Fake news can come from many sources. Some purveyors are in it for the advertising-sales money, like teenagers in Macedonia pumping out pro-Trump articles or a pair of 20-something friends in California who call themselves "the new yellow journalists." The difference between the age of Hearst and the age of Facebook is that on social media, shares and likes by outraged friends and family take the place of screaming newsboys on the street corner. Other sources of misleading information are trying to push an agenda. And sometimes the machine is fed by plain old mistakes -- since social media makes everyone a potential reporter, an innocent observation that happens to be wrong can take off if it’s something enough people would like to be true.

5. How does it disseminate so quickly?

What do you click on? For most of us, it’s stuff that sparks some emotion: surprise, sadness, anger or confusion. That’s what people share on social media, too. Facebook’s algorithm amplifies this content by promoting posts that trigger that kind of attention. Next, advertisers pay for slots next to these stories because they want to be where the eyeballs are. Finally, the flat landscape of social media wipes out many of the filters we used to use to judge content. At a newsstand, there’s a clear difference between the Washington Post and the National Enquirer. But their Facebook posts can look similar in your timeline.
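The amplification loop described above can be sketched as a toy ranking function. Facebook's real News Feed model is proprietary; the weights below are invented purely to illustrate how engagement-weighted ranking pushes emotionally charged posts to the top.

```python
# Toy illustration of engagement-weighted feed ranking. The weights are
# invented; real feed-ranking systems are proprietary and far more complex.

def engagement_score(post):
    """Score a post by the reactions that tend to drive amplification."""
    return (post["shares"] * 3.0      # shares spread content furthest
            + post["comments"] * 2.0  # arguments keep a post "active"
            + post["likes"] * 1.0)

def rank_feed(posts):
    """Order posts so the most emotionally engaging float to the top."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "City council minutes", "shares": 2, "comments": 1, "likes": 40},
    {"title": "Outrageous (false) claim", "shares": 90, "comments": 200, "likes": 50},
]
print([p["title"] for p in rank_feed(posts)])
# → ['Outrageous (false) claim', 'City council minutes']
```

In this sketch, the fabricated-but-outrageous item outranks the mundane one because shares and comments dominate the score, which is exactly the dynamic advertisers then pay to sit beside.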

6. What can be done?

Facebook Chief Executive Officer Mark Zuckerberg initially played down the idea of fake news on the social network being a problem. But he announced on Nov. 18 that Facebook will get input from journalists and fact-checkers on how to improve its algorithm. There are other ideas, too, like finding a way to warn users if a story might be false, or disrupting the economics of fake news, so purveyors have less financial incentive to spread it. 

7. What about Twitter?

It’s not getting hammered as hard by the public on this issue. It’s not that fake news doesn’t get shared there -- on the contrary, as President-elect Trump demonstrated by tweeting about "the millions of people who voted illegally." But Twitter shows users everything they sign up to follow in reverse-chronological order, while Facebook decides what goes into people’s news feeds based on its algorithm.

8. Can an algorithm really tell what’s true and what’s false?

The internet presents a spectrum of information, with hyper-partisan opinion stories masquerading as news, plus lots of satire and funny memes. What’s an algorithm to do? Facebook’s engineers have trained their algorithm to know that if something is really popular, it must be relevant. It might be easy for the company to suppress an outright hoax -- by, say, searching for the topic on a major news site, or detecting a Snopes article debunking it -- but it’s harder to automate the decision on what to do about propaganda-like content meant to rile people up. 
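The hoax-lookup heuristic mentioned above can be sketched as follows; a hard-coded set of debunked topics stands in for the real lookup against Snopes or a major news site, which would be a network call in practice.

```python
# Sketch of the hoax-check heuristic: before promoting a story, look its
# topic up against known debunks. The "debunk index" here is a local set
# standing in for a real query against Snopes or a major news site.

DEBUNKED_TOPICS = {
    "pope endorses trump",
    "fbi agent found dead",
}

def looks_like_hoax(headline):
    """Flag a headline whose topic matches a known debunk."""
    text = headline.lower()
    return any(topic in text for topic in DEBUNKED_TOPICS)

print(looks_like_hoax("BREAKING: Pope endorses Trump for president"))  # True
print(looks_like_hoax("Local bakery wins award"))                      # False
```

A check like this can only catch outright hoaxes that have already been debunked; as the paragraph above notes, propaganda-like content meant to rile people up offers no such handle.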

The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.
But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.
The project has snowballed since Pariser started it on 17 November, with contributors putting forward myriad solutions, he said. “It’s a really wonderful thing to watch as it grows,” Pariser said. “We were talking about how design shapes how people interact. Kind of inadvertently this turned into this place where you had thousands of people collaborating together in this beautiful way.”
In Silicon Valley, meanwhile, some programmers have been batting solutions back and forth on Hacker News, a discussion board about computing run by the startup incubator Y Combinator. Some ideas are more realistic than others.
“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”
Most of the solutions fall into three general categories: the hiring of human editors, crowdsourcing, and technological or algorithmic solutions.
Human editing relies on a trained professional to assess a news article before it enters the news stream. Its proponents say that human judgment is more reliable than algorithms, which can be gamed by trolls and are arguably less nuanced when faced with complex editorial decisions; Facebook’s algorithmic system was famously behind the Vietnam photo debacle.
Yet hiring people – especially the number needed to deal with Facebook’s volume of content – is expensive, and it may be hard for them to act quickly. The social network ecosystem is enormous, and Wardle says that any human solution would be next to impossible to scale. Humans are also prone to subjectivity, and even an overarching “readers’ editor”, if Facebook appointed one, would be a disproportionately powerful position and open to abuse.
Crowdsourced vetting would open up the assessment process to the body politic, having people apply for a sort of “verified news checker” status and then allowing them to rank news as they see it. This isn’t dissimilar to the way Wikipedia works, and could be more democratic than a small team of paid staff. It would be less likely to be accused of bias or censorship because anyone could theoretically join, but could also be easier to game by people promoting fake or biased news, or using automated systems to promote clickbait for advertising revenue.
Algorithmic or machine learning vetting is the third approach, and the one currently favored by Facebook, which fired its human trending news team and replaced it with an algorithm earlier in 2016. But the current systems are failing to identify and downgrade hoax news or distinguish satire from real stories; Facebook’s algorithm started spitting out fake news almost immediately after its inception.
Technology companies like to claim that algorithms are free of personal bias, yet they inevitably reflect the subjective decisions of those who designed them, and journalistic integrity is not a priority for engineers.
Algorithms also happen to be cheaper and easier to manage than human beings, but an algorithmic solution, Wardle said, must be transparent. “We have to say: here’s the way the machine can make this easier for you.”
Facebook has been slow to admit it has a problem with misinformation on its news feed, which is seen by 1.18 billion people every day. It has had several false starts on systems, both automated and using human editors, that inform how news appears on its feed. Pariser’s project details a few ways to start:
Similar to Twitter’s “blue tick” system, verification would mean that a news organization would have to apply to be verified and be proved to be a credible news source so that stories would be published with a “verified” flag. Verification could also mean higher priority in newsfeed algorithms, while repeatedly posting fake news would mean losing verified status.
“Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’,” Amanda Harris, a contributor to Pariser’s project, wrote. “By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’.”
Articles on Facebook and Twitter could be subject to a time-delay once they reach a certain threshold of shares, while whitelisted sites like the New York Times would be exempt from this.
Fake news could automatically be tagged with a link to an article debunking it on Snopes, though inevitably that will leave Facebook open to criticism if the debunking site is attacked as having a political bias.
An algorithm could analyze the content and headline of news to flag signs that it contains fake news. The content of the article could be checked for legitimate sourcing – hyperlinks to the Associated Press or other whitelisted media organizations.
This system would algorithmically promote non-partisan news, by checking stories against a heat-map of political opinion or sharing nodes, and then promoting those stories that are shared more widely than by just one part of the political spectrum. It could be augmented with a keyword search against a database of language most likely to be used by people on the left or the right.
This would promote or hide articles based on the reputation of the sharer. Each person on a social network would have a score (public or private) based on feedback from the news they share.
Fake news would come up in the news feed as red, real news as green, satire as orange.
If publishing fake news were punishable with bans on Facebook, it would disincentivise organizations from doing so.
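Several of the proposals above (verified publishers, whitelisted sourcing, sharer reputation, and the red/orange/green labels) could feed a single combined score. The sketch below is illustrative only: the weights, thresholds and whitelist are invented, not drawn from Pariser's document.

```python
# Toy credibility score combining several of the proposals above: a
# "verified publisher" flag, outbound links to whitelisted wire services,
# and the reputation of the account sharing the story. Every weight,
# threshold and whitelist entry is invented for illustration.

WHITELIST = {"apnews.com", "reuters.com", "nytimes.com"}

def credibility(article, sharer_reputation):
    """Combine signals into a score in roughly [0, 1]."""
    score = 0.0
    if article["publisher_verified"]:
        score += 0.5
    if any(domain in WHITELIST for domain in article["linked_domains"]):
        score += 0.3
    score += 0.2 * sharer_reputation  # sharer reputation in [0, 1]
    return score

def label(score):
    """Traffic-light label, as in the red/orange/green proposal."""
    if score >= 0.7:
        return "green"   # likely real news
    if score >= 0.4:
        return "orange"  # uncertain, possibly satire
    return "red"         # likely fake

article = {"publisher_verified": False, "linked_domains": ["blog.example"]}
print(label(credibility(article, sharer_reputation=0.2)))  # red
```

The hard part, as Wardle notes, is not the arithmetic but who sets the weights and controls the whitelist.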
News is shared across hundreds of other sites and services, from SMS and messaging apps such as WhatsApp and Snapchat, to distribution through Google’s search engine and aggregation sites like Flipboard. How can fake news, inaccurate stories and unacknowledged satire be identified in so many different contexts?
A central fact-checking service could publish an API, a constantly updated feed of information, which any browser could query news articles against. A combination of human editing and algorithms would return information about the news story and its URL, including whether it is likely to be fake (if it came from a known click-farm site) or genuine. Stories would be “fingerprinted” in the same way as advertising software.
People could choose their fact-checking system – Snopes or Politifact or similar – and then install it as either a browser plug-in or a Facebook or Twitter plug-in that would colour-code news sources on the fly as either fake, real or various gradations in between.
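That plug-in idea can be sketched as a simple lookup: query a fact-checking feed by URL and map the verdict to a colour. The feed here is a local dict; a real deployment would call an HTTP API maintained by a fact-checking organization, and no such API is assumed to exist in this form.

```python
# Sketch of the central fact-check feed: a plug-in queries a service by
# URL and colour-codes the verdict. The feed here is a local dict; a real
# one would be a constantly updated API from a fact-checking organization.

FACT_CHECK_FEED = {
    "fakenews.example/pope-endorses-trump": "fake",
    "apnews.com/election-results": "genuine",
}

COLORS = {"fake": "red", "genuine": "green"}

def verdict_color(url):
    """Return the colour for a URL, grey when the feed has no verdict."""
    verdict = FACT_CHECK_FEED.get(url, "unknown")
    return COLORS.get(verdict, "grey")

print(verdict_color("fakenews.example/pope-endorses-trump"))  # red
print(verdict_color("unknown.example/story"))                 # grey
```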
Much like Google’s original PageRank algorithm, a system could be developed to assess the authority of a story by its domain and URL history, suggested Mike Sukmanowsky of Parse.ly.
This would effectively be, Sukmanowsky wrote, a source reliability algorithm that calculated a “basic decency score” for online content that pages like Facebook could use to inform their trending topic algorithms. There could also be “ratings agencies” for media; too many Stephen Glass-style falsified reporting scandals, for example, and the New York Times could risk losing its triple-A rating.
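One minimal way to model Sukmanowsky's "basic decency score" is a reliability number that is downgraded with each confirmed fabrication, loosely analogous to a ratings-agency downgrade. The decay factor below is invented for illustration.

```python
# Minimal model of a "basic decency score": a domain starts at 1.0 and is
# halved for every story later confirmed to be fabricated, loosely like a
# ratings-agency downgrade. The decay factor is invented.

def reliability_score(domain_history, decay=0.5):
    """domain_history: list of True (accurate) / False (fabricated) stories."""
    score = 1.0
    for accurate in domain_history:
        if not accurate:
            score *= decay  # each confirmed fabrication cuts the score
    return score

print(reliability_score([True, True, False]))   # 0.5
print(reliability_score([False, False, True]))  # 0.25
```

Under a scheme like this, a handful of Stephen Glass-style scandals would visibly cost an outlet its "triple-A rating".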
Under this system, fake news would be inter-linked (possibly through a browser plug-in) to a story by a trusted fact-checking organization like Snopes or Politifact. (Rbutr already does this, though on a modest scale.) 
On current evidence, many people feel comfortable when presented with news which doesn’t challenge their own prejudices and preferences – even if that news is inaccurate, misleading or false.
What many of these solutions don’t address is the more complex, nuanced and long-term challenge of educating the public about the importance of informed debate – and why properly considering an accurate, rational and compelling viewpoint from the other side of the fence is an essential part of the democratic process.
“There’s a feeling that in trying to come up with solutions we risk a boomerang effect: the more we’re debunking, the more people will disbelieve it,” said Claire Wardle. “How do we bring people together to agree on facts when people don’t want to receive information that doesn’t fit with how they see the world?”
Jasper Jackson contributed to this report

How to solve Facebook's fake news problem: experts pitch their ideas Nicky Woolf 29 November 2016 

Facebook Inc.’s artificial intelligence know-how could be applied to some of its most pressing problems, company executives said, if the social network creates policies to guide use of the technology.
Yann LeCun, Facebook’s director of artificial intelligence, or AI, research, said technology could be used to help stamp out fake news or detect violence in live videos by filtering the content on the site. But Facebook’s policy and product teams haven’t figured out how to introduce AI responsibly.
“What’s the trade-off between filtering and censorship? Freedom of experience and decency?” Mr. LeCun told reporters during a recent round table at the company’s Menlo Park, Calif., headquarters. “The technology either exists or can be developed. But then the question is how does it make sense to deploy it? And this isn't my department.” 

Facebook is trying to remove some of the stigma and mystery that surrounds AI in popular culture. On Thursday, it released six informational videos about the technology. Mr. LeCun said AI is integral to the company’s operations, from learning how users experience their news feed to monitoring the site for terrorist propaganda.
How Facebook could use AI to prevent the spread of false information—a criticism Facebook faced following the U.S. presidential election—is unclear. Facebook uses AI to detect certain words that signal a story might simply be “clickbait.” Discerning fact from fiction is a much bigger challenge, posing the risk of removing too much content with an AI filter.
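The keyword signal mentioned above can be illustrated with a toy phrase matcher. The phrase list is invented, and a production classifier would be learned from labelled data rather than hand-coded, but the shape of the check is the same.

```python
# Toy version of the keyword signal: flag headlines containing phrases
# associated with clickbait. The phrase list is invented; a production
# classifier would be trained on labelled data, not hand-coded.

CLICKBAIT_PHRASES = ("you won't believe", "what happens next", "number 7 will")

def is_clickbait(headline):
    """True when the headline contains a known clickbait phrase."""
    text = headline.lower()
    return any(phrase in text for phrase in CLICKBAIT_PHRASES)

print(is_clickbait("You won't believe what this senator said"))  # True
print(is_clickbait("Senate passes budget resolution"))           # False
```

This also shows why the bigger challenge is harder: a surface check on wording says nothing about whether the story underneath is true.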
Facebook doesn’t have fully formed solutions—with or without AI—to these problems, a spokesman later said. The company often experiments with a technology before deciding whether it will apply it widely.
After initially dismissing the problem of fake news, Chief Executive Mark Zuckerberg two weeks ago laid out several steps Facebook is taking to tackle the issue—including building systems to detect fake stories before users flag them, which would involve AI. “Tens” of employees have been pulled off other projects to focus on fake news, people familiar with the matter say.
However, AI isn't a panacea. It doesn’t catch all terrorist propaganda, for example.
Facebook, which employs hundreds of people world-wide to monitor content on the site, is now in the “research stage” of using AI to automatically detect depictions of violence and other problems in live videos, said Joaquin Candela, Facebook’s director of applied machine learning.
Policing live video, an area of intense investment for Facebook, poses two challenges, he added. First, it requires a very fast computer vision algorithm, which Mr. Candela said was within reach of his team.
The second challenge is developing a clear set of practices, such as for determining what should or shouldn’t be removed. In general, the task of figuring out whether and how to introduce the technology is handled by Facebook product teams, Mr. Candela said.
Facebook said a lot of its network wouldn’t work without AI, such as its news-feed ranking algorithm, which creates individualized streams for each of the 1.79 billion people who access Facebook at least once a month. Every day, 2.5 billion posts are translated into other languages on Facebook.
Mr. LeCun said he disagreed with the portrayal of AI technology as a looming and devious threat. “This isn't magic. This isn't Terminator either,” Mr. LeCun said. “This is real technology that could be useful.”
Mr. LeCun said the ethical questions his team considers deserve more attention, such as how AI can be properly tested without causing harm and how it can be designed to avoid systematic bias. “Is humanoid AI going to take over the world and kill us all? I’m not personally worried about that,” he said.

Facebook Looks to Harness Artificial Intelligence to Weed Out Fake News DEEPA SEETHARAMAN Dec. 1, 2016

Facebook developing algorithm to flag offensive live videos By Reuters December 1, 2016

Facebook: artificial intelligence against fake news 2 December 2016

Today, if you ask the Google search engine on your desktop a question like “How big is the Milky Way,” you’ll no longer just get a list of links where you could find the answer — you’ll get the answer: “100,000 light years.”
While this question/answer tech may seem simple enough, it’s actually a complex development rooted in Google’s powerful deep neural networks. These networks are a form of artificial intelligence that aims to mimic how human brains work, relating together bits of information to comprehend data and predict patterns.
Google’s new search feature’s deep neural network uses sentence compression algorithms to extract relevant information from big bulks of text. Essentially, the system learned how to answer questions by repeatedly watching humans do it — more specifically, 100 PhD linguists from across the world — a process called supervised learning. After training, the system could take a large amount of data and identify the short snippet from it that answered the question at hand.
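As a rough illustration of extractive answer selection (not Google's actual sentence-compression model), a bag-of-words overlap can pick the sentence of a passage that best matches a question:

```python
# Rough illustration of extractive answer selection: choose the sentence
# of a passage that shares the most words with the question. Google's
# system uses trained sentence-compression models; this simple overlap
# only conveys the general idea.

def best_snippet(question, passage):
    """Return the passage sentence with the largest word overlap."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

passage = ("The Milky Way is about 100,000 light years across. "
           "It contains hundreds of billions of stars")
print(best_snippet("how big is the milky way", passage))
# → The Milky Way is about 100,000 light years across
```

The supervised-learning step described above is what replaces this crude overlap with a model that has watched linguists pick snippets thousands of times.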
Training AI like this is both difficult and expensive. Google has to provide massive amounts of data for their systems as well as the human experts that the neural network can learn from.
Google and other technology companies like Facebook and Elon Musk’s OpenAI are currently working on better, more automated neural networks, the kind capable of unsupervised learning. Those networks wouldn’t need people to label data before they could learn from it; they could figure it out on their own.
If these companies are successful, a multitude of opportunities would be opened up for humankind. Advanced AI systems could quickly and accurately translate between languages, make our internet more secure, develop better medical treatments, and so much more.  The data machines like that could process would change our world permanently.
Tech companies are currently still years away from discovering how to create fully autonomous AI. Nevertheless, that digital voice now answering our search engine queries puts us one step closer.

Google Just Launched New AI-Powered Algorithms Jess Vilvestre Kristin Houser 01 12 2016

The aftermath of the US presidential election has been as troubled as the campaign. Hundreds of hate-driven incidents were reported after Donald Trump’s victory, the Southern Poverty Law Center attests. And police say some anti-Trump protests turned into riots.
Hate crimes will be documented by the FBI, as they were last year. But it’s harder to measure the proliferation of hateful speech, which is protected by the US Constitution, and left largely unchecked, especially online. That puts private social media companies in a sticky place—left to decide between the unfettered right to free speech, however ugly it may be, and shielding their users from abuse and threat by regulating what they can say. And it leaves us to decide how we want to use platforms that curb this freedom.
“Hate speech can’t be a crime because it is protected by the First Amendment,” said Wayne Giampietro, a First Amendment lawyer in Chicago. “But the First Amendment doesn’t apply to people who run the internet.”
While public hate speech can’t—and shouldn’t, I think—be suppressed, social platforms like Twitter, Facebook, and Reddit are allowed to set their own restrictions to moderate the community they want to foster on their platforms. Using racial slurs or sexist language is protected in public life, but private companies can decide what kind of dialogue they will entertain.
Twitter’s new guidelines released last week demonstrate the company’s latest reaction to constant reports of abuse and cyberbullying. The platform allows users to report hateful conversations, block racial slurs and opt out of whole conversations in an attempt to distance abusers from their victims. Think of it as a mechanism to control who steps into your home, versus the inevitability of encountering strangers on the street—which is important for those of us who have dealt with online abuse on multiple occasions.
“They’re becoming more responsive,” said Sameer Hinduja, a cyberbullying expert and professor of criminology at Florida Atlantic University who has worked informally with companies like Twitter and Facebook. “It’s useful to enlist the help of the community.”
In the past few weeks, Twitter has taken some of its boldest moves to date. It suspended or banned several members of the so-called “alt-right” movement, including prominent spokesperson Richard Spencer, as VICE reported. That means those who subscribe to the “alt-right” ideology can continue to organize around the “future of people of European descent” in the country, but they will no longer have one of the world’s biggest platforms to do so.
Facebook, meanwhile, has been caught in a maelstrom of attention during the election for its arbitrary policing of speech. On the one hand, it changed its strategy after users found out it could be suppressing conservative-leaning news after employees inside the organization spoke out this summer. But Facebook has also come under fire for overcompensating for its mistakes and allowing fake news to populate users’ feeds, and said last week it would stop allowing ads to help fund sites without fact-based reporting.
“They’ve got to walk a narrow line,” Giampietro said. “To what extent are we going to suppress certain ideas? And to what extent will you stop something that defames somebody [through fake news].”
Other platforms like Reddit are less likely to intervene, regardless of their users’ vitriol, as game developer Brianna Wu pointed out. This could explain why white nationalist groups have chosen to communicate primarily through these forums. However, even Reddit, which had long been the paragon of a free-speech social platform, has introduced guidelines and restrictions for its users, including language that threatens, harasses or bullies other users.
While this sounds like a debate that lives on the internet, there is plenty of real world impact.
The US, unlike its neighbor Canada, protects all speech, including hate speech, as long as it doesn’t incite violence. But that line is increasingly blurry since communities now grow and organize online. Reddit threads teeming with anti-semitic, anti-LGBT or anti-black comments, for example, are the same ones calling for real-life demonstrations and action.
“We can’t just say it’s okay because it doesn’t incite violence. We focus on youths for that reason—they become traumatized,” Hinduja said. At the extreme end, he said, this can lead to suicide and violence, but at the very least it impacts healthy discussion.
He said there will be more measures put into place, whether through machine learning or other filters. And if social platforms fail to address abuse, they will slowly fall off the radar, Hinduja said, much like JuicyCampus, a once popular site that pitted college students against each other in a whirlwind of gossip and anonymous judgement calls.
But restricting speech online, even to protect from abuse, could also cause companies to lose users. Or it could drive them to siloed platforms—some “alt-right” members are now asking people to join unrestricted platforms like Gab.ai instead of Twitter, so no one gets in the way of their conversations. And Reddit regulations have led the same crowd to forums like Stormfront, which claims to support these “racial realists”.
Giampietro said the only actual way to combat this kind of hate speech is to drown it in reasonable rhetoric. “We’ll never be able to—and we shouldn’t—prevent anyone from speaking,” he said. “The antidote for hate speech is more of the right kind of speech. And denouncing those people, and demonstrating to the world what idiots they are for thinking that way.”
This might sound idealistic, but it's not impossible. Earlier this month, for example, Beverly Whaling, a mayor in West Virginia, commended a tweet that called First Lady Michelle Obama “an ape in heels” on Twitter, earning her intense backlash on social and news media. Just a couple of days later, the mayor resigned under pressure saying she regretted the “hurt it may have caused.”
In such a case, it was a combination of Twitter’s online community, media attention and cross-platform sharing that put pressure on the mayor to resign—cementing the connection between hate speech online and offline.
It’s important to note that online hate speech, and hate crime, did not begin with Donald Trump and his supporters—nor will it end with his presidency. Fighting for civil rights is an ongoing process, and any choices that Twitter and Facebook make will not necessarily mean people won’t carry out the same hateful conversations, or the antidotes to them, in real life.
But Americans are still largely unaware of the algorithms and parameters they’re speaking within as they share every political and social view they have on social media platforms. And this election has given us the opportunity to figure out how we can become a more informed public, without the limitations of both censorship and abuse.


“Journalism is the activity of gathering, assessing, creating, and presenting news and information. It is also the product of these activities.” It goes on further to state, “Journalism can be distinguished from other activities and products by certain identifiable characteristics and practices. These elements not only separate journalism from other forms of communication, they are what make it indispensable to democratic societies. History reveals that the more democratic a society, the more news and information it tends to have.”


If anything, we’re suffering from having too much, and of a kind that isn’t particularly helpful to us. When corporations control the information being produced by media outlets, journalism takes on a different tone. The days of “gathering, assessing, creating and presenting” can turn easily into “mining, configuring, manipulating and angling” to best suit the established views of the parent company. When this happens, journalism is no longer about the pursuit of facts and truth; it’s about the presentation of the angle that the company wants us to see. It’s a subtle but direct manipulation of our trust with their information. It happens all the time.
Six companies control 90% of the information spread by the major news media outlets: Comcast, Rupert Murdoch’s News Corporation, CBS, Time Warner, The Walt Disney Company and Viacom. By contrast, in 1983, 90% of the news media was owned by 50 companies. We lacked the technology to disseminate information as quickly as we can now, but what we didn’t lack were committed journalists dedicated to presenting the news as it was, not as the company would like you to see it. At this point in history, 232 media executives control most of the information seen by 277 million Americans. The infographics available on this site represent these statistics nicely, but in short, that works out to each executive shaping what an audience of roughly 1.2 million people, about the population of San Francisco proper, sees every day.


The investors decide what the company’s focus will be, and the media presents what the investors want. So we essentially see whatever is most beneficial to those companies’ bottom lines.
Fortunately, because we do live in the (DIS)Information Age, we have alternatives, and we can point our browsers to them whenever we’d like. Here’s a baker’s dozen of news sources that this writer, and many others, rely on regularly to stay well-educated and well-informed:
  1. Reddit. “The front page of the Internet,” by its own description, Reddit sorts stories by weight and importance based entirely on user feedback, so I can get a quick glance at what other Redditors find important simply by opening the app on my phone. I get a healthy digest of the news as well as information on any other topic I desire. This user-moderated site holds a wealth of information across thousands of subreddits, with a quirky community feel to boot.
  2. The Christian Science Monitor. This magazine touts itself as “an independent international news organization that delivers thoughtful, global coverage” and given the breadth and depth of their investigative work and commitment to journalistic integrity, it’s clear they are committed to their mission.
  3. The Real News Network, which broadcasts at the top of its front page: “No Advertising, Government or Corporate Funding”. I don’t think they could state their commitment to real journalism any more clearly.
  4. Truthout.org, a 501(c)(3) non-profit organization dedicated to “in-depth investigative reporting and critical analysis”. Truthout depends on reader donations to stay funded and accepts no corporate or government contributions. It publishes all of its financial reports on its website, so you can easily see where the money comes from; 65% of its 2013-2014 funding came from individual donations of under $1,000.
  5. Reuters. Reuters has long been regarded as one of the highest global standards for true journalism.
  6. Reveal. Reveal’s mission is to engage and empower the public through investigative journalism and groundbreaking storytelling.
  7. ProPublica, an organization dedicated to purely investigative journalism and stories with “moral impact”.
  8. The Center for Public Integrity, whose mission is “to serve democracy by revealing abuses of power, corruption and betrayal of public trust by powerful public and private institutions, using the tools of investigative journalism”.
  9. Fair.org, which stands for Fairness and Accuracy In Reporting and seeks to offer well-documented criticism of media bias.
  10. Allsides.com, which promotes “news and issues from multiple perspectives” and invites its audience to “discuss like adults”.
  11. WhoWhatWhy, which states “We don’t cover the news. We uncover the truth.” WhoWhatWhy is a non-profit that was started by longtime investigative journalist Russ Baker.
  12. The Nation, one of the oldest weekly magazines in the United States. Founded by abolitionists in 1865, The Nation strives to present a critical and investigative perspective.
  13. Al-Jazeera. Once decried by George W. Bush as being controlled by Al-Qaeda, Al-Jazeera presents some of the most upfront and fact-based journalism about American issues available to the world.
Additionally, the Internet avails us of a number of sites presenting informed ideas and opinions as well as first-class investigative journalism. One of my personal favorites, Collective Evolution, seeks to “create change through transforming consciousness” and does so by presenting unbiased, fact-based “alternative news”. The site maintains a progressive, intellectual focus, which I appreciate, without compromising its journalistic integrity.
Alternative media sources are gaining a lot of traction right now. As this article was being written, John Oliver’s “Last Week Tonight” segment on the 2016 United States presidential election, which focused considerably on the importance of alternative media, was trending in the #2 spot on YouTube.
Brilliant investigative journalist and anchorman Walter Cronkite once said, “I think it is absolutely essential in a democracy to have competition in the media, a lot of competition, and we seem to be moving away from that.” While that may sadly be true, it doesn’t have to stay true. Support true journalism by getting your facts from alternative media outlets instead of the fiction the companies controlling the 90% want to give you.

The Right To Truth: Why Alternative News Media Sources Are Essential  
