Facebook has once again rebuffed a direct request from the UK parliament for its CEO, Mark Zuckerberg, to testify to a committee investigating online disinformation — without rustling up so much as a fig-leaf-sized excuse to explain why the founder of one of the world’s most used technology platforms can’t squeeze a video call into his busy schedule and spare UK politicians’ blushes.
LifeSiteNews has covered multiple instances of Facebook censoring conservative and Christian content. These include classifying Black conservative commentators Diamond and Silk as “unsafe to the community,” censoring a pro-life documentary on Roe v. Wade, and refusing to run a Holy Week ad by the Franciscan University of Steubenville featuring the San Damiano Cross. In addition, a study by the conservative Western Journal found that left-of-center sites enjoyed a nearly 14% traffic increase following algorithm changes last fall, whereas popular conservative sites saw a 27% decline.
Facebook is indirectly helping thousands of terrorists connect through its “suggested friends” feature, according to The Telegraph, citing a study that will soon be fully released.
As part of its mission to connect people throughout the world, and thus build new global communities, Facebook uses proprietary algorithms to suggest friends who share certain interests. In doing so, it affords evildoers who visit the site the same service, providing them an expedient means of seeking out others who may share their goals of destruction and violence.
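Facebook's actual friend-suggestion system is proprietary and undisclosed; as a purely illustrative sketch of the general idea described above, a recommender might rank candidate friends by the overlap of their declared interests (here, Jaccard similarity over toy profile data — every name and interest below is invented for the example):

```python
def jaccard(a, b):
    """Similarity of two interest sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_friends(user, profiles, top_n=3):
    """Rank every other user by interests shared with `user`."""
    scores = [
        (other, jaccard(profiles[user], interests))
        for other, interests in profiles.items()
        if other != user
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    # Only suggest people with at least one shared interest.
    return [name for name, score in scores[:top_n] if score > 0]

profiles = {
    "alice": {"hiking", "chess", "photography"},
    "bob":   {"chess", "photography", "cooking"},
    "carol": {"cooking", "gardening"},
    "dave":  {"hiking", "chess"},
}
print(suggest_friends("alice", profiles))  # ['dave', 'bob']
```

The point of the sketch is the critics' concern in miniature: the algorithm has no notion of *what* the shared interest is, so it connects people around benign and malign affinities with equal efficiency.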
On Tuesday, Facebook fired an employee who had allegedly used their privileged data access to stalk women online. Now, multiple former Facebook employees and people familiar with the company have described to Motherboard parts of the social media giant’s data access policies, including how members of the security team, which the fired employee was allegedly part of, face less oversight of their access than others.
The news emphasizes something that typical users may forget when scrolling through a Silicon Valley company’s service or site: although safeguards against abuse may be in place, there are people who have the power to see information you believe to be private, and sometimes they may look at that data.
After a volley of scandals over Facebook’s mishandling of personal information and alleged election manipulation, trust building was front and centre of Zuckerberg’s keynote speech at F8, the annual Facebook developer conference, on Tuesday. But it was the revelation about a planned dating service that really pricked up ears.
Sure, I trust Facebook and want to give them even more personal information. Don’t you?
A court in Berlin has issued a temporary restraining order against Facebook. Under the threat of a fine of 250,000 euros (roughly $300,000 USD) or a jail term, Facebook was obliged to restore a user’s comment that it had deleted. Moreover, the ruling prohibited the company from banning the user because of this comment.
This is the first time a German court has dealt with the consequences of Germany’s internet censorship law, which came into effect on October 1, 2017. The law stipulates that social media companies have to delete or block “apparent” criminal offenses, such as libel, slander, defamation or incitement, within 24 hours of receipt of a user complaint.
Days after Facebook, along with Google and Twitter, refused to attend a congressional hearing on social media censorship, the social network banned the account of author and free speech activist Pamela Geller for 30 days after she posted an article about Muslim anti-Semitism in Germany.
A Hamburg court spokesman said Tuesday opposition Alternative for Germany (AfD) parliamentarian Alice Weidel and her lawyer would press next Friday for a ruling that Facebook was liable for the remark’s continued dissemination.
The Hamburg-based blog Meedia, a subsidiary of the Handelsblatt newspaper chain, said the Facebook post was made by a user last September under a Huffington Post article, before Germany’s Network Enforcement Act (NetzDG) came into effect. That law imposes hefty fines on social media companies that fail to delete content after determining its offensive nature.
It will be interesting to see how the Facebook Employee Handbook will instruct the 20,000 employees on how to flag hate speech, delete it and ban its authors.
Facebook asked conservative groups for help last week in heading off European-style privacy rules, just as CEO Mark Zuckerberg prepared to apologize to Congress for his company’s data scandal.
The company’s outreach comes as the European Union is preparing to enforce strict new privacy rules that take effect in late May. Among other things, the EU’s rules allow regulators to impose fines as high as 4 percent of a company’s global revenues for serious violations.
The emailed invitation to a sit-down to discuss the policy, obtained by POLITICO, also shows how Facebook is seeking an unlikely alliance with conservatives, who frequently accuse the social network of bias against their views but oppose most forms of government regulation. The email did not disclose the recipients but came from Facebook’s liaison to conservative organizations.
CBC is asking readers to help it track political ads on Facebook by installing a browser extension created by ProPublica, a non-profit, investigative news organization based in the U.S. The Facebook Political Ad Collector extension allows users to flag the ads they see in their feeds as political and submit them to a database that CBC News will use to research stories.
Facebook’s misuse of its users’ biometric information could potentially amount to billions of dollars in damages after a federal judge greenlighted an Illinois class action suit over the firm’s facial recognition feature.
Facebook violated an Illinois state law by improperly using its photo-scanning and facial recognition technologies and storing biometric data without users’ consent, a federal judge in California ruled on Monday, after reviewing a 2015 claim brought against Facebook by three Illinois plaintiffs.
Artificial intelligence is not a solution for shortsightedness and lack of transparency
Over roughly 10 hours of hearings spread across two days, Mark Zuckerberg dodged question after question by citing the power of artificial intelligence.
Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI.
It’s not even entirely clear what Zuckerberg means by “AI” here. He repeatedly brought up how Facebook’s detection systems automatically take down 99 percent of “terrorist content” before any kind of flagging.
As creator and controlling force behind Facebook, with the intimate data of more than two billion users at his fingertips and a net worth estimated at £50 billion, Mark Elliot Zuckerberg is one of the richest and most powerful human beings in history.
Having reached this lofty status at just 33, he was, until recently, widely regarded as a genius capable of shaping tomorrow’s world in his own image — a man of such rare talent that only a fool would bet against him one day ascending to the office of U.S. President.
Though the Forces have a team of dedicated social media staffers, the comments have become so toxic at times that one member of the public observed the “comment section around here reads like a Sons of Odin chat room,” a reference to a far-right, anti-immigration group.