Socialnomics

4 Case Studies in Fraud: Social Media and Identity Theft

Does over-sharing leave you open to the risk of identity theft?

Generally speaking, social media is a pretty nifty tool for keeping in touch. Platforms including Facebook, Twitter, Instagram, and LinkedIn offer us a thousand different ways in which we can remain plugged in at all times.

However, our seemingly endless capacity for sharing, swiping, liking, and retweeting has some negative ramifications, not the least of which is that it opens us up as targets for identity theft.

Identity Theft Over the Years

Identity theft isn’t a new criminal activity; in fact, it’s been around for years. What’s new is the method criminals are using to part people from their sensitive information.

Considering how long identity theft has been a consumer threat, it’s unlikely that we’ll be rid of this inconvenience any time soon.

Living Our Lives Online

The police have been using fake social media accounts in order to conduct surveillance and investigations for years. If the police have gotten so good at it, just imagine how skilled the fraudsters must be who rely on stealing social media users’ identities for a living.

People are often surprised at how simple it is for fraudsters to commit identity theft via social media. However, we seem to forget just how much personal information goes onto social media – our names, locations, contact info, and personal details – all of which is more than enough for a skilled fraudster to commit identity theft.

In many cases, a fraudster might not even need any personal information at all.

Case Study #1: The Many Sarah Palins

Former Alaska governor Sarah Palin is no stranger to controversy, nor to impostor Twitter accounts. Back in 2011, Palin’s official Twitter account at the time, @AKGovSarahPalin (now @SarahPalinUSA), found itself increasingly lost in a sea of fake accounts.

In one particularly notable incident, a Palin impersonator tweeted out an open invite to Sarah Palin’s family home for a barbecue. As a result, Palin’s security staff had to be dispatched to her Alaska residence to deter would-be partygoers.

This phenomenon is not limited to Sarah Palin. Many public figures and politicians, particularly controversial ones like 2016 presidential candidate Donald Trump, have a host of fake accounts assuming their identity.

Case Study #2: Dr. Jubal Yennie

As demonstrated by the above incident, it doesn’t take much information to impersonate someone via social media. In the case of Dr. Jubal Yennie, all it took was a name and a photo.

In 2013, 18-year-old Ira Trey Quesenberry III, a student in the Sullivan County School District in Tennessee, created a fake Twitter account using the name and likeness of the district superintendent, Dr. Yennie.

After Quesenberry sent out a series of inappropriate tweets using the account, the real Dr. Yennie contacted the police, who arrested the student for identity theft.

Case Study #3: Facebook Security Scam

While the first two examples were intended as (relatively) harmless pranks, this next instance of social media fraud was specifically designed to separate social media users from their money.

In 2012, a scam emerged on Facebook that was designed to steal financial information from users.

Hackers hijacked users’ accounts, impersonating Facebook security. These accounts would then send fake messages to other users, warning them that their account was about to be disabled and instructing the users to click on a link to verify their account. The users would be directed to a false Facebook page that asked them to enter their login info, as well as their credit card information to secure their account.

Case Study #4: Desperate Friends and Family

Another scam circulated on Facebook over the last few years bears some resemblance to more classic scams such as the “Nigerian prince” mail scam, but is designed to be more believable and hit much closer to home.

In this case, a fraudster hacked a user’s Facebook profile, then messaged one of the user’s friends with something along the lines of:

“Help! I’m traveling outside the country right now, but my bag was stolen, along with all my cash, my phone, and my passport. I’m stranded somewhere in South America. Please, please wire me $500 so I can get home!”

Family members, understandably not wanting to leave their loved ones stranded abroad, have obliged, unwittingly wiring the money to a con artist.

Simple phishing software or malware can swipe users’ account information without their ever knowing they were targeted, thus leaving all of the user’s friends and family vulnerable to such attacks.

How to Defend Against Social Media Fraud

For celebrities, politicians, CEOs, and other well-known individuals, it can be much more difficult to defend against social media impersonators, owing simply to their prominence. For the everyday user, however, there are steps we can take to help prevent this form of fraud.

  • Make use of any security settings offered by social media platforms. Examples of these include privacy settings, captcha puzzles, and warning pages informing you that you are being redirected offsite.
  • Do not share login info, not even with people you trust. Close friends and family might still accidentally make you vulnerable if they are using your account.
  • Be wary of what information you share. Keep your personal info under lock and key, and never give out highly sensitive information like your social security number or driver’s license number.
  • Do not reuse passwords. Have a unique password for every account you hold.
  • Consider changing inessential info. You don’t have to put your real birthday on Facebook.
  • Only accept friend requests from people you actually know.

Antivirus software, malware blockers, and firewalls can only do so much. In the end, your discretion is your best line of defense against identity fraud.

Jessica Velasco


The National-Security Case for Fixing Social Media

[Photo: Mark Zuckerberg calling in on a video screen to a Senate hearing]

On Wednesday, July 15th, shortly after 3 P.M. , the Twitter accounts of Barack Obama, Joe Biden, Jeff Bezos, Bill Gates, Elon Musk, Warren Buffett, Michael Bloomberg, Kanye West, and other politicians and celebrities began behaving strangely. More or less simultaneously, they advised their followers—around two hundred and fifty million people, in total—to send Bitcoin contributions to mysterious addresses. Twitter’s engineers were surprised and baffled; there was no indication that the company’s network had been breached, and yet the tweets were clearly unauthorized. They had no choice but to switch off around a hundred and fifty thousand verified accounts, held by notable people and institutions, until the problem could be identified and fixed. Many government agencies have come to rely on Twitter for public-service messages; among the disabled accounts was the National Weather Service, which found that it couldn’t send tweets to warn of a tornado in central Illinois. A few days later, a seventeen-year-old hacker from Florida, who enjoyed breaking into social-media accounts for fun and occasional profit, was arrested as the mastermind of the hack. The F.B.I. is currently investigating his sixteen-year-old sidekick.

In its narrowest sense, this immense security breach, orchestrated by teen-agers, underscores the vulnerability of Twitter and other social-media platforms. More broadly, it’s a telling sign of the times. We’ve entered a world in which our national well-being depends not just on the government but also on the private companies through which we lead our digital lives. It’s easy to imagine what big-time criminals, foreign adversaries, or power-grabbing politicians could have done with the access the teen-agers secured. In 2013, the stock market briefly plunged after a tweet sent from the hacked account of the Associated Press reported that President Barack Obama had been injured in an explosion at the White House; earlier this year, hundreds of armed, self-proclaimed militiamen converged on Gettysburg, Pennsylvania, after a single Facebook page promoted the fake story that Antifa protesters planned to burn American flags there.

A group called the Syrian Electronic Army claimed responsibility for the A.P. hack; the Gettysburg hoax was perpetrated by a left-wing prankster. A more determined and capable adversary could think bigger. In the run-up to this year’s Presidential election, e-mails and videos that most analysts attributed to the Iranian government were sent to voters in Arizona, Florida, and Alaska, purporting to be from the Proud Boys , a neo-Fascist, pro-Trump organization: “Vote for Trump,” they warned, “or we will come after you.” Calls to voters in swing states warned them against voting and text messages pushed a fake video about Joe Biden supporting sex changes for second graders. But a truly ambitious disinformation attack would be cleverly timed and coördinated across multiple platforms. If what appeared to be a governor’s Twitter account reported that thousands of ballots had gone missing on Election Day , and the same message were echoed by multiple Facebook posts—some written by fake users or media outlets, others by real users who had been deceived—many people might assume the story to be true and forward it on. The goal of false information need not be an actual change in events; chaos is often the goal, and sowing doubt about election results is a perfect way to achieve it.

When we think of national security, we imagine concrete threats—Iranian gunboats, say, or North Korean missiles. We spend a lot of money preparing to meet those kinds of dangers. And yet it’s online disinformation that, right now, poses an ongoing threat to our country; it’s already damaging our political system and undermining our public health. For the most part, we stand defenseless. We worry that regulating the flow of online information might violate the principle of free speech. Because foreign disinformation played a role in the election of our current President, it has become a partisan issue, and so our politicians are paralyzed. We enjoy the products made by the tech companies, and so are reluctant to regulate their industry; we’re also uncertain whether there’s anything we can do about the problem—maybe the price of being online is fake news. The result is a peculiar mixture of apprehension and inaction. We live with the constant threat of disinformation and foreign meddling. In the uneasy days after a divisive Presidential election, we feel electricity in the air and wait for lightning to strike.

In recent years, we’ve learned a lot about what makes a disinformation campaign effective. Disinformation works best when it’s consistent with an audience’s preconceptions; a fake story that’s dismissed as incredible by one person can appear quite plausible to another who’s predisposed to believe in it. It’s for this reason that, while foreign governments may be capable of more concerted campaigns, American disinformers are especially dangerous: they have their fingers on the pulse of our social and political divisions. At the moment, disinformation seems to be finding a more receptive audience on the political right. Perhaps, as some researchers have suggested, an outlook rooted in aggrievement and a distrust of institutions makes it easier to believe in wrongdoing by élites. Breitbart columnists and some Fox News commentators are also happy to corroborate and amplify fringe ideas. In any event, during this year’s Presidential election, our social-media platforms have been awash in corrosive disinformation, much of it generated by Americans, ranging from lurid conspiracy-mongering—Antifa protesters starting wildfires in Oregon; Democrats arranging child-sex rings—to the faux-legalistic questioning of voting procedures.

For the most part, this disinformation has been scattershot. What would a more organized effort look like? The cyber-disinformation campaign conducted by Russia in 2016, largely on Facebook, gave us a glimpse of what’s possible. The five-volume bipartisan Senate report on Russia’s efforts, produced by the Select Committee on Intelligence, reveals an effort of startling scale. Russia conducts disinformation operations at home, in bordering countries, and across the world. It works through several arms at once: the sophisticated, Kremlin-directed S.V.R. (the equivalent of the C.I.A.); the clumsier, military-run G.R.U.; and the savvier Internet Research Agency in St. Petersburg. In general, Russia seeks to push disinformation in a comprehensive, integrated way, so as to give its content an aura of authenticity. Using so-called sockpuppets—inauthentic personas on Facebook and elsewhere—its campaigns inflame existing political tensions with calls to action, online petitions, forged evidence, and false news. This specious material is then cited by seemingly legitimate news sites established by Russia for the purpose of spreading and corroborating disinformation. Facebook and Twitter have built automated systems that look for inauthentic accounts with manufactured followings. But Russian cyber actors have become increasingly sophisticated, using an integrated array of what spy agencies call T.T.P.s—tactics, techniques, and procedures—to avoid detection.

Like a musical that tours smaller markets before it hits Broadway, Russian T.T.P.s are tested first in border states—Lithuania, Estonia, Ukraine, Poland—before being deployed against America. In the past year, Russian trolls working in those countries have adopted a new strategy: impersonating actual organizations or people, or claiming to be affiliated with them—a muddying of the waters that makes detection harder. According to experts, they’ve also begun corrupting legitimate Eastern European news sites: hackers manipulate real content, sometimes laying the groundwork for future disinformation and at other times inserting fake articles for immediate dissemination.

China , meanwhile, already adept at intellectual-property cyber theft, has begun shifting toward active disinformation of the Russian sort. Most of its efforts are focussed on propaganda portraying China as a peace-loving nation with a superior form of government. But earlier this year, a pro-China operation, nicknamed Spamouflage Dragon by cybersecurity firms, deployed an array of Facebook, YouTube, and Twitter accounts with profile pictures generated by artificial intelligence to attack President Trump and spread falsehoods about the George Floyd killing, the Black Lives Matter movement, and Hong Kong’s pro-democracy protests. Compared to Russia, China’s disinformation efforts are less immediately alarming, because its government is more concerned about how it’s perceived around the world. But it seems possible that, in the longer term, the country will pose a more significant threat. If China harnessed the vast intelligence resources of its Ministry of State Security and its People’s Liberation Army to mount a coördinated disinformation campaign against the United States, its reach could be significant. Foreign powers could get better at pushing our buttons; domestic disinformers could get better-organized. In either case, we could face a more acute version of the disinformation crisis we’re struggling with now.

There’s a sense in which it doesn’t matter who our disinformers are, since they all use the same social-media technology, which has transformed our societies quickly and pervasively, outpacing our ability to anticipate its risks. We’ve taken a relatively minimal and reactive approach to regulating our new digital world. The result is that we lag behind in security: the malicious use of new platforms begins before security experts, in industry or government, can weigh in. Because new vulnerabilities are revealed individually, we tend to perceive them as one-offs—a hack here, a hack there.

As cyber wrongdoing has piled up, however, it has shifted the balance of responsibility between government and the private sector. The federal government used to be solely responsible for what the Constitution calls our “common defense.” Yet as private companies amass more data about us, and serve increasingly as the main forum for civic and business life, their weaknesses become more consequential. Even in the heyday of General Motors, a mishap at that company was unlikely to affect our national well-being. Today, a hack at Google, Facebook, Microsoft, Visa, or any of a number of tech companies could derail everyday life, or even compromise public safety, in fundamental ways.

Because of the very structure of the Internet, no Western nation has yet found a way to stop, or even deter, malicious foreign cyber activity. It’s almost always impossible to know quickly and with certainty if a foreign government is behind a disinformation campaign, ransomware implant, or data theft; with attribution uncertain, the government’s hands are tied. China and other authoritarian governments have solved this problem by monitoring every online user and blocking content they dislike; that approach is unthinkable here. In fact, any regulation meant to thwart online disinformation risks seeming like a step down the road to authoritarianism or a threat to freedom of speech. For good reason, we don’t like the idea of anyone in the private sector controlling what we read, see, and hear. But allowing companies to profit from manipulating what we view online, without regard for its truthfulness or the consequences of its viral dissemination, is also problematic. It seems as though we are hemmed in on all sides, by our enemies, our technologies, our principles, and the law—that we have no choice but to learn to live with disinformation, and with the slow erosion of our public life.

We might have more maneuvering room than we think. The very fact that the disinformation crisis has so many elements—legal, technological, and social—means that we have multiple tools with which to address it. We can tackle the problem in parts, and make progress. An improvement here, an improvement there. We can’t cure this chronic disease, but we can manage it.

On the legal side, there are common-sense steps we could take without impinging on our freedom of speech. Congress could pass laws to curtail disinformation in political campaigns, not necessarily by outlawing false statements—which would run afoul of the First Amendment —but by requiring more disclosure, and by making certain knowing falsehoods illegal, including wrongful information about polling places. Today, political ads that appear online aren’t subject to the same disclosure and approval rules that apply to ads on radio and television; that anachronism could be corrected. Lawmakers could explore prohibiting online political ads that micro-target voters based on race, age, political affiliation, or other demographic categories; that sort of targeting allows divisive ads and disinformation to be aimed straight at amenable audiences, and to skirt broader public scrutiny. Criminal laws could also be tightened to outlaw, at least to some extent, the intentional and knowing spread of misinformation about elections and political candidates.

Online, the regulation of speech is governed by Section 230 of the Communications Decency Act—a law, enacted in 1996, that was designed to allow the nascent Internet to flourish without legal entanglements. The statute gives every Internet provider or user a shield against liability for the posting or transmission of user-generated wrongful content. As Anna Wiener wrote earlier this year, Section 230 was well-intentioned at the time of its adoption, when all Internet companies were underdogs. But today that is no longer true, and analysts and politicians on both the right and the left are beginning to think, for different reasons, that the law could be usefully amended. Republicans tend to believe that the statute allows liberal social media companies to squelch conservative voices with impunity; Democrats argue that freewheeling social media platforms, which make money off virality, are doing too little to curtail online hate speech. Amending Section 230 to impose some liability on social-media platforms, in a manner that neither cripples them nor allows them to remain unaccountable, is a necessary step in curbing disinformation. It seems plausible that the next Congress will amend the statute.

Other legal steps might flow from the recognition that the very ubiquity of social-media companies has created vulnerabilities for the millions of Americans who rely on them. Antitrust arguments to break up platforms and companies are one way to address this aspect of the problem. The Senate has asked the C.E.O.s of Facebook and Twitter to appear at a hearing on November 17th, intended to examine the platforms’ “handling of the 2020 election.” Last month, a House hearing on the same topic degenerated into an argument between Republicans, who claimed that social media was censoring the President, and Democrats, who argued that the hearing was a campaign gimmick. It remains to be seen whether Congress can separate politics from substance and seriously consider reform proposals, like the one put forth recently by the New York State Department of Financial Services, which would designate social-media platforms as “systemically important” and subject to oversight. It will be difficult to regulate such complicated and dynamic technology. Still, the broader trend is inescapable: the private sector must bear an ever-increasing legal responsibility for our digital lives.

Technological progress is possible, too, and there are signs that, after years of resistance, social-media platforms are finally taking meaningful action. In recent months, Facebook, Twitter, and other platforms have become more aggressive about removing accounts that appear inauthentic, or that promote violence or lawbreaking; they have also moved faster to block accounts that spread disinformation about the coronavirus or voting, or that advance abhorrent political views, such as Holocaust denial. The next logical step is to decrease the power of virality. In 2019, after a series of lynchings in India was organized through the chat program WhatsApp, Facebook limited the mass forwarding of texts on that platform; a couple of months ago, it implemented similar changes in the Messenger app embedded in Facebook itself. As false reports of ballot fraud became increasingly elaborate in the days before and after Election Day, the major social media platforms did what would have been unthinkable a year ago, labelling as misleading messages from the President of the United States. Twitter made it slightly more difficult to forward tweets containing disinformation; an alert now warns the user about retweeting content that’s been flagged as untruthful. Additional changes of this kind, combined with more transparency about the algorithms they use to curate content, could make a meaningful difference in how disinformation spreads online. Congress is considering requiring such transparency.

Finally, there are steps we could take that have nothing to do with regulation or technology. Many national-security experts have argued for an international agreement that outlaws disinformation, and for coördinated moves by Western democracies to bring cybercriminals to justice. The President could choose to make combating foreign disinformation a national-security priority, by asking the intelligence community to focus on it in a cohesive way. (We have an integrated national counterterrorism center, but not one focused on disinformation.) Our national-security agencies could share more with the public about the T.T.P.s used by foreign disinformation campaigns. And the teaching of digital literacy—perhaps furthered by legislation that promotes civic education—could make it harder for disinformation, foreign or domestic, to take hold.

We will soon no longer have a President who himself creates a storm of falsehoods. But the electricity will remain in the air, regardless of who occupies the Oval Office. Perhaps because the disinformation crisis has descended upon us so suddenly, and because it reinforces our increasing political polarization, we’ve tended to regard it as inevitable and unavoidable—a fact of digital life. But we do have options, and if we come together to exercise them, we could make a meaningful difference. In this case, it might be possible to change the weather.

A previous version of this piece misstated the state where the Gettysburg rally took place.


Social Media Surveillance by the U.S. Government

A growing and unregulated trend of online surveillance raises concerns for civil rights and liberties.

Rachel Levinson-Waldman


Social media has become a significant source of information for U.S. law enforcement and intelligence agencies. The Department of Homeland Security, the FBI, and the State Department are among the many federal agencies that routinely monitor social platforms, for purposes ranging from conducting investigations to identifying threats to screening travelers and immigrants. This is not surprising; as the U.S. Supreme Court has  said , social media platforms have become “for many . . . the principal sources for knowing current events, . . . speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge” — in other words, an essential means for participating in public life and communicating with others.

At the same time, this growing — and mostly unregulated — use of social media raises a host of civil rights and civil liberties concerns. Because social media can reveal a wealth of personal information — including about political and religious views, personal and professional connections, and health and sexuality — its use by the government is rife with risks for freedom of speech, assembly, and faith, particularly for the Black, Latino, and Muslim communities that are historically targeted by law enforcement and intelligence efforts. These risks are far from theoretical: many agencies have a track record of using these programs to target minority communities and social movements. For all that, there is little evidence that this type of monitoring advances security objectives; agencies rarely measure the usefulness of social media monitoring and DHS’s own pilot programs showed that they were not helpful in identifying threats. Nevertheless, the use of social media for a range of purposes continues to grow.

In this Q&A, we survey the ways in which federal law enforcement and intelligence agencies use social media monitoring and the risks posed by its thinly regulated and growing use in various contexts.

Which federal agencies use social media monitoring?

Many federal agencies use social media, including the  Department of Homeland Security  (DHS),  Federal Bureau of Investigation  (FBI),  Department of State  (State Department),  Drug Enforcement Administration  (DEA),  Bureau of Alcohol, Tobacco, Firearms and Explosives  (ATF),  U.S. Postal Service  (USPS),  Internal Revenue Service  (IRS),  U.S. Marshals Service , and  Social Security Administration  (SSA). This document focuses primarily on the activities of DHS, FBI, and the State Department, as the agencies that make the most extensive use of social media for monitoring, targeting, and information collection.

Why do federal agencies monitor social media?

Publicly available information shows that federal agencies use social media for four main — and sometimes overlapping — purposes. The examples below are illustrative and do not capture the full spectrum of social media surveillance by federal agencies.

Investigations : Law enforcement agencies, such as the FBI and some components of DHS, use social media monitoring to assist with criminal and civil investigations. Some of these investigations may not even require a showing of criminal activity. For example, FBI agents can open an “assessment” simply on the basis of an “authorized purpose,” such as preventing crime or terrorism, and without a factual basis. During assessments, FBI agents can carry out searches of publicly available online information. Subsequent investigative stages, which require some factual basis, open the door for more invasive surveillance tactics, such as the monitoring and recording of chats, direct messages, and other private online communications in real time.

At DHS, Homeland Security Investigations (HSI) — which is part of Immigration and Customs Enforcement (ICE) — is the Department’s “ principal investigative arm .” HSI  asserts  in its training materials that it has the authority to enforce any federal law, and relies on social media when conducting investigations on matters ranging from civil immigration violations to terrorism. ICE agents can look at publicly available social media content for purposes ranging from finding fugitives to gathering evidence in support of investigations to probing “potential criminal activity,” a “threat detection” function discussed below. Agents can also operate undercover online and monitor private online communications, but the circumstances under which they are permitted to do so are not publicly known.

Monitoring to detect threats:  Even without opening an assessment or other investigation, FBI agents can monitor public social media postings. DHS components from ICE to its intelligence arm, the Office of Intelligence & Analysis, also  monitor social media  — including specific individuals — with the goal of identifying potential threats of violence or terrorism. In addition, the FBI and DHS both engage private companies to conduct online monitoring of this type on their behalf. One firm, for example, was  awarded  a  contract  with the FBI in December 2020 to scour social media to proactively identify “national security and public safety-related events” — including various unspecified threats, as well as crimes — which have not yet been reported to law enforcement.

Situational awareness:  Social media  may   provide  an “ear to the ground” to help the federal government coordinate a response to breaking events. For example, a range of DHS components — from Customs and Border Protection (CBP) to the National Operations Center (NOC) to the Federal Emergency Management Agency ( FEMA ) — monitor the internet, including by keeping tabs on a broad list of websites and keywords being discussed on social media platforms and tracking information from sources like news services and local government agencies.  Privacy impact assessments  suggest there are few limits on the content that can be reviewed — for instance, the PIAs list a sweeping range of keywords that are monitored (ranging, for example, from “attack,” “public health,” and “power outage,” to “jihad”). The purposes of such monitoring include helping keep the public, private sector, and governmental partners informed about developments during a crisis such as a natural disaster or terrorist attack; identifying people needing help during an emergency; and knowing about “ threats or dangers ” to DHS facilities.

“Situational awareness” and “threat detection” overlap because they both involve broad monitoring of social media, but situational awareness has a wider focus and is generally not intended to monitor or preemptively identify specific people who are thought to pose a threat.

Immigration and travel screening:  Social media is  used to  screen and vet travelers and immigrants coming into the United States and even to monitor them while they live here. People applying for a range of immigration benefits  also undergo  social media checks to verify information in their application and determine whether they pose a security risk.

How can the government’s use of social media harm people?

Government monitoring of social media can work to people’s detriment in at least four ways: (1) wrongly implicating an individual or group in criminal behavior based on their activity on social media; (2) misinterpreting the meaning of social media activity, sometimes with severe consequences; (3) suppressing people’s willingness to talk or connect openly online; and (4) invading individuals’ privacy. These are explained in further detail below.

Assumed criminality:  The government may use information from social media to label an individual or group as a threat, including characterizing  ordinary activity  (like wearing a particular sneaker brand or making common hand signs) or social media connections as evidence of criminal or threatening behavior. This kind of assumption can have high-stakes consequences. For example, the NYPD  wrongly arrested  19-year-old Jelani Henry for attempted murder, after which he was denied bail and jailed for over a year and a half, in large part because prosecutors thought his “likes” and photos on social media proved he was a member of a violent gang. In another  case  of guilt by association, DHS officials barred a Palestinian student arriving to study at Harvard from entering the country based on the content of his friends’ social media posts. The student had neither written nor engaged with the posts, which were critical of the U.S. government. Black, Latino, and Muslim people are especially vulnerable to being falsely labeled threats based on social media activity, given that it is used to inform government decisions that are often already tainted by bias such as  gang determinations  and  travel screening  decisions.

Mistaken judgments:  It can be difficult to accurately interpret online activity, and the repercussions can be severe. In 2020, police in Wichita, Kansas  arrested  a teenager on suspicion of inciting a riot based on a mistaken interpretation of his Snapchat post, in which he was actually denouncing violence. British travelers were interrogated at Los Angeles International Airport and  sent back  to the U.K. due to a border agent’s misinterpretation of a joking tweet. And DHS and the FBI  disseminated  reports to a Maine-area intelligence-sharing hub warning of potential violence at anti-police brutality demonstrations based on fake social media posts by right-wing provocateurs, which were distributed as a warning to local police.

Chilling effects:  People are highly likely to  censor  themselves when they think they are being watched by the government, and this undermines everything from political speech to creativity to other forms of self-expression. The Brennan Center’s  lawsuit  against the State Department and DHS documents how the collection of social media identifiers on visa forms — which are then stored indefinitely and shared across the U.S. government, and sometimes with state, local, and foreign governments — led a number of international filmmakers to stop talking about politics and promoting their work on social media. They self-censored because they were concerned that what they said online would prevent them from getting a U.S. visa or be used to retaliate against them because it could be misinterpreted or reflect controversial viewpoints.

Loss of privacy:  A person’s  social media presence  — their posts, comments, photos, likes, group memberships, and so on — can collectively reveal their ethnicity, political views, religious practices, gender identity, sexual orientation, personality traits, and vices. Further, social media can reveal more about a person than they intend. Platforms’ privacy settings frequently change and can be difficult to navigate, and even when individuals keep information private it can be disclosed through the activity or identity of their connections on social media. DHS at least has recognized this risk, categorizing social media handles as “sensitive personally identifiable information” that could “result in substantial harm, embarrassment, inconvenience, or unfairness to an individual.” Yet the agency has failed to place robust safeguards on social media monitoring.

Who is harmed by social media monitoring?

While all Americans may be harmed by untrammeled social media monitoring, people from historically marginalized communities and those who protest government policies typically bear the brunt of suspicionless surveillance. Social media monitoring is no different.

Echoing the transgressions of the  civil rights era , there  are   myriad   examples  of the FBI and DHS using social media to surveil people speaking out on issues from racial justice to the treatment of immigrants. Both agencies have monitored Black Lives Matter activists. In 2017, the FBI  created  a specious terrorism threat category called “Black Identity Extremism” (BIE), which can be read to include protests against police violence. This category has been used to rationalize  continued   surveillance  of black activists, including monitoring of social media activity. In 2020, DHS’s Office of Intelligence & Analysis (I&A)  used  social media and other tools to target and monitor racial justice protestors in Portland, OR, justifying this surveillance by pointing to the threat of vandalism to Confederate monuments. I&A then  disseminated  intelligence reports on journalists reporting on this overreach.

DHS especially has  focused  social media surveillance on immigration activists, including those engaged in  peaceful protests  against the Trump administration’s family separation policy and others  characterized  as “anti-Trump protests.” From 2017 through 2020, ICE  kept tabs  on immigrant rights groups’ social media activity, and in late 2018 and early 2019, CBP and HSI  used   information  gleaned from social media in compiling dossiers and putting out travel alerts on advocates, journalists, and lawyers — including U.S. citizens — whom the government suspected of helping migrants south of the U.S. border.

Muslim, Arab, Middle Eastern, and South Asian communities have often been particular targets of the U.S. government’s  discriminatory  travel and immigration screening practices, including social media screening. The State Department’s collection of social media identifiers on visa forms, for instance,  came out  of President Trump’s Muslim ban, while  earlier  social media monitoring and collection programs focused disproportionately on people from predominantly Muslim countries and Arabic speakers.

Is social media surveillance an effective way of getting information about potential threats?

Not particularly. Broad social media monitoring for threat detection purposes untethered from suspicion of wrongdoing generates reams of useless information, crowding out information on — and resources for — real public safety concerns.

Social media conversations are difficult to interpret because they are often highly context-specific and can be riddled with slang, jokes, memes, sarcasm, and references to popular culture; heated rhetoric is also common. Government officials and assessments have repeatedly recognized that this dynamic makes it difficult to distinguish a sliver of genuine threats from the millions of everyday communications that do not warrant law enforcement attention. As the former acting chief of DHS I&A  said , “actual intent to carry out violence can be difficult to discern from the angry, hyperbolic — and constitutionally protected — speech and information commonly found on social media.” Likewise, a 2021  internal review  of DHS’s Office of Intelligence & Analysis noted: “[s]earching for true threats of violence before they happen is a difficult task filled with ambiguity.” The review observed that personnel trying to anticipate future threats ended up collecting information on a “broad range of general threats that did not meet the threshold of intelligence collection” and provided I&A’s law enforcement and intelligence customers with “information of limited value,” including “memes, hyperbole, statements on political organizations and other protected First Amendment speech.” Similar  concerns  cropped up with the DHS’s pilot programs to use social media to vet refugees.

The result is a high volume of false alarms, distracting law enforcement from investigating and preparing for genuine threats: as the FBI bluntly  put it , for example, I&A’s reporting practices resulted in “crap” being sent through one of its threat notification systems.

What rules govern federal agencies’ use of social media?

Some agencies, like the FBI, DHS, State Department and  IRS , have released information on the rules governing their use of social media in certain contexts. Other agencies — such as the ATF, DEA, Postal Service, and Social Security Administration — have not made any information public; what is known about their use of social media has emerged from media coverage, some of which has attracted  congressional   scrutiny . Below we describe some of what is known about the rules governing the use of social media by the FBI, DHS, and State Department.

FBI:  The main document governing the FBI’s social media surveillance practices is its  Domestic Investigations and Operations Guide  (DIOG), last made public in redacted form in 2016. Under the DIOG, FBI agents may review publicly available social media information prior to initiating any form of inquiry. During the lowest-level investigative stage, called an assessment (which requires an “authorized purpose” such as stopping terrorism, but no factual basis), agents may also log public, real-time communications (such as public chat room conversations) and work with informants to gain access to private online spaces, though they may not record private communications in real-time.

Beginning with “preliminary investigations” (which require that there be “information or an allegation” of wrongdoing but not that it be credible), FBI agents may monitor and record private online communications in real-time using informants and may even use false social media identities with the approval of a supervisor. While conducting full investigations (which require a reasonable indication of criminal activity), FBI agents may use all of these methods and can also get probable cause warrants to conduct wiretapping, including to collect private social media  communications .

The DIOG does restrict the FBI from probing social media based  solely  on “an individual’s legal exercise of his or her First Amendment rights,” though such activity can be a substantial motivating factor. It also requires that the collection of online information about First Amendment-protected activity be connected to an “authorized investigative purpose” and be as minimally intrusive as reasonable under the circumstances, although it is not clear how adherence to these standards is evaluated.

DHS:  DHS policies can be pieced together using a combination of legally mandated disclosures — such as privacy impact assessments and data mining reports — and publicly available policy guidelines, though the amount of information available varies. In 2012, DHS published  a   policy  requiring that components collecting personally identifiable information from social media for “operational uses,” such as investigations (but not intelligence functions), implement basic guidelines and training for employees engaged in such uses and ensure compliance with relevant laws and privacy rules. Whether this policy has been holistically implemented for “operational uses” of social media across DHS remains unclear. However, the Brennan Center has obtained a number of templates describing how DHS components use social media, created pursuant to the 2012 policy, through the Freedom of Information Act.

In practice, DHS policies are generally permissive. The examples below illustrate the ways in which various parts of the Department use social media.

  • ICE agents monitor social media for purposes ranging from situational awareness and criminal intelligence gathering to support for investigations. In addition to engaging private companies to monitor social media, ICE agents  may collect  public social media data whenever they determine it is “relevant for developing a viable case” and “supports the investigative process.”
  • Parts of DHS, including the National Operations Center (NOC) (part of the Office of Operations Coordination and Planning (OPS)), Federal Emergency Management Agency (FEMA), and Customs and Border Protection (CBP), use social media monitoring for situational awareness. The goal is generally not to “seek or collect” personally identifiable information. DHS may do so in “in extremis situations,” however, such as when serious harm to a person may be imminent or there is a “credible threat[] to [DHS] facilities or systems.” NOC’s situational awareness operations are not covered by the 2012 policy; other components carrying out situational awareness monitoring may receive an exception from the broader policy with the approval of DHS’s Chief Privacy Officer.
  • DHS’s U.S. Citizenship and Immigration Services ( USCIS ) uses social media to verify the accuracy of materials provided by applicants for immigration benefits (such as applications for refugee status or to become a U.S. citizen) and to identify fraud and threats to public safety. USCIS says it only looks at publicly available information and that it will respect account holders’ privacy settings and refrain from direct dialogue with subjects, though staff may use fictitious accounts in certain cases, including when “overt research would compromise the integrity of an investigation.”
  • DHS’s Office of Intelligence & Analysis (I&A), as a member of the Intelligence Community, is not covered by the 2012 policy. Instead it operates under a separate set of  guidelines  — pursuant to Executive Order 12,333, issued by the Secretary of Homeland Security and approved by the Attorney General — that govern its management of information collected about U.S. persons, including via social media. The office incorporates social media into the open-source intelligence reports it produces for federal, state, and local law enforcement; these reports provide threat warnings, investigative leads, and referrals. I&A personnel  may  collect and retain social media information on U.S. citizens and green card holders so long as they reasonably believe that doing so supports a national or departmental mission; these missions are broadly defined to include addressing homeland security concerns. And they may disseminate the information further if they believe it would help the recipient with “lawful intelligence, counterterrorism, law enforcement, or other homeland security-related functions.”

State Department:  The Department’s policies covering social media monitoring for visa vetting purposes are not publicly available. However, public disclosures shed some light on the rules consular officers are supposed to follow when vetting visa applicants using social media. For example, consular officers are not supposed to interact with applicants on social media, request their passwords, or try to get around their privacy settings — and if they create an account to view social media information, they “must abide by the contractual rules of that service or platform provider,” such as Facebook’s real name policy. Further, information gleaned from social media must not be used to deny visas based on protected characteristics (i.e., race, religion, ethnicity, national origin, political views, gender or sexual orientation). It is supposed to be used only to confirm an applicant’s identity and visa eligibility under criteria set forth in U.S. law.

Are there constitutional limits on social media surveillance?

Yes. Social media monitoring may violate the First or Fourteenth Amendments. It is well established that public posts receive constitutional protection: as the investigations guide of the Federal Bureau of Investigation recognizes, “[o]nline information, even if publicly available, may still be protected by the First Amendment.” Surveillance is clearly unconstitutional when a person is specifically targeted for the exercise of constitutional rights protected by the First Amendment (speech, expression, association, religious practice) or on the basis of a characteristic protected by the Fourteenth Amendment (including race, ethnicity, and religion). Social media monitoring may also violate the First Amendment when it burdens constitutionally protected activity and does not contribute to a legitimate government objective. Our lawsuit against the State Department and DHS (Doc Society v. Blinken), for instance, challenges the collection, retention, and dissemination of social media identifiers from millions of people — almost none of whom have engaged in any wrongdoing — because the government has not adequately justified the screening program and it imposes a substantial burden on speech for little demonstrated value. The White House office that reviews federal regulations noted the latter point — which a DHS Inspector General report and internal reviews have also underscored — when it rejected, in April 2021, DHS’s proposal to collect social media identifiers on travel and immigration forms.

Additionally, the  Fourth Amendment  protects people from “unreasonable searches and seizures” by the government, including searches of data in which people have a “reasonable expectation of privacy.” Judges have  generally   concluded  that content posted publicly online cannot be reasonably expected to be private, and that police therefore do not need a warrant to view or collect it. Courts are increasingly recognizing, however, that when the government can collect far more information — especially information revealing sensitive or intimate details — at a far lower cost than traditional surveillance, the Fourth Amendment  may protect  that data. The same is true of social media monitoring and the use of powerful social media monitoring tools, even if they are employed to review publicly available information.

Are there statutory limits on social media surveillance?

Yes. Most notably, the Privacy Act limits the collection, storage, and sharing of personally identifiable information about U.S. citizens and permanent residents (green card holders), including social media data. It also bars, under most circumstances, maintaining records that describe the exercise of a person’s First Amendment rights. However, the statute contains an exception for such records “within the scope of an authorized law enforcement activity.” Its coverage is limited to databases from which personal information can be retrieved by an individual identifier like a name, Social Security number, or phone number.

Additionally, federal agencies’ collection of social media handles must be authorized by law and, in some cases, be subject to public notice and comment and justified by a reasoned explanation that accounts for contrary evidence.  Doc Society v. Blinken , for example, alleges that the State Department’s collection of social media identifiers on visa forms violates the Administrative Procedure Act (APA) because it exceeds the Secretary of State’s statutory authority and did not consider that prior social media screening pilot programs had failed to demonstrate efficacy.

Is the government’s use of social media consistent with platform rules?

Not always. Companies do not bar government officials from making accounts and looking at what is happening on their platforms. However, after the ACLU  exposed  in 2016 that third-party social media monitoring companies were pitching their services to California law enforcement agencies as a way to monitor protestors against racial injustice,  Twitter ,  Facebook , and Instagram changed or clarified their rules to prohibit the use of their data for surveillance (though the actual  application  of those rules can be murky).

Additionally, Facebook has a policy requiring users to identify themselves by their “real names,” with no exception for law enforcement. The FBI and other federal law enforcement agencies permit their agents to use false identities notwithstanding this rule, and there have been documented instances of other law enforcement departments violating this policy as well.

How do federal agencies share information collected from social media, and why is it a problem?

Federal agencies may share information they collect from social media across all levels of government and the private sector and will sometimes even disclose data to foreign governments (for instance,  identifiers  on travel and immigration forms). In particular, information is shared domestically with state and local law enforcement, including through fusion centers, which are post-9/11 surveillance and intelligence hubs that were intended to facilitate coordination among federal, state, and local law enforcement and private industry. Such unfettered data sharing magnifies the risks of abusive practices.

Part of the risk stems from the dissemination of data to actors with a documented history of discriminatory surveillance, such as fusion centers. A 2012 bipartisan Senate investigation  concluded  that fusion centers have “yielded little, if any, benefit to federal counterterrorism intelligence efforts,” instead producing reams of low-quality information while labeling Muslim Americans engaging in innocuous activities, such as voter registration, as potential threats. More recently,  fusion centers  have been  caught   monitoring  racial and social justice organizers and protests and  promoting  fake social media posts by right-wing provocateurs as credible intelligence regarding potential violence at anti-police brutality protests. Further, many police departments that get information from social media through fusion centers (or from federal agencies like the FBI and DHS directly) have a  history  of targeting and surveilling minority communities and activists, but lack basic policies that govern their use of social media. Finally, existing agreements  permit  the U.S. government to share social media data — collected from U.S. visa applicants, for example — with repressive foreign governments that are known to retaliate against online critics.

The broad dissemination of social media data amplifies some of the harms of social media monitoring by eliminating context and safeguards. Under some circumstances, a government official who initially reviews and collects information from social media may better understand — from witness interviews, notes of observations from the field, or other material obtained during an investigation, for example — its meaning and relevance than a downstream recipient lacking this background. And any safeguards the initial agency places upon its monitoring and collection — use and retention limitations, data security protocols, etc. — cannot be guaranteed after it disseminates what has been gathered. Once social media is disseminated, the originating agency has little control over how such information is used, how long it is kept, whether it could be misinterpreted, or how it might spur overreach.

Together, these dynamics amplify the harms to free expression and privacy that social media monitoring generates. A qualified and potentially unreliable assessment based on social media that a protest could turn violent or that a particular person poses a threat might easily turn into a justification for policing that protest aggressively or arresting the person, as illustrated by the examples above. Similarly, a person who has applied for a U.S. visa or been investigated by federal authorities, even if they are cleared, is likely to be wary of what they say on social media well into the future if they know that there is no endpoint to potential scrutiny or disclosure of their online activity. Formerly, one branch of DHS I&A had a practice of redacting publicly available U.S. person information contained in open-source intelligence reports disseminated to partners because of the “risk of civil rights and liberties issues.” This practice was apparently used to justify removing the pre-publication oversight meant to identify such issues, which implies that DHS recognized that information identifying a person could be used to target them without a legitimate law enforcement reason.

What role do private companies play, and what is the harm in using them?

Both the FBI and DHS have reportedly hired private firms to help conduct social media surveillance, including to help identify threats online. This raises concerns around transparency and accountability as well as effectiveness.

Transparency and accountability:  Outsourcing surveillance to private industry obscures how monitoring is being carried out; limited information is available about relationships between the federal government and social media surveillance contractors, and the contractors, unlike the government, are not subject to freedom of information laws. Outsourcing also weakens safeguards because private vendors may not be subject to the same legal or institutional constraints as public agencies.

Efficacy: The most ambitious tools use artificial intelligence with the goal of making judgments about which threats, calls for violence, or individuals pose the highest risk. But doing so reliably is beyond the capacity of both humans and existing technology, as more than 50 technologists wrote in opposing an ICE proposal aimed at predicting whether a given person would commit terrorism or crime. The more rudimentary of these tools look for specific words and then flag posts containing those words. Such flags are overinclusive, and garden-variety content will regularly be elevated. Consider how the word “extremism,” for instance, could appear in a range of news articles, be used in reference to a friend’s strict dietary standards, or arise in connection with discussion about U.S. politics. Even the best Natural Language Processing tools, which attempt to ascertain the meaning of text, are prone to error, and fare particularly poorly on speakers of non-standard English, who may more frequently be from minority communities, as well as speakers of languages other than English. Similar concerns apply to mechanisms used to flag images and videos, which generally lack the context necessary to differentiate a scenario in which an image is used for reporting or commentary from one where it is used by a group or person to incite violence.
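To make that overinclusiveness concrete, here is a minimal sketch of the kind of keyword flagging these rudimentary tools perform. It is an illustration only, not any agency's or vendor's actual system; the watchlist and sample posts are invented.

```python
# Minimal illustration of keyword-based flagging and why it is overinclusive.
# The watchlist and posts are invented; real monitoring tools are more complex,
# but the basic failure mode (no context) is the same.
WATCHLIST = {"extremism", "attack", "bomb"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any watchlist word, regardless of context."""
    words = {w.strip(".,:!?\"'").lower() for w in text.split()}
    return bool(words & WATCHLIST)

posts = [
    "New op-ed on the rise of political extremism in Europe",  # news commentary
    "My friend's diet is pure extremism: no sugar, ever",      # joke about a friend
    "That stats exam was a bomb, I totally failed it",         # everyday slang
]
for post in posts:
    print(flag_post(post), "-", post)  # all three are flagged; none is a threat
```

Every one of these posts trips the filter, yet none describes a threat, which is exactly the problem with context-free flagging.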


Social Media Security: Risks, Best Practices, and Tools for 2024

Learn about the most common social media security risks and the best practices that will help you protect your accounts.


Keeping your social accounts secure is a critical component of your social media strategy. Here, we’ll walk you through the latest social media security risks. Then we’ll explore how you can protect yourself, your brand, and your team.

Bonus: Get a free, customizable social media policy template to quickly and easily create guidelines for your company and employees.

What is social media security?

Social media security refers to the practices used to protect your social media accounts, information, and privacy. These measures provide security from threats like:

  • Data breaches
  • Identity theft
  • Spread of misinformation

Nowadays, platforms like Instagram, Facebook, and LinkedIn are relied upon for communication, marketing, and customer service. Therefore, social media security awareness is important for both business and personal accounts.

Why is social media security awareness so important?

Social media accounts contain a wealth of data. They’re linked to personal information, customer connections, credit card details, and so much more. Without social media security protocols in place, all that information is at unnecessary risk.

Common social media security risks

Phishing and scams

Phishing scams are some of the most common social media cyber security risks. The goal of a phishing scam is to get you or your employees to hand over passwords, banking details, or other sensitive information.

Fake giveaways are one common type of phishing scam. Fraudsters impersonate companies like Best Buy or Bed Bath and Beyond to offer a significant coupon or prize. Of course, you have to provide personal information to access the non-existent reward.

In another variation, someone claims to be a lottery winner who wants to share their winnings.

Free money on social media? Nah. It’s a scam. Learn more: https://t.co/j2MkzCIZIc — FTC (@FTC) May 21, 2024

Online shopping and investment scams are also significant problems on social media. Losses reported to the FTC that started on social media jumped from $237 million in 2020 to $1.4 billion in 2023.

Social media is the most common contact method for scammers targeting Americans aged 20 to 69. In fact, 2023 was the first year social media became the primary contact method for those in their 40s through 60s.

Warn your parents and grandparents (and your C-suite)!

[Graph: fraud losses by age group, FTC Consumer Sentinel Network data]

Source: Federal Trade Commission

Imposter accounts

It’s relatively easy for an imposter to create a social media account that looks like it belongs to your company. This is one reason why getting verified on social networks is so valuable.

Imposter accounts can target your customers, employees, or prospective hires. Your connections may be tricked into handing over confidential information. In turn, your reputation suffers. Imposter accounts may also try to con employees into handing over login credentials for corporate systems.
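As a rough illustration of how lookalike handles can be surfaced for review, the sketch below compares candidate account names against an official handle using simple string similarity. The handles and the 0.6 threshold are invented for the example; dedicated brand-protection tools use far richer signals.

```python
# Toy check for lookalike handles that could be imposter accounts.
# Handles and threshold are illustrative only.
from difflib import SequenceMatcher

OFFICIAL_HANDLE = "hootsuite"

def similarity(a: str, b: str) -> float:
    """Rough string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

candidates = ["hootsuite_support", "h00tsuite", "hootsulte_help", "gardening_tips"]
for handle in candidates:
    score = similarity(OFFICIAL_HANDLE, handle)
    if handle.lower() != OFFICIAL_HANDLE and score >= 0.6:
        print(f"Review manually: @{handle} (similarity {score:.2f})")
# The first three handles are close enough to warrant a manual look;
# "gardening_tips" is not.
```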

LinkedIn’s latest Community report notes that they took action on more than 63 million fake accounts in just the last six months of 2023. Most of those accounts (90.5%) were blocked automatically at registration. However, 232,400 fake accounts were only addressed once members reported them.


Source: LinkedIn

Meanwhile, Facebook took action on 631 million fake accounts between January and March 2024. The social media platform estimates that 4% of monthly active users are fake accounts.

AI information gathering

There’s a lot of information about your business – and your employees – on social media. That’s not new. What is new is the ability to gather crumbs of information from multiple sources and use it to train an AI tool to produce content.

That makes it easier for bad actors to create convincing, fraudulent social media posts and direct messages.

In fact, 20% of Gen X say it’s hard to tell what’s real or fake regarding social content generated by AI. Younger generations find it only slightly easier: 15% of Millennials and 14% of Gen Z also struggle here.

[Graph: agreement with statements about AI-generated social content, by generation]

Source: Hootsuite Social Media Consumer 2024 Survey

Fraudsters can also use information gathered from social media to train an AI tool. They are then well-equipped to contact your employees through other means. AI social media and search tools can also support scammers by seeming to verify false information.

Case in point: A Canadian man was recently scammed by a fraudulent Facebook customer support line. He felt comfortable giving his information to the scammer because a chat with Meta AI told him the phone number he found online was legitimate. (It was not.)

Malware attacks and hacks

In one of the more embarrassing recent social media cyber security incidents, the X (formerly Twitter) account of the U.S. Securities and Exchange Commission was hacked in January 2024.

The @SECGov X account was compromised, and an unauthorized post was posted. The SEC has not approved the listing and trading of spot bitcoin exchange-traded products. — U.S. Securities and Exchange Commission (@SECGov) January 9, 2024

If hackers gain access to your social media accounts, they can cause enormous brand reputation damage.

A newer threat to business accounts is the hijacking of social media ad accounts with attached payment methods. Attackers can then run fraudulent ads that appear to come from a legitimate source (you!) but actually direct users to malware or scams (bad!).

[Graph: threat groups targeting Meta Business accounts]

Source: W/Labs

Vulnerable third-party apps

Locking down your own social accounts is great. But hackers may still be able to gain access through vulnerabilities in connected third-party apps .

Instagram specifically warns about third-party apps that claim to provide likes or followers:

“If you give these apps your login information … they can gain complete access to your account. They can see your personal messages, find information about your friends, and potentially post spam or other harmful content on your profile. This puts your security, and the security of your friends, at risk.”

Password theft

Those social media quizzes asking about your first car or elf name might seem like harmless fun. But they’re a common method for gathering password information or personal details that are often used as forgotten-password clues.

Are you bored scrolling post after post on Facebook? Think twice though before taking that fun-looking Facebook quiz. You are giving away more information than you think. See how: https://t.co/YmVAoL3yiF pic.twitter.com/gFNJHPsxU1 — BBB (@bbb_us) January 28, 2024

By completing them, employees can compromise their cyber security on social media.

Employees can also unwittingly provide clues to their forgotten password hints. This info may appear in posts about life events. Think: graduations, weddings, and birthdays. It’s always best to limit personal information shared online, especially on public profiles.

Privacy settings and data security

People seem to be well aware of the potential privacy risks of using social media. Those concerns, of course, don’t stop people from using their favorite social channels. The number of active social media users grew to 5.07 billion as of April 2024.

Make sure you – and your team – understand privacy policies and settings . This applies to both your personal and business accounts. Provide privacy guidelines for employees who use their personal social accounts at work, or to talk about work.

Unsecured mobile phones

Surprisingly, 16% of Americans never use phone locking features such as a passcode, fingerprint, or face recognition. Their social accounts and other data are completely accessible to anyone who gets their hands on their mobile device.

[Chart: 16% of smartphone owners don’t use a security feature to unlock their device; older adults are especially likely to skip one]

Source: Pew Research Center

Failing to update phone software also exposes users to unnecessary risk. Only 42% of American smartphone users have their software set to update automatically, and 3% never update their smartphone software at all.

Social media security best practices for 2024

Now that you know the risks, here are some ways to mitigate them .

Implement a detailed social media policy

A social media policy is a set of guidelines that outline how your business and your employees should use social media responsibly.

At a minimum, the security section of your social media policy should include:

  • Rules related to personal social media use on business equipment
  • Social media activities to avoid, like quizzes that ask for personal information
  • Which departments or team members are responsible for each social media account
  • Guidelines on how to create an effective password and how often to change passwords
  • Expectations for keeping software and devices updated
  • How to identify and avoid scams, attacks, and other security social media threats
  • Who to notify and how to respond if a social media security concern arises

Set up an approval process

Limiting the number of people who can access and post on your social accounts is an important defensive strategy.

You might focus on threats coming from outside your organization. However, employees are a significant source of accidental data breaches.

You may have whole teams of people working on social media messaging, post creation, or customer service. But not everyone needs to know the passwords to your social accounts – or have the ability to post.

You can use Hootsuite to collaborate on social media content without sharing passwords. Each post then goes through an approval workflow before it is published.
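To make the idea concrete, here is a minimal sketch, in plain Python, of what an approval workflow boils down to: drafts carry a status, and only designated approvers can move them to an approved state. This is not Hootsuite's actual API; the names and roles are invented for illustration.

```python
# Bare-bones approval workflow: creators submit drafts, only approvers publish.
from dataclasses import dataclass, field

@dataclass
class Draft:
    author: str
    text: str
    status: str = "pending_approval"

@dataclass
class ApprovalQueue:
    approvers: set[str]
    drafts: list[Draft] = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        self.drafts.append(draft)

    def approve(self, draft: Draft, reviewer: str) -> None:
        if reviewer not in self.approvers:
            raise PermissionError(f"{reviewer} cannot approve posts")
        draft.status = "approved"  # only now may the post be scheduled or published

queue = ApprovalQueue(approvers={"social_lead"})
post = Draft(author="intern", text="Flash sale starts Friday!")
queue.submit(post)
queue.approve(post, "social_lead")
print(post.status)  # approved
```

The design point is simply that publishing rights and drafting rights are separate, so a single compromised or careless account cannot push content straight to your channels.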

Use two-factor authentication

Two-factor authentication is not foolproof. But it does provide a powerful extra layer of protection for your social media accounts . It’s best practice to enable it for all secure social media accounts, even if it can sometimes be annoying.

In fact, a lack of two-factor authentication contributed to the SEC Twitter account hack.

We can confirm that the account @SECGov was compromised and we have completed a preliminary investigation. Based on our investigation, the compromise was not due to any breach of X’s systems, but rather due to an unidentified individual obtaining control over a phone number… — Safety (@Safety) January 10, 2024
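For context on what an authenticator app is doing behind the scenes, here is a small sketch using the open-source pyotp library to generate and verify time-based one-time passwords (TOTP). It illustrates the general mechanism used by authenticator apps, not the internal implementation of any particular social platform.

```python
# Sketch of app-based two-factor codes (TOTP) using the pyotp library.
import pyotp

# When you enable 2FA, the platform stores a shared secret...
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# ...and your authenticator app derives a fresh 6-digit code from it every 30 seconds.
code = totp.now()
print("current code:", code)

# At login, the platform checks the submitted code against the same secret.
print("accepted:", totp.verify(code))      # True
print("accepted:", totp.verify("000000"))  # almost certainly False
```

Because the code changes constantly and never travels with your password, a stolen password alone is not enough to get into the account.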

Set up an early warning system with social media security monitoring tools

Keep an eye on all of your social channels. That includes the ones you use every day and those you’ve registered but never used.

Use your social media monitoring plan to watch for:

  • Suspicious activities
  • Inappropriate mentions of your brand by employees
  • Inappropriate mentions of your brand by anyone else associated with the company
  • Negative conversations about your brand
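A monitoring tool does the watching for you, but the underlying rules can be as simple as the sketch below, which flags posts that mention your brand alongside common scam phrases. The brand name, phrases, and sample posts are invented, and a real setup would feed in posts from whatever monitoring tool you use.

```python
# Toy "early warning" rule: brand mention plus a scam-sounding phrase.
BRAND = "acme"
SCAM_PHRASES = ("free coupon", "giveaway winner", "claim your prize", "verify your account")

def needs_review(post: str) -> bool:
    text = post.lower()
    return BRAND in text and any(phrase in text for phrase in SCAM_PHRASES)

stream = [
    "Loving my new Acme headphones!",
    "ACME giveaway winner!! Claim your prize here: bit.ly/...",
]
alerts = [post for post in stream if needs_review(post)]
print(alerts)  # only the second post is flagged for review
```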

Regularly review your social media security measures

Social media security threats are constantly changing. Regular audits of your social media security measures will help keep you ahead of fraudsters.

At least once a quarter, be sure to review:

Social network privacy and security settings

Social media companies routinely update their privacy and security settings. For example, X (formerly Twitter) disabled two-factor authentication via text message for non-premium users in March 2023. In April 2024, the platform rolled out Passkeys as a login for all iOS users worldwide. Both are important security changes that should be addressed in your social media policy.

Access and publishing privileges and approval workflows

Regularly check who has access to your social media management platform and who holds publishing privileges on your social accounts, and update as needed. Make sure all former employees have had their access revoked, and check for anyone who has changed roles and no longer needs the same level of access.
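A quarterly access review can be as simple as comparing who currently has access with an up-to-date list of staff and their roles, along the lines of this sketch. The names, roles, and access levels are invented; in practice the data would come from your social media management platform's user list and your HR system.

```python
# Toy access audit: compare current platform access against the staff roster.
current_access = {"dana": "admin", "sam": "publisher", "lee": "publisher"}
staff_roles    = {"dana": "social_lead", "sam": "customer_support"}  # lee has left

for user, level in current_access.items():
    if user not in staff_roles:
        print(f"REVOKE: {user} is no longer on staff")
    elif staff_roles[user] == "customer_support" and level == "publisher":
        print(f"DOWNGRADE: {user} changed roles and no longer needs to publish")
```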

Recent online security threats

Maintain a good relationship with your company’s IT team to improve your social media security awareness . They can keep you informed of any new security risks and social engineering attacks. And keep an eye on the news—big hacks and major new threats will be reported in mainstream news outlets.

Your social media policy

As new networks gain popularity, security best practices change, and new threats emerge. A quarterly review will ensure this document remains useful and helps to keep your social accounts safe.

3 Social media security tools to keep your channels safe

1. Hootsuite

With Hootsuite , team members never need to know the login information for any social network account. You can control access and permission, so everyone gets only the necessary access.

You can then build an approval workflow that automatically bumps content from the creator to the approver. Notifications ensure everyone knows when they need to complete an approval or revision task.

If someone leaves the company, you can disable their account without changing social media passwords.

You can add an extra level of security with Hootsuite’s Proofpoint integration . This compliance software automatically reviews social content before publishing. This ensures it follows your social policy and relevant legislation and regulations.


Psstt: Learn more about setting up Proofpoint here.

Hootsuite is also an effective social monitoring tool that keeps you ahead of threats. It monitors social networks for mentions of your brand and keywords. You then know right away when suspicious conversations about your brand emerge.

For example, say people are sharing phony coupons, or an imposter account starts tweeting in your name. You’ll see that activity in your streams and can take action before your customers get scammed.

Hootsuite Streams

Psstt: Hootsuite is also FedRAMP authorized and Cyber Essentials compliant. Learn more about our risk management program and information security policies.

2. ZeroFOX

Source: ZeroFOX

ZeroFOX is a cybersecurity platform that provides automated alerts of:

  • Dangerous, threatening, or offensive social content targeting your brand
  • Malicious links posted on your social accounts
  • Scams targeting your business and customers
  • Fraudulent accounts impersonating your brand

It also helps protect against hacking and phishing attacks.

3. BrandFort

BrandFort content moderation tool removes spam in comments

Source: Brandfort

BrandFort can help protect your social accounts from spam and phishing comments, and other content moderation issues.

Why are spam comments a cyber security and social media risk? They’re visible on your profiles and may entice legitimate followers or employees to click through to scam sites. You’ll have to deal with the fallout, even though you did not directly share the spam.

BrandFort also detects and hides personally identifiable information that followers post in comments on your posts. This helps protect them from phishing and fraud attacks. Plus, BrandFort uses AI to detect problem comments in multiple languages and hide them automatically.

You can also integrate BrandFort directly into the Hootsuite dashboard.
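How BrandFort detects PII is not public, but rule-based detection of obvious patterns gives a feel for the general idea. The sketch below hides email addresses and phone numbers in a comment using simple regular expressions; real moderation tools use far more sophisticated (and multilingual) detection.

```python
# Rough sketch of rule-based PII redaction in comments.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(comment: str) -> str:
    """Replace any detected PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        comment = pattern.sub(f"[{label} hidden]", comment)
    return comment

print(redact("My order never arrived! Email me at jane.doe@example.com or call 555-123-4567"))
# -> My order never arrived! Email me at [email hidden] or call [phone hidden]
```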

Easily manage all your company’s social media profiles using Hootsuite. From a single dashboard, you can schedule and publish posts, engage your followers, monitor relevant conversations, measure results, manage your ads, and much more.

Do it better with Hootsuite , the all-in-one social media tool. Stay on top of things, grow, and beat the competition.


Christina Newberry is an award-winning writer and editor whose greatest passions include food, travel, urban gardening, and the Oxford comma—not necessarily in that order.


A Survey and a Case-Study Regarding Social Media Security and Privacy on Greek Future IT Professionals



Author tags.

  • Geo-Location
  • IT Practitioners
  • IT Professionals
  • Information Retrieval
  • OSINT Techniques
  • Social Media
  • Social Networks


Critical Security Studies in the Digital Age

Social Media and Security

© 2023

Joseph Downing

Senior Lecturer of International Relations and Politics, Department of Politics, History and International Relations, Aston University, Birmingham, UK


  • Re-evaluates existing debates in critical security studies in the context of digital communications
  • Proposes a historical and methodological re-assessment of the field
  • Explains how social media impacts critical security studies

Part of the book series: New Security Challenges (NSECH)

About this book

This book demonstrates that the disciplinary boundaries present within international relations approaches to security studies are redundant when examining social media, and that inter- and multi-disciplinary analysis is key. A key result of the analysis is that, when examining the social media sphere, security scholars need to “expect the unexpected”: social media enables users to subvert, contest and create security narratives with symbols and idioms of their choice, drawing not only on “traditional” security themes but also on unexpected and under-explored ones, such as narratives from the local context of users’ towns and cities and the symbolism of football clubs.

The book also explores the complex topography of social media when considering constructions of security. That topography is neither elite-dominated and hierarchical, as the Copenhagen School conceptualises security speak, nor completely flat and egalitarian, as suggested by vernacular security studies’ non-elite approach. Rather, it is shifting and dynamic, with individuals gaining influence in security debates in unpredictable ways.

In examining social media, this book engages with the emancipatory burden of critical security studies. It argues that this burden remains unfulfilled on social media, which instead offers a “thin” notion of discursive emancipation: social media does provide the ability for previously excluded voices to participate in security debates, even if this does not result in their direct emancipation from power hierarchies and structures offline.


  • critical security
  • social media
  • war and social media
  • digital warfare
  • mediatization
  • digitalization
  • war and media
  • world politics in the digital age

Table of contents (8 chapters)

1. Introduction to Social Media and Critical Security Studies in the Digital Age
2. Conceptualising Social Media and Critical Security Studies in the Digital Age
3. Social Media, Digital Methods and Critical Security Studies
4. Social Media, Security and Terrorism in the Digital Age
5. Social Media and Vernacular Security in the Digital Age
6. Social Media, Security and Democracy in the Digital Age
7. Social Media, Security and Identity in the Digital Age
8. Conclusions on Social Media and Critical Security Studies in a Digital Age

About the author

Joseph Downing is Senior Lecturer in International Relations and Politics, Aston University, UK, and Visiting Fellow in the European Institute, London School of Economics and Political Science, UK. He was previously Marie-Curie Fellow at the Laboratoire méditerranéen de sociologie, CNRS, Université Aix-Marseille, Marseille, and the School of Oriental and African Studies, University of London. He has published and consulted widely on politics and security.

Bibliographic Information

Book Title: Critical Security Studies in the Digital Age

Book Subtitle: Social Media and Security

Authors: Joseph Downing

Series Title: New Security Challenges

DOI: https://doi.org/10.1007/978-3-031-20734-1

Publisher: Palgrave Macmillan Cham

eBook Packages: Political Science and International Studies, Political Science and International Studies (R0)

Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

Hardcover ISBN: 978-3-031-20733-4 (published 24 January 2023)

Softcover ISBN: 978-3-031-20736-5 (published 24 January 2024)

eBook ISBN: 978-3-031-20734-1 (published 23 January 2023)

Series ISSN: 2731-0329

Series E-ISSN: 2731-0337

Edition Number: 1

Number of Pages: XI, 265

Number of Illustrations: 1 b/w illustration

Topics: International Security Studies, Military and Defence Studies, Terrorism and Political Violence, Social Media, Political Communication


In These Five Social Media Speech Cases, Supreme Court Set Foundational Rules for the Future


The U.S. Supreme Court addressed government’s various roles with respect to speech on social media in five cases reviewed in its recently completed term. The through-line of these cases is a critically important principle that sets limits on government’s ability to control the online speech of people who use social media, as well as the social media sites themselves: internet users’ First Amendment rights to speak on social media—whether by posting or commenting—may be infringed by the government if it interferes with content moderation, but will not be infringed by the independent decisions of the platforms themselves.

As a general overview, the NetChoice cases, Moody v. NetChoice and NetChoice v. Paxton, looked at government’s role as a regulator of social media platforms. The issue was whether state laws in Texas and Florida that prevented certain online services from moderating content were constitutional in most of their possible applications. The Supreme Court did not rule on that question and instead sent the cases back to the lower courts to reexamine NetChoice’s claim that the statutes had few possible constitutional applications.

The court did, importantly and correctly, explain that at least Facebook’s Newsfeed and YouTube’s Homepage were examples of platforms exercising their own First Amendment rights on how to display and organize content, and the laws could not be constitutionally applied to Newsfeed and Homepage and similar sites, a preliminary step in determining whether the laws were facially unconstitutional.

Lindke v. Freed and Garnier v. O’Connor-Ratcliffe looked at the government’s role as a social media user who has an account and wants to use its full features, including blocking other users and deleting comments. The Supreme Court instructed the lower courts to first look to whether a government official has the authority to speak on behalf of the government, before looking at whether the official used their social media page for governmental purposes, conduct that would trigger First Amendment protections for the commenters.

Murthy v. Missouri, the jawboning case, looked at the government’s mixed role as a regulator and user, in which the government may be seeking to coerce platforms to engage in unconstitutional censorship or may also be a user simply flagging objectionable posts as any user might. The Supreme Court found that none of the plaintiffs had standing to bring the claims because they could not show that their harms were traceable to any action by the federal government defendants.

We’ve analyzed each of the Supreme Court decisions, Moody v. NetChoice (decided with NetChoice v. Paxton), Murthy v. Missouri, and Lindke v. Freed (decided with Garnier v. O’Connor Ratcliffe), in depth. But some common themes emerge when all five cases are considered together.

  • Internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves. This principle, which EFF has been advocating for many years, is evident in each of the rulings. In Lindke, the Supreme Court recognized that government officials, if vested with and exercising official authority, could violate the First Amendment by deleting a user’s comments or blocking them from commenting altogether. In Murthy, the Supreme Court found that users could not sue the government for violating their First Amendment rights unless they could show that government coercion led to their content being taken down or obscured, rather than the social media platform’s own editorial decision. And in the NetChoice cases, the Supreme Court explained that social media platforms typically exercise their own protected First Amendment rights when they edit and curate which posts they show to their users, and the government may violate the First Amendment when it requires them to publish or amplify posts.
  • Underlying these rulings is the Supreme Court’s long-awaited recognition that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to amplify and recommend some posts and obscure others, and are often guided in this process by their own community standards or similar editorial policies. This is seen in the Supreme Court’s emphasis in Murthy that jawboning is not actionable if the content moderation was the independent decision of the platform rather than coerced by the government. And a similar recognition of independent decision-making underlies the Supreme Court’s First Amendment analysis in the NetChoice cases. The Supreme Court has now thankfully moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Supreme Court used that language to describe the process in last term’s case, Twitter v. Taamneh.
  • This term’s cases also confirm that traditional First Amendment rules apply to social media. In Lindke, the Supreme Court recognized that when government controls the comments components of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. In the NetChoice cases, the Supreme Court found that platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Plenty of legal issues around social media remain to be decided. But the 2023-24 Supreme Court term has set out important speech-protective rules that will serve as the foundation for many future rulings.  


The Influence of Social Media on Perceived Levels of National Security and Crisis: A Case Study of Youth in the United Arab Emirates


1. Introduction

2. Background

…a core part of their everyday reality and not just a peripheral place to visit and share ideas with others. Young people’s connections and networks in social media also provide them with the opportunity to engage in different types of political discussions in society. (p. 231)
…Any future researcher must therefore understand that 90% of terrorist activities have gone dark, meaning that it is high time to research the strategies to control their clandestine activities. (p. 1)
…results show that serious events had a minimal impact on the city’s image as perceived and shared by reviewers despite the enormous media coverage.
…while traditional media still plays a significant role in shaping risk perception, social media can be considered even more influential. (p. 39)
…means to maintain authentic cultural components to face other foreign cultural currents that may be suspicious. (p. 633)
…we should point to the rarity of studies that deal directly with the effect of social networks causing Intellectual Deviation. (p. 636)

3. Materials and Methods

  • Cultural and societal implications (6 questions);
  • Ethical and religious implications (7 questions);
  • Political implications (5 questions);
  • Economic implications (6 questions);
  • Security implications (6 questions).
… the fact that the method is based entirely on empirical data regarding subjects’ responses rather than subjective opinions of judges; and the fact that this method produces more homogeneous scales and increased the probability that unitary attitude is being measured, therefore that validity (construct and concurrent) and reliability are reasonably high. (p. 560)

5. Discussion

6. Conclusions

6.1. Theoretical Contribution

6.2. Practical Implications

6.3. Limitations and Future Work

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest




  • After Greetings
  • First: general data
1. Gender: Male ( )  Female ( )
2. Academic qualification: ( ) Diploma or less  ( ) BA  ( ) Postgraduate
3. Work nature: ( ) Responsible/Director  ( ) Employee  ( ) Does not work
4. Nationality: ( ) Emirati  ( ) Resident
5. Age: ( ) 20 to 25 years  ( ) 26 to 30 years  ( ) 31 to 40 years
  • Second: The axes and paragraphs of the questionnaire
Cultural and societal implications:
1. Social media helps spread disruptive ideas in society, especially among young people
2. Social media has helped spread Western ideas, values, and customs among young people in Emirati society
3. Social media helps young people in the UAE to be influenced by, and to follow, the opinions, beliefs and perceptions of others
4. Social media has helped form groups and friendships between young people with common cultural and scientific interests
5. Social media has contributed to easy, instant access to information and news, with repercussions for Emirati society
6. Social media plays a major role in questioning the value of the country’s cultural heritage and national symbols

Ethical and religious implications:
1. Social media contributes to the fluctuation of the value system of members of Emirati society as a result of the mixing of cultures
2. Social media helps to spread the websites of deviant and misleading groups
3. Social media has contributed to the development of values associated with the moderation and tolerance of the Islamic religion in the United Arab Emirates
4. The spread of social media has contributed to a gap between religious scholars and the youth group in society
5. Social media has contributed to the spread of immoral images and videos among young people in society
6. Social media has contributed to the marketing of consumer values that are hostile to our authentic Arab values, ethics and customs
7. Social media has reduced the depth of personal relationships between members of the same family and weakened the moral aspect of family control

Political implications:
1. Social media contributes to mobilizing public opinion and news against government policy in the United Arab Emirates
2. Social media distorts the personal information of some important leadership figures in the United Arab Emirates
3. Social media helps to respond to and refute the suspicions raised about the UAE
4. Social media develops the ability of young people in the UAE to objectively express their point of view on community issues
5. Social media contributes to promoting the values associated with the concepts of citizenship and national responsibility

Economic implications:
1. Social media has contributed to the increase in money laundering
2. Social media is a business investment that benefits business owners
3. Social media negatively affects the country’s economy
4. Social media has contributed to the spread of illegal e-marketing
5. Social media provides a suitable environment for e-commerce buying and selling
6. Social media contains annoying and often unacceptable advertisements

Security implications:
1. Social media contributes to the formation of public opinion towards security issues, no matter how valid they are
2. Social media has indirectly violated the privacy of many others through spying and malicious software
3. Social media has a role in directing false information and news that may lead to crises and security disturbances
4. Social media contributes to spreading hate crimes in light of the multinationality of the United Arab Emirates
5. Social media has helped increase the rate of cyber-extortion crimes in Emirati society
6. Social media contributes to spreading rumors that harm the security and stability of society in the United Arab Emirates

1. The wide spread of social media represents a real threat to the national security of the United Arab Emirates
2. The contents published on social media pages are considered an entry point for the moral, religious and cultural invasion of young people in the UAE
3. Social media contributes to reducing opportunities for interaction and communication between family members in the UAE, which affects the level of national security
4. Social media constitutes a fertile environment for some to exploit extremist ideas that affect the security and stability of Emirati society
5. Social media has helped young people in the UAE learn about and benefit from other cultures and civilizations
6. The competent security agencies in the UAE tightly control the contents of social media that are harmful to the country’s national security to limit their political and security impact on members of society
7. Social media is used as a media platform by the youth group in the UAE to spread the values of tolerance in light of the country’s multiple cultures
8. Developed legislation and laws have contributed to limiting the security and social effects of social media on the national security of the state
9. There is intellectual, cognitive and cultural maturity among young people in the UAE regarding the threats of social media to national security and ways to deal with them
10. The security media of the Ministry of Interior and police leaders contributed to spreading security awareness among members of society about the effects of social media on the national security of the state and its negative repercussions on stability
  • Yin, R. Case Study Research: Design and Methods , 3rd ed.; Sage: Thousand Oaks, CA, USA, 2003. [ Google Scholar ]
  • Burns, R.B. Introduction to Research Methods , 4th ed.; Pearson Education: New South Wales, Australia, 2000. [ Google Scholar ]
  • Masudi, F. Arab Youth Survey 2021: Social Media Big-but Less Trusted-Source for News for Youngsters. 2021. Available online: https://gulfnews.com/uae/arab-youth-survey-2021-social-media-big--but-less-trusted---source-for-news-foryoungsters-1.82884142 (accessed on 17 August 2022).
  • Heale, R.; Twycross, A. Validity and reliability in quantitative studies. Évid. Based Nurs. 2015 , 18 , 66–67. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Chaffey, D. Global Social Media Statistics Research Summary 2022. Available online: https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/ (accessed on 17 August 2022).
  • Al Hanaee, M.; Davies, A. Influencing Police and Community Relations in Abu Dhabi with a Soft Power Approach during COVID-19. Polic. A J. Policy Pract. 2022 , 16 , 249–259. [ Google Scholar ] [ CrossRef ]


Comparison of related studies with the current UAE study (author/reference, title, content focus, and comparison/contribution):

  • Norri-Sederholm, T.; Norvanto, E.; Talvitie-Lamberg, K.; Huhtinen, A.-M. [ ] "Social Media as the Pulse of National Security Threats: A Framework for Studying How Social Media Influences Young People’s Safety and Security Situation". Content focus: a four-year study, not yet complete, that approaches the subject from the perspective of society’s comprehensive security and investigates whether activity on social media influences attitudes towards personal and national security and young people’s safety and security situation. Comparison/contribution: findings are not yet available from this Finnish study, so the current UAE study will offer a valuable comparison, although the dimensions investigated in the two studies may differ.
  • Al Zaabi, K.; Tomic, D. [ ] "New security paradigm—the use of social networks as a form of threat to the national security state". Content focus: a qualitative study that examined the role of social media in influencing indoctrination. Comparison/contribution: the study design is not directly comparable to the UAE study; Al Zaabi and Tomic suggest social media is a tool for influencing national security, whereas the UAE study indicates the impact is limited.
  • Al-Enezi, N.N. [ ] "Employment of social networking sites in response to rumors". Content focus: explored the use of social networking sites, specifically the management of Facebook, to mitigate false information. Comparison/contribution: the study confirmed the wide use of social media by youth and its potential influence; the UAE study’s findings resonate with this.
  • Marine-Roig, E. [ ] "Content analysis of online travel reviews". Content focus: explores online travel reviews, specifically perspectives on the image of areas with recent terrorist activity. Comparison/contribution: aligns with the UAE finding that social media activity had limited impact on perceptions of security in specific locations.
  • Tsoy, D.; Tirasawasdichai, T.; Kurpayanidi, K.I. [ ] "Role of social media in shaping public risk perception during COVID-19 pandemic". Content focus: a literature review containing no primary data; its theme is that social media exposes people to more information and can heighten risk perception, calling for attention to crisis communication management and for the use of social media to examine public opinion. Comparison/contribution: there are no comparable data, but the study is valuable in confirming the reach of social media and its potential to heighten risk perception and, by extension, perceptions of security.
  • Akram, W.; Kumar, R. [ ] "A study on positive and negative effects of social media on society". Content focus: a general review, without primary data, of commonly used social media sites and their positive and negative effects on education, business, society in general, teenagers, and children. Comparison/contribution: the study offers no data with which to establish a comparison, but it supports the potential of social media to have positive and negative effects on these dimensions of society; the UAE study extends it by providing data and an analysis of youth use of social media.
  • Raggad, A.; Shweihat, S. [ ] "The degree of positive and negative effects of social media networks from the point of view of the German-Jordanian University students". Content focus: uses a 55-item questionnaire to identify the topics and sites most attractive to a sample of German-Jordanian University students and to analyse the associated social and cultural effects, both negative and positive. Comparison/contribution: the approach is similar to the current UAE study, but the focus of the questions differs; the UAE study specifically explores the connection between social media and perceptions of national security, whereas Raggad and Shweihat explore which topics on social media are of most interest and students’ views of the positive and negative influences of social media related to those topics.
  • Hollewell, G.F.; Longpré, N. [ ] "Radicalization in the social media era: Understanding the relationship between self-radicalization and the internet". Content focus: the role of social media in self-radicalization; results showed that individuals holding a university degree, especially young men, were more at risk of endorsing positive attitudes toward political violence and terrorism and, therefore, of being radicalized. Comparison/contribution: the Hollewell and Longpré questions focused on emotional intelligence, psychological involvement on social media, attitudes toward terrorism and political violence, and loneliness; these dimensions did not align directly with the UAE survey dimensions.
  • Al Smadi, H. [ ] "The Effect of Social Networking Sites in Causing Intellectual Deviation from Qassim University Students’ Perspective". Content focus: investigates the effect of social networks in causing intellectual deviation (distortion of Islam and noble values) among KSA university students. Comparison/contribution: the study used a questionnaire and analysed the results in a similar way to the UAE study (means and standard deviations, Pearson correlations, and MANOVA). Its results suggest a potentially strong influence of social media on intellectual deviation (culture, religion, and the values of Islam), but it does not consider national security; the contrast with the UAE results suggests the core dimensions are interdependent, with security the least affected.
  • Stanger, N.; Alnaghaimshi, N.; Pearson, E. [ ] "How do Saudi youth engage with social media?". Content focus: uses Hofstede’s cultural dimensions to assess how cultural and religious factors shape the use of social media (Instagram, Facebook, Snapchat); the sample comprised KSA students studying in New Zealand. Comparison/contribution: surveys and interviews indicated the sample were very conscious of behaving ethically and in culturally and religiously appropriate ways on social media. These were not the dimensions of the UAE study, but the results resonate with the UAE finding of limited negative influence of social media on cultural and religious perspectives.
Ranking of the scale dimensions by arithmetic mean (with standard deviation and grade):

  • Rank 1, Dimension 3, Political implications: mean 3.46, SD 0.46, High
  • Rank 2, Dimension 4, Economic implications: mean 2.92, SD 0.39, Moderate
  • Rank 3, Dimension 1, Cultural and societal implications: mean 2.79, SD 0.40, Moderate
  • Rank 4, Dimension 2, Ethical and religious implications: mean 2.35, SD 0.55, Low
  • Rank 5, Dimension 5, Security implications: mean 2.13, SD 0.67, Low
  • Overall scale: mean 2.69, SD 0.38, Moderate
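
The grade labels above correspond to bands of the 1-to-5 Likert scale. Below is a minimal sketch, in Python, of how such a ranked summary could be produced from per-respondent item scores; the 0.8-wide grade bands are an assumption chosen to be consistent with the reported labels, and the demo data are synthetic, so nothing here reproduces the paper’s actual figures.

```python
# Sketch only: turning per-respondent Likert item scores into a ranked
# mean / standard deviation / grade table. Grade bands are an assumption.
import numpy as np
import pandas as pd

def grade(mean_score: float) -> str:
    """Map a 1-5 Likert mean to a verbal grade using assumed 0.8-wide bands."""
    if mean_score < 1.80:
        return "Very low"
    if mean_score < 2.60:
        return "Low"
    if mean_score < 3.40:
        return "Moderate"
    if mean_score < 4.20:
        return "High"
    return "Very high"

def summarise(scores_by_dimension: dict) -> pd.DataFrame:
    """scores_by_dimension maps a dimension name to a respondents x items DataFrame."""
    rows = []
    for name, scores in scores_by_dimension.items():
        per_respondent = scores.mean(axis=1)   # average the items for each respondent
        rows.append({
            "Dimension": name,
            "Arithmetic mean": round(per_respondent.mean(), 2),
            "Standard deviation": round(per_respondent.std(ddof=1), 2),
            "Grade": grade(per_respondent.mean()),
        })
    table = pd.DataFrame(rows).sort_values("Arithmetic mean", ascending=False)
    table.insert(0, "Rank", list(range(1, len(table) + 1)))
    return table.reset_index(drop=True)

# Synthetic demo data: 100 hypothetical respondents, 5 items per dimension
rng = np.random.default_rng(1)
demo = {f"Dimension {i}": pd.DataFrame(rng.integers(1, 6, size=(100, 5))) for i in range(1, 6)}
print(summarise(demo))
```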
Regression ANOVA:

  • Regression: sum of squares 10.175, df 5, mean square 2.035, F 55.431, significance 0.000
  • Residual: sum of squares 43.282, df 1179, mean square 0.037
  • Total: sum of squares 53.457, df 1184
Regression coefficients (dependent variable: national security):

  • (Constant): B 2.907, standard error 0.058, t 50.000, significance 0.000
  • Cultural and societal implications: B 0.191, standard error 0.017, Beta 0.363, t 11.551, significance 0.000
  • Ethical and religious implications: B −0.099, standard error 0.016, Beta −0.259, t −6.151, significance 0.000
  • Political implications: B 0.020, standard error 0.013, Beta 0.044, t 1.509, significance 0.132
  • Economic implications: B 0.037, standard error 0.017, Beta 0.069, t 2.157, significance 0.031
  • Security implications: B −0.090, standard error 0.014, Beta −0.288, t −6.674, significance 0.000

Correlation coefficient R = 0.436; coefficient of determination R² = 0.190; explained variance = 0.187.
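
The ANOVA and coefficient tables above come from an ordinary least squares regression of the perceived national security score on the five dimension scores. As a rough illustration of how such output is generated, here is a sketch using Python’s statsmodels on synthetic data; the predictor names, the noise scale, and the simulated scores are assumptions, and only the reported coefficients are borrowed from the table, so the printed numbers will not reproduce the paper’s values.

```python
# Sketch only: an OLS regression of the same shape as the one reported above,
# fitted to synthetic data because the original survey responses are not public.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1185  # the ANOVA table's total df is 1184, i.e. n - 1

# Hypothetical 1-5 dimension scores (uniform draws; the real distributions are unknown)
X = pd.DataFrame({
    "cultural_societal": rng.uniform(1, 5, n),
    "ethical_religious": rng.uniform(1, 5, n),
    "political": rng.uniform(1, 5, n),
    "economic": rng.uniform(1, 5, n),
    "security": rng.uniform(1, 5, n),
})

# Outcome built from the reported unstandardised coefficients plus arbitrary noise
y = (2.907 + 0.191 * X["cultural_societal"] - 0.099 * X["ethical_religious"]
     + 0.020 * X["political"] + 0.037 * X["economic"] - 0.090 * X["security"]
     + rng.normal(0, 0.57, n))

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # coefficient table: B, standard error, t, p
print(model.rsquared)    # analogue of the reported R-squared (0.190 for the real data)

# Analogue of the ANOVA table: regression / residual / total sums of squares and df
anova = pd.DataFrame(
    {"Sum of squares": [model.ess, model.ssr, model.ess + model.ssr],
     "df": [model.df_model, model.df_resid, model.df_model + model.df_resid]},
    index=["Regression", "Residual", "Total"])
print(anova)
```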
Source: Al Naqbi, N.; Al Momani, N.; Davies, A. The Influence of Social Media on Perceived Levels of National Security and Crisis: A Case Study of Youth in the United Arab Emirates. Sustainability 2022, 14, 10785. https://doi.org/10.3390/su141710785


Privacy, Technology, and School Shootings: An Ethics Case Study

The ethics of social media monitoring by school districts.


In the wake of recent school shootings that terrified both campus communities and the broader public, some schools and universities are implementing technical measures in the hope of reducing such incidents. Companies are pitching various services for use in educational settings; those services include facial recognition technology and social media monitoring tools that use sentiment analysis to try to identify (and forward to school administrators) student posts on social media that might portend violent actions.

A New York Times article notes that “[m]ore than 100 public school districts and universities … have hired social media monitoring companies over the past five years.” According to the article, the costs for such services range from a few thousand dollars to tens of thousands per year, and the programs are sometimes implemented by school districts without prior notification to students, parents, or school boards.

The social media posts that are monitored and analyzed are public. The monitoring tools use algorithms to analyze the posts.
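
To see why automated analysis of such posts is contested, consider a deliberately simple, hypothetical flagger of the keyword-and-sentiment kind. This is not any vendor’s actual algorithm (those systems are proprietary); the watchlist, weights, and threshold below are invented for illustration, and the sample posts show how scoring words without context yields both false positives and misses.

```python
# Toy illustration only: a keyword/sentiment scorer that flags posts for human
# review. The word lists, weights, and threshold are hypothetical.
import re

WATCHLIST = {"shoot": 3, "gun": 3, "kill": 3, "hurt": 2, "hate": 1, "bomb": 3}
NEGATIVE_WORDS = {"angry", "alone", "hopeless", "revenge"}
FLAG_THRESHOLD = 4

def score_post(text: str) -> int:
    tokens = re.findall(r"[a-z']+", text.lower())
    score = sum(WATCHLIST.get(tok, 0) for tok in tokens)        # weighted watchlist hits
    score += sum(1 for tok in tokens if tok in NEGATIVE_WORDS)  # mild boost for negative tone
    return score

def flag_posts(posts):
    """Return the posts whose score crosses the threshold."""
    return [p for p in posts if score_post(p) >= FLAG_THRESHOLD]

posts = [
    "this exam is going to kill me, so hopeless right now",     # figurative: flagged (false positive)
    "taking my gun to the range to shoot targets with my dad",  # benign: flagged (false positive)
    "i hate everyone here and i want revenge",                  # concerning but scores low: missed
]
print(flag_posts(posts))
```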

A Wired magazine article titled “Schools Are Mining Students’ Social Media Posts for Signs of Trouble” cites Amanda Lenhart, a scholar who notes that research has shown “that it’s difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.” She adds that in the case of the new tools being offered to schools and universities, the problem “could be exacerbated by an algorithm that can’t possibly understand the context of what it was seeing.”

Others have also expressed concerns about the effectiveness of the monitoring programs and about the impact they might have on the relationship between students and administrators. Educational organizations, however, are under pressure to show their communities that they are doing all they can to keep their students safe.

Discussion Questions

Are there some rights that come into conflict in this context? If so, what are they? What is the appropriate balance to strike between them? Why?

Do efforts like the social media monitoring serve the common good? Why, or why not? For a brief explanation of this concept, read “The Common Good.”

Does the fact that the social media posts being analyzed are public impact your analysis of the use of the monitoring technology? If so, in what way(s)?

Should universities not just notify students but also ask them for their input before implementing monitoring of student social media accounts? Why or why not?

Should high schools ask students for their input? Should they ask the students’ parents for consent? Why or why not?

According to The New York Times , a California law requires schools in the state “to notify students and parents if they are even considering a monitoring program. The law also lets students see any information collected about them and tells schools to destroy all data on students once they turn 18 or leave the district.” If all states were to pass similar laws, would that allay concerns you might have had about the monitoring practices otherwise? Why or why not?

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics.

Photo by AP Images/Seth Wenig


Margalla Papers

SOCIAL MEDIA AS A THREAT TO NATIONAL SECURITY: A CASE STUDY OF TWITTER IN PAKISTAN

  • Saad Al Abd, National Defence University, Islamabad

Social media has evolved significantly over the years while providing strategic platforms for voices to reach billions of people within no time. Accordingly, it has advantages and disadvantages (threats). The threats emanating from social media, especially Twitter, in the context of Pakistan are mainly in the form of radicalization, glorification of terrorist groups, propagation of violent sub-nationalism, and hybrid warfare. Though Pakistan has been relatively active since 2020 in responding to social media challenges, implementing social media regulations remains an issue, especially when most social media platforms are foreign in origin. This paper evaluates the interlinkage of social media and national security in the context of Pakistan, exploring how agents of insecurity and instability exploit social media and what response mechanisms the state has put in place to mitigate these threats. The paper is a qualitative inquiry using primary and secondary sources to answer these questions. The research findings suggest marginal securitization of social media, albeit without significant implementation.

Bibliography Entry

Al Abd, Saad. 2022. "Social Media as a Threat to National Security: A Case Study of Twitter in Pakistan."  Margalla Papers  26 (2): 96-107.

Author Biography

Saad Al Abd, National Defence University, Islamabad

Mr. Saad Al Abd is a Ph.D. Scholar at the Strategic Studies Department, National Defence University, Islamabad.



Case Western Reserve University


Participants needed for oxygen delivery study

Case Western Reserve University Associate Professor Michael Decker, PhD, is seeking healthy, non-smoking adults, 18–55 years of age to participate in a study to determine how oxygen delivered at a steady concentration or at a variable concentration may change how a person’s brain processes information. 

Study participation will last approximately seven days and involves four visits to Case Western Reserve’s main campus. 

  • Baseline data collection and a blood sample will be collected at the first, 30-minute study visit. 
  • Participants will be given a sleep/activity tracking device (Fitbit watch) to wear for approximately seven days and nights while completing questionnaires on sleep, activity, diet and fatigue levels. 
  • Participants will breathe oxygen via a face mask for two hours during each of the next three visits—scheduled three days within a seven- to 10-day period, while brain activity is measured using electroencephalography (EEG). 
  • After two of the oxygen exposure sessions, a blood sample will be collected.

Participants will be compensated. People who take daily medications for asthma, those with heart disease, lung disease or neurological diseases, or who are currently pregnant or attempting pregnancy, are not eligible. Contact Elizabeth Damato at 216.368.5634 or [email protected] for more information.

More From Forbes

Why Solo Apps Just Don’t Work: A Kardashian Case Study


LOS ANGELES, CA - OCTOBER 27: (L-R) TV personalities Khloe Kardashian and Kim Kardashian watch the season opening game between the Los Angeles Clippers and the Los Angeles Lakers at Staples Center on October 27, 2009 in Los Angeles, California. (Photo by Kevork Djansezian/Getty Images)

In today’s world, if one is lucky enough to amass millions of followers or fans, it’s hard not to think of the millions they can help create in revenue. The potential for monetization has been made clear by social media sites, and yet, sometimes, what traditional social media has to offer doesn’t seem like enough. That’s where the Kardashians found themselves just shy of a decade ago. They figured that if they could get their “followers” to follow them to their own app, they could charge the followers and convert their follower count into a dollar count. The Kardashians’ logic was sound, and their path is one that’s tempting to follow; however, it ended in failure. How did their seemingly bright idea of solo apps fade? Why hasn’t this become the model for all social media stars?

Content is Queen

Kim Kardashian West made her App Store debut with a game, “Kim Kardashian: Hollywood,” which may have grossed the star and development partner $200 million in annual revenue. The game was free-to-play, but players could purchase in-game currency, “K-stars,” to buy in-game items, like special wardrobe items and furniture. That seemed to pave the way for individual Kardashian sister apps, and in 2015, the whole family got involved.

Kim Kardashian West, Khloé Kardashian, Kendall Jenner and Kylie Jenner each launched their own subscription apps, all of which shot up into the App Store’s top charts. The Kardashian-Jenner apps were free to download, but they all offered additional content to subscribers who paid $2.99 per month.

The difference between Kim’s initial launch and the subsequent solo apps was that a game has very clear content and an experience that can’t be found anywhere else. However, the sisters’ solo apps largely shared content that was being offered for free elsewhere—namely on Instagram. This difference was significant: Kim’s game lasted for nearly a decade, whereas the solo apps died within three years. With the rise of social media, consumers are used to obtaining content for free, making monetization even more difficult. Requiring an audience to move to another platform necessitates that celebrities and creators provide a deeper level of access to exclusive content.


Value is Vital

The importance of ample, quality content in the success of a content creator’s standalone app is made quite apparent by one of the few solo apps that’s still standing: Martha Stewart TV. Martha Stewart has created seasons of beloved television shows and, as she said when the app launched, “Wherever I go, I am always asked where these classic television shows can be found - everyone misses them.” At launch, her app made over 750 episodes available to an audience that had been wanting them; it added value to her fans’ experience. By contrast, the Kardashian-Jenner apps offered content that could be found elsewhere. As Vox put it, rather bitingly, “Can you think of a time when you didn’t have easy access to healthy living and motherhood tips from Kourtney Kardashian? Or workout tips and product recommendations from Khloé Kardashian? Or Kylie Jenner’s personal music preferences?”

Safety in Numbers

While the promise of having one’s own app seems desirable for purposes of hoarding all the possible revenue, there are also problems with being the only celeb on an app. Taylor Swift experienced this pitfall. Her short-lived app The Swift Life, which debuted at #1 in the App Store in 2017, fell to 56th place by day three and plummeted to 793rd in its second week, mainly because its content moderation system couldn’t handle all the racist and homophobic users who seem to have embraced the dedicated app as the perfect place to air all their fury. And Swift wasn’t the only one whose app faced this fate. Jeremy Renner’s app came and went in about six months thanks to the community on the app being unbelievably toxic. But this isn’t the only reason it’s beneficial to be on an app with others. Marketing costs can skyrocket when trying to get fans to download a specific program. They already have so many other apps in the palm of their hand—Instagram, TikTok, et al.—it’s often more cost-effective to distribute content on a shared platform, assuming one can capture the fans’ attention there. Platforms like Patreon, OnlyFans, Substack and Fireside exist to help celebrities and creators maintain control by owning and monetizing their content, while simultaneously providing the same ‘safety in numbers.’ Fans also still have the benefit of accessing all of their content in one place without needing to download additional applications.

Back to Basics

While it’s understandable that the Kardashian-Jenners liked the idea of being a big fish in a small pond (so small that they were the only fish), and it seemed sensible that they might convert their followers into subscribers of their solo apps, time has shown that there’s been little to lose in sticking with a shared platform. Every single one of the Kardashian-Jenner sisters has more than doubled her Instagram follower count in the past six years and, as of July 2024, Kourtney has 222 million followers, Kim has 362 million, Kylie has 398 million, Kendall has 292 million and Khloe has 308 million. Given that Kylie makes $847,544 per sponsored Instagram post and no longer has any of the costs of keeping up a solo app, she clearly demonstrates that there’s plenty of reason to enjoy being an influencer fish in a big social media pond.

Falon Fatemi



News from Brown

Brain-computer interface allows man with ALS to ‘speak’ again

In a clinical trial and study supported by Brown scientists and alumni, a participant regained nearly fluent speech using a brain-computer interface that translates brain signals into speech with up to 97% accuracy.

The new BCI system allowed Casey Harrell, a 45-year-old person with ALS, to communicate his intended speech effectively within minutes of activation. Photo provided by University of California Regents

PROVIDENCE, R.I. [Brown University] — Scientists with the  BrainGate  research consortium have developed a brain-computer interface that translates brain signals into speech with up to 97% accuracy, offering a significant breakthrough for individuals with speech impairments due to conditions like amyotrophic lateral sclerosis.

The technology involves using implanted sensors in the brain to interpret brain signals when a user attempts to speak. These signals are then converted into text, which is read aloud by a computer.

The work is described in a new study in the New England Journal of Medicine published on Wednesday, Aug. 14, that was led by neurosurgeon David Brandman and neuroscientist Sergey Stavisky, both of whom are Brown University alumni and faculty members at UC Davis Health.

“Our BCI technology helped a man with paralysis to communicate with friends, families and caregivers,” Brandman said. “Our paper demonstrates the most accurate speech neuroprosthesis ever reported.”

ALS, also known as Lou Gehrig's disease, affects nerve cells controlling muscle movement, leading to the gradual loss of mobility and speech. BCI technology aims to restore communication for those who have lost the ability to speak due to paralysis or neurological disorders.

The system allowed Casey Harrell, a 45-year-old person with ALS, to communicate his intended speech effectively within minutes of activation. The powerful moment brought tears to Harrell and his family. Harrell, reflecting on his experience with the technology, described the impact that regaining the ability to communicate could have on others facing similar challenges.

“Not being able to communicate is so frustrating and demoralizing. It is like you are trapped,” Harrell said. “Something like this technology will help people back into life and society.”

The first time Harrell tried the system, he cried with joy as the words he was trying to say correctly appeared on-screen. Photo provided by University of California Regents

The study is part of the BrainGate clinical trial, directed by Dr. Leigh Hochberg, a critical care neurologist and a professor at Brown University’s  School of Engineering  who is affiliated with the University’s  Carney Institute for Brain Science . 

“Casey and our other BrainGate participants are truly extraordinary,” Hochberg said. “They deserve tremendous credit for joining these early clinical trials. They do this not because they’re hoping to gain any personal benefit, but to help us develop a system that will restore communication and mobility for other people with paralysis.”

It is the latest in a  series of advances  in brain-computer interfaces made by the BrainGate consortium, which along with other work using BCIs has been developing systems for several years that enable people to generate text by decoding the user’s intent. Last year, the consortium described how a brain-computer interface they developed enabled a clinical trial participant who lost the ability to speak to create text on a computer at rates that approach the speed of regular speech, just by thinking of saying the words.

“The field of brain computer interface has come remarkably far in both precision and speed,” said John Ngai, director of the National Institutes of Health’s  Brain Research Through Advancing Innovative Neurotechnologies ® Initiative (The BRAIN Initiative®) , which funded earlier phases of the BrainGate consortium. “This latest development brings technology closer to helping people, ‘locked in’ by paralysis, regain their ability to communicate with friends and loved ones, and enjoy the best quality of life possible.”

In July 2023, the team at UC Davis Health implanted the BCI device, consisting of four microelectrode arrays, into Harrell’s left precentral gyrus, a brain region responsible for coordinating speech. These arrays record brain activity from 256 cortical electrodes and detect his attempts to move his muscles and talk.

“We are recording from the part of the brain that’s trying to send these commands to the muscles,” Stavisky said. “We are basically listening into that, and we’re translating those patterns of brain activity into a phoneme — like a syllable or the unit of speech — and then the words they’re trying to say.”


The study reports on 84 data collection sessions over 32 weeks. In total, Harrell used the speech BCI in self-paced conversations for over 248 hours to communicate in person and over video chat. The system showed decoded words on a screen and read them aloud in a voice synthesized from Harrell’s pre-ALS voice samples.

In the first session, the system achieved 99.6% word accuracy with a 50-word vocabulary in just 30 minutes. In another session with a vocabulary expanded to 125,000 words, the system achieved 90.2% accuracy after an additional 1.4 hours of training data. After continued data collection, the BCI has maintained 97.5% accuracy.
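
The figures quoted here are word-level accuracy, which is commonly reported as one minus the word error rate: the edit distance between the decoded and the intended word sequences divided by the length of the intended sequence. The study’s exact scoring protocol is not described in this article, so the sketch below is only a generic illustration of that calculation.

```python
# Generic illustration of word accuracy = 1 - word error rate (WER), where WER is
# the word-level Levenshtein distance divided by the length of the intended sentence.
def word_error_rate(reference, hypothesis):
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i
    for j in range(len(hypothesis) + 1):
        d[0][j] = j
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(reference)

intended = "i would like some water please".split()
decoded = "i would like some water police".split()
print(f"word accuracy: {1 - word_error_rate(intended, decoded):.1%}")  # 83.3% for this toy pair
```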

“At this point, we can decode what Casey is trying to say correctly about 97% of the time, which is better than many commercially available smartphone applications that try to interpret a person’s voice,” Brandman said. “This technology is transformative because it provides hope for people who want to speak but can’t.”

The research included funding from the National Institutes of Health.

This story was adapted from a  news release  published by Nadine Yehya at UC Davis Health.

CAUTION: Investigational device. Limited by federal law to investigational use.



COMMENTS

  1. 4 Case Studies in Fraud: Social Media and Identity Theft

    Case Study #2: Dr. Jubal Yennie. As demonstrated by the above incident, it doesn't take much information to impersonate someone via social media. In the case of Dr. Jubal Yennie, all it took was a name and a photo. In 2013, 18-year-old Ira Trey Quesenberry III, a student of the Sullivan County School District in Sullivan County, Tennessee ...

  2. The National-Security Case for Fixing Social Media

    A few days later, a seventeen-year-old hacker from Florida, who enjoyed breaking into social-media accounts for fun and occasional profit, was arrested as the mastermind of the hack. The F.B.I. is ...

  3. Social Media Surveillance by the U.S. Government

    Social media has become a significant source of information for U.S. law enforcement and intelligence agencies. The Department of Homeland Security, the FBI, and the State Department are among the many federal agencies that routinely monitor social platforms, for purposes ranging from conducting investigations to identifying threats to screening travelers and immigrants.

  4. Social Media & Privacy: A Facebook Case Study

    Globally, the website has over 968 million daily users and 1.49 billion monthly users, with nearly 844 million mobile daily users and 3.31 billion mobile monthly users (See Figure 1 ...

  5. (PDF) SOCIAL MEDIA AND CYBER SECURITY: PROTECTING ...

    Case studies of blockchain projects at various phases of development for diverse purposes are discussed. ... & Chatterjee, M. (2019). A review of social media security and privacy risks: Current ...

  6. Social Media Security: Risks, Best Practices, and Tools for 2024

    Those concerns, of course, don't stop people from using their favorite social channels. The number of active social media users grew to 5.07 billion as of April 2024. Make sure you - and your team - understand privacy policies and settings. This applies to both your personal and business accounts.

  7. PDF Social Media And National Security Threats: A Case Study Of Kenya

    to examine: the threats of social media technology to Kenya's national security; the use of social media by the military in preventing, limiting or removing threats to national security; and the current state of Kenya's national security and how social media makes it worse. The study adopted survey research method.

  8. PDF Social Media Cybersecurity

    Put another way: 49% of the total world population are using social networks. Digital consumers spend nearly 2.5 hours on social networks and social messaging every day. Simple Tips: If You Connect IT, Protect IT. Whether it's your computer, smartphone, game device, or other network devices, the best defense against viruses and ...

  9. A Survey and a Case-Study Regarding Social Media Security and Privacy

    This study examines the potentials of social media marketing for luxury retailers. Social media marketing tactics of three luxury retail brands Barneys New York, Net-a-Porter.com, and Saks Fifth Avenue were examined across three major social media sites ...

  10. Social media analytics: Security and privacy issues

    Social media analytics (SMA) is a set of informatics tools and frameworks to collect, monitor, analyze, summarize, and visualize SM data, to facilitate interactions, and to extract useful patterns and intelligence (Zeng, Chen, Lusch, & Li, 2010 ). SMA has been applied to business and public domains, such as crime analysis, marketing, and public ...

  11. Social Media Users' Legal Consciousness About Privacy

    Social networking sites (SNSs) continue to grow in popularity. In 2015, the Pew Research Center reported that 90% of young American adults aged 18-29 use social media, compared to 12% in 2005, an increase of 750% (Perrin, 2015).Likewise, in 2013, 89% of Europeans aged 16-24 years were found to participate in social networks (Seybert & Reinecke, 2013).

  12. PDF Innovative Uses of Social Media in Emergency Management

    The case study organizations demonstrated innovative uses of social media and met a number of predefined criteria, which established that they: Do not suppress social media sites on internal networks; Actively use various social media accounts; Use social media to distribute alerts, warnings, and updates;

  13. Introduction to Social Media and Critical Security Studies in the

    Abstract. This chapter introduces the key debates about critical security studies and social media that will frame the discussions in the rest of this book. Key in this is challenging simplistic and unidirectional assumptions about social media and its role in the sphere of politics and security. To do this, this introduction sets out three key ...

  14. PDF The Effects of Social Media on National Security: An Overview

    The usage of social media should be prohibited until a safe and well-organized national strategy is introduced. Native social media applications can also be devised to minimize cybercrime for the ...

  15. Critical Security Studies in the Digital Age: Social Media and Security

    In examining social media this book engages with the emancipatory burden of critical security studies. This book argues that it remains unfulfilled on social media and rather presents a "thin" notion of discursive emancipation where social media does provide the ability for previously excluded voices to participate in security debates, even ...

  16. Trust and Safety on Social Media: Understanding the Impact of Anti

    In the article on "Ecologies of Violence on Social Media: An Exploration of Practices, Contexts, and Grammars of Online Harm," Morales argues for the need to understand the ways that violence is performed and communicated on social media. Using a case study of young adults in Colombia, the author demonstrates the complexity of violence on ...

  17. In These Five Social Media Speech Cases, Supreme Court Set Foundational

    The through-line of these cases is a critically important principle that sets limits on government's ability to control the online speech of people who use social media, as well as the social media sites themselves: internet users' First Amendment rights to speak on social media—whether by posting or commenting—may be infringed by the government if it interferes with content moderation ...

  18. The Influence of Social Media on Perceived Levels of National Security

    The increase in the use of social media as a 21st century communication tool is in parallel increasing the threat to national security globally. This study explores the perception of United Arab Emirate community members (specifically youth) on the influence of social media as a threat; the wide use of SM platforms for Emirate of Sharjah (Dibba Al-Hisn, Khor Fakkan, Kalba) were analyzed ...

  19. Social Media as a Threat to National Security: A Case Study of Twitter

    SOCIAL MEDIA AS A THREAT TO NATIONAL SECURITY: A CASE STUDY OF TWITTER IN PAKISTAN. Saad Al Abd. Abstract: Social media has evolved significantly over the years while providing strategic platforms for voices to reach billions of people within no time. Accordingly, it has advantages and disadvantages (threats). The nature of threats emanating from social media ...

  20. Full article: Ethical concerns about social media privacy policies: do

    Introduction. With 4.76 billion (59.4%) of the global population using social media (Petrosyan, 2023) and over 46% of the world's population logging on to a Meta product monthly (Meta, 2022), social media is ubiquitous and habitual (Bartoli et al., 2022; Geeling & Brown, 2019). In 2022 alone, there were over 500 million downloads of the image ...

  21. Case Study on Online Privacy

    A New York Times article notes that "[m]ore than 100 public school districts and universities … have hired social media monitoring companies over the past five years." According to the article, the costs for such services range from a few thousand dollars to tens of thousands per year, and the programs are sometimes implemented by school ...

  22. PDF Level of Awareness of Social Media Users on Cyber Security: Case Study

    Hence, the level of awareness of social media users towards cyber security is important. Based on the research by Adnan & Kamaliah (2000), the value of cyber security needs to be instilled to the whole social media users so that the risk for every information sharing in social media is made known. The attitude of social media users that

  23. Social Media As a Threat to National Security: a Case Study of Twitter

    The paper is a qualitative inquiry using primary and secondary sources to answer these questions. The research findings suggest marginal securitization of social media, albeit without significant implementation. Bibliography Entry Al Abd, Saad. 2022. "Social Media as a Threat to National Security: A Case Study of Twitter in Pakistan."

  24. Social media: Disinformation expert offers three safety tips in a time

    Societal security risks refer to threats that can undermine the social fabric and stability of a community or nation. These risks often arise from issues such as political instability, economic ...

  25. Digitally skilled but socially disadvantaged: Enabling digital

    Our case study is contextualised against the typical digital inclusion challenges faced by low-income families and draws on Sen and Nussbaum's capabilities approach to addressing social inequalities. The paper highlights the need to support situation-specific digital capability development and flexible technology and social welfare arrangements.

  26. Participants needed for oxygen delivery study

    Case Western Reserve University Associate Professor Michael Decker, PhD, is seeking healthy, non-smoking adults, 18-55 years of age to participate in a study to determine how oxygen delivered at a steady concentration or at a variable concentration may change how a person's brain processes information.

  27. Why Solo Apps Just Don't Work: A Kardashian Case Study

    Forbes Community Guidelines. Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.

  28. Social Media Networks Security Threats, Risks and ...

    This study applied the social gratification theory to examine students' behavior practicing social media usage. This study specifically identified 18 adversarial and constructive factors of social ...

  29. Brain-computer interface allows man with ALS to 'speak' again

    The study reports on 84 data collection sessions over 32 weeks. In total, Harrell used the speech BCI in self-paced conversations for over 248 hours to communicate in person and over video chat. The system showed decoded words on a screen and read them aloud in a voice synthesized from Harrell's pre-ALS voice samples.

  30. Hackers may have stolen every American's Social Security ...

    National Public Data has not responded to numerous media requests for comment. According to the website Bleeping Computer, "Each record consists of the following information - a person's name, mailing addresses, and Social Security number, with some records including additional information, like other names associated with the person ...