“What’s your country music star name?” the whimsical social network meme asks. Simple enough—it’s just your dog’s name plus your mother’s maiden name. The resulting combination is silly and shareable, good for a laugh with your colleague in the next cubicle when she gets back from lunch. Unfortunately, it might also be the first step in a multistage hack. It may even be the beginning of a major security breach.
Think about it—a pet’s name and your mother’s maiden name are common questions used by websites that contain confidential personal information to help uniquely identify you. You’ve just published the answers.
If this is indeed a hack, notice something important about it: The individual who launched it has no access to your systems, has not broken in electronically, and has not transmitted or received any malicious code. Instead, he or she has employed what security professionals call social engineering—a pernicious breed of hack that exploits weaknesses not in IT infrastructures but in the character of the humans who use them, especially our frail tendency to want to be friendly, helpful, cooperative, and compliant.
In the ongoing conflict between cybercriminals and data security professionals, the good guys (the ones who are trying to protect your enterprise) are always playing defense. The attackers have the initiative, and there is at least as much brainpower devoted to inventing and carrying out cyberattacks as there is to thwarting them.
The security profession has had reasonable success in safeguarding the technical infrastructure of enterprise IT. The tactics leveled against humans, however, have proven more difficult to counter. An April 2015 study by the IT industry association CompTIA indicated that human error was the root cause of 52 percent of data breaches. Human error, of course, includes failure to implement or use technical security measures, but social engineering exploits account for a large and growing share of the problem.
Security professionals maintain constant vigilance against novel technological threats, and typically have been quick to recognize and resolve ingenious new code hacks as they arise. The battle against social engineering has an entirely different character. It isn’t so much that the crooks are inventive; it turns out that they can succeed through ordinary persistence. In most recent high-profile security breaches in which social engineering played a role, the tactic used was already familiar to security professionals. These are low-cost, high-impact exploits; end users cannot simply be re-engineered to resist them, and when they succeed the consequences are expensive.
Take, for example, email “phishing”—the use of HTML emails fraudulently designed to look like legitimate corporate email or to direct recipients through hyperlinks to malicious websites, in order to gain access to the recipient’s network or get them to reveal sensitive information. Phishing isn’t new; the technique was described as early as 1987, the term itself dates to the mid-1990s, and it entered the Oxford English Dictionary a decade ago. But phishing persists because it works.
Verizon's widely cited 2015 Data Breach Investigations Report, which analyzed data security trends from the previous year, documented a study showing that 23 percent of phishing email recipients open the messages, and 11 percent click on the attachments, despite the diligent efforts of security professionals to warn them away from this practice.
According to a recent study from CompTIA, human error is the root cause of 52% of data breaches.
It isn’t necessarily that end users are getting dumber. The hackers deserve credit for generating more convincing fakes. A Phishing Quiz recently published by the security software vendor McAfee indicated that 97 percent of people presented with a series of emails failed to consistently distinguish the phishing emails from the legitimate messages.
“Vishing” is a parallel technique that uses the phone instead of email. It’s an old-school con, but effective. According to the security firm Veracode, the success rate of vishing calls in tricking recipients into supplying confidential information is about 75 percent.
Social Engineering and the Service Desk
Social engineering represents a significant class of security threats for enterprises. Naturally, an important share of the impact will be felt at the IT service desk.
The service desk typically has a tactical role in data security administration: tier 1 agents may do some of the grunt work in eradicating malware from end-user hardware and overseeing patch implementation. The service desk may take the lead in diagnosing the results of security breaches, and analysts may help train end users to avoid known social engineering methods.
But the first responsibility of an analyst toward the social engineering threat is to avoid becoming its victim.
A service desk analyst has perceived authority, even with fairly senior managers, because of his or her grasp of technological esoterica. The analyst also has configuration data, passwords, and analytical tools that provide access to privileged areas of the infrastructure. That may make the service desk a target during the early phases of a cyberattack. A good analyst may be harder to spoof than a typical business user, but he or she is a high-value social engineering target.
A crucial first step, then, is to understand what social engineering is and how it works.
“The easiest way to understand how social engineering fits into the broader security picture is to look at the controls in place,” says Michele Fincher, Chief Influencing Agent at the security consultancy Social-Engineer, Inc. “First, you have technical controls (security guards, badging, firewalls, antivirus, and so on). Then, you have policies (e.g., no unescorted visitors onsite, no information provided to unconfirmed callers). The last piece is the human element, and this is critical, since humans can override even the most diligent technical and policy controls.”
Social engineering is the class of hacks that exploits the human tendencies to cooperate, to defer to authority figures, to avoid conflict, and to make quick, affirmative assessments of situations that seem familiar and have well-established rules of social engagement.
Variants and Compound Attacks
The three most notable social engineering “vectors” tracked by Social-Engineer, Inc. include:
- Vishing (telephone elicitation)
- Phishing (those malicious emails)
- Onsite impersonation, in which the social engineer intrudes on the target organization in person
“These are human-based attacks,” Fincher says. “They require an employee to make a decision which may not be in their best interests at the prompting of a malicious actor.”
The attack may employ several variants on the same tactic. The phishing technique has spawned an array of derivative exploits. Deceptive emails can be spread in scatter-shot fashion, or they can be used to deliberately target specific people (typically senior executives, who reportedly tend to be among the individuals most susceptible to spoof emails). This targeted email attack is called “spear-phishing.”
There are numerous other variants, and many social engineering exploits are compound attacks, in which the hacker attempts to establish a trusted presence using a combination of these techniques—for example, using an initial phone call to establish a pretext for a follow-up phishing email or an on-site intrusion. “We are seeing a lot more vishing being used by the attackers both in multistaged and standalone attacks,” observes Christopher Hadnagy, the author of two books on social engineering and Chief Human Hacker at Social-Engineer, Inc.
The multistage attack frequently is a sequence of exploits, with the initial contact intended to gather personal information that will be useful in staging the subsequent attacks. In recent years, end users have done hackers the favor of offering up some of this personal information voluntarily, in social media posts. Social engineers have been known to prowl Facebook timelines for photos of individuals in recognizable nightclubs and bars, and then show up at those locations, intending to build rapport with their targets.
“The multistage attack tends to be highly successful, as humans tend to believe much more in a message if it comes from more than one source, even if neither of these sources is verified,” Fincher says.
On-site impersonations typically involve an individual trying to pass himself or herself off as an employee or as a trusted vendor—a telecommunications or building maintenance worker, a fire inspector or even a police officer. Or he may simply attempt to “tailgate” into the building, following an employee in and claiming to have forgotten his badge. The social engineer’s challenge is to gain access to sensitive areas, or to get away from an escort. From such vantage points, in addition to outright theft of sensitive material, the intruder can read posted messages, rummage through wastebaskets, eavesdrop on conversations, or “shoulder-surf,” reading over employees’ shoulders.
End users are more likely to believe a message that comes from more than one source, even if neither source is verified.
Penetration testing by security consulting firms involves sophisticated instrumentation and software to find vulnerabilities in the network infrastructure. But according to Ted Raffle, information security analyst for Baton Rouge, LA–based TraceSecurity, a cybersecurity consulting and software company, it also involves much lower-tech investigations of social engineering vulnerabilities.
“Some of the most trusted tools for this kind of investigation include props like a tool bag or a generic photo ID badge—crude, but effective enough to establish a pretext that will satisfy most employees,” Raffle says.
The Enterprise Offense
Human beings, unfortunately, cannot be patched against social engineering the way software can. For the enterprise, the most direct response to the threat is vigilance, effective policy, and training.
Hackers are smart, but they aren't infallible; sometimes they leave footprints. A phishing email usually will attempt to mimic the look of a legitimate website and URL, but in a careless exploit, the hacker may leave a telltale sign like a misspelled domain name.
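That telltale can even be checked mechanically. Here is a minimal sketch (the trusted domain list, the edit-distance threshold, and the function names are all invented for illustration, not drawn from any particular product) of flagging sender domains that are near-misses of ones an organization trusts:

```python
# Hypothetical lookalike-domain check: flag sender domains that are
# a small edit distance away from a trusted domain (e.g., "examp1e.com"
# masquerading as "example.com"). Values here are illustrative only.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"example.com", "examplecorp.com"}  # assumed allowlist

def is_suspicious(sender: str) -> bool:
    """True if the sender's domain is a near-miss of a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: legitimate, not a lookalike
    return any(edit_distance(domain, d) <= 2 for d in TRUSTED_DOMAINS)

print(is_suspicious("helpdesk@examp1e.com"))  # lookalike: True
print(is_suspicious("helpdesk@example.com"))  # exact match: False
```

Real mail filters use far more signals (display-name spoofing, homoglyphs, newly registered domains), but the near-miss check captures the "misspelled domain" footprint described above.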
“Employees need to at least be able to recognize suspicious behavior,” Fincher says. “Is the admin seeing a rash of suspicious emails from an unknown IP block? Is the service desk receiving phone calls from employees reporting calls from people identifying themselves as HR and asking for sensitive information? Service desk agents are in the best position to see trends and initiate mitigation in the event of a confirmed event. Since they hold the keys to the kingdom in terms of network access, they should also be aware that they are likely targets of spear phishing campaigns.”
Service desk agents typically are subject to performance evaluations focused on the way they use their time (e.g., how quickly they resolve issues). Vigilance about email security can become a conflicting priority, and this can leave analysts vulnerable to deceptive communications. On the phone, agents are expected to verify callers’ identities, but often they demand only things like name, location, and employee ID number. Social engineers have been known to get such information by dumpster-diving or from the employee’s business card.
A hacker may want to gain the confidence of an organization’s tier 2 and tier 3 support people. If he can get their names, he might target them with a phishing email, spoofing it from a major vendor or from the company itself. An effective social engineer can get the names and even the contact information of those tier 2 and 3 people from the analysts at tier 1.
Training and perhaps a subtle adjustment of performance metrics can help the analyst avoid being a victim. But the service desk also needs to take steps to avoid becoming the vector for social engineering.
If an end user receives a phone call from an individual identifying himself as a service desk analyst offering to resolve a long-standing issue, her response is likely to be one of relief: “Finally, someone is actually going to deal with my issue.” Unfortunately, this is one of the most common vishing exploits. The caller is a stranger, but the user doesn’t necessarily expect a service desk analyst to be someone she knows. The pretext puts the caller in control: he ostensibly has knowledge that she needs. The odds are good that, for the promised relief of her issue, she will willingly acknowledge some sort of problem, provide personal information, including her password, and even offer the caller access to her system.
Companies typically have policies that forbid individuals from sharing their passwords, or that allow sharing only under specific circumstances with service desk personnel. So that’s a breakdown in policy compliance that may have serious consequences.
While it may not be specifically articulated, the service desk has an important role in policy management, according to Sam Abadir, director of product management for LockPath, an Overland Park, KS–based software company that helps clients manage risk and maintain policy compliance.
Companies generally maintain policies related to personal information: privacy, enterprise data security, and the like. Service desk people have their own policies, and are well positioned to help the general staff understand the broader policies of the enterprise.
The aforementioned exploit is common enough that many enterprises have established a policy: confidential information such as passwords can be shared with a service desk analyst only if the end user initiated the call, and then only to the approved service desk phone number. A sophisticated vishing exploit may involve more than one call, and the caller may supply a callback number that routes to the scammer’s own call center.
“In an organization that is effectively managing its policies, the end users would get an annual reminder about what to do when the service desk calls them back,” Abadir says. “The best practice would be for the end user to call the service desk back, at the number that’s in the employee handbook.”
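The callback rule Abadir describes reduces to a simple predicate. A sketch (the phone numbers and function names are made up for illustration; a real implementation would live in the service desk’s call-handling workflow):

```python
# Hypothetical check of the callback policy described above: credentials
# may be shared only on a call the end user placed to the published
# service desk number, never on an inbound or caller-supplied number.

APPROVED_SERVICE_DESK_NUMBERS = {"+1-555-0100"}  # from the employee handbook

def may_share_credentials(user_initiated_call: bool, number_dialed: str) -> bool:
    """Policy: the user must have dialed an approved number themselves."""
    return user_initiated_call and number_dialed in APPROVED_SERVICE_DESK_NUMBERS

# An inbound "service desk" caller offering their own callback number fails:
print(may_share_credentials(False, "+1-555-0199"))  # False
# The user hangs up and dials the handbook number back: passes.
print(may_share_credentials(True, "+1-555-0100"))   # True
```

The point of the design is that trust flows only one way: the end user dials a number from a source the attacker does not control.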
The Enterprise Defense
Because the targets of social engineering are humans, and not machines, these threats do not readily lend themselves to technological solutions. Consultants who do intrusion testing use software packages like the Social-Engineer Toolkit, from TrustedSec, LLC, to probe organizations for vulnerability to phishing and similar techniques.
There are vulnerability scanners that can probe network infrastructures for security gaps, such as open ports or already identified software bugs. “Typically, these are useful retrospectively—they detect exploits that have already happened,” Abadir notes. “They are less reliable in detecting possible vulnerabilities that have not yet been exploited.”
More proactive help can come from published threat intelligence services, such as those from iSIGHT Partners, Symantec, SecureWorks, and InformationWeek’s Dark Reading. The alerts generated by these services can enable the IT organization to identify future threats and block them proactively. LockPath’s Keylight software incorporates threat intelligence in formulating and reinforcing enterprise policies, notifying users of anticipated vulnerabilities, Abadir explains.
Implementing these proactive solutions, he adds, may include a leading role for the service desk.
In short, tools may have limited value in preventing or remediating social engineering intrusions. But more pragmatic solutions will include end-user training, documented procedures for dealing with a discovered breach, and high-visibility support from senior management for simple steps like questioning the credentials of individuals who look out of place.
“In this fight, you don't receive or throw the first punch in the ring,” Hadnagy asserts. “It takes months or even years of practice to learn how to survive. Social engineering attacks are no different; you need to ensure you are preparing yourself and your people before they get attacked.”
Peter Dorfman is a freelance writer and consultant based in New Jersey. He can be reached at email@example.com.