Steve subsequently logged over 30 years of computer industry experience in data security, software engineering, product development and professional services. He has managed product development with UNIX, Windows and Java platforms, founded four software and services startups and raised $42m in venture capital. Steve has held a variety of executive management positions in engineering, product development, sales, and marketing for ConnectandSell, Whittman-Hart, marchFIRST, the Cambridge Systems Group, Memorex, Health Application Systems, Endymion Systems, Blackhawk Systems Group and IBM. Steve is also known as the Godfather of Information Security.
According to the FBI, it’s a 12.5 billion dollar problem: CEO fraud cost global businesses $12,536,948,299 between October 2013 (when formal reporting to the Internet Crime Complaint Center (IC3) began) and May of 2018. But this figure, however alarming it sounds, likely represents only the tip of the iceberg when it comes to email-based scams targeting senior executives.
Though experts agree that CEO fraud has caused greater financial losses than any other type of cybercrime, they also speculate that it’s probably vastly under-reported. That’s because the kinds of damage it causes extend far beyond simple and immediate monetary losses. Otherwise successful CEOs have been fired, and their most loyal employees have seen their careers ruined. Stock prices have plummeted. And incalculable damage has been done to brand reputations.
As 2019 gets underway, we expect that CEO fraud will become even more prevalent. FBI data shows that global exposed losses increased by 136% from 2016 to 2018, and there are no signs of a slowdown in the trend.
Businesses large and small in a wide variety of industries—ranging from technology to manufacturing, and from hospitality to the financial sector—are at risk. So, too, are government agencies and nonprofits. And C-level executives are not the only personnel being targeted; employees in departments ranging from finance and accounting to Human Resources have been victimized by these attacks.
What can be done to reduce your individual business’s vulnerability? How can you protect against this growing threat?
CEO fraud is a type of business email compromise (BEC) involving impersonation. In these attacks, a criminal assumes the identity of a CEO or other senior executive within an organization and sends out emails to staff requesting payment—usually via international wire transfer—or the release of account credentials or sensitive information. Scammers look for businesses that have foreign suppliers or that regularly make large payments by bank transfer. These attacks are often highly effective because they’re so meticulously targeted.
Although perpetrators employ a variety of tactics, it’s not uncommon for them to have gained access to their victim’s network long before the malicious email was sent. They may have spent weeks or months studying the organization’s structure, billing systems and vendor relationships. By turning to social media, they may also have learned about the personal lives and relationships of employees, and analyzed their typical communication styles.
At the right moment—these emails are often sent while the CEO is away from the office—the criminals will make their request. Although the email is bogus, it may originate from the executive’s legitimate (but compromised) email account. Or it may be custom-crafted so as to appear highly realistic. These requests are designed to create a sense of great urgency, demanding that their recipients take immediate action. The target believes he’s sending money to a familiar vendor’s account, just as he’s done in the past. But the recipient’s account number is slightly different, and the funds transferred—which might be tens or hundreds of thousands of dollars—end up in the hands of criminals.
The scammers aren’t always seeking an immediate payout. Sometimes they’re trying to obtain employees’ pay stubs, tax statements or other personally identifiable information (PII) to perpetrate tax fraud or identity theft.
Like other types of socially engineered attacks, CEO fraud exploits universal human weaknesses: employees who are busy, stressed, tired or careless are less likely to notice warning signs in the email messages they receive. The good news is that merely increasing employee awareness has a protective effect. And implementing a mature and well-designed anti-phishing training program can reduce susceptibility across your entire organization by more than half.
To be truly effective, however, security awareness training has to be carried out frequently enough for its lessons to remain memorable: educating employees only during the onboarding process or in annual sessions isn’t enough to reduce their vulnerability. The most effective anti-phishing education programs provide ongoing, immersive training that is targeted, specific and increasingly challenging.
But no security awareness training program, however sophisticated and carefully implemented it may be, can completely protect against human error. That’s why it’s critical to establish policies and procedures for wire transfer authorization that include multiple forms of authentication. Your organization might, for instance, require that all payment transfer requests larger than a certain amount be confirmed face-to-face or by telephone. It’s also important to institute firm policies regarding access to and release of customer and employee PII, financial information and intellectual property.
The majority of the most sophisticated and successful BEC attacks in 2018 took place together with a broader compromise of the targeted organization’s network. And the statistics are clear: as with the risk of a data breach in general, the risk of an executive’s email account being compromised increases the longer the attackers remain undetected on a network. Thus adopting an “assume breach” mentality—which means emphasizing ongoing network monitoring and working to reduce detection time—is the most effective strategy to combat this ongoing threat.
Implementing identity and access controls such as two- or multi-factor authentication for key applications is also a must. Improving authentication protocols is the single most effective step you can take to mitigate the risks associated with credential theft and compromise. Although multi-part authentication systems requiring the use of hardware tokens and verification from a second device are most secure, even a simple system that uses SMS-messaging to confirm credentials offers significant protection against BEC and phishing attacks.
The FBI recommends that simple rule-based systems for detecting fraudulent emails also be put into place. These rules can flag emails sent from suspicious domains, such as those differing from the company’s domain by a single letter or character (e.g., my~company.com instead of my-company.com), or those that contain a reply-to address that doesn’t match the “sender” address displayed in the message header.
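Rules of this kind can be approximated in a few lines of code. The sketch below is illustrative only—the company domain, edit-distance threshold, and message contents are hypothetical, not the FBI’s actual tooling. It flags senders whose domain is within a small edit distance of the legitimate one, and messages whose Reply-To domain differs from the visible sender’s:

```python
from email import message_from_string

COMPANY_DOMAIN = "my-company.com"  # hypothetical legitimate domain


def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def domain_of(addr: str) -> str:
    """Extract the domain portion of an email address."""
    return addr.rsplit("@", 1)[-1].strip(">").lower()


def flag_email(raw: str) -> list:
    """Return a list of reasons this raw message looks suspicious."""
    msg = message_from_string(raw)
    reasons = []
    sender_domain = domain_of(msg.get("From", ""))
    reply_to = msg.get("Reply-To", "")
    # Lookalike domain: close to, but not exactly, the company's own.
    if 0 < edit_distance(sender_domain, COMPANY_DOMAIN) <= 2:
        reasons.append(f"lookalike domain: {sender_domain}")
    # Reply-To pointing somewhere other than the visible sender.
    if reply_to and domain_of(reply_to) != sender_domain:
        reasons.append(f"reply-to mismatch: {domain_of(reply_to)}")
    return reasons
```

A real deployment would run such checks at the mail gateway and combine them with the reputation- and content-based filters discussed elsewhere in this post.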
The Secureli integrated platform now includes VeriPhi, an automated email protection system that relies on machine learning to identify malicious IP addresses and domains. VeriPhi draws upon threat intelligence from more than 78 external sources, including crowd-sourced intel feeds, as well as commercial, government and open sources. Its advanced machine-learning algorithms enhance this intelligence with real-world data gathered from your network, enabling it to become more accurate and efficient as it “learns” about the email traffic your business typically generates.
Contact us to learn more about how the Secureli platform’s multilayered defenses can protect your organization from CEO fraud today.
The revelation that 500 million customer records from the Starwood Hotels and Resorts guest reservation database had been compromised was one of the major security events of 2018, shocking and dismaying industry leaders, lawmakers and consumers alike. Not only was this breach one of the largest in history, but the personal information accessed was also unusually broad in scope, including customers’ gender, birth dates, email and postal addresses, and passport and telephone numbers, along with payment card information. Particularly noteworthy—and troubling—is the unusually long dwell time involved: the attackers reportedly had access to the Starwood database for four years before the breach was discovered.
Starwood Hotels and Resorts is a subsidiary of Marriott International, the world’s largest hotel chain, with nearly 1.3 million rooms in 6,700 properties worldwide and over $22 billion in annual revenue. As an established brand and industry giant, Marriott might be expected to maintain higher information security standards than the industry norm, and to have more resources to invest in advanced cybersecurity technologies. But this seems not to have been the case.
The long-term financial implications of the incident are hard to predict. Marriott’s stock has already fallen approximately 6.8% since the breach was announced. A class-action lawsuit was initiated in New York in an attempt to recover investors’ losses. And civil penalties and fines are likely to cost the company between $200 million and $450 million, depending in part on the size of the fee assessed for failing to comply with Europe’s General Data Protection Regulation (GDPR). Direct costs for notifying customers and supplying them with free data or credit monitoring services are estimated to fall in the range of $500 million.
Even more likely to cause long-term damage, though difficult to quantify, is the hit that Marriott’s brand reputation has already taken. A front-page article in the Wall Street Journal suggested negligence by reminding readers that “the company missed a significant chance to halt the breach years earlier,” and an expert commentator at Forbes asked whether Marriott was “putting customers at risk because it assumed the cost of a breach would be less than the cost of better security.” Given the tenor of the incident’s numerous mentions in major media outlets, it’s fair to say that in popular opinion, Marriott is perceived to be at fault.
Though Marriott has created a website offering information about the incident and the company’s response, few details about the actual tools, processes and procedures that were in place at the time of the attack have been made public. Nonetheless, the facts that we do have—including Starwood’s breach history—can lead us toward some tentative conclusions about what may have made this breach possible, and what other hospitality leaders can learn from the incident.
Marriott completed its acquisition of Starwood in the fall of 2016. With the $13 billion purchase, the company took control of Sheraton, Westin, W and St. Regis hotel properties, but it also took on a significant technical challenge—merging disparate reservations systems, loyalty programs and their underlying databases. From the outset, the integration of Starwood’s legacy systems with the existing Marriott infrastructure proceeded more slowly than was expected, and was riddled with technical difficulties.
Starwood’s reservation system and databases were particularly vulnerable to attack because they had been cobbled together from the multiple payment and property-management systems in use at the various hotel brands that Starwood had itself acquired. Integrating this patchwork with the Marriott infrastructure further increased the challenges—and level of risk—involved.
In November of 2015, shortly after Marriott announced its plans to acquire the company, Starwood disclosed that it had suffered a relatively small breach of its point-of-sale (POS) systems in restaurants, gift shops and other service areas in more than 50 of its North American hotels. We don’t know if, or exactly how, that earlier incident is connected to the more recently discovered, much larger breach.
But we do know that the attackers behind the larger breach already had access to the Starwood databases at the time of the company’s acquisition by Marriott. And we know that the POS compromise began in 2014 as well. We can be sure that the two security events were—at the very least—concurrent.
As security experts have noted, this would not be the first time that an intrusion that initially appeared to have been limited to POS compromise was later found to have been part of a much larger-scale attack. And the fact that the earlier incident took place continues to raise questions about the efficacy and thoroughness of Marriott’s investigation and incident response procedures.
The intruders in the Starwood database encrypted the information they’d accessed, most likely to evade the detection of its removal by a data loss prevention (DLP) tool in use within the network. Thus, determining exactly which records were involved isn’t straightforward.
The payment card data involved was encrypted using the robust Advanced Encryption Standard (AES-128) algorithm, which requires two separate key components to decrypt. Marriott, however, has admitted that they cannot “rule out the possibility that both were taken,” suggesting that the encryption keys may have been stored on the same network segment that was compromised. Naturally, this sort of mistake nullifies all the benefits of using encryption.
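Marriott has not described its key-management scheme, but one common way to implement “two separate key components” is split knowledge via XOR secret sharing: the working key is never stored directly, only two shares that must be combined to reconstruct it. The minimal sketch below (an illustration of the general technique, not Marriott’s actual system) also shows why the protection evaporates if both shares sit on the same compromised network segment:

```python
import secrets


def split_key(key: bytes) -> tuple:
    """Split a key into two XOR shares; either share alone is
    indistinguishable from random noise."""
    share_a = secrets.token_bytes(len(key))               # random pad
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # key XOR pad
    return share_a, share_b


def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Reconstruct the key; BOTH components are required."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))


# An attacker who obtains only one share learns nothing about the key;
# an attacker who obtains both (e.g. from the same compromised segment)
# recovers it trivially -- which is exactly the scenario Marriott
# could not rule out.
key = secrets.token_bytes(16)  # e.g. an AES-128 key
a, b = split_key(key)
assert combine(a, b) == key
```

The design lesson is that the shares must live in separately secured locations (an HSM, a different trust zone), so that a single breach never yields the complete key.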
Because of the exceptionally long dwell time of these attackers on the Starwood network, some analysts believe that they were nation-state level threat actors with extensive resources at their disposal and high levels of sophistication. It’s almost impossible for any organization, even one with enterprise-grade information security management systems in place, to prevent these kinds of attackers from gaining a foothold in their systems.
The companies most successful at mitigating these risks are those whose focus has shifted from breach prevention to improving detection and response. Although standard best practices such as applying available patches promptly and running anti-malware programs on endpoint devices are helpful, they’re not enough to guarantee the security of today’s complex systems in the current threat landscape.
The Marriott breach should serve as a wakeup call, reminding hospitality industry leaders that today’s systems need multilayered defenses. In particular, log data from all devices in the ecosystem, all network flows, all Windows Active Directories, and all databases needs to be monitored constantly. This monitoring can best be accomplished by systems relying on machine learning to distinguish the truly meaningful alerts from the false positives, so that security engineers’ attention—a finite resource—can be focused on the right place at the right time.
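As a toy illustration of how monitoring can separate meaningful alerts from noise (a stand-in for, not a description of, any vendor’s actual models), even a simple statistical baseline catches a genuine spike: flag an hourly event count only when it deviates from its own history by several standard deviations.

```python
from statistics import mean, stdev


def is_anomalous(history: list, current: int, z: float = 3.0) -> bool:
    """Flag `current` if it sits more than `z` standard deviations from
    the historical mean -- a minimal stand-in for the machine-learning
    models described above."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z


# Hourly failed-login counts over the past day (hypothetical data).
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6,
            5, 4, 6, 5, 6, 7, 5, 6, 4, 5, 6, 5]
assert is_anomalous(baseline, 120)       # brute-force spike stands out
assert not is_anomalous(baseline, 7)     # ordinary variation passes
```

Real detection systems replace this single metric with learned, multivariate baselines across devices, flows, directories and databases, but the underlying goal is the same: surface the few deviations that deserve an engineer’s attention.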
To learn more about how the Secureli platform offers small and medium-sized businesses an enterprise-level security infrastructure at an affordable price, contact Netswitch today.
Late in the autumn of 2016, Microsoft announced that its acquisition of LinkedIn had been finalized at a price of $26.2 billion. The deal attracted a great deal of scrutiny from investors, and attention from commentators, who calculated that Microsoft had paid about $260 per monthly active user. LinkedIn had other assets besides its database of information on more than 433 million registered users worldwide, including a near-monopoly on professionally oriented social networking, and significant relational and intellectual capital in its Silicon Valley-based workforce. But the user data—and the question of its worth—was at the center of most conversations about the deal.
Did Microsoft pay too much? Or was $26.2 billion a fair market valuation for what is essentially the world’s largest work history database?
The answers to these questions are far from simple or clear. There’s no precise formula for pricing user data in today’s business environment. But the questions are highly relevant for all decision-makers charged with keeping their business’s most valuable assets safe, even as this task grows increasingly complex and daunting.
It has become a truism among business leaders and financial analysts: data is the oil of the twenty-first century, the most valuable commodity in the digital age. Those businesses best able to make use of—and extract value from—their data will see the fastest growth and have the most potential to dominate their industries in the coming years.
But data is unique among highly valuable business assets in several ways, including the difficulty and complexity of accurately assessing its value. What also distinguishes data from other business asset classes is its tendency to exhibit increasing returns to use. Most conventional physical assets tend to depreciate in value over time: buildings age and require repair, technology systems become obsolete, etc. And the more often and more rigorously they’re used, the faster they decay. Data, in contrast, tends to become more valuable the more often it’s used.
In order for companies to maximize the opportunities that their data offers them, however, they need to make that data available to users, thinkers and algorithms across their organizations. Ease of access to data is what enables businesses to become more agile and more responsive to their customers.
Today’s businesses need information systems that will permit them to extract the greatest possible value from their data. Whether it’s remembering customers’ preferences to encourage repeat business, or deeply understanding website user experiences in order to increase cross-selling and drive online booking, data can be used to improve outcomes and solve business problems. Increasingly, this data is also being used to enhance customer experiences. But in order to be useful, this data must be captured, stored and made available at many points within and outside of the organization.
Each of these processes—the data’s capture, storage, transmission, use and reuse—adds another point of potential information security vulnerability.
As security expert and Fortinet CEO Ken Xie explains in a recent interview, the needs of today’s businesses change far more quickly than their infrastructures. Despite recent increases in IT security investment, security teams have struggled to keep pace with the rapid evolution of open, borderless networks and the demand for ubiquitous, always-on data accessibility.
It is no longer effective to focus on securing the network at its borders, because these borders have become too porous and difficult to define. End users increasingly rely on a variety of connected endpoints to access and manipulate high-value data from diverse locations (imagine checking your personal financial information, which is stored in the cloud, from your mobile phone). Today’s data is everywhere, and its security must travel with it.
As global IT security spending continues to climb, stakeholders face increasingly complex decisions. It is of course important to ensure you’re getting the most effective risk prevention for each dollar you spend on information security solutions. But it’s also critical to ask whether your business is choosing an information security solution that will allow you to maximize your data’s value, and will enable its appreciation as an asset.
Increasing your data’s usability means that it can no longer be confined behind a firewall or contained solely within an on-premises server. It means you need a solution that doesn’t interfere with usability—whether your employees are working from home, working on tablets or other personal devices, or working with cloud-based, as-a-service applications. It means that you need a multi-layered platform approach that can detect threats on local devices, remotely connected devices and in processes hosted in the cloud. And it means you need a seamlessly integrated platform in which all parts communicate and work well together.
Every business is different. The solution that will best enable your organization to access and make use of your data is one that takes the unique needs of your employees, partners and customers into account. It’s one that’s been developed on the basis of a thorough examination of your existing infrastructure, and one that’s been designed with your priorities and plans for growth in mind.
If you’re looking for a solution that not only doesn’t interfere with your data’s usability, but actually enhances it, customization is key. By fine-tuning your deployment so that it includes processes and procedures that support your productivity while minimizing risk, you can configure your system to work for you. You can choose incident response and security alert policies that make sense for your industry, business size, and compliance requirements. And you can choose system components that will fit your budget as well.
To learn more about how Netswitch’s three-step CARE implementation process ensures that every Secureli deployment is customized to address an individual organization’s unique risks, contact us today.
The more the threat landscape diversifies and changes, the more it stays the same. Some of the oldest tactics in cybercriminals’ playbooks remain the most prevalent and successful. In 2018, as in years past, the vast majority of data breaches were accomplished by criminal actors external to the targeted organization. But the majority of these incidents were made possible by an action taken by an employee within that organization. Whether by mistake, through carelessness, or after being tricked by a sophisticated con, humans continue to fall victim to phishing attacks and social engineering schemes at an alarming rate.
Even as information security awareness grows, your employees are continuing—albeit inadvertently—to help attackers find a way onto your system: humans remain the weakest link in every organization’s security infrastructure.
According to the 2018 Verizon Data Breach Investigations Report, the most commonly employed activity in successful data breaches was the use of stolen credentials. Compromised or stolen passwords were to blame for 81% of hacking-related data breaches in another recent industry-wide cybercrime survey. And the majority of these credentials were obtained through phishing attacks, or in cases where users unintentionally downloaded keyloggers or other forms of malware when visiting fraudulent websites.
Although it’s possible to prevent some—perhaps even the majority—of these incidents through increased awareness and improved employee training, it’s not possible to eliminate the threat. Today’s reality is that someone, somewhere within your organization will inevitably make an IT security mistake at some point in time.
At least in part, this ongoing vulnerability can be attributed to the increasing sophistication of attackers’ techniques and methods. In the past, phishing emails were relatively easy to spot: they contained misspellings, obviously incorrect URLs, odd-looking graphics or improbable alerts. They were designed predominantly to target users who were careless, harried or distracted—users too busy to pause and consider the consequences before opening an attachment or clicking a link.
Today’s most advanced phishing attempts rely on far more sophisticated tactics. The ready availability of large volumes of highly personal information on social media networks enables attackers to craft messages that are customized to exploit individual recipients’ unique vulnerabilities. Text and graphics may be copied perfectly from authentic alert messages sent out by the companies being spoofed. Some email messages may even contain hidden code that will execute automatically as soon as the messages are opened on the victim’s computer.
But organizations also remain vulnerable because all too often they fail to take necessary steps to improve their employees’ security. Though research has proven security awareness training an effective means of reducing organizations’ overall susceptibility to these sorts of attacks, many businesses don’t budget adequately for this type of education. Or they choose programs that are too short or too shallow, or that fail to engage employees or impress them with the seriousness of the issue.
It’s even more worrisome that many businesses neglect to implement even the simplest of technical measures to protect themselves from attackers seeking to exploit their employees’ human weaknesses. Far too often, decision-makers don’t install the most effective tools or institute the strongest security policies because they fear these measures will make their network’s resources less accessible. The common belief is that there’s a tradeoff between security and usability: what makes employees safer online may also make it more difficult for them to do their jobs.
In most cases, this perception is far from accurate. Even when extra steps are added (to login procedures, for instance), they seldom take more than a few seconds to complete, and the time spent is a worthwhile investment when compared to the probable costs of a data breach.
Time and time again, post-breach investigations show that the attacks succeeded because known security policy best practices were not followed, or readily available tools were not deployed.
Here are the most important—and most frequently neglected—steps you can take to protect your network from phishing and other social engineering-based threats:
Multi-factor authentication is one of the best ways to safeguard network assets against attacks involving compromised credentials. When multi-factor authentication is in place, users must present two or more unique factors—such as a password, a one-time access code sent to a separate device, or a physical key—in order to access the network.
Not only does multi-factor authentication add a much-needed extra layer of protection for administrative and privileged user accounts as well as business email and other applications, but it also sends an alert—in the form of the request for the second authentication factor—to any employee whose account has been targeted by an attacker. Well-trained employees will recognize that unexpected alerts signal account compromise, and will report them promptly to security teams for investigation.
Outbound web traffic can be regulated on individual endpoints by installing browser-based web filters that prevent users from following links to known malicious addresses. It can also be regulated at the network level by incorporating data loss prevention (DLP) controls. DLP tools monitor network traffic for data streams that match a particular pattern—such as payment card data or protected customer information—and then prevent access or block the traffic.
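The pattern matching a DLP control performs can be sketched in a few lines. This toy filter (an illustration of the technique, not a production DLP engine) looks for digit runs that resemble payment card numbers and confirms them with the Luhn checksum before deciding to block the traffic:

```python
import re

# Loose pattern for a 13-16 digit primary account number,
# optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def luhn_ok(digits: str) -> bool:
    """Luhn checksum: weeds out random digit strings from real PANs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def contains_card_data(payload: str) -> bool:
    """True if an outbound payload appears to carry a payment card number."""
    for match in CARD_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False
```

Commercial DLP tools layer many such detectors (card data, national ID formats, custom document fingerprints) over both network traffic and endpoint activity, but the block-on-pattern-match principle is the same.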
Proactively-designed network-based defenses are usually the most effective means of blocking malicious IP addresses and domains without subjecting end users to unnecessary restrictions. Tools that employ contextual or advanced behavioral analytics can protect business networks even from newly-established dangerous URLs that haven’t yet been blacklisted.
Inbound message filtering is the most commonly used tool to prevent social engineering attacks, but it’s less likely to be effective when employed in isolation instead of as part of a comprehensive multilayered approach. Although spam filters will detect most if not all of the least sophisticated phishing attempts, messages that have been carefully crafted to resemble genuine correspondence and sent to a single, unique recipient are unlikely to be flagged as dangerous. Malicious emails sent from a compromised account within an organization with a well-established, trusted IP address are also unlikely to be detected.
Even the best-trained and most technically savvy employees can fall prey to socially-engineered attacks. It’s human nature: when we’re stressed, busy or tired, we can momentarily succumb to carelessness. And a single click is all it takes.
For this reason, the wisest approaches to phishing prevention are those that emphasize ongoing monitoring and detection. Once we accept that it’s impossible to prevent every user from clicking every link, every time, and instead adopt an “assume breach” mentality, we can begin to implement the most effective defenses. Contact Netswitch if you’d like to learn more about how the Secureli platform incorporates the most effective anti-phishing tools available today, including data loss prevention, real-time network monitoring and intelligent malicious domain blocking.
When cloud-based services first became popular more than a decade ago, business leaders embraced their versatility, scalability and predictable costs, but many did so with a sense of unease: was their valuable data truly safe when housed in an offsite data center to which they had no physical access? Would third-party cloud service providers offer adequate security controls to guard against the increasing sophistication and frequency of threats?
To IT professionals, it seemed that the very features that made cloud computing attractive—its use of shared resources and the speed and flexibility with which workloads could be created or modified—were at odds with traditional best practices in network security.
Although these concerns have not vanished, organizations today are adopting cloud-based technologies and applications at an increasingly rapid pace. In fact, more than 80% of organizations today store data in the public cloud, and industry leaders predict that 83% of workloads will move to the cloud by 2020.
In this environment, cloud security is becoming an increasing concern—and top priority—across nearly all industry verticals.
Today’s cloud-based infrastructure environments are extraordinarily complex. In a recent survey conducted by Forrester Research, more than 85% of companies described themselves as employing a multi-cloud strategy, meaning that they rely on various public and private clouds for different application workloads. Each environment, whether public, hybrid or dedicated, comes with its own unique set of security challenges.
With private cloud solutions, enterprise customers are responsible for all aspects of the security of their data, infrastructure and physical network. In public clouds, the vendor assumes responsibility for securing the physical infrastructure and hypervisor, while the tenants must secure their own virtual networks, applications, access management systems and data. When resources or applications are delivered as a service, the vendor assumes responsibility for most aspects of their platform’s security, but the customer retains ownership of their data, and responsibility for how the applications are used.
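As a rough illustration, the shared-responsibility split described above can be sketched as a simple lookup. The model names and component lists below are a simplification for illustration only; the exact boundary varies by provider and contract.

```python
# Hypothetical sketch of the shared-responsibility model described above.
# Component names and the exact split are illustrative, not an authoritative matrix.
RESPONSIBILITY = {
    "private": {
        "customer": ["data", "applications", "access management", "virtual network",
                     "hypervisor", "physical infrastructure"],
        "vendor": [],
    },
    "public_iaas": {
        "customer": ["data", "applications", "access management", "virtual network"],
        "vendor": ["hypervisor", "physical infrastructure"],
    },
    "saas": {
        "customer": ["data", "application usage"],
        "vendor": ["applications", "access management", "virtual network",
                   "hypervisor", "physical infrastructure"],
    },
}

def who_secures(model: str, component: str) -> str:
    """Return which party is responsible for securing a given component."""
    split = RESPONSIBILITY[model]
    return "customer" if component in split["customer"] else "vendor"
```

Note that in every model the customer retains responsibility for the data itself, which is the point the paragraph above emphasizes.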
As businesses move increasing numbers of workloads to these intricately complex environments, they tend to lose visibility into their deployments. It can be difficult to access log data from the public cloud, or to obtain it in as much detail as can be gathered on-premises. It can also be challenging to correlate anomalies in this data with patterns found in on-premises data or in data collected from roving endpoints.
Among the chief advantages offered by the cloud computing model are ubiquity and scalability. If your organization needs a great deal of processing power to solve a complex problem involving a very large amount of data, you can turn to a cloud-based high-performance computing infrastructure for as long as you need the resource, no matter where you’re located.
And, in fact, deriving actionable intelligence from the network security event logs generated in today’s complex multi-cloud computing environments is exactly this sort of complex computational problem. To solve it quickly enough to speed up the identification, containment and elimination of threats requires high-volume, high-velocity data processing.
Before the advent of cloud computing, many organizations relied on network-based tools such as Security Information and Event Management (SIEM) solutions to perform this sort of data analysis. Although SIEM remains a critical component in comprehensive and holistic IT security toolkits, first-generation SIEMs haven’t evolved to keep pace with the expansion of the attack surface and increasing complexity of systems.
To maximize the security of your organization’s data and applications in the cloud, look for a next-generation solution that can integrate raw streaming data with logs from all devices in the local ecosystem and all services and processes in the cloud for deep analysis. This requires powerful and elastic computing resources that can handle billions of events per second along with contextual information.
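A first step in this kind of integration is merging event feeds from different environments into a single chronological timeline so anomalies can be correlated across them. The sketch below uses made-up event records and field names, assuming each source already emits events in time order; a real pipeline would, of course, operate on streaming data at vastly larger scale.

```python
import heapq
from datetime import datetime

# Hypothetical event records; sources, timestamps and messages are illustrative.
cloud_events = [
    {"ts": datetime(2019, 3, 1, 9, 0), "src": "cloud", "msg": "login failure"},
    {"ts": datetime(2019, 3, 1, 9, 5), "src": "cloud", "msg": "role change"},
]
onprem_events = [
    {"ts": datetime(2019, 3, 1, 9, 2), "src": "firewall", "msg": "port scan"},
]

def merged_timeline(*streams):
    """Merge already time-sorted event streams into one chronological feed,
    the precondition for correlating anomalies across environments."""
    return list(heapq.merge(*streams, key=lambda e: e["ts"]))

timeline = merged_timeline(cloud_events, onprem_events)
```

Here `heapq.merge` interleaves the sorted streams lazily, so the same approach scales to many sources without loading everything into memory at once.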
The correlation is simple: the faster your organization can identify and contain threats, the lower the risk of a breach.
But the process of detecting and remediating threats is highly complex. No matter how deep the talent and expertise of your security personnel—and regardless of whether they’re an internal resource or sourced externally—humans are becoming increasingly incapable of monitoring incoming threats in real time. The sheer volume of data is simply too great.
Maintaining an always-on 24/7/365 Security Operations Center (SOC) housing a team of experts available for monitoring, analysis and incident response can help ensure the rapid detection of intrusions and risks. But SOC teams often face an unmanageable volume of false-positive threat alerts, and must spend too much of their limited time distinguishing real incidents from incorrectly-flagged ones.
This complexity can be substantially reduced by incorporating AI and machine learning into cybersecurity incident response protocols. By incorporating advanced network and user behavioral analytics and predictive threat modeling, today’s most advanced solutions can generate alerts with much more accuracy. This means that SOC teams can focus their attention on the incidents that most warrant it, dramatically improving their effectiveness and efficiency.
And because such advanced behavioral analytics platforms are built with real-time unsupervised and semi-supervised learning capabilities, their performance improves over time. The longer they’re in use, the lower the overall volume of alerts, and the greater the accuracy of each individual alert. Their “intelligence” also allows these systems to combat insider threats, identify dangerous user errors, and prevent previously undiscovered zero-day exploits—threats that can’t otherwise be anticipated.
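As a toy illustration of the baseline-and-deviation logic underlying behavioral analytics (real platforms use far richer models than this), an unsupervised detector can learn what “normal” looks like from history and flag sharp departures from it. The threshold value here is an arbitrary choice for the example.

```python
import statistics

def flag_anomalies(history, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A deliberately simple stand-in for the behavioral baselining described
    above: the 'baseline' is the mean of observed activity, and anything
    far outside it is surfaced for review.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return [x for x in history if abs(x - mean) / stdev > threshold]

# Example: daily login counts for one account, with one abnormal spike.
suspicious = flag_anomalies([20, 22, 19, 21, 20, 300])
```

Because no signature is involved, a detector of this shape can surface insider misuse or a novel exploit pattern, which is precisely the advantage the paragraph above describes.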
Traditional signature-based endpoint protection providers collected their own internal repositories of known threat data. This enabled them to update their products reactively to guard against malware after it had been identified and cataloged—a process with inherent lag time.
Today’s most advanced protection platforms rely on threat information from a much broader array of sources, and they can access this information more quickly. Drawing upon governmental, institutional, commercial, crowd- and open-sourced threat intelligence, they can compare anomalies in network behavior and user activities with data derived from an up-to-date overview of the global threat landscape. The result is an increase in the speed and accuracy of known threat detection.
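At its simplest, checking observed network activity against aggregated threat intelligence is an indicator lookup. The sketch below uses fabricated indicator values from reserved documentation ranges; a real platform would consume continuously updated feeds and score partial or contextual matches rather than exact ones.

```python
# Illustrative only: these indicator values are made up, not real threat data.
threat_feed = {"203.0.113.7", "evil.example.net", "198.51.100.9"}

def match_indicators(observed, feed=threat_feed):
    """Return observed indicators (IPs, domains) found in the threat feed."""
    return sorted(set(observed) & feed)
```

The speed advantage the paragraph describes comes from exactly this kind of set membership test: comparing local anomalies against a current global picture is fast once the intelligence is already aggregated.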
As more and more organizations migrate increasing quantities of data and resources to the cloud, it’s becoming an increasingly appealing target for attackers. But the very features of cloud computing that make it so attractive to businesses—its global accessibility, efficiency, and availability of the processing power needed to solve complex computational problems—can be used to help cybersecurity teams work more efficiently and smarter.
To learn more about how Netswitch’s Secureli Advanced Threat Protection platform relies on advanced behavioral analytics to keep your cloud-based resources safe in an ever-changing threat landscape, contact us today.
As we talked about in last week’s post, the PCI Data Security Standard has established a near-universal set of technical and operational requirements to which all businesses that process credit card transactions must adhere. Accepting card-based payments is the norm in the hospitality sector—it’s a must for any hotel or restaurant hoping to offer the ease and convenience that today’s business and leisure travelers have come to expect.
Hence demonstrating and maintaining compliance is—and rightfully should be—of concern to all industry leaders today. Recent reports do indicate that they’re moving in the right direction: according to Verizon’s most recent Payment Security Report, 55.4% of organizations surveyed were found to be 100% compliant at an interim assessment. This is an impressive accomplishment, especially considering the cost and complexity of full compliance, and considering that this is the fifth consecutive year in which rates have increased.
The hospitality industry’s performance remained below average, though at 42.9% fully compliant, the industry still saw a significant improvement upon last year’s numbers (30.0% full compliance).
Overall, PCI DSS compliance rates are clearly on the rise. But it’s worrying to note that overall rates of data compromise aren’t decreasing in line with these improvements in compliance. Over the same five-year period, according to the 2017 Breach Level Index Report, the total number of breach incidents perpetrated by malicious outsiders rose from 662 to 1,269, with a peak of 1,336 in 2016. In other words, during a time when PCI compliance saw a 44.3% increase, the number of malicious data breaches grew by 91.6%.
Given these troubling numbers—and anecdotal accounts, such as the story of the Target breach, which occurred just weeks after the retailer was certified as compliant—it’s tempting to conclude that PCI compliance, though it’s both mandatory and expensive, lacks any real security benefit.
But according to the data provided in the aforementioned 2017 Verizon Payment Security Report, which analyzed more than 300 network intrusions involving payment card data, none of the breached companies was found to be fully PCI compliant at the time of the attack. Further, Verizon investigators claim that “of all the payment card data breaches... [their] team investigated over the past 12 years, not a single organization was fully PCI DSS compliant at the time of the breach.”
How, then, can we explain the apparent disconnect between Verizon’s findings and the Breach Level Index data? A few facts about PCI compliance—its value and its limitations—can cast more light on the real relationship between compliance and information security.
It’s a commonplace—and entirely reasonable—assumption: if your organization passes the annual compliance assessment conducted by a Qualified Security Assessor (QSA), who has been certified by the PCI Security Standards Council, you must be fully compliant. This only makes sense, right?
But QSAs have only a limited amount of time to spend on each assessment. Their methodology necessarily relies upon user-reported information (interviews) and sampling. They simply do not have enough time to review a comprehensive collection of system event logs, check all network and component configuration settings, and comb through all on- and offsite data repositories. Just as the interview—as a method of data collection—is inherently subject to human error, sampling is by nature incomplete. It’s not uncommon for organizations to discover compliance gaps soon after certification—gaps that were missed by QSAs.
Another commonplace assumption among hospitality industry leaders is that PCI compliance is fundamentally a one-time event. If you’re found to be in compliance at the time of your annual assessment, this logic goes, your security is guaranteed for the following year. But nothing could be further from the truth.
In fact, PCI compliance requires ongoing effort, including employee training, monitoring system events and configuration settings, and installing software updates. Failure to perform any one of these tasks can cause your organization to fall out of compliance, even if your certification remains current.
Verizon’s own breach investigations emphasize this point: all the breached organizations Verizon surveyed had failed to maintain full compliance, most often by neglecting to maintain accurate system and user activity logs, disregarding software patches, or mistakenly altering secure configuration settings.
The PCI DSS is written in the form of a checklist, with each requirement composed of a series of sub-requirements, and each sub-requirement defined such that compliance (or non-compliance) can be stated in binary terms (yes/no). This makes it seem that compliance is simple to verify.
But consider, for instance, sub-standard 11.2, which states that organizations must “run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).” At first glance, it seems that determining whether or not an organization performs quarterly network vulnerability scans would be easy to do. But the individual QSA is in fact tasked with deciding which network changes rank as “significant.”
And, on the one hand, because QSAs are paid by the organizations they’re hired to assess, they may be subtly pressured to let small problems slide, or risk not being re-hired for next year’s assessment in favor of an “easier” consultant. On the other hand, a forensic investigator, seeking to determine a breach’s cause after the fact, may be motivated to apply a stricter definition of the term “significant.”
The current PCI standard mandates that compliant systems include only two specific software applications or devices (anti-virus software and a firewall), and both are intended solely to prevent incursions rather than increase the speed with which organizations can identify and contain breaches.
Anti-virus software programs are reactive by design, requiring near-constant updating yet still leaving subscribers vulnerable to as-yet undiscovered malware variants. Firewalls, though commonplace and necessary, are intended as a “first-line” defense, blocking intruders at the network’s perimeter, and making their strongest contributions to overall security when serving as part of a multi-layered, defense-in-depth strategy.
PCI DSS does not mandate the use of a SIEM tool or other system event visualization platform, even though such advanced analytics can significantly reduce the amount of time it takes to detect a breach, and even though integrating SIEM and advanced threat protection platforms with firewalls and anti-virus programs demonstrably improves their performance.
In summary, PCI DSS—as you’ll recall from last week’s article—was developed to protect the interests of the banks issuing payment cards, not the merchants who rely upon them to do business. It’s far more difficult for organizations to maintain compliance than it is to obtain it, and quite easy for forensic investigators—and card issuers—to discover noncompliance after the fact—and use it as grounds for finding liability.
Nonetheless, attaining true compliance—an ongoing process that requires effort, care, and thoughtful attention from employees in many roles within your organization—has real value, in terms of both security and protection from liability. Maintaining real compliance for its own sake can seem difficult, complex and costly. But compliance can also come as a simple by-product of choosing a multi-layered, defense-in-depth security platform that includes advanced network monitoring tools and behavioral analytics. And partnering with a managed detection and response provider like Netswitch can make this option surprisingly affordable.