Infosec Island Latest Articles

Webinar: How to Use Good, Actionable Threat Intelligence Tue, 21 Mar 2017 10:25:16 -0500

How to use good, actionable threat intelligence  

We don't need more undigested data. We need answers. Enter Threat Intelligence.  

Useful threat intelligence is not data feeds of indicators without context; it's interpretation that boils things down to provide recommendations so we can operate safely in the Internet age.  

Join F5 Networks and SecurityWeek for this interactive webinar on March 22nd at 1PM ET, where we will provide the following takeaways:   

• What good, actionable threat intelligence looks like

• How to effectively use threat intelligence to neutralize potential attacks before they strike

Register Now

Can't make the live webinar? Register today and you'll get a link to watch on demand at your convenience.

Copyright 2010 Respective Author at Infosec Island
Malvertising and Exploit Kits Still a Significant Threat: FireEye Sat, 18 Mar 2017 14:40:50 -0500 Malicious online ads and the exploit kits (EK) used to infect computers with various types of malware continue to pose a significant threat, FireEye warns.

Used in “drive-by” attacks, malvertising can infect computers without users even being aware that malicious code on the web page they are visiting is covertly installing malware. Bad actors use HTTP redirect protocols or iframe redirects or code injected in legitimate web pages to exploit unmitigated vulnerabilities and infect users. In some cases, domain shadowing is used to hide rogue ad servers as legitimate advertisers.
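The redirect chain described above can be sketched as a simple hop-following exercise. The following illustrative Python (all domain names are made-up placeholders, not FireEye data) shows how one ad request can silently traverse an affiliate network and a "cushion" or "shadow" server before reaching an EK landing page:

```python
# Illustrative sketch (not FireEye tooling): how a "drive-by" redirect chain
# hops from a legitimate ad slot to an EK landing page. All domains are
# hypothetical placeholders.

def resolve_chain(start, redirects, max_hops=10):
    """Follow a chain of HTTP 302 / iframe redirects and return each hop."""
    chain = [start]
    while chain[-1] in redirects and len(chain) <= max_hops:
        chain.append(redirects[chain[-1]])
    return chain

# Simulated observations: ad server -> affiliate network -> shadow server -> EK
observed = {
    "ads.example-publisher.com": "affiliate.example-network.net",
    "affiliate.example-network.net": "shadow.example-cushion.org",  # "cushion server"
    "shadow.example-cushion.org": "landing.example-ek.info",        # EK landing page
}

chain = resolve_chain("ads.example-publisher.com", observed)
print(" -> ".join(chain))
```

The user only ever requested the publisher's ad slot; every subsequent hop happens without their awareness, which is what makes mapping these chains the core of malvertising analysis.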

As FireEye explains, popular ad servers sometimes redirect to affiliate networks, and these organizations might forward traffic to servers supporting other malicious domains, referred to as “Cushion Servers” or “Shadow Servers.”

Over the past four months, FireEye observed malvertising campaigns associated with a group of first-layer compromise pages that used the same injected script to redirect to Magnitude EK. Popular mainly in the APAC region, the EK was observed targeting web servers with specific header information, and the injected script appeared only when the site was loaded through the advertisement, not when the URLs were accessed directly.

Some of the domains associated with the EK were hosted on Webzilla B.V. and appear to belong to the same actor, while others were Flash game websites registered through the ‘Alpnames Limited’ registrar and hosted on PlusServer AG infrastructure in Germany. On rare occasions, the advertiser poptm[.]com, hosted on CloudFlare, was used.

The researchers also observed campaigns abusing domains registered under the organisation “TTA ADULTS LIMITED” and using advertisers belonging to the Adcash group, along with other campaigns abusing domains registered under the organisation “China Coast” and using ads.adamoads[.]com and other ad sites for redirection.

RIG EK, currently the leading toolkit out there, has been associated with well-known campaigns such as EITest Gate, Pseudo-Darkleech, and Afraid Gate, but also with other malvertising campaigns that use redirection.

In late 2016 and early 2017, FireEye observed [.]info and [.]pw TLD domains that acted as intermediate redirect domains invoked via legitimate advertisers, but which led to RIG EK domains instead. These were ad-service-loaded, casino-themed domains featuring injected malicious iframes for redirection, acting as shadow servers for the EK.

The ad service was provided by the AdCash ad group, which stopped supporting these domains in February 2017. The campaign then switched to new domains and started leveraging the popular ad service popcash[.]net, which has been notified of the matter.

Sundown, the second most active EK at the moment according to Symantec, is leveraging redirection in a series of malvertising campaigns as well, including one that leverages domains hosted on two neighboring IP addresses. Multiple legitimate advertisers are currently redirecting to one of the domains hosted on these IPs, which then redirect to a Sundown EK landing page.

The security researchers also discovered a group of redirect domains that has been leveraging advertiser popcash[.]net to lead users to Sundown EK landing pages via a chain of two domains. Another campaign was observed using shadow servers loaded via legitimate ad sites hosted on Webzilla B.V.

Another active toolkit is Terror EK, which FireEye says is similar to Sundown EK. The threat has been consistently leveraging advertiser serve.popads[.]net to redirect traffic to domains it controls, with some instances observed using this technique as early as December last year. Terror EK was observed downloading ccminer payloads.

“Malvertising and exploit kits continue to be a significant threat to regular users. While we strongly recommend using ad blockers for all web browsers, we understand that it’s not always possible. For that reason, the best approach is to always keep your web browsers and applications fully updated. Also, regularly check your browser to see what plugins are being used and disable them if they are not necessary,” FireEye notes.

Related: RIG Grabs 35% of Exploit Kit Market in December

Related: Edge Exploits Added to Sundown EK

SAP Cyber Threat Intelligence Report – March 2017 Fri, 17 Mar 2017 11:20:00 -0500 The SAP threat landscape is constantly growing, putting organizations of all sizes and industries at risk of cyberattacks. The idea behind the SAP Cyber Threat Intelligence report is to provide insight into the latest security threats and vulnerabilities.

Key takeaways

  • This month, the software vendor released a record-breaking number of Security Notes for 2017. The latest patch update consists of 35 SAP Security Notes;
  • An RCE vulnerability in the SAP GUI client was closed. Millions of end users could have fallen victim;
  • HANA vulnerabilities are on the rise. This month, 5 Notes addressing this platform were released, one of which was rated 9.8.

SAP Security Notes – March 2017

SAP has released the monthly critical patch update for March 2017. This patch update includes 35 SAP Notes (28 SAP Security Patch Day Notes and 7 Support Package Notes).

4 of the Notes were released after the second Tuesday of the previous month and before the second Tuesday of this month. 7 of the Notes are updates to previously released Security Notes.

8 of the released SAP Security Notes have a High priority rating and 1 was assessed as Hot News. The highest CVSS score among the vulnerabilities is 9.8.

SAP Security Notes March by priority

The most common vulnerability type is Cross-Site Scripting.

SAP Security Notes March 2017 by type

Issues that were patched with the help of ERPScan

This month, 6 critical vulnerabilities identified by ERPScan’s researchers Boris Sanin, Dmitry Chastuhin, Dmitry Yudin, Mathieu Geli, and Vahagn Vardanyan were closed by releasing 5 SAP Security Notes.

Below are the details of the SAP vulnerabilities identified by ERPScan researchers.

  • A Remote command execution vulnerability in SAP GUI for Windows (CVSS Base Score: 8.0). Update is available in SAP Security Note 2407616. An attacker can exploit a Remote command execution vulnerability to execute commands remotely without authorization. The commands will run with the same privileges as the service that executed them.
    SAP GUI is the graphical user interface client used for remote access to the SAP central server in a company network. It allows an SAP user to access functionality in SAP applications such as SAP ERP, SAP Business Suite (SAP CRM, SAP SCM, SAP PLM, and others), and SAP Business Intelligence.
  • A Denial of service vulnerability in SAP NetWeaver Dynpro Engine (CVSS Base Score: 7.5). Update is available in SAP Security Note 2405918. An attacker can use a Denial of service vulnerability to terminate a process of the vulnerable component. While the process is down, nobody can use the service, which disrupts business processes, causes system downtime and, as a result, damages business reputation.
  • A Denial of service vulnerability in SAP Visual Composer (CVSS Base Score: 7.5). Update is available in SAP Security Note 2399804. It carries the same risks of service outage and business disruption as the Dynpro Engine issue.
  • A Cross-Site Scripting vulnerability in SAP Enterprise Portal (CVSS Base Score: 6.1). Update is available in SAP Security Note 2408100. An attacker can use a Cross-site scripting vulnerability to inject a malicious script into a page.
  • A Denial of service vulnerability in SAP Java Script Engine (CVSS Base Score: 2.7). Update is available in SAP Security Note 2406841. Again, a terminated process means the service is unavailable, affecting business processes and, ultimately, business reputation.
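As an aside, the standard mitigation for the Cross-Site Scripting class of flaw listed above is output encoding: untrusted input must be escaped before it is rendered into a page. A minimal, generic Python sketch (illustrative only, not SAP's actual patch):

```python
# Minimal output-encoding sketch for the XSS class of vulnerability:
# escape untrusted input before rendering it into HTML.
from html import escape

def render_greeting(user_supplied_name: str) -> str:
    # Escaping turns markup characters into inert entities, so an injected
    # <script> tag is displayed as text instead of being executed.
    return "<p>Hello, {}!</p>".format(escape(user_supplied_name))

print(render_greeting('<script>alert("xss")</script>'))
```

The injected `<script>` payload comes out as harmless `&lt;script&gt;` text; this is exactly the property a patched Enterprise Portal page needs to guarantee for every reflected parameter.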

SAP HANA Vulnerabilities closed by SAP Security Notes March 2017

SAP HANA was first introduced in 2010 and is marketed as a platform converging application and database capabilities, with in-memory technologies that speed up performance, analytics, and other processes.

SAP HANA security is always in the spotlight; this year, however, SAP HANA security issues have been attracting special attention from researchers. The current security update contains 5 SAP Security Notes addressing the flagship platform. The most dangerous of them are the following:

  • 2424173: SAP HANA User Self-Service has a Missing Authorization Check vulnerability (CVSS Base Score: 9.8). An attacker can use a Missing authorization check vulnerability to access the service without authorization and use restricted service functionality. This can lead to information disclosure, privilege escalation, and other attacks. Install this SAP Security Note to prevent the risks.
  • 2429069: SAP HANA has a Session Fixation vulnerability (CVSS Base Score: 8.8). An authenticated attacker can predict valid session IDs for concurrent users logged on to the system. Install this SAP Security Note to prevent the risks.
  • 2424120: SAP HANA has an Information Disclosure vulnerability (CVSS Base Score: 4.9). An attacker can use an Information disclosure vulnerability to reveal additional information (system data, debugging information, etc.), which helps them learn about the system and plan further attacks. Install this SAP Security Note to prevent the risks.
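For the session fixation class of issue in Note 2429069, the textbook defenses are session IDs drawn from a cryptographically secure random source and regeneration of the ID at every privilege change. A generic sketch of both defenses (illustrative, not HANA code; the session store is a stand-in dict):

```python
# Sketch of the usual defenses against session fixation / predictable
# session IDs: generate IDs from a CSPRNG, and issue a fresh ID on login
# so any attacker-planted (fixated) ID becomes worthless.
import secrets

SESSIONS = {}  # session_id -> username (stand-in for a real session store)

def new_session(user: str) -> str:
    sid = secrets.token_hex(32)  # 256 bits of randomness; not guessable
    SESSIONS[sid] = user
    return sid

def regenerate_on_login(old_sid: str, user: str) -> str:
    # Discard the pre-authentication ID so a fixated ID cannot be reused.
    SESSIONS.pop(old_sid, None)
    return new_session(user)

anon = new_session("anonymous")
authed = regenerate_on_login(anon, "alice")
assert anon not in SESSIONS and SESSIONS[authed] == "alice"
```

An attacker who managed to plant or predict the pre-login ID gains nothing, because that ID is invalidated the moment the user authenticates.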

“The risk of these SAP HANA vulnerabilities is critical indeed. However, the likelihood of mass-exploitation is low, as SAP HANA User Self-Service is enabled on only 13% of internet-exposed SAP systems (according to a custom scan). There are numerous other services in SAP HANA that are not enabled by default but are susceptible to critical issues. For example, last month we helped SAP close a vulnerability with the same risk of remote authentication bypass, but in another web service dubbed Sinopia.”

– commented Alexander Polyakov, CTO at ERPScan.

The aforementioned multiple vulnerabilities affecting Sinopia can be exploited together to crash applications on SAP HANA XS remotely without authentication.

The number of security patches addressing SAP HANA totals 51 (of note, one Note can close one or more security issues).

SAP HANA security notes

Advisories for these SAP vulnerabilities, including technical details, will be available in 3 months. Exploits for the most critical vulnerabilities are already available in ERPScan Security Monitoring Suite.

SAP customers as well as companies providing SAP Security Audit, SAP Vulnerability Assessment, or SAP Penetration Testing services should be well-informed about the latest SAP Security news. Stay tuned for next month’s SAP Cyber Threat Intelligence report.

GRC: Going Beyond the Acronym Fri, 10 Mar 2017 13:13:38 -0600 It’s the age of the three-letter acronym, from LOL to IRL. On the business front, every firm has some form of alphabet soup that shapes decisions about information security programs. Between data privacy laws, financial regulations, calls for a healthcare-focused cybersecurity framework, and regular updates to the Payment Card Industry Data Security Standard (PCI-DSS), the need for a well-established information security program is clear as day.

As enterprises exercise their appetite for risk, their ability to assure the board of directors (and inherently the shareholders) that the appropriate controls are in place to protect their critical information and assets is crucial. The days of setting, forgetting, and burying our heads in the proverbial sand are long past. Accountable parties are under ever-increasing pressure to validate the effectiveness of the programs they have in place and provide actionable assurances that due care was taken.

What are you talking about?

We understand the motivations, the want, and the need, yet the reality of the situation doesn’t always align with what we would expect. Cybercrime is not just the elephant in the room; it’s the elephant in the room that’s been tagged with a Banksy-esque portrayal of modern gangsters kicking back and laughing. Criminal organizations are swelling like a tidal wave crashing down on the corporate landscape, yet many businesses still operate under a reactive, rather than proactive, methodology when it comes to their Information Technology/Information Security (IT/IS) GRC needs. Perhaps this is because we have yet to see a nation-wide regulation that mandates controls across multiple business verticals instead of specific industry-related specifications.

Now we combine that reactive approach to traditional spreadsheet-based GRC (governance, risk and compliance) with understaffed, over-used personnel. Too often these employees are slammed with audits out of nowhere—from business leaders who trickle down high-level policies such as “We’re gonna be ISO certified”—without truly understanding the workloads they just tossed down the org-chart. The elephant grows. How can one or two people in an enterprise tackle the elephant in the room and drag it outside where it belongs?

Give me a little hope

It is likely that the challenges and pain derived from GRC activities will continue to grow, which will further motivate market trends we are already seeing. In the IT/IS GRC market segment, my clients lack the time to keep up with the rapidly changing onslaught of privacy and data security regulations. As I hinted above, it is good that governments are impressing the need to protect the private information entrusted to businesses by their customers. However, those businesses will continue to be burdened by this trend, either through time sink or fines.

In addition to the external changes shaping the internal governance policies that businesses put into place, the IT/IS systems within enterprise architectures are in a state of regular flux. It is rare that a system is in a static state for any significant period, and with every change, the same question must be asked: “Is the current machine state compliant?” Answering this question becomes its own burden, without the correct tools in place, and any manual tracking in a spreadsheet becomes impossible at a certain point.
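The "is the current machine state compliant?" question lends itself to automation: snapshot the configuration, diff it against an approved baseline, and re-evaluate on every change. A toy sketch of that idea (the settings and their values are invented examples, not any particular standard's controls):

```python
# Toy compliance-drift check: diff a configuration snapshot against an
# approved baseline so every system change can be re-evaluated automatically
# instead of being tracked by hand in a spreadsheet.

BASELINE = {
    "password_min_length": 12,
    "tls_min_version": "1.2",
    "guest_account": "disabled",
}

def compliance_gaps(current: dict) -> dict:
    """Return every setting that drifted, as {key: (expected, found)}."""
    return {key: (expected, current.get(key))
            for key, expected in BASELINE.items()
            if current.get(key) != expected}

snapshot = {"password_min_length": 8, "tls_min_version": "1.2", "guest_account": "disabled"}
print(compliance_gaps(snapshot))  # flags password_min_length: expected 12, found 8
```

A real GRC tool wraps exactly this loop in collectors, scheduling, and reporting, but the core question it answers on every change is the same diff.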

There’s light at the end of the tunnel

Thankfully, we are living in a time when the options available for GRC tools are growing. The market was traditionally dominated by large-scale, and expensive, systems. We are now seeing disruptive companies entering and offering reasonable alternatives to the status quo. However, as with any tool selection, there is a fair amount of vendor fatigue that can come from evaluation. It is best to have a short list of what you want to get out of this investment. When navigating the path of GRC vendor courtship, I advise checking off as many of the following boxes as possible:

  • Affordability
    • Ask yourself, “is this affordable?” Not everyone can afford a high-end global enterprise-class implementation, but most organizations will benefit from a tool.
  • Mitigation, Remediation, and Delegation
    • Does the tool support tracking of remediation efforts, risk analysis processes, and an ability to seamlessly delegate accountability to system owners for remediation and mitigation of identified risks?
  • Streamlined Vendor Risk Management
    • Can this tool help reduce the probability of a Target-like breach by giving you the ability to semi-automate the evaluation of a third-party vendor’s risk profile?
  • Policy Libraries
    • Does the tool support dynamic updates of policies within a library to ease the burden of manually tracking changes to governing regulations, standards, and other best practice publications?
  • Policy Mapping
    • Can internal policies be easily mapped or overlaid with regulating policies or standards such as HIPAA, COBIT, ISO, etc.?
  • Views
    • Can multiple views be established for critical visibility to information that is reasonably valuable for multiple business organizations within your enterprise?

Collaboration is the key. The end goal of any tool is to streamline the day-to-day processes of GRC activities, support efforts between departments, and offer a central repository for documentation that validates compliance with both internal policies and external regulatory governance. An effective GRC discipline requires company-wide buy-in. The easier you make it for your colleagues, the easier you make it for yourself. That way, when the time comes to jump into the next audit wave, you can prove once and for all that GRC isn’t just another three-letter word.

Why Is Digital Property Monitored Less than Physical Property? Thu, 09 Mar 2017 08:15:00 -0600 According to a recent report published by the British Retail Consortium (BRC), retail crime in the UK has soared to £613m, where "the majority of fraud is committed online." Given this information, one would assume that retailers were investing more money in protecting themselves from cyber-crime than in preventing theft from their physical stores. However, this doesn't appear to be the case. Radio frequency ID tags, used for stock control in many high-street shops, can be used to monitor and alert staff members about specific product sales, and of course, theft.

According to a post on Quora, an RFID system can cost tens of thousands of dollars to implement, while a sophisticated suite of auditing solutions would cost considerably less and be far easier to deploy. Despite this, many retailers are still not able to determine who has access to what data, and when. In fact, many are not even able to identify where their critical data is located.

Let's face it, we live in a world where everything is monitored. There are an estimated four million CCTV cameras in the UK. Should the police wish to know what a specific person was doing at a given place and time, the chances are, they can. They can use CCTV to identify vehicles and monitor their speed. They can use mobile phone triangulation, and even Oyster cards, to identify an individual's location. Store loyalty cards and credit card transactions can be used to monitor people’s spending habits. The electoral roll has a record of every place a registered voter has ever lived. TiVo and Skyplus monitor your viewing habits, and will offer suggestions about programmes you may wish to record. Organisations will monitor your call "for training purposes." HTTP cookies are used by websites to personalize a user's visit based on their preferences. Google has eyes in the skies and it's likely that Government agencies are using technology that is even more sophisticated. For all we know, our phones are being tapped by aliens seeking to learn about our ways before turning the planet into an inter-galactic zoo. That's probably not the case, but my point is, despite such pervasive auditing of our personal lives, our personal data remains as elusive as ever.

Much of the auditing that takes place in modern society is politically or financially motivated. However, there are many reasons why organisations are failing to monitor and protect our personal information. According to a report by Symantec, people are the main cause of data leaks. Perhaps organisations consider cyber-security too technical and costly to implement, while dismissing the notion that incompetent or potentially malicious staff members are the key threat to their systems. If that is the case, it is the managers themselves who require training.

For those unfamiliar with the General Data Protection Regulation (GDPR), it is an EU regulation which will come into effect on May 25, 2018 and sets out to change the way organisations handle personal information. On top of that, the UK Government has announced it will invest £1.9 billion in cyber security over the next five years. Such schemes and regulations will not only prompt organisations to step up their game, but will also provide technical assistance along the way. Compliance may incur additional costs, but there are a variety of inexpensive IT auditing solutions on the market that can monitor system changes, permissions and file-based events, as well as provide real-time reporting.
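The kind of change monitoring those auditing solutions automate boils down to snapshotting and diffing system state. A toy Python illustration (the file name and report shape are invented; real products add collectors, agents, and alerting on top of this core loop):

```python
# Toy file-based change audit: snapshot permission bits and modification
# times, then diff a later snapshot to surface what changed.
import os
import stat
import tempfile

def snapshot(paths):
    """Record the permission string and mtime for each path."""
    state = {}
    for p in paths:
        st = os.stat(p)
        state[p] = (stat.filemode(st.st_mode), st.st_mtime)
    return state

def diff(before, after):
    """Report every path whose permissions or timestamp changed."""
    return {p: {"before": before[p], "after": after[p]}
            for p in before
            if p in after and before[p] != after[p]}

# Usage sketch: create a file, snapshot, loosen its permissions, diff.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "payroll.csv")   # hypothetical sensitive file
    open(target, "w").close()
    os.chmod(target, 0o600)
    before = snapshot([target])
    os.chmod(target, 0o777)                   # over-permissive change worth flagging
    after = snapshot([target])
    print(diff(before, after))                # shows the mode change on payroll.csv
```

Polling snapshots like this is the simplest approach; production tools typically hook OS-level file events instead, but the before/after comparison they report is the same.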

If we were to place as much emphasis on monitoring events that take place on our IT systems as we do monitoring spending habits and shoplifting, many of the data breaches we hear about today could be largely mitigated.

About the author: Ajit Singh is a Marketing Manager for IT auditing, security and compliance vendor, Lepide.

Exchanges in History: What Third Party Cyber Risk Management (TPCRM) Programs Can Learn from the Past Thu, 09 Mar 2017 05:40:00 -0600 Modern risk exchange concepts (one party’s information shared with many, as with credit ratings and medical records) trace their roots all the way back to ancient Roman censuses.  

Starting in 485 BCE, the Roman Republic conducted a census every five years to identify voters, taxpayers, and members of the army. When completed, census information was transcribed onto wax tablets and stored in designated temples. Results were shared amongst regional government officials. The information was then used to make important financial and military decisions. It was one of the first ways that data was gathered, synthesized, stored, and disseminated to the public.

Since then, governments, organizations and businesses have relied on information gathered in exchanges to understand risks and make important decisions. Almost always, large scale economic and societal transformations followed.

Let’s take a look at how exchanges have transformed two industries and how an exchange is transforming third party cyber risk management (TPCRM) programs.

CarFax

Buying cars before 1984 was riskier than it is today. Dishonest vehicle owners and shady dealerships, intent on making an extra buck, would reduce a car’s mileage by disassembling a vehicle’s dashboard and rolling back the odometer. Instantly, the car was more valuable. With the exception of smudges, scratches, or misaligned odometer numbers within a dashboard, it was nearly impossible for buyers to determine mileage fraud. 

Enter visionary Ewin Barnett III. In 1984, he had the revolutionary idea to combat odometer fraud by faxing comprehensive vehicle reports to car dealerships. These reports contained the vehicle's history, including mileage and accident reports. From this concept he developed CarFax. A car report could be instantly faxed to buyers or sellers. His concept drastically reduced the risks associated with buying a used vehicle and revolutionized the way consumers shop for cars.

Today, CarFax keeps track of millions of VINs. At the click of a button, it provides instant access to a vehicle’s entire history. With over one million cars on the road with rolled-back odometers in 2017, buyers continue to depend on CarFax’s exchange to make smart purchases.

Consumer Reporting Agencies

Before Consumer Reporting Agencies started using computers to calculate, store, and share credit scores, determining an individual’s creditworthiness was tantamount to espionage.  

The precursors to modern Consumer Reporting Agencies have their roots in the early 1800s, when groups of merchants decided to share lists of customers that didn’t pay their debts. After the Panic of 1837, these merchant groups established Credit Bureaus and by the mid 1800s began publishing credit ratings on individuals in quarterly or biannual reports.

One of the largest credit bureaus in the US at this time was The Mercantile Agency. Headquartered in New York City, they hired about 10,000 agents to gather information on the three C’s of credit: Character, Capital, and Capacity.

To gather information on a person’s “Character,” agents would interview any known associates: everyone from coworkers to bellmen. Common questions included subjects such as drinking habits, church attendance, extramarital affairs, and personal hygiene. Lies and exaggerations were common. A person’s entire reputation could be destroyed by rumors. 

To a lesser degree, local credit bureaus used similar methods to gather information on individuals until the Fair Credit Reporting Act of 1970. This law stated that credit bureaus could no longer base creditworthiness on lifestyle information. “Character” was replaced with “Credit Reputation.” Thus, credit bureaus had to adjust their methods.

It was around that time that Experian, Equifax, and TransUnion set themselves apart by using computers to calculate “credit reputation” based on more reliable data.

This move transformed the entire credit reporting industry. Lenders had easy access to accurate credit scores. Credit bureaus got information at a fraction of the cost of hiring thousands of agents to gather data on applicants. Individuals had quicker response rates to credit card applications, credit card companies had more customers. They had created a much needed nationwide exchange of individual credit information.

Today, Experian, Equifax, and TransUnion are known as the “Big Three” credit reporting agencies. Over the past 30 years, banks, lenders, and consumers have continued to rely on their exchange.

A Third Party Cyber Risk Exchange

Since the mid-1980s, industries have recognized the risks of outsourcing functions to third parties. They could grow their businesses faster, but also had to be wary of the risks that accompanied this new agility.

One of the earliest mentions of third party electronic data risk is in the OCC’s Banking Circular 187. Written in January of 1985, it outlined some of the risks associated with outsourcing data processing services to third parties. Since then, laws have increasingly required regulators to ensure that businesses manage third party cyber risks.

Fast forward 32 years.

Outsourcing has become the backbone of many organizations. And third parties have become the lifeblood of outsourcing. The average Fortune 500 company has over 20,000 vendors in 2017, as companies try to improve agility to stay ahead of market disruptors.

Yet, this agility has inherent risks. Cyber criminals have realized that often the easiest path to access a business’ confidential information is to ride in on trusted connections of weaker third parties. Regulators have responded by requiring businesses to mitigate and manage cyber risks.

It’s difficult enough for organizations to manage their own cyber risk. Now they have to also be concerned with their third party ecosystem. 

Most organizations rely on self-assessments sent in the form of spreadsheets to essentially ask their third parties, “How good are your cyber security controls? If you are breached, can you provide assurance it will not affect my company?” This method is expensive, time consuming and even worse, doesn’t work.  

According to PwC’s 2016 Global State of Information Security report, third-party contractors are the biggest source of security incidents outside of a company’s employees. So how do we solve this daunting challenge?

An exchange that enables cyber risk assessment data to be shared like credit reports or CarFax reports. It’s a simple idea with massive impact.  

This would allow organizations of all sizes to share assessments at the click of a button - driving massive efficiency while simultaneously driving down third-party cyber risk.  

It's important that any kind of TPCRM program benefits both sides of the equation by providing automation and workflow to remove the hassles of keeping track of third party risk assessments with phone calls, emails, and shared spreadsheets.

While it's difficult to imagine a world where credit reporting agencies didn’t exist, there was a time when the notion of collecting financial credit data on every company that organizations provide credit to seemed like an insurmountable challenge. It's a similar challenge that faces third-party cyber risk assessments today.  

Organizations should be focused on managing third-party risk, and spend less time collecting data.


Exchanges revolutionized the decision making process in buying cars and determining credit scores many decades ago. Since Roman times, the concept has remained basically the same:

  1. Gather Information
  2. Store Information
  3. Share Information
  4. Base decisions on Information

Throughout history, whenever organizations, governments, or industries have used an exchange to share information, great transformation has followed. In 2017, it is time for an exchange to transform third-party cyber risk management programs.

Neutrino Bot Gets Protective Loader Tue, 07 Mar 2017 05:40:42 -0600 A recently observed variant of the multi-purpose Neutrino Bot is using a protective, obfuscated loader that is an integral part of the full package, Malwarebytes Labs researchers report.

Also known as Kasidet, Neutrino Bot is different from the Neutrino exploit kit (EK), although it has been distributed by the latter. In January 2017, security researchers detailed a Neutrino Bot campaign where the malicious payload was being distributed via spam emails, and the same variant appears to have received the protective loader.

Not only does the malware feature multiple layers that hide the actual core, but there are also virtual machine detections packed inside it, researchers now explain. The threat was being distributed via a malvertising campaign in the United States leveraging the Neutrino EK. The infection chain includes checks for virtualization, network traffic capture and antivirus software, performed by heavily obfuscated JavaScript code in the pre-landing pages.

After the initial check has been passed, a specially crafted Flash file that contains exploits for Internet Explorer and Flash Player is launched. Finally, an RC4 encoded payload is downloaded and executed via wscript.exe to bypass proxies. The security researchers also observed that the sample deletes itself if it determines it is being deployed in a virtual machine or sandbox.
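RC4 is a popular choice for encoding EK payloads precisely because it is tiny: a few lines suffice to decode on the victim side, and the same routine lets an analyst recover the payload once the key is found. A textbook RC4 sketch (the key and data below are made-up examples, not the actual Neutrino material):

```python
# Textbook RC4 stream cipher: the same function both encrypts and decrypts,
# since it simply XORs the data with a key-derived keystream.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state array using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

blob = rc4(b"examplekey", b"MZ...payload...")
assert rc4(b"examplekey", blob) == b"MZ...payload..."  # symmetric: decrypts too
```

Because encryption and decryption are the same XOR operation, recovering the hardcoded key from the downloader script is usually all an analyst needs to unpack such a payload.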

On the infected machines, the malware adds and modifies registry keys and leverages the Task Scheduler to achieve persistence. It also modifies keys to remain hidden in the system and adds itself into the firewall’s whitelist. Moreover, the path to the malware is added to Windows Defender’s exclusions list.

However, the malicious core isn’t loaded until the full installation process has completed successfully, researchers say. After completing the process, the malware sends a request to the command and control (C&C) server, which responds with commands to be executed. All requests and responses are base64 encoded.
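The base64 layer is worth a concrete look: it hides commands from casual inspection of the traffic, but it is an encoding, not encryption, so it is trivially reversible for an analyst. A sketch with an invented command string:

```python
# Sketch of a base64-encoded C&C exchange: opaque-looking on the wire,
# but trivially reversible because base64 is an encoding, not a cipher.
import base64

def encode_beacon(msg: str) -> str:
    return base64.b64encode(msg.encode()).decode()

def decode_beacon(blob: str) -> str:
    return base64.b64decode(blob).decode()

wire = encode_beacon("cmd=update&bot_id=1234")  # hypothetical command format
print(wire)                 # what a network capture would show
print(decode_beacon(wire))  # an analyst recovers the command instantly
```

This is why base64-encoded C&C traffic is considered obfuscation rather than protection: any intercepted request or response decodes with one function call.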

“The loader code shows that it is an integral part of the full Neutrino Bot package – not yet another layer added by an independent crypter. Both, the payload and the loader are written in C++, use similar functions and contain overlapping strings. They both also have very close compilation timestamps: payload: 2017-02-16 17:15:43, loader: 2017-02-16 17:15:52,” Malwarebytes Labs reports.

To ensure it isn’t executed more than once, the loader creates a mutex with a name that is hardcoded in the binary: 1ViUVZZXQxx. While the main purpose of the loader is to prevent the malware from being executed in a controlled environment, it does this differently than other malware: it deploys a thread that runs the checks in a never ending loop.
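The single-instance trick can be approximated cross-platform: the bot uses a Windows named mutex, but an exclusive lock file gives the same effect in this hedged Python sketch (the lock-file name reuses the mutex string purely for illustration):

```python
import os
import sys
import tempfile

# The bot creates a Windows named mutex ("1ViUVZZXQxx"); a lock file created
# with O_EXCL behaves similarly on any platform.
LOCK = os.path.join(tempfile.gettempdir(), "1ViUVZZXQxx.lock")

def acquire_instance_lock(path=LOCK):
    """Return a file descriptor if we are the first instance, else None."""
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None  # another instance already holds the "mutex"

fd = acquire_instance_lock()
if fd is None:
    sys.exit(0)  # mimic the bot terminating a second copy of itself
```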

This means that the malware continuously checks if blacklisted processes are being deployed, and it immediately terminates execution if that happens. The loader enumerates through the list of the running processes, searches blacklisted modules within the current process, checks if the process is under the debugger, uses time measurement to detect single-stepping, checks blacklisted devices to detect virtual machines, and also searches and hides blacklisted windows by their classes.

The operations related to bot installation (such as adding a task to the Windows Scheduler, adding exclusions to the firewall, and more) are performed in a different thread, researchers say. In the end, the loader unpacks the final payload and runs it with the help of the Run PE method, but not before creating another instance of its own.

“Neutrino Bot has been on the market for a few years. It is rich in features but its internal structure was never impressive. This time also, the malware authors did not make any significant improvements to the main bot’s structure. However, they added one more protection layer which is very scrupulous in its task of fingerprinting the environment and not allowing the bot to be discovered,” the researchers conclude.

Copyright 2010 Respective Author at Infosec Island]]>
Ask a Security Professional: WordPress Database Security Part Two — Best Practices Thu, 02 Mar 2017 08:30:00 -0600 In Part One of our #AskSecPro series on WordPress Database Security, we learned about the anatomy of WordPress. Now that we have a firm understanding of the role the WordPress MySQL database plays in a WordPress installation, we can take a look at the various ways an adversary can exploit the mechanisms involved. We’ll also explore some of the ways to defend your database against compromise.

For the purpose of this article, I’ll focus on some of the things that most WordPress website admins have complete control over but probably aren’t configuring properly. Most of us are guilty of poor security practices at one time or another, often in ways we weren’t even aware of. In my best attempt to make this as dramatic as possible for a WordPress database security best practices article, I’m going to say that it’s time to start our recovery toward being the best WordPress admins possible. Consider this your twelve (or eight) step program to improve WordPress database security.

1. Keep WordPress Updated

You’ve heard it a thousand times, but here it is again: always update your WordPress to the latest version. This is one of the most important steps you can take. To reiterate the significance of this step, over one million outdated WordPress websites were defaced this month because they were running versions 4.7 and 4.7.1. Keep WordPress updated. If you’ve turned off automatic updates, turn them back on!

It’s super simple: just download and open your WordPress installation’s wp-config.php file in your favorite text editor and add this line to it:

define( 'WP_AUTO_UPDATE_CORE', true );

2. Keep Backups of Your Database

Backups are another song that’s been sung more times than Sweet Caroline at a Red Sox game. We’re seeing more and more people adopt backup solutions, but I fear that databases are too often overlooked when considering backup solutions. The first thing you should do is have a conversation with your hosting provider to see what backup services, if any, are offered. If the available backup solutions do not include database backups, there are many WordPress-specific solutions that do. In addition to simply performing backups, you need to make sure that you’re performing integrity checks on those backups. If you’ve ever heard me speak at a WordCamp, you’ve probably heard the story about my dashcam and the Loop 101 UFO — the moral of the story was to perform integrity checks on storage.

If you’re not sure what to ask about, here’s a handy checklist:

  • You want backups performed on a daily basis – at a minimum.
  • You want your backups to include your files AND your databases.
  • You want to keep at least 30 days of data, but preferably as close to 60 days as you can reasonably get.
  • You want an easy method for ad hoc backups and ad hoc recovery.
  • You want to be able to spot-check the integrity of the backups.
  • You want the backups to be stored on a different server from your web server.
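As a sketch of what a host-side database backup with a 30-day retention check might look like, here is a minimal Python example. The mysqldump flags are standard, but the backup path, database name, and user are placeholders, not values from this article:

```python
import shlex
from datetime import date

def build_dump_cmd(db, user, host="localhost"):
    """Build a daily mysqldump command; paths and names are placeholders."""
    out = f"/backups/{db}-{date.today():%Y%m%d}.sql"
    return ["mysqldump", "--single-transaction", "-h", host, "-u", user, db,
            "--result-file", out]

def expired(backup_dates, today, keep_days=30):
    """Return the backup dates that fall outside the retention window."""
    return [d for d in backup_dates if (today - d).days > keep_days]

print(shlex.join(build_dump_cmd("wp_db", "backup_user")))
```

In practice you would schedule this with cron and copy the result off the web server, per the checklist above.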

3. Don’t Use the Same Database for Multiple Websites

While it is technically possible to run multiple applications, even separate WordPress installations, from the same database — don’t! There are numerous reasons you should never use the same database for multiple applications, not the least of which is the grossly ineffective security barrier it leaves between them. A vulnerability in one application could lead to the disclosure of the entire database. Remember that since no security methods are completely fool-proof, you should always be mindful of limiting the extent of any damage that could follow a compromise. One important part of this is effective barriers between applications and trust levels. These barriers serve to limit the damage, much like the watertight bulkheads of a naval vessel that help limit flooding between compartments. Think of the Titanic: you want the most effective bulkheads possible to keep your ship afloat.

4. Proper Permissions on Config File

Your wp-config.php file contains sensitive information, including your unique hash salts as well as plaintext credentials for accessing your database with administrator privileges. Keep this file secure by ensuring the permissions on wp-config.php are set to 0600 (-rw-------). This permission setting means that the owner can read and write to this file, but all others have no access. If you’re not familiar with setting file permissions, work with your hosting provider to accomplish this change.
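As a quick illustration, checking for the 0600 permission programmatically can be sketched in Python, using a temporary file as a stand-in for the real wp-config.php:

```python
import os
import stat
import tempfile

def is_owner_only(path):
    """True if the file mode is exactly 0600 (owner read/write only)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600

# Temp file standing in for wp-config.php
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
print(is_owner_only(path))   # True after chmod 0600
```

The same check via the shell is simply `stat -c %a wp-config.php`, which should print `600`.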

5. Disable Remote Database Connections

Some hosting providers allow for remote connections to be made to SQL databases in their network. For the purposes of WordPress, this is not only unnecessary, it introduces additional risk to the database by allowing it to listen to requests from outside entities. Just like our parents told us not to talk to strangers growing up, we need to tell our database not to talk to untrusted sources. In most cases, your hosting provider can disable this option on your behalf.

6. Update Your Database Password

Perhaps the most often overlooked passwords on any password update day are database passwords. You never use them yourself, so you forget they exist. Yet your WordPress website uses these credentials every day. When updating your database password, make sure you’re also updating the connection string in your wp-config.php file to ensure WordPress is still able to connect to the database, avoiding downtime. Your hosting provider should be able to help you find out how to update your database passwords.

/** MySQL database password */

define('DB_PASSWORD', 'password_here');
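When rotating that credential, a strong randomly generated value is worth using. Here is a minimal Python sketch using the standard `secrets` module; the 24-character length and letters-plus-digits alphabet are illustrative choices (add symbols if your database and wp-config escaping allow them):

```python
import secrets
import string

def generate_db_password(length=24):
    """Generate a random password from letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_db_password())
```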

7. Database User Access

While this is probably not the case for most people, you should go ahead and double-check that no additional database users have access to your WordPress database. I’ve seen a few cases where an unexpected database user was executing arbitrary SQL against a WordPress database, and it was particularly hard to track down because we rarely consider the possibility of another user. Double-check your database users and their privileges with your hosting provider to eliminate any stray users.
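If you have direct database access, the audit itself boils down to a couple of standard MySQL statements. This small Python sketch just builds them; the user name is a placeholder:

```python
# Standard MySQL statements for auditing accounts and their privileges.
LIST_USERS = "SELECT user, host FROM mysql.user;"

def show_grants(user, host="localhost"):
    """Build the SHOW GRANTS statement for a given account."""
    return f"SHOW GRANTS FOR '{user}'@'{host}';"

print(LIST_USERS)
print(show_grants("wp_user"))
```

Any account you don't recognize in the output is worth raising with your hosting provider.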

8. Website Scanning

Scanning your website for malware and vulnerabilities plays a significant role in your overall security posture. While there aren’t currently methods for directly scanning your database contents for issues, you are able to scan the content the database feeds to your live website through an external scanner for both malware and vulnerabilities. Coupled with a robust file scanning solution, your defenses are considerably enhanced.

By following these WordPress database security best practices, you’ve become a better WordPress admin and a more effective guardian of the data in your website. Even in the worst of scenarios, the damage will be significantly limited by these precautions and recovery will be that much less stressful.

Have a question for our security professionals or a topic that you would like us to write about? Message @SiteLock and use the #AskSecPro tag!

Copyright 2010 Respective Author at Infosec Island]]>
Security Policies Matter for Disaster Recovery Thu, 02 Mar 2017 06:43:44 -0600 In the past year there have been several high-profile outage incidents affecting a wide range of organizations. In the first month of 2017 alone, we saw both Delta Airlines and United Airlines cancel flights due to major IT issues, while the internet streaming services of both Comcast and Fox Sports suffered outages during Super Bowl 51, leaving some fans unable to witness the nail-biting ending.

These follow on from similar incidents last year, such as the outage that affected Amazon Web Services after storms hit Sydney, Australia in June, when services in the region were down for around 10 hours, disrupting everything from banking to pizza deliveries. And of course, cyberattacks also took their toll: we saw the world’s biggest ever DDoS attack target the company which controls much of the internet’s domain name system.

Despite these high-profile incidents, many businesses are still stuck in the mind-set of ‘it won’t happen to me’ and are ill-prepared for IT failures. And with IT teams facing a broad range of unpredictable challenges while maintaining ‘business as usual’ operations, this mind-set places organizations at serious risk of a damaging, costly outage. It is therefore more important than ever to have plans for responding and recovering as quickly as possible when a serious incident strikes. As the saying often attributed to Franz Kafka goes, it’s better to have and not need than to need and not have. In short, effective disaster recovery is a critical component of a business’ overall cybersecurity posture.

Most large organizations do have a contingency plan in place in case their primary site is hit by a catastrophic outage – which, remember, could just as easily be a physical or environmental problem like a fire or flood as a cyberattack. This involves having a disaster recovery (DR) site in another city or even another country, which replicates all the infrastructure that is used at the primary site. However, a key piece of this infrastructure - network security - is often overlooked; it must also be replicated on the DR site in order for the applications to function yet remain secure when the DR site is activated.

Building security into DR

Replicating the security infrastructure, however, can be more of a challenge than it may initially appear. The network at the primary site will contain routers, firewalls, servers and so on, and the DR site may be set up in exactly the same way. But the problem is, just installing the same equipment in the same configuration isn’t enough. All of those devices have security policies within them and these policies change on a daily or even an hourly basis, every time applications and users are added, amended or removed.

As such, whenever a policy change is made in the primary site it is critical to ensure that an equivalent change is made on the DR site. This requires synchronization between the two sites’ security policies to automatically replicate policies every time they change. How that synchronization is implemented will depend on the exact equipment and setup the organization is using, and it’s not always easy to do.

The most straightforward scenario is when the same equipment from the same vendor is deployed at each site – and that vendor offers a unified firewall management system. This means the same policies can be simultaneously installed on security devices on both sites; IT teams only have to make the change once in the firewall management system, and it’ll push the change out to the security devices in each site.

Overcoming language barriers

More complex scenarios occur when organizations don’t have such a firewall management system – or when they use equipment from different vendors, or different models from the same vendor, at each site. In this setup the policies at the two sites will not be truly identical, so synchronizing them will need to be done in some other way. And if you rely only on human processes to synchronize the two sites, the policies will eventually diverge. An automated system is therefore the right approach to maintaining synchronization.
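A minimal sketch of such an automated divergence check, with firewall policies modeled as plain dictionaries of example rules (rule names and addresses are invented for illustration):

```python
def policy_diff(primary, dr):
    """Compare two rule sets; return rules missing from DR, extra in DR,
    and rules present in both but defined differently."""
    missing = set(primary) - set(dr)
    extra = set(dr) - set(primary)
    changed = {r for r in set(primary) & set(dr) if primary[r] != dr[r]}
    return missing, extra, changed

primary = {"allow-web": ("10.0.0.5", 443), "allow-db": ("10.0.0.9", 3306)}
dr      = {"allow-web": ("10.0.0.5", 443)}

print(policy_diff(primary, dr))
```

A real synchronization tool would run a check like this on every policy change and push the missing or changed rules to the DR devices.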

The last thing to consider is the IP addresses in use at the primary and secondary sites. Are they identical or, as is more likely, are the IP addresses on the main site mapped to their logical counterparts in the DR site? In this case, any rules that are installed at the secondary site are going to look slightly different to the ones at the primary site – and again, you will need an automation solution to carry out the rule conversion.
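The rule conversion described above can be sketched as a simple address-mapping step; all IP addresses here are invented examples:

```python
# Mapping of primary-site addresses to their DR counterparts (example data).
IP_MAP = {"10.0.0.5": "10.1.0.5", "10.0.0.9": "10.1.0.9"}

def translate_rule(rule, mapping=IP_MAP):
    """Rewrite a (source, destination, port) rule for the DR site;
    addresses without a mapping pass through unchanged."""
    src, dst, port = rule
    return (mapping.get(src, src), mapping.get(dst, dst), port)

print(translate_rule(("10.0.0.5", "10.0.0.9", 3306)))
```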

It is essential to consider all of these aspects of security policy management when building a DR site. If you neglect them, then when disaster strikes and you need to switch operations to your secondary site, your systems and applications won’t work as you need them to.

As we’ve seen with recent serious outages, prevention alone is no longer enough to ensure robust resilience against unplanned incidents and cyber threats. Organizations also need to ensure that their incident response is as slick and unified as possible, so that when (not if) the worst happens, they can get critical systems back up and running quickly and cut disruption to a bare minimum. And having your security policies configured and orchestrated across the entire organization, in both primary and DR sites, is a critical facet of this.

Copyright 2010 Respective Author at Infosec Island]]>
Ask a Security Professional: WordPress Database Security Part One — Anatomy of WordPress Wed, 01 Mar 2017 13:07:00 -0600 For most people the year is still just getting started, but for some website owners the year has already packed quite a punch in the form of website attacks. This month hackers exploiting a vulnerability in the WordPress REST API successfully defaced over a million websites in what has become one of the largest website defacement campaigns to date. The attacks injected content that overwrote existing posts on WordPress websites running versions 4.7 and 4.7.1, leaving website owners with an immeasurable number of “Hacked by” posts across the droves of impacted websites.

Many website owners who have unfortunately found themselves in the proverbial trenches of a digital battlefront – some of whom had at least some security measures in place – are facing a difficult data recovery situation. It is from these recent events that the next Ask a Security Professional question was crafted: How can I better protect my data?

I feel that it’s important to fully understand what the problem is in order to best understand what forms a solution can take. In Part One of #AskSecPro we’ll cover an introduction to some of the infrastructure behind WordPress. Let’s start at the beginning.

The Basic Anatomy of a WordPress Website

As you may know, WordPress is a “database-driven” content management system, which means that all of the text and resource references found in WordPress posts and pages are stored in what is called a Structured Query Language (SQL) database, most commonly in the form of the open-source database management system MySQL. Many hosting companies nowadays offer one-click installation of WordPress, or hosting plans that simply come pre-loaded with WordPress. In these cases you may not have visibility of what actually goes into the workings of WordPress. The physical presence of WordPress on a web server consists of two major parts, each of which has its own security demands.

The WordPress Core Files

The core WordPress files contain what amounts to the machinery behind WordPress that does most of the heavy lifting, serving as the initial framework for the content management system. They instruct your web server on how to process interactions, both with your website visitors and with you when you’re creating new content. The core files are PHP, CSS, and JS files that live on your web server.* Every freshly installed WordPress website on the same version is completely identical to the next, except for the configuration file wp-config.php, and in some uncommon cases where advanced users have modified other files. Even after installing plugins and themes, the core files themselves will typically remain unchanged.

*When manually installing WordPress (not through a hosting provider’s one-click installer), these files should only ever be downloaded from the official WordPress.org website. There are no exceptions to this rule.

Historically, the majority of documented malware we’ve seen on WordPress websites has lived as code within website files, either as malicious code injected into existing legitimate files, or entirely new files riddled with malware. In these cases, a combination of general file change monitoring and file-based malware scanning is the best defensive measure. This year, we’re seeing broader attack trends that focus less on file compromise, such as in the case of the recent REST API defacements where website files are not impacted, and more on database content.

The WordPress Database

The database is, as its name indicates, where the majority of your actual site data is stored. The most apparent of this data is of course the posts and pages you create. In perhaps a less obvious but equally important utilization of the database, your sensitive non-public data is stored there, and there’s a lot of it.

Page Content                        User Preferences
Post Content                         User Names
Comments                             Configuration Settings
Plugin Preferences                Site Name
Plugin Activation Status         Credit card data (in some eCommerce cases)
User Passwords                     and many more data types…

Corruption of this data can render your website completely inaccessible to your visitors, and unauthorized disclosure of this information could irreparably harm your reputation and perhaps even your pocketbook.

For some the concept of a website database can seem a little abstract, which is understandable since you can’t quite reach out and touch the database as easily as you would your files through a file manager. This is for good reason, as accidental damage to your database is potentially irreversible. While your database may not seem as accessible as your files, it is very concrete and requires very real security considerations.

You can consider your database to be basically a giant spreadsheet of various information. WordPress retrieves information from your database by making a connection to your database server, which in the case of most shared hosting accounts, is typically located on an entirely different physical server. Your WordPress then needs to authenticate into the database server with a username and password, much the same way as you login to your site, before it is able to retrieve any data. The WordPress installation keeps this very sensitive authentication information in what is called a connection string which is contained in a core file called wp-config.php. The connection string contains your database name, host address, port, username, and password. If this file is able to be accessed by an adversary, it is very likely that your database could be compromised.
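To make the connection string concrete, here is a small Python sketch that pulls the DB_* constants out of wp-config.php-style text. The sample mirrors the stock wp-config format with placeholder values:

```python
import re

# Sample wp-config.php fragment; all values are placeholders.
SAMPLE = """
define('DB_NAME', 'wp_db');
define('DB_USER', 'wp_user');
define('DB_PASSWORD', 'password_here');
define('DB_HOST', 'localhost');
"""

def parse_wp_config(text):
    """Extract DB_* constants defined with single-quoted values."""
    pattern = r"define\(\s*'(DB_\w+)'\s*,\s*'([^']*)'\s*\)"
    return dict(re.findall(pattern, text))

print(parse_wp_config(SAMPLE))
```

The point of the exercise: everything an adversary needs to reach your database sits in this one file, which is why its permissions matter so much.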

Now that we better understand the roles that the two major parts of a WordPress installation play in the operation of your website, we can better understand how each could potentially be abused. Next we’ll discuss best practices and how to best protect your WordPress database. Stay tuned for Part Two!

Have a question for our security professionals or a topic that you would like us to write about? Message @SiteLock and use the #AskSecPro tag!

Copyright 2010 Respective Author at Infosec Island]]>
Access Management and the Automation of Things Tue, 21 Feb 2017 05:24:28 -0600 Want to destroy the confidence of your IT department and amputate key appendages from its leadership’s chance of success? Force these folks to manage manual processes; bog down their ability to manage the access of users and create ever-changing and overbearing password management rules, as examples. I’m not suggesting that requiring your IT team to manage these tasks manually is a head scratching problem, but doing so does mean more obvious complications, extremely mundane work and a time consuming volume of requests.

For your highly technical teams, asking them to handle manual account management is a waste of resources. These employees should spend their time on more technical issues and complex projects for your organization, don’t you think?

But, wait. How can you get your IT department to move forward when it is asked to perform menial tasks related to a user’s account or password? There are solutions, of course, for managing these processes, but can they be trusted and are they really beneficial to operations of the organization?  That depends on your point of view, I guess – some experts once wondered about the benefits and purpose of a thing called the “internet” while others, respectable people, once wondered about the benefits of tablets and their touch screens.

The simple answer to the proposed question then, in my opinion, is: “yes.” Solutions on the market assist with these processes. The following provides a bit more detail about what these solutions can do.

Account creation and management from one place

Setting up a new account for an employee is frustrating – when done manually. It looks a little like this: An IT admin accesses each appropriate application, enters the required employee information, then sets the appropriate access rights. Making matters worse, a single new employee likely needs multiple accounts to begin their work. Sometimes, a newly hired employee will be left waiting a few days for access to the correct accounts. Who hasn’t had this happen to them when they’ve begun a new job?

Current technologies can automate this process. As such, by automating, an HR person simply needs to enter the appropriate employee information into the HR system, and voila, new accounts are created in all systems relevant to the person’s role in the organization! These solutions work seamlessly for both in-house and cloud applications, so any type of application or system your company uses can be easily integrated – completely automated.
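At its core, a provisioning engine of this kind reduces to a role-to-systems mapping. This Python sketch illustrates the idea; the role and system names are invented for the example:

```python
# Which accounts each role receives on day one (example mapping).
ROLE_ACCOUNTS = {
    "engineer": ["email", "vpn", "source-control", "issue-tracker"],
    "sales":    ["email", "vpn", "crm"],
}

def provision(employee, role):
    """Return the accounts to create when HR enters a new hire;
    unknown roles fall back to a bare email account."""
    return [{"user": employee, "system": s}
            for s in ROLE_ACCOUNTS.get(role, ["email"])]

print(provision("jdoe", "sales"))
```

A real identity management product drives the same logic from the HR system's feed rather than a hardcoded dictionary.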

Aspirin for time-sensitive request headaches

In manual processes, the need for additional access to accounts or resources created for an employee can be a headache. Here’s how the process typically works: An employee must contact their manager, who contacts an IT admin, when they need access to an application or to make a change to their account. If the request is time sensitive, the employee may continually contact the manager to check up on the progress of the access rights. Now imagine countless messages and emails coming in from more than one employee, each requesting a change as soon as possible – overwhelming.

This can be automated, too. When an employee needs access to a certain application for a project they are working on they simply make the request in an employee portal and the request is routed to the correct manager. That manager can either accept or deny the employee’s request. If accepted, the change is automatically made and the employee has immediate access. This eliminates the need for anyone to contact the IT department or an account admin to request the change. This is seamless with in-house and cloud applications so changes can be easily made to both.

Then, if an employee wants to check on their request, they can view the progress in their portal instead of contacting the admin directly. So, admins no longer need to be repeatedly contacted to ask if the change has been made. 

Responding to the same call over and over again

What about all of the password issues that the IT department must deal with? Many automated solutions work seamlessly with password management to address these issues, drastically reducing redundant calls. Here’s the problem: Employees call the helpdesk to have their passwords reset for one or more of their applications when they forget them or are locked out of their accounts. This continues over and over again. This, like the account change requests, is very simple to fix, but overwhelmingly difficult and mundane if managed manually. Certainly, it is time-consuming when employees – especially repeat offenders – request resets over and over again.

The most popular fix here is an automated password management solution. Such technologies provide a self-service reset option that can be adapted for use with cloud or in-house applications. This allows users to reset their own passwords without contacting the helpdesk, even from mobile devices like smartphones and tablets.
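Under the hood, a self-service reset typically rests on a time-limited, signed token. Here is a hedged Python sketch of that mechanism; the secret key, user name, and 15-minute lifetime are illustrative values, not any product's actual implementation:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; a real system stores this securely

def make_token(user, now=None, ttl=900):
    """Issue a token of the form user:expiry:signature."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{user}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user}:{expires}:{sig}"

def verify_token(token, now=None):
    """Accept only unexpired tokens with a valid signature."""
    user, expires, sig = token.rsplit(":", 2)
    msg = f"{user}:{expires}".encode()
    good = hmac.compare_digest(sig, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
    return good and int(expires) > (now if now is not None else time.time())

token = make_token("jdoe", now=1000)
print(verify_token(token, now=1200))   # still inside the 15-minute window
```

The portal emails the token to the user; any tampering with the user name or expiry invalidates the signature.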

Also, don’t forget that single sign-on solutions have been adapted to work in conjunction with cloud applications. Single sign-on allows your users to log in once with a single set of credentials and thereafter gain access to all other applications they are authorized to use, easily resolving the issue of users needing to remember multiple passwords. This, in turn, eliminates the need for so many reset requests and alleviates mundane, repetitive tasks for the helpdesk.

With automated solutions these tasks become simple, making processes better for everyone involved and resulting in a happy IT department whose leaders are empowered to live up to their professional potential without being cut off at the knees.

Copyright 2010 Respective Author at Infosec Island]]>
SAP Cyber Threat Intelligence report – February 2017 Fri, 17 Feb 2017 11:25:00 -0600 The SAP threat landscape is always growing, putting organizations of all sizes and industries at risk of cyberattacks. The idea behind the SAP Cyber Threat Intelligence report is to provide insight into the latest security threats and vulnerabilities.

Key takeaways

  • February’s set of Security Notes consists of 22 patches; most of them fix missing authorization check vulnerabilities.
  • The highest CVSS base score of the fixed bugs is 8.5.
  • This month, multiple vulnerabilities affecting SAP HANA were closed. They can be exploited together to crash applications on SAP HANA XS remotely without authentication.

SAP Security Notes – February 2017

SAP has released the monthly critical patch update for February 2017. This patch update includes 22 SAP Notes (15 SAP Security Patch Day Notes and 7 Support Package Notes).

4 of the Notes were released after the second Tuesday of the previous month and before the second Tuesday of this month, and 7 of the Notes are updates to previously released Security Notes.

7 of the released SAP Security Notes have a High priority rating. The highest CVSS score among the vulnerabilities is 8.5.

SAP Security Notes February by priority


The most common vulnerability type is Missing Authorization check.

SAP Security Notes February 2017 by type

Issues that were patched with the help of ERPScan

This month, 3 critical vulnerabilities identified by ERPScan’s researchers Mathieu Geli and Mikhail Medvedev were closed.

Below are the details of these vulnerabilities.

  • Multiple vulnerabilities in SAP HANA (CVSS Base Score: 8.3). Update is available in SAP Security Note 2407694. An attacker can use a denial of service vulnerability to crash a process of the vulnerable component. During this time, nobody would be able to use the service, which disrupts business processes, causes system downtime and, as a result, damages business reputation.
  • An XML external entity vulnerability in SAP Visual Composer VC70RUNTIME (CVSS Base Score: 6.5). Update is available in SAP Security Note 2386873. An attacker can exploit the vulnerability by sending specially crafted unauthorized XML requests that are processed by the XML parser, gaining unauthorized access to the OS file system.
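As a language-neutral illustration of the XXE vulnerability class (not of SAP's parser), here is a classic external-entity payload handled in Python. The standard library's ElementTree refuses to resolve the external entity and fails to parse, which is the behavior you want from a hardened parser:

```python
import xml.etree.ElementTree as ET

# Classic XXE payload: tries to pull a local file in via an external entity.
XXE = ('<?xml version="1.0"?>'
       '<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
       '<r>&xxe;</r>')

try:
    ET.fromstring(XXE)
    leaked = True       # a vulnerable parser would have expanded the entity
except ET.ParseError:
    leaked = False      # ElementTree does not resolve external entities

print(leaked)
```

A parser that expands such entities can be made to disclose arbitrary readable files, which is exactly the file-system access the SAP advisory describes.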

SAP HANA Multiple Vulnerabilities in detail

SAP Security Note 2407694 closes 2 vulnerabilities affecting SAP’s flagship product, HANA: a DoS vulnerability and an implementation flaw (an insecure default user creation policy) in the third-party repository server Sinopia.

These vulnerabilities can be exploited together. One possible attack scenario is the following: the insecure user creation policy allows an attacker to create a new user over the Internet without authentication. After that, the adversary can create a new repository, and if a package name contains special characters, the processing application will crash. As a result of the attack, the project would be unavailable, meaning a stoppage of development processes. Moreover, the vendor’s advisory states that other SAP HANA XS components could also be potentially impacted.

The most critical issues closed by SAP Security Notes February 2017 identified by other researchers

The most dangerous vulnerabilities of this update can be patched by the following SAP Security Notes:

  • 2408892: SAP Netweaver Data Orchestration has a Missing Authorization Check vulnerability (CVSS Base Score: 8.5). An attacker can use a missing authorization check vulnerability to access the service without authorization and use service functionality that has restricted access. This can lead to information disclosure, privilege escalation, and other attacks. Install this SAP Security Note to prevent the risks.
  • 2413716: SAP GRC Access Control EAM has an Implementation Flaw vulnerability (CVSS Base Score: 8.2). Depending on the case, an implementation flaw can cause unpredictable system behaviour and problems with stability and safety. Patches solve configuration errors, add new functionality and increase system stability. Install this SAP Security Note to prevent the risks.
  • 2391018: SAP 3D Visual Enterprise Author, Generator and Viewer has a Memory Corruption vulnerability (CVSS Base Score: 8.0). An attacker can use a buffer overflow vulnerability to inject specially crafted code into working memory, where it will be executed by the vulnerable application. Executed commands run with the same privileges as the service that executed them. This can lead to complete control of the application, denial of service, command execution, and other attacks. Install this SAP Security Note to prevent the risks.

Advisories for these SAP vulnerabilities, including technical details, will be available in 3 months on the ERPScan website. Exploits for the most critical vulnerabilities are already available in ERPScan Security Monitoring Suite.

SAP customers as well as companies providing SAP Security Audit, SAP Vulnerability Assessment, or SAP Penetration Testing services should be well-informed about the latest SAP Security news. Stay tuned for next month’s SAP Cyber Threat Intelligence report.

Copyright 2010 Respective Author at Infosec Island]]>
DigitalOcean Launches Public Bug Bounty Program Fri, 17 Feb 2017 11:01:37 -0600 Cloud computing platform DigitalOcean on Wednesday announced the public availability of its bug bounty program, after successfully running it in private mode.

Like the private program, the public one was launched in collaboration with Bugcrowd, which provides DigitalOcean with access to a large crowd of researchers and allows it to focus internal resources “on keeping the cloud secure.”

On the program’s page, the company reveals that the bounties available for interested researchers range from $150 to $2,500 per bug, depending on the severity and impact of the discovered flaw. At the moment, the company accepts vulnerabilities found in and

According to the company, it plans on investigating legitimate reports received through the program and on addressing vulnerabilities as fast as possible. Moreover, DigitalOcean says that it won’t take legal action against (or ask law enforcement to investigate) researchers who comply with a series of straightforward requirements.

Specifically, the company asks researchers to provide it with all the necessary details of the vulnerability, including information needed to reproduce and validate the vulnerability and a Proof of Concept (POC), as well as to make “a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation” of services.

Researchers are also required to avoid accessing or modifying data that does not belong to them, as well as to provide the company with reasonable time to correct the issue before making any information public.

DigitalOcean's public bug bounty program adheres to the Bugcrowd Vulnerability Rating Taxonomy for the prioritization/rating of findings, the company also announced.

Researchers interested in the program are encouraged to register for a new account on the company’s website, and will receive access to five droplets. They are required to refrain from launching droplets with more than 1GB of RAM, and to focus on the aforementioned resources (with the exception of ticket creation). Vulnerabilities in other applications owned by DigitalOcean aren’t within the scope of the program either.

“Incorporating Bugcrowd's platform into DigitalOcean's overall security strategy has noticeably decreased the window for detecting vulnerabilities in our cloud. Additionally, and in line with our culture of love, we are able to have a more consistent interaction with security researchers through Bugcrowd, and we are able to reward researchers for their hard work!” DigitalOcean Director of Security Nick Vigier said.

The partnership with Bugcrowd, the company says, should provide it with good, consistent communication with researchers, while ensuring their development teams are provided with actionable and validated vulnerabilities. “We are excited to extend our program and continue enjoying the benefits of crowdsourced security testing,” Vigier concluded.

Last year, Bugcrowd’s second annual State of Bug Bounty Report revealed that an increasing number of “traditional” industries are launching bug bounty programs to secure their products and services. Earlier this week, the company revealed a partnership with Qualys to allow joint customers to share vulnerability data across automated web application scanning and crowdsourced bug bounty programs.

Related: Identity Management Firm Okta Launches Bug Bounty Program

Related: Bugcrowd Raises $15 Million to Expand Bug Bounty Business

Copyright 2010 Respective Author at Infosec Island]]>
What bicycle thefts can teach us about mobile security Fri, 17 Feb 2017 09:49:34 -0600 I recently had my mountain bike stolen. I had locked it with a device that I thought was strong enough, but the thief was able to cut through it and take the cycle. As anyone who has had something personal stolen will know, the theft makes you re-evaluate how you protect other things you own. So, after choosing a replacement bike, I naturally decided to buy a more secure lock. 

At the cycle specialist, I was looking at devices from ABUS, one of the leading bike security brands. All of the company’s devices perform the same basic function – helping to prevent ‘mobile devices’ from being stolen – but of course, its solutions cover a range of security levels. So the company rates each of its locks according to its intended usage and the threat environment it will be used in – from low-cost bikes and accessories in low-risk areas, to high-value bikes in high-risk areas for theft. 

This got me thinking – why shouldn’t organizations apply the same rating process to securing the smartphones and tablets being used across their employee base? As with bike security, the overall objective is simple: reduce the security risks of the device being stolen or compromised. And there is no ‘one size fits all’ solution, as the organization has various functions with different levels of risk and different security needs. The idea that every mobile device in an organization should be protected with the highest-grade security technologies looks good on paper – but in practice it simply doesn’t make sense, as some do not require that level of security or are not willing to pay the required security price.

Organizations need to ensure they provide the right levels of security for the device and data, based on several factors: the role of the individual using the device; what core business applications and data the person has access to; and the risk to the business if the device is stolen, compromised by malware, or communications are intercepted. Just as it is unlikely that you would use a 150-dollar lock to secure an old, 50-dollar bike, you wouldn’t use a 30-dollar lock to secure a hand-built Specialized or Colnago racer. 

Different staff, different security levels

So how should organizations approach stratifying the security requirements across their mobile estates? I believe there are three main security levels to think about. 

First, there are the senior members of staff and specific sensitive organizational functions (C-level, M&A, Legal, Finance, core IP, Research, etc.) who access and process sensitive corporate data. These personnel – and their devices – are critical to the organization, and should therefore be considered a high security risk. As such, layering multiple security products onto their company-issued or personal devices is simply not a good approach: the tools and processes that provide reasonable levels of protection often compromise the performance and usability of the device so much that users seek workarounds, bypassing security measures to stay productive. This exposes the device, and the data on it, to even greater unnecessary risk. What’s more, underlying OS-level vulnerabilities on these devices can be targeted by hackers as part of a ‘whaling’ attack against the organization’s executives.

As such, instead of using vulnerable mobile devices with bolted-on restrictive security, senior executives should be issued with specialized, secure devices in which the standard OS and the entire software layer from the kernel level upward is replaced with a secure, hardened version with built-in security layers implemented seamlessly, without affecting productivity, functionality or usability. These devices should deliver full encryption of data at rest, as well as all communications to and from the device, secure its externally available interfaces (Web, Cellular, Wifi, NFC, USB, Bluetooth, etc.), and actively monitor, block and alert on all targeted attacks and attempts to gain unauthorized access to on-device resources, plant malicious code or install rogue apps.

As a result, cyber criminals will hit a very high security bar when trying to target the device. Also, since security is built into the devices’ lowest software layers (instead of being added on), the end user can still enjoy a standard, familiar, fully functional operating system, leveraging a complete app ecosystem and standard ease of use:  ensuring that their productivity is not compromised in any way.

Referring back to the ABUS cycle lock rating analogy, this method would be ranked 9 on a scale of 1 to 10 in terms of security, assuming realistically that a 100% bulletproof, 10-out-of-10 rating cannot be achieved.

Mid-level security

The second level of security covers mid-tier management staff, senior external contractors, project managers and other specific functions that have access to some sensitive data but are unlikely to be primary targets for hackers. These personnel – and their devices – are not as critical to the organization as the first group, and should therefore be considered a medium security risk. For these individuals, a standard smartphone protected with a comprehensive security application that delivers data and communications encryption, attack detection and protection, and advanced device-management features should provide protection sufficient for the level of risk and the security requirements identified at corporate level.

This method could be applied on a corporate-issued smartphone, or on the user’s own device under a BYOD scheme, and would be rated 6 out of 10 on the ABUS scale. The security is not as strong as with a fully hardened device and OS, but will be sufficient for the majority of mid-tier staff.

The third level of security applies to employees who have low-level access to data, including contract and freelance staff who are not part of the organization long enough to warrant being issued a company device or included in a comprehensive mobile security scheme. Each individual’s device usage and data access should be assessed and monitored, providing corporate-level visibility into the security posture and risk level of each device under this scheme; this can be achieved by installing lightweight security software on the devices. This method would be rated 3 out of 10, and devices under it treated accordingly: low risk, low access level, lightweight security.
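As a rough illustration of this three-tier approach, the role-to-tier mapping could be expressed in code. This is a hypothetical sketch, not any vendor's product: the role names, tier ratings and measures below are invented for the example.

```python
# Hypothetical mapping of organizational roles to the three mobile
# security tiers described above. All names and ratings are examples.

TIERS = {
    "high":   {"rating": 9, "measures": "hardened OS device, full encryption, active monitoring"},
    "medium": {"rating": 6, "measures": "standard device plus comprehensive security app"},
    "low":    {"rating": 3, "measures": "lightweight security software"},
}

ROLE_TO_TIER = {
    "c_level": "high",
    "legal": "high",
    "finance": "high",
    "mid_management": "medium",
    "project_manager": "medium",
    "contractor_short_term": "low",
    "freelancer": "low",
}

def security_tier(role: str) -> dict:
    """Return the security tier for a role, defaulting to the most
    restrictive tier when the role is unknown (fail safe)."""
    tier = ROLE_TO_TIER.get(role, "high")
    return {"tier": tier, **TIERS[tier]}

print(security_tier("finance")["rating"])     # 9
print(security_tier("freelancer")["rating"])  # 3
```

Defaulting unknown roles to the highest tier is a deliberate fail-safe choice: it is cheaper to over-protect a device temporarily than to under-protect one.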

Real time visibility and policy enforcement

These three levels of security should be underpinned with a management system that gives the organization’s IT team real-time sight of the risk level and security posture of each mobile device in its estate. Monitoring the overall security health of a specific device, a group or the entire organization can effectively point out security gaps, user negligence and specific areas of risk that may affect the way IT rolls out new services and grants access to them. This also enables the team to apply policies that mitigate risks on devices as they occur, reducing the organization’s attack surface and the potential impact of threats and attempted breaches.
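A management layer like the one described might compute per-device risk scores and surface the devices that need policy action. The sketch below is purely illustrative; the device fields, weights and threshold are assumptions, not any real product's scoring model.

```python
# Illustrative per-device risk scoring for a mobile estate.
# Fields, weights and the action threshold are invented for the example.

from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    tier: str          # "high" | "medium" | "low"
    os_patched: bool
    encryption_on: bool
    recent_alerts: int

def risk_score(d: Device) -> int:
    """Simple additive score: higher means riskier."""
    score = 0
    if not d.os_patched:
        score += 3
    if not d.encryption_on:
        score += 3
    score += min(d.recent_alerts, 4)  # cap alert contribution
    if d.tier == "high":
        score *= 2  # gaps on high-tier devices matter more
    return score

def needs_action(devices, threshold=5):
    """Return owners of devices whose score meets the action threshold."""
    return [d.owner for d in devices if risk_score(d) >= threshold]

fleet = [
    Device("cfo", "high", os_patched=False, encryption_on=True, recent_alerts=1),
    Device("pm", "medium", os_patched=True, encryption_on=True, recent_alerts=0),
]
print(needs_action(fleet))  # ['cfo']
```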

This stratified contextual approach to security means that businesses can apply protection to each device and the data it holds, in a way that is appropriate to the device user’s role, and risk profile. In turn, this makes it easier for organizations to lock down and manage the complete mobile security cycle.

Copyright 2010 Respective Author at Infosec Island]]>
The Third Party Threat Thu, 16 Feb 2017 06:20:00 -0600 According to a Soha Systems security survey, 63% of all data breaches can be attributed to a third-party vendor. Organizations from LinkedIn to the Hard Rock Hotel and Casino have been hacked through a third party, exposing their clients’ data.

The measures taken by organizations to protect corporate assets from electronic theft have to consider many avenues of access. Laptops, tablets and mobile phones are hand-carried into organizations every day – right past the firewall. If these devices become infected off premises, defending against them becomes the corporate security team’s responsibility. Remote employees coming in via VPN connections must also be monitored, and there is the additional issue of guests who need temporary access, as well as contractors who need admittance to the Internet and possibly to internal resources.

If these contagions aren’t gaining access through phishing attacks, the assumption is that someone, somewhere, walked the infection right through the front door. It is safest to assume that malware of one form or another is always on the network: in practically every corporate network, some computing device is acting as host for a bot that is waiting for just the right moment to make its move.

The debacle of third-party breaches hit prominence when Target revealed a massive data breach via a third-party contractor. According to the contractor, it used remote access to Target’s internal network for electronic billing, contract submission and project management. Once Target was compromised, the hackers were able to access the point-of-sale machines (i.e., registers) and ultimately reach roughly 40 million debit and credit card accounts. The data was then uploaded to compromised servers on the Internet, which helped obfuscate the identity of the perpetrators. It was estimated that Target faced millions of dollars in losses as a result of the breach.

Since then, the Yahoo breach has been one of the most spectacular incursions, exposing more than a billion user accounts. Rather than being extracted from Yahoo's servers directly, email addresses and passwords were likely taken from a third-party database, according to Yahoo. "We have no evidence that they were obtained directly from Yahoo's systems," the company said. This unfortunate incident has led to lawsuits and delayed the acquisition by Verizon.

The Soha Systems Security survey also revealed:

• 75% of IT and security professionals said the risk of a breach from a third party is serious and increasing

• 2% of enterprise IT and security managers, directors and C-level execs consider third-party access a top priority

• 87% of IT professionals report their organization’s use of contractors has increased

• 56% of respondents had strong concerns about their ability to control and/or secure their own third-party access

The gap between IT priorities and third party access risk is a serious problem that affects all industry segments and it appears to be getting worse. The use of 3rd party contractors is increasing and for some organizations this poses yet another risk to their security posture. 

A data compromise is inevitable for companies, wherever it might emanate from, so an organization’s ability to respond to an incident is key. When responding to a cyber event, investigators almost always turn to the system logs and the history of the traffic patterns that occurred during the event. Clear, historical visibility into traffic on the network is possible when NetFlow and IPFIX data is collected and archived. Since all major router and firewall vendors support these flow technologies, they have become a critical tool for traffic analysis when investigating and sleuthing out the most covert incursions. Flow information provides a detailed footprint of every network connection leading up to, during and after a data compromise. Many technologies even leverage flow data for behavior monitoring, where end-system behaviors are analyzed over time in an effort to uncover abnormal system communications.

The faster data breaches can be detected and the entry points closed off, the faster damage can be mitigated. By monitoring and archiving all flow connections, companies stand a better chance of tracing malware back to the source.
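To make the flow-forensics idea concrete, here is a minimal sketch of querying archived flow records for connections to a known-bad destination. Real NetFlow/IPFIX collectors have their own export formats; the CSV layout, field names and IP addresses here are assumptions for illustration (the addresses are from documentation ranges).

```python
# Illustrative flow-record query: which internal hosts talked to a
# suspicious destination, and when? The CSV layout is an assumed
# collector export, not a real NetFlow/IPFIX wire format.

import csv
import io

FLOW_CSV = """timestamp,src_ip,dst_ip,dst_port,bytes
2017-02-10T02:14:00,10.0.5.23,203.0.113.50,443,8123456
2017-02-10T02:15:10,10.0.5.23,203.0.113.50,443,9000000
2017-02-10T09:00:00,10.0.7.11,198.51.100.9,80,1200
"""

def trace_destination(flow_file, bad_ip):
    """Return (timestamp, src_ip, bytes) for every flow to bad_ip,
    giving the footprint of connections before and during a compromise."""
    hits = []
    for row in csv.DictReader(flow_file):
        if row["dst_ip"] == bad_ip:
            hits.append((row["timestamp"], row["src_ip"], int(row["bytes"])))
    return hits

hits = trace_destination(io.StringIO(FLOW_CSV), "203.0.113.50")
for ts, src, nbytes in hits:
    print(ts, src, nbytes)
# The large overnight transfers from 10.0.5.23 stand out as exfiltration.
```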

Copyright 2010 Respective Author at Infosec Island]]>
When Ransomware Strikes: Does Your Company Have a Data Disaster Recovery Plan? Thu, 16 Feb 2017 05:20:29 -0600 Last year, nearly half of businesses were hit by ransomware. In the first half of 2016 alone, ransomware cost enterprises $209M. Even worse, experts predict that ransomware “will spin out of control” in 2017. As the headlines make apparent, ransomware is rampant, and those who commit the attacks aren’t discriminating by industry, company size, or location. It’s no longer a question of if your company will be targeted by ransomware, but when. To prepare, every enterprise should have a data disaster recovery plan to fight back.

The US Justice Department warns that “paying a ransom does not guarantee an organization will regain access to their data; in fact, some individuals or organizations were never provided with decryption keys after paying a ransom … [after paying,] some victims were asked to pay more to get the promised decryption key.”

With a little bit of preparation and forethought, your enterprise could quickly retrieve data backups needed to keep the business running instead of haggling with cybercriminals to get access to vital and sensitive documents and ending up in the headlines for the wrong reasons.

Here are three best practices to get your company started on building a personalized data disaster recovery plan to combat ransomware and other data loss disasters:

Know the Facts

You can’t protect your assets if you don’t know what they are and where they reside. The first step of any data disaster recovery plan should be to take inventory of assets. Conduct a full risk assessment and business impact analysis to examine the consequences of disruption to a business function and processes. Understanding the impact of data loss on business-critical functions is crucial for personalizing your data disaster recovery plan. Don’t forget to include legal and audit ramifications. 

Secondly, know the facts of your company’s agreement with the third-party vendors who handle your data. Don’t be lulled into a false sense of security if you use collaboration platforms like Microsoft Office 365 or G Suite. While they provide great capabilities, these SaaS applications can’t fully protect customers from data loss caused by ransomware, sync errors from integrations, or human error. It’s not that these providers don’t want to help; they simply can’t. When data is encrypted, changed or deleted by ransomware, sync errors, or other destructive activity, those actions look to the SaaS provider just like a customer changing or deleting data for legitimate reasons.
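One way a backup layer can compensate for this blindness is a simple heuristic, sketched below: flag a snapshot when an unusually large fraction of files changed since the previous one, which is characteristic of mass encryption. The snapshot format (path-to-hash maps) and the threshold are hypothetical; real backup products use their own detection logic.

```python
# Hypothetical mass-change heuristic for backup snapshots. A snapshot is
# modeled as {relative_path: content_hash}; format and threshold are
# assumptions for illustration.

def change_ratio(prev_snapshot: dict, curr_snapshot: dict) -> float:
    """Fraction of previously seen files whose content hash changed."""
    if not prev_snapshot:
        return 0.0
    changed = sum(
        1 for path, digest in prev_snapshot.items()
        if curr_snapshot.get(path, digest) != digest
    )
    return changed / len(prev_snapshot)

def looks_like_mass_encryption(prev, curr, threshold=0.7):
    """Flag the snapshot for review if most files changed at once."""
    return change_ratio(prev, curr) >= threshold

prev = {"a.doc": "h1", "b.xls": "h2", "c.txt": "h3", "d.txt": "h4"}
curr = {"a.doc": "x1", "b.xls": "x2", "c.txt": "x3", "d.txt": "h4"}
print(looks_like_mass_encryption(prev, curr))  # True (3 of 4 files changed)
```

A flagged snapshot would be quarantined for human review rather than allowed to age out the clean restore points behind it.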

Make It a Team Effort 

Long gone are the days where only one person is responsible for enterprise security. To succeed, the entire company needs to be involved in securing its data and assets as part of the data disaster recovery plan. To this end, spend time and resources on educating your users on security best practices to prevent ransomware and phishing. Identify high-value targets for ransomware, spear-phishing, etc. and monitor for unusual activity on their end.

A hacker only needs one careless employee to gain access to your whole network. By having your whole team engaged in good security practices, hackers will be hindered by a united front. As Ben Franklin once said, “an ounce of prevention is worth a pound of cure.”

Back Up Data & Test the Process

Ransomware attackers rely on the fact that the majority of users don’t have a good way to restore data from a backup. Counteract this ploy by regularly backing up your data with automated systems that support point-in-time restore.

Don’t stop there, though. Backups are only as good as the recovery that comes with them. Take the time to periodically test the restore process to ensure that files restored from backups are usable and accurate. In a moment of panic, you should be able to recover your data without thinking, and get it back exactly the way it was before.
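The restore test itself can be automated. The sketch below assumes the backup system records a manifest of file hashes at backup time (an assumption, not a universal feature): restore into a scratch directory, then verify each restored file's hash against the manifest, so any missing or corrupted file is caught before you need it in an emergency.

```python
# Illustrative restore verification: compare restored files against a
# manifest of SHA-256 hashes recorded at backup time. The manifest
# format is an assumption for this sketch.

import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(restore_dir: Path, manifest: dict) -> list:
    """manifest maps relative file path -> expected sha256.
    Returns the list of files that are missing or corrupted."""
    failures = []
    for rel_path, expected in manifest.items():
        target = restore_dir / rel_path
        if not target.exists() or sha256_of(target) != expected:
            failures.append(rel_path)
    return failures

# Example: back up a file, record its hash, "restore" it, then verify.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "backup"
    dst = Path(tmp) / "restore"
    src.mkdir()
    (src / "report.txt").write_text("quarterly numbers")
    manifest = {"report.txt": sha256_of(src / "report.txt")}
    shutil.copytree(src, dst)           # stands in for the restore step
    print(verify_restore(dst, manifest))  # [] -> restore verified
```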

Don’t become a statistic: make the investment to build a data disaster recovery plan before you need it. Do the research to know the facts of your data assets and risks, make security a team effort, and back up your data and test the process. You’ll never regret preparing too much, but you’ll definitely regret having to cough up tens of thousands of dollars in bitcoin to get your business-critical data back, and landing in the headlines of every security publication as the latest victim of ransomware.

Copyright 2010 Respective Author at Infosec Island]]>