Dear CISO, please talk about business with your board, not technicalities.

Dear CISO and Board

I think we should always consider our job part of the business. We have finally started to treat cyber security and data protection as serious issues, but the question now is how we evaluate risk in our analyses and business plans…

Current risk-analysis documentation and reports presented to most boards use just a flag (high, medium or low risk) but do not seem to specify any metric. Without metrics it is hard to make sound evaluations and comparisons, so when a board member asks “is a high risk in XYZ as dangerous as a high risk in ABC?” there can be no credible answer, only a “perception”, which is subjective unless backed by facts.

Security metrics are, as of now, subject to interpretation and discussion, but we can simplify the approach to make security analysis credible and understandable.

First of all, to answer the board’s question we need a common evaluation framework that includes easy-to-read metrics, making comparisons understandable even to people who are not cyber security experts, as most of the board members who must take decisions based on those inputs are not.

This goes beyond the tasks of the Cyber and Information Security Officer: it requires the whole company to start thinking about its cyber security and digital assets. But unless the approach is to be purely reactive, your inputs should be the starting point for outlining this framework and its metrics.

Alas, cyber security risk analysis is anything but simple, especially when related to business impact, since it requires an understanding of both the cyber security issues and the business in which the risk is analyzed.

There are two main aspects that need sound and readable metrics:

  1. Risk evaluation
  2. Risk consequences

The first item defines how “risky” something is. Measuring a risk requires, to simplify a complex matter, being able to evaluate the probability that something happens, the magnitude of the damage, and the cost of fixing things. Magnitude of the damage and cost to fix things are bound to risk consequences, which are, basically, the metrics that can be used in a board meeting to describe the risk in terms understandable to a non-cyber-security-aware audience.
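As an illustration only, the three factors just named (probability, damage magnitude, fixing cost) can be combined into a single comparable number. The values and the simple expected-loss weighting below are invented assumptions, not an established formula:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # estimated likelihood over the chosen timeframe (0..1)
    damage: float       # estimated magnitude of the damage, in currency
    fix_cost: float     # estimated cost of fixing things, in currency

    def expected_loss(self) -> float:
        # Expected monetary exposure: probability times total impact.
        return self.probability * (self.damage + self.fix_cost)

# Fictitious figures for two risks a board might want to compare:
xyz = Risk("XYZ breach", probability=0.30, damage=500_000, fix_cost=80_000)
abc = Risk("ABC outage", probability=0.05, damage=2_000_000, fix_cost=300_000)

# A shared numeric metric makes "is a high risk in XYZ as dangerous
# as a high risk in ABC?" answerable with a number, not a perception.
ranked = sorted([xyz, abc], key=Risk.expected_loss, reverse=True)
```

Here the frequent XYZ breach outranks the rarer but costlier ABC outage; any comparable model would serve, as long as the whole company uses the same one.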

I will not dive deeply into the realm of risk evaluation here; you have a deep knowledge and understanding of the issue and I do not want to bore you with my considerations. Let me just note that there is apparently not yet a common evaluation framework spread across your company’s groups and BUs on the matter.

If risk evaluation is one key, but mostly technical, aspect, let me point out something about the risk-consequences aspect that can be of use in future business plans, to make them useful from a business perspective and not just a sterile exercise.

Risk consequences can be presented along a few related dimensions. The aim here is to understand, if a cyber security incident occurs, which measures allow your company to describe it and therefore compare it with another event.

It would make sense, in my view, to present any risk analysis to the board and other managers in these terms:

1) Monetary cost in terms of lost revenues

2) Monetary cost in terms of live costs

3) Impact on market penetration

4) Impact on brand perception

This would allow comparing an XYZ incident with an ABC incident, answering the board’s question, and, moreover, would give a metric to understand where and why to invest in one area instead of another.

Let me quickly describe the four points.

1) Monetary cost in terms of lost revenues

This is a dimension easily perceived by sales and financial managers. It basically means being able to estimate how many direct selling activities will be impacted by the incident. The timeframe taken into account is key, of course, since events can have different effects in the immediate, medium and long term.

The evaluation can be presented both as a net amount of money and as a percentage of budget. Both help in understanding the impact.
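A trivial sketch of the two presentations, with purely invented figures:

```python
def revenue_loss_views(lost_revenue: float, period_budget: float):
    """Return the loss both as a net amount and as a % of the period budget."""
    return lost_revenue, 100.0 * lost_revenue / period_budget

# Fictitious figures: 250k lost against 5M of budgeted revenue for the period.
net, pct = revenue_loss_views(lost_revenue=250_000, period_budget=5_000_000)
# net -> 250000 (absolute), pct -> 5.0 (% of budget)
```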

2) Monetary cost in terms of live costs

This basically means accounting for all the live costs related to the incident, such as fines, legal issues, HW/SW replacements, people working on the issue and so on. It is important to separate the costs caused by the incident from the revenue lost because of it.

3) Impact on market penetration

This is a metric that makes sense for a vendor trying to expand its market footprint, as your company is. It is strictly connected to direct revenues but also to growth expectations. It can be represented as a percentage of market share.

4) Impact on brand perception

This last item is the hardest to measure, since it depends on the metric used to value the brand inside your company. Since I have never been told what metrics are used, I can only suggest presenting the percentage variation relative to the value before the incident.

As far as I know, this has not been done before in Cyber and Information Security business plans. It could be something sound to present in your future BP, or a task for the Cyber and Information Security Office to implement this year if the structure is not yet able to produce this kind of analysis and presentation.

With those four points it would be possible to both:

  • make comparisons between risks
  • provide the board with an output that can be objectively used to take decisions.

Let us take, as an example, the privacy risk related to GDPR non-compliance.

This approach would allow you to present in the BP a set of data to justify expenses and investments every time a risk is presented; something like the table below.

Let me explain the table. Of course the values are fictitious and the timeframes can be adjusted to your reality, but I think this can give you a basic understanding of what I suggest.

GDPR non-compliance:

1) Customer personal data breach: column headers

Short term impact (1-3 months)

This is what happens immediately after the problem, when you have to set up the operations required to get things running again somehow. If you have an Emergency Response Team (you should), this is where you put its costs…

Midterm impact (3 months – one year)

Let us be honest: if it is a minor outbreak, things may be solved quickly; but if the problem is bigger, such as your marketing database being exposed, you will also start considering legal costs, fines and the impact on your market…

Long Term Impact (1-3 years)

Things have an impact after your BP too; life is not restricted to your date range, and neither is business, so you should be able to make predictions and analyses well beyond the simple one-year timeframe. This is common in any business, so it applies here too.

2) Customer personal data breach: row headers

Revenue losses

These are the revenue losses you will face against your budget expectations.

Live costs

This contains what you have to pay: your direct costs, covering, for example:

  • HW/SW replacement
  • Fines
  • Estimated legal costs if “damaged users” sue you
  • Ransom paid
  • Possible rise of cyber security insurance policy fees
  • Production-stoppage costs
  • People working to solve the problem (forensic analysts, cyber experts, lawyers…)

Impact on Market Penetration

This is where you account for how the incident will damage your business in terms of market presence and future outlook.

Impact on Brand Perception

This is how your credibility will be affected.
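The matrix described above, four consequence rows by three timeframe columns, can be sketched as a small data structure. The labels follow the post; the single value filled in is fictitious:

```python
TIMEFRAMES = [
    "Short term (1-3 months)",
    "Midterm (3 months - 1 year)",
    "Long term (1-3 years)",
]
DIMENSIONS = [
    "Revenue losses",
    "Live costs",
    "Impact on Market Penetration",
    "Impact on Brand Perception",
]

def empty_risk_matrix():
    """One row per consequence dimension, one column per timeframe."""
    return {dim: {tf: None for tf in TIMEFRAMES} for dim in DIMENSIONS}

matrix = empty_risk_matrix()
# Fictitious short-term live costs for the data-breach scenario
# (e.g. Emergency Response Team plus forensics):
matrix["Live costs"]["Short term (1-3 months)"] = 120_000
```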

With this kind of matrix it would be easy to make correct evaluations and comparisons. I am not sure this can be done with the current analysis tools, but it would be a sound element to put in a BP for a future sound approach to cyber security risk evaluation.




Dear CISO, please talk about business with your board, not technicalities. was originally published on The Puchi Herald Magazine

Security and Datacenters

A datacenter is a collection of several different elements, all working together to offer a platform for our digital needs.
A datacenter is actually a mix of different elements, some logical and some physical; it is not a mere collection of parts but a complex system with many interactions.
Inside the datacenter we can easily see cables, racks, servers, network equipment, storage units and so on, but all of them are there (or should be there) for a purpose and are interconnected.
A big part of a datacenter is not even visible: the software and data running in it, going in and out of its connections, disks, memory and CPUs.
So alongside the physical infrastructure we also have services: processing power, storage and connectivity. All of this makes the datacenter what it is.

Security in the datacenter


If we think our datacenter is valuable, we should think about how to protect it.

Protecting a datacenter should take into account all of its components, physical and virtual.

In modern datacenters the two dimensions are interconnected; one does not exist without the other.

I usually place in the physical domain all the issues that come from planning a correct disaster recovery and backup solution, since, generally speaking, they require a hardware approach. Virtualization moved these needs to the software level, so this is, I know, quite an arbitrary assumption. But, as an example to explain my point of view: is there any disaster recovery in place if the DR datacenters are in the same building? Likewise, is any backup policy sound if backup units are kept in an environment that is not physically secured (with the same level of precaution and redundancy that should be given to DR)?

It is clear that to have a secure datacenter it is mandatory to provide a safe physical environment, so power lines, cooling and physical access control are all terms of the equation.

If the floor cannot bear the racks, that is a clear security issue, as is a datacenter overheating, or power lines unable to provide the needed energy or enough flexibility (which can vary according to usage). The same can be said of the UPS units that are critical for maintaining correct power. Of course, the implications of many of these aspects go beyond the strictly physical environment, since almost all of them require monitoring by control software.

What usually goes unevaluated is that the entire physical environment nowadays has sensors that talk to the logical one; those data could be really useful for understanding the general safety of the system even from a cyber security perspective. So a surge in power requirements, or a spike in heat or CPU/disk load, can be the trigger of a failure as well as a symptom of an ongoing attack.
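As a rough sketch of that idea (thresholds, metric names and readings are all invented), physical-sensor data can be compared against a baseline so that anomalies surface to the security side too:

```python
# Assumed baseline readings for one rack, and an assumed alert policy:
BASELINE = {"power_kw": 40.0, "temp_c": 24.0, "cpu_load": 0.35}
TOLERANCE = 0.5  # flag anything more than 50% above baseline

def anomalies(readings, baseline=BASELINE, tolerance=TOLERANCE):
    """Return the metrics exceeding their baseline by more than the tolerance."""
    return [k for k, v in readings.items()
            if k in baseline and v > baseline[k] * (1 + tolerance)]

# A heat spike plus a CPU surge: possible failure, or symptom of an attack.
alerts = anomalies({"power_kw": 41.0, "temp_c": 39.0, "cpu_load": 0.95})
# alerts -> ['temp_c', 'cpu_load']
```

In practice these readings would come from the DCIM tooling and feed the SIEM; the point is only that physical and logical telemetry deserve to be correlated.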

Logical security

Going deeper into physical security for datacenters is out of my scope today; I would rather take a look at the logical aspect of datacenter security, since it collects almost all (if not all) the classical cyber and IT security requirements.

Server security

So let’s take a quick look at server security. The term “server” belongs to the logical area: when we talk about servers we are not talking about a physical machine but about a logical entity providing a service.
If we look for a definition:

  1. In information technology, a server is a computer program that provides services to other computer programs (and their users) in the same or other computers.
  2. The computer that a server program runs in is also frequently referred to as a server (though it may be used for other purposes as well).
  3. In the client/server programming model, a server is a program that awaits and fulfills requests from client programs in the same or other computers. A given application in a computer may function as a client with requests for services from other programs and also as a server of requests from other programs.

So a server is definitely a logical entity.
To secure a server it is usually necessary to perform, as a first step, a hardening analysis and to set up a correct patch-management procedure. This is actually true for any device and software in the IT realm; alas, a lot of the time this is one of the first neglected areas.
Due to the importance of these elements I will go into hardening and patching later, but I would like to stress that they are mandatory requirements for a minimum security level.
There are many different OS platforms that can be used as servers, classical OSes like Windows, UNIX or Linux, and all require careful management.
Since most OSes are multipurpose (they offer more than one service: file, application and web serving in the same infrastructure), it is essential to extend security considerations to all the resources in place, meaning protocols, access management, users and so on.
Of course, since a datacenter lives on its cross-relationships, we should carefully understand what is related to what, in order to avoid, for example, changing some parameters and affecting something else.
Alas, all the controls and configuration needed can require deep knowledge of the platform and beyond; for example, implementing anti-malware and anti-virus solutions, or encryption, may require third-party software.
This is also true for virtualization environments such as VMware, VirtualBox, Hyper-V and so on.
The good news is that the market offers many solutions to help put correct security plans in place, but a correct vision and a sound process implementation are unavoidable.


Application Security


If “OS” servers have their requirements, obviously all other servers have the same needs. No matter whether we talk about a database or a web server, it needs security. Since these application servers live on top of the OS, the security they require is “on top” of the OS needs, meaning that securing just one layer (application server or OS) will leave the system unsecured.
A classical example is the implementation of AV/AM (Antivirus/Antimalware) solutions. These should be applied not only to the guest OS but also, when available, to the running application server, since the security issues they have to cover can be dramatically different.

Network Security

Among all the security considerations in the datacenter, the component dedicated to network security cannot be overlooked.
Network security is key in the datacenter, since all communications ultimately flow inside it and all the components need to communicate with one another.
Even in a virtual environment, the hypervisor has to manage network communication between the various internal virtual environments and their communication with external ones. Speed is one of the key elements inside a datacenter due to the great amount of data flowing, but depending on the nature of the services provided by the datacenter, other considerations may apply. Since the datacenter is, for example, a collection of running servers, an IPS/IDS can make absolute sense, as can implementing correct monitoring (maybe using SIEM and big data analytics tools).


Access Management and IAM


Along with network security, another neglected area is user management. Identity, user access and rights management should be big concerns for a datacenter manager, because poor management of these can affect the entire datacenter.
Too often, for example, the use of administrative rights on contiguous platforms is not considered important, nor are “frozen” users (from cancelled accounts, to never-used ones, to service users), all with sets of rights that are usually not correctly monitored.
This is true not only for datacenters: even end-user devices, such as laptops or company mobile phones, usually suffer from such lapses of memory.


Protocol Management


And if we have concerns about users, we should have concerns about protocols too. At every level they are the medium through which our systems communicate, so their management should be in our best interest. Alas, protocol management is usually confined to network devices (e.g. implementing firewall rules), neglecting one of the basic rules of security: what is unmanaged is prone to risk.
There are tools to manage and secure DNS (besides DNSSEC, which is a security extension of DNS services), and DHCP should be managed too. Why should we lease an internal address (at least while we are in the IPv4 realm, though this is even more important with IPv6) without any check or control on who made the request, leaving all the controls to the network-device level afterwards? This is a bad security approach. DDI solutions on the market try to address these problems.


Patching and Hardening

No matter whether we are dealing with a logical server, a physical server, software, a device or an appliance of any kind, everything should be under patching and hardening management.



I will first look at the recurring nightmare that is patching. Years ago the idea was “if the system runs, do not touch it”. It is still a mantra for many datacenter and IT managers; alas, this approach grows more dangerous every day and should be a real concern.
Patching is a requirement nowadays mainly for security reasons. We have left the naïve era when software was considered almost perfect; now we are in the more realistic era of flaw-ridden software.
Every day new vulnerabilities are disclosed, affecting everything, even our cars, so patching is no longer an option we can avoid.
What does patching mean?
Patching means making configuration or code changes designed to secure the software against existing and potential zero-day vulnerabilities.
Patches are all around us, but what is a patch, actually? A patch is usually software used to amend a problem in code or configuration. Patches are provided, usually by vendors, in the form of recurring or emergency updates.
Of course, updates contain not only security patches; they may also address compatibility problems and bugs, or add new features.
These updates, at least the security ones, are usually provided within a service agreement, according to the license duration of the software/device.
From time to time vendors close support for older releases, for several reasons: not only commercial ones, but also because the level of patching would become unsustainable. This is what happened to Windows XP and Windows 2003/2000. To stay in the Windows realm, it is also what happened to older versions of Internet Explorer, and to the embedded browser distributed with Android before Chrome.

Microsoft is finally moving on from its aging Web browsers as Internet Explorer 8, 9, and 10 will receive their last security updates and enter end-of-life on January 12. Users will then see a tab with a download link to the most current Internet Explorer available for the operating system.
End-of-life doesn’t mean older versions of Internet Explorer suddenly stop working, and there are ways to turn off Microsoft’s nagging reminder to update. But not switching to a supported browser is a colossal security mistake considering that attackers frequently target unpatched vulnerabilities in Internet Explorer. A regularly updated browser is still a critical line of defense against Web-based attacks.
Things in the datacenter follow the same rules, so we have to keep an eye on them.

What should I patch?

Since patching means solving problems related to code or configuration, everything is subject to patching.
– All that is subject to configuration and is software/firmware based:
• Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
• Appliance OSes and networking-device firmware/OS
• Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
• Virtualization platforms (VMware, Hyper-V, VirtualBox…)
• Drivers, middleware and managed components (SCADA, ICS…)
• …
The reason for all this patching is always the same: to address

• Critical and non-critical vulnerabilities
• SW/HW compatibility
• New services
• Bug corrections
• …

The Patching Cycle

Patching a system is not a one-time activity; it requires a cyclical approach.

The real need for patching is driven by the discovery of new bugs, problems and vulnerabilities, so it is mandatory to check whether the vendor has a serious patching system, and a serious vulnerability-disclosure system, in place. Vendors that never provide patches are vendors that:
• create perfect software
• have dismissed the product/service
• are not trustworthy
Beside the first point, which is unrealistic even for Linux and Apple lovers, the rest is clear: every vendor has to release patches from time to time, generally in the form of updates.
So the first question should be: do I need to patch the system?
This can be answered just by checking the vendor’s security bulletin or the vendor’s patch/update release system. Most of these systems allow automatic updates, which is good for ALMOST all situations.
Of course, if a patch is available I should consider whether or not I can apply it. This is a tough question if my system contains legacy components that could be affected by the patch itself.
A good practice is to have a test environment, check that everything is right, and only after those tests apply the patch to the production environment, keeping in mind that test and production environments are seldom 100% identical, so something could be missed.
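The cycle just outlined (check the bulletin, decide, test, then promote) can be summarized as a tiny decision function; the outcomes and their ordering are my reading of the text, not a standard procedure:

```python
def patch_cycle(affected: bool, vendor_has_patch: bool, test_passed: bool) -> str:
    """Return the next action in the patching cycle for one system."""
    if not affected:
        return "no action: no reported issue for this system"
    if not vendor_has_patch:
        return "escalate: no fix available from the vendor"
    if not test_passed:
        return "hold: patch failed in the test environment"
    return "deploy: apply to production (after a snapshot/backup)"

# A vulnerable system, a vendor patch available, tests green:
print(patch_cycle(affected=True, vendor_has_patch=True, test_passed=True))
```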

To patch or not to patch

Due to the intrinsic risks related to patching, therefore, it is better to apply patches when there is a real need.
Clearly, in the absence of reported issues, there is no need to patch.
But if a serious vulnerability or compatibility issue is reported, patching is in our best interest. Here we can face two situations:

  • a patch is available from the vendor

  • no patch is available

In the first case we should start the patching cycle; in the second we have a big problem: we face a security issue that is not addressed by the vendor.
In this case, luckily, we can opt for third-party solutions that apply “virtual” patching to the system.
Virtual patching cannot address compatibility issues, but it can save us from immediate security risks.
In any situation it is advisable to test, and to save your work before applying any patch. How many system engineers have I seen crying because they had not taken a snapshot…

Note: Testing
• It is always a good precaution to test patches before putting them into production, to avoid unpleasant issues related to unforeseen and unexpected incompatibilities.
• In virtual environments, take snapshots of the machines to be updated before and after applying the patch:
– snapshot
– update
– fingers crossed
– if all goes well, take a new snapshot; otherwise restore the latest saved copy
• In physical environments do the same, better if with bare-metal backup software or similar.
• For appliances or networking equipment it is safer to isolate the device, to avoid repercussions on the entire network in the event of problems.


Note: Virtual Patching

Virtual patching refers to the introduction of third-party software able to “close” flaws in the absence of an update/patch.

  • Virtual patching can be applied in both physical and virtual environments, and can be either agent-based or agentless
  • Virtual patching is not an alternative to patching, but it can be a viable solution in the event of end-of-life support by the vendor (e.g. Windows XP and Windows 2000/2003)
  • Virtual patching covers security needs, not compatibility ones
  • Virtual patching can be used as a supporting technology to cover complex vulnerabilities that depend on OS/SW component dependencies


Hardening, my dear


A less frequent but even more important activity is hardening.
Hardening is again an activity that tries to address the possible vulnerabilities of a system. The basic idea is that what you do not have cannot harm you, so hardening essentially means turning off, deleting, erasing, uninstalling, blocking or wiping out everything that is not essential to the service provided by a specific server or device.


What Hardening means

– Doing hardening means operating on the configuration parameters of a system to “close” all services that are non-essential to the assigned task, decreasing the attack surface.

In other words, it means checking every service and activity of the system and closing those that are not useful for the intended purpose.

What should I work on?

Hardening is something we should do on everything. If a service, a right or a tool is not essential, it could harm the system and therefore should be removed.

This holds for everything, from router protocols to OS services.

Basically everything that is subject to patching is subject to hardening.

All that is subject to configuration and is software/firmware based:

  • Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
  • Appliance OSes and networking-device firmware/OS
  • Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
  • Virtualization platforms (VMware, Hyper-V, VirtualBox…)
  • Drivers, middleware and managed components (SCADA, ICS…)

Why Hardening

The reasons behind hardening relate to two aspects: security and performance. For once they go together: getting rid of useless services lets the system devote a wider set of resources to its intended purpose.
So hardening is good for security and also good for performance.


Hardening is essentially a configuration activity that must be carried out to allow the optimal functioning of a platform. Hardening has repercussions in terms of both performance and safety.
What doing hardening means:

  • Allowing operating systems, software and services to carry out only the wanted operations/services
  • Closing all non-essential services and operations
  • Ensuring that the correct access procedures are carried out by users, applications and services on the critical resources
  • Restricting or monitoring non-essential activities of applications and services

Hardening Methodology

Hardening, then, is a great idea, but it comes with a big problem: it is really hard to do.
Hardening a system requires a methodical and thorough analysis. Every aspect of the operating system/application in question should be taken into account, determining:

  • which services are essential and which are not
  • which characteristics of the essential services are allowed and which should be blocked
  • which configuration parameters need to be modified
  • what the chain of objects/services/applications called by the service/application is

Determining which services are essential and which are not is no easy task since, sometimes, a service that seems useless can be used for hidden background tasks. Sometimes the mere presence of a service is checked even if it is not used, and sometimes software engineers and developers simply got it wrong, making useless calls that, when stopped, make the system unpredictable.
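The first step of that analysis, separating essential services from the rest, amounts to an allowlist comparison. A minimal sketch, where both the allowlist and the running-service inventory are invented examples (a real one would come from the OS or an asset-management tool):

```python
# Assumed allowlist of services essential to this server's task:
ESSENTIAL = {"sshd", "nginx", "postgres"}

# Assumed inventory of what is actually running on the host:
running = {"sshd", "nginx", "postgres", "smtp", "telnetd"}

def hardening_candidates(running_services, essential_services):
    """Services running but not essential: each must be closed or justified."""
    return sorted(running_services - essential_services)

to_review = hardening_candidates(running, ESSENTIAL)
# to_review -> ['smtp', 'telnetd']
```

The hard part, as the text says, is building the allowlist itself; the comparison is the easy bit.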

Chains of services are even more complicated to determine, since in the absence of documentation a service may be called on specific occasions without a clear, direct cause/effect link.

Moreover, in the case of an OS, all applications (standard, legacy, running, or installed but not running) that belong to the operating system in question must be considered:

  • For each application it is necessary to define whether it is allowed to run or not
  • For each application that is allowed to run, it is necessary to define the list of allowed and disallowed characteristics, as well as the scope and execution environment

It is mandatory to change the configuration parameters of every application to get the required results.

Although it may seem odd, even non-running applications can be subject to risks. Dormant services can be woken up by malicious software after an infection; typical examples are SMTP and web services that can be partially or totally activated by botnet malware. The same stands for encryption services used by some ransomware, and so on.

Alas, all this requires a great amount of knowledge.

Hardening is difficult to do because:

– it requires detailed knowledge of the application/software environment
– it requires thorough knowledge of all the connections between the various components
– it requires extremely precise control of all the installed software, access and utilities
– not all configurations can be reached through a GUI or via CLI commands
– sometimes writing code is required
– …

But there are solutions on the market that help you get the result with an easier approach. What is required, as usual, is to know what you want and why; that is something no third-party software can provide.


Security and Datacenters was originally published on The Puchi Herald Magazine