Dear CISO, please talk about business with your board, not technicalities.


Dear CISO and Board

I think we should always consider our job as part of the business. We have finally started to treat cyber security and data protection as serious issues, but now the question is how we evaluate risk in our analyses and business plans…

The current risk-analysis documentation and reports presented to most boards use just a flag (high, medium, or low risk) but do not seem to specify any metric. Without metrics it is hard to make a sound evaluation or comparison, so the question raised by any board member ("is a high risk in XYZ as dangerous as a high risk in ABC?") cannot have a credible answer other than "perception", which is subjective if not backed up by facts.

Security metrics are, as of now, a subject of interpretation and discussion, but we can simplify the approach to make security analysis credible and understandable.

First of all, to answer the board's question, what is needed is a common evaluation framework that includes easy-to-read metrics and makes comparisons understandable even to people who are not cyber security experts, as most of the board members who must take decisions based on those inputs are.

This goes beyond the tasks of the Cyber and Information Security Officer; it requires the whole company to start thinking about its cyber security and digital assets. But unless the approach is to be purely reactive, your inputs should be provided to start outlining this framework and its metrics.

Alas, cyber security risk analysis is anything but simple, especially when related to business impact, since it requires an understanding of both the cyber security issues and the business in which the risk is analyzed.

There are two main aspects that need sound and readable metrics:

  1. Risk evaluation
  2. Risk consequences

The first item defines how "risky" something is. Measuring a risk requires, to simplify a complex matter, the ability to evaluate the probability that something happens, the magnitude of the damage, and the cost of fixing things. Magnitude of damage and cost to fix are bound to risk consequences, which are, basically, the metrics that can be used in a board meeting to describe the risk in terms understandable to a non-cyber-security-aware audience.
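To make the point concrete, here is a minimal sketch of such a metric, assuming a simple expected-loss model (probability times damage, plus remediation cost). The function name and all figures are illustrative assumptions, not your company's actual data.

```python
# Illustrative sketch: a risk score as probability x damage + cost to fix.
# All names and figures are fictitious, for comparison purposes only.

def risk_exposure(probability: float, damage: float, fix_cost: float) -> float:
    """Expected loss: likelihood-weighted damage plus remediation cost."""
    return probability * damage + fix_cost

# Two hypothetical risks, expressed in the same monetary unit:
risk_xyz = risk_exposure(probability=0.30, damage=1_000_000, fix_cost=50_000)
risk_abc = risk_exposure(probability=0.05, damage=5_000_000, fix_cost=200_000)

# A "high risk" flag alone would hide that ABC's exposure exceeds XYZ's:
print(f"XYZ exposure: {risk_xyz:,.0f}")
print(f"ABC exposure: {risk_abc:,.0f}")
```

With a shared formula like this, "high risk in XYZ" and "high risk in ABC" become directly comparable numbers instead of subjective flags.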

I will not enter deeply into the realm of risk evaluation here; you have a deep knowledge and understanding of the issue and I do not want to bore you with my considerations. But let me note that there does not yet appear to be a common evaluation framework shared across your company's groups and business units on the matter.

If risk evaluation is one key, but mostly technical, aspect, let me point out something about the risk-consequences aspect that can be of use in future business plans, to make them valuable from a business perspective and not just a sterile exercise.

Risk consequences can be presented, basically, along a few related dimensions. The aim here is to define measures that, if a cyber security incident occurs, allow your company to describe it and therefore compare it with another event.

It would make sense, in my view, to present any risk analysis to the board and other managers in these terms:

1)     Monetary cost in terms of lost revenues

2)     Monetary cost in terms of live costs

3)     Impact on market penetration

4)     Impact on brand perception

This would allow you to compare an XYZ incident to an ABC incident, answering the board's question, and moreover provide a metric for understanding where and why to invest in one area instead of another.

Let me quickly describe the 4 points.

1)     Monetary cost in terms of lost revenues

This is a dimension easily understood by sales and financial managers. It basically means estimating how many direct selling activities will be impacted by the incident. The timeframe taken into account is key, of course, since events can have different effects in the immediate, medium, and long term.

The evaluation can be presented both as a net amount of money and as a percentage of the budget; both help in understanding the impact.

2)     Monetary costs in terms of live costs

This basically means accounting for all the live costs related to the incident, such as fines, legal issues, HW/SW replacements, people working on the issue, and so on. It is important to separate the costs of the incident from the lost revenue related to it.

3)     Impact on market penetration

This is a metric that makes sense for a vendor trying to expand its market footprint, as your company is. It is strictly connected to direct revenues, but also to growth expectations. It can be represented as a percentage of market share.

4)     Impact on brand perception

This last item is the hardest to measure, since it depends on the metric used to value the brand inside your company. Since I have never been told what metrics are used, I can only suggest presenting the percentage variation relative to the value before the incident.

As far as I know, this has not been done before in Cyber and Information Security business plans. It could be something sound to present in your future BP, or a task for the Cyber and Information Security Office to implement this year, if the structure is not yet able to do this kind of analysis and presentation.

With those four points it would be possible both to:

make comparisons between risks

and

provide the board with an output that can objectively be used to make decisions.

Let us take, as an example, the privacy risk related to GDPR non-compliance.

This approach would allow you to present in the BP a set of data to justify expenses and investments every time a risk is presented; something like the following.

Let me explain the table. Of course the values are fictitious, and the timeframes can be adjusted to your reality, but I think this gives a basic understanding of what I suggest.

GDPR non-compliance:

1)     Customer personal data breach: column headers

Short-term impact (1-3 months)

This is what happens immediately after the problem, when you have to set up the operations required to make things run again somehow. If you have an Emergency Response Team (you should), this is where you put its costs…

Mid-term impact (3 months – 1 year)

Let's be honest: if it is a minor breach, things may be solved quickly; but if the problem is bigger, such as your marketing database being exposed, you will also start counting legal costs, fines, and the impact on your market…

Long-term impact (1-3 years)

Things have an impact even after your BP; life is not restricted to your date range, and neither is business, so you should be able to make predictions and analyses well beyond the simple one-year timeframe. This is common in any business, so it applies here too.

2)     Customer personal data breach: row headers

Revenue losses

These are the revenue losses you will face against your budget expectations.

Live costs

This contains what you have to pay: your direct costs, covering, as an example:

  • HW/SW replacement
  • Fines
  • Estimated legal costs if "damaged users" sue you
  • Ransom paid
  • Possible rise in cyber security insurance policy fees
  • Production stoppage costs
  • People working to solve the problem (forensic analysts, cyber experts, lawyers…)

Impact on Market Penetration

This is where you put how the incident will damage your business in terms of market presence and future outlook.

Impact on Brand Perception

This is how your credibility will be affected.

With this kind of matrix it would be easy to make correct evaluations and comparisons. I am not sure this can be done with the current analysis tools, but eventually it would be a sound element to put in a BP, for a sound future approach to cyber security risk evaluation.
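The matrix described above can be sketched in a few lines of code; a minimal example, assuming the four consequence dimensions as rows, the three timeframes as columns, and invented values in thousands of currency units (the `breach` data is purely illustrative):

```python
# Minimal sketch of the proposed consequence matrix for one incident.
# Rows: the four dimensions; columns: the three timeframes. Fictitious values.

TIMEFRAMES = ["short (1-3 months)", "mid (3-12 months)", "long (1-3 years)"]
DIMENSIONS = ["revenue losses", "live costs",
              "market penetration impact", "brand perception impact"]

breach = {
    "revenue losses":            [120, 400, 250],
    "live costs":                [80, 300, 100],
    "market penetration impact": [0, 150, 300],
    "brand perception impact":   [50, 200, 400],
}

def total_impact(matrix):
    """Sum every cell: one number to compare incident against incident."""
    return sum(sum(row) for row in matrix.values())

for dim in DIMENSIONS:
    cells = "  ".join(f"{v:>6}" for v in breach[dim])
    print(f"{dim:<28}{cells}")
print(f"total estimated impact: {total_impact(breach)}")
```

Building a second matrix for another incident and comparing the totals (or the per-dimension rows) is exactly the comparison the board is asking for.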

regards

Antonio


Dear CISO, please talk about business with your board, not technicality. was originally published on The Puchi Herald Magazine

Security and Datacenters


A datacenter is a collection of several different elements, all working together to offer a platform for our digital needs.
A datacenter is actually a mix of different elements, some logical and some physical; it is not a mere collection of elements but a complex system with many interactions.
We can easily see, inside the datacenter, cables, racks, servers, network equipment, storage units, and so on, but all of them are there (or should be there) for a purpose and are interconnected.
A big part of a datacenter is not even visible: the software and data running in it, going in and out through its connections, disks, memory, and CPUs.
So along with the physical infrastructure we also have services: processing power, storage, and connectivity. All of this makes the datacenter what it is.

Security in the datacenter


If we think our datacenter is valuable, we should think about how to protect it.

The protection of a datacenter should take into account all of its components, physical and virtual.

In modern datacenters the two dimensions are interconnected, and there is not one without the other.

I usually place in the physical domain all the issues coming from planning correct disaster recovery and backup solutions, since, generally speaking, they require a hardware approach. Virtualization has moved these needs to the software level, so I know this is quite an arbitrary assumption. But, as an example to explain my point of view: is there any disaster recovery in place if the DR datacenters are in the same building? Likewise, is any backup policy sound if the backup units are kept in a physically unsecured environment (without the same level of precaution and redundancy that should be given to DR)?

It is clear that to secure a datacenter it is mandatory to provide a safe physical environment, so power lines, cooling, and physical access control are all terms of the equation.

If the floor cannot bear the racks, that is clearly a security issue, as it is if the datacenter overheats or if the power lines cannot provide the needed energy or enough flexibility (which can vary according to usage). The same can be said of the UPS units that are critical to maintaining correct power. Of course, the implications of many of these aspects go beyond the strictly physical environment, since almost all of them require control software to be monitored.

What is usually under-evaluated is that the entire physical environment nowadays has sensors that talk to the logical one; those data can be really useful for understanding the general safety of the system, even from a cyber security perspective. A surge in power requirements, or a spike in heat or CPU/disk load, can signal an impending failure as well as an ongoing attack.
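As a sketch of how such sensor data could be exploited, the following flags readings that deviate sharply from a sliding window of recent samples; the readings, window size, and threshold are all hypothetical choices, not a production monitoring design.

```python
# Sketch: flag a spike in a physical sensor feed (e.g. power draw in kW)
# by comparing each sample against the mean/deviation of a sliding window.
from statistics import mean, stdev

def spikes(samples, window=5, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the preceding window; a spike may be a failure symptom
    or a sign of an ongoing attack, and deserves a look either way."""
    flagged = []
    for i in range(window, len(samples)):
        ref = samples[i - window:i]
        m, s = mean(ref), stdev(ref)
        if s > 0 and abs(samples[i] - m) > threshold * s:
            flagged.append(i)
    return flagged

power_kw = [41, 42, 40, 41, 42, 41, 40, 78, 41, 42]  # fictitious readings
print(spikes(power_kw))
```

The point is not the statistics but the plumbing: physical-environment telemetry fed into the same monitoring used for cyber security events.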

Logical security

Going deeper into physical security for datacenters is out of my scope today; I would like to look at the logical aspect of datacenter security, since it collects almost all (if not all) of the classical cyber and IT security requirements.

Server security

So let's take a quick look at server security. The term "server" pertains to the logical area: when we talk about servers we are not talking about a physical machine but about a logical entity providing a service.
If we look for a definition:

  1. In information technology, a server is a computer program that provides services to other computer programs (and their users) in the same or other computers.
  2. The computer that a server program runs in is also frequently referred to as a server (though it may be used for other purposes as well).
  3. In the client/server programming model, a server is a program that awaits and fulfills requests from client programs in the same or other computers. A given application in a computer may function as a client with requests for services from other programs and also as a server of requests from other programs.

So a server is definitively a logical entity.
To secure a server it is usually necessary to perform, in the first instance, a hardening analysis and to set up a correct patch-management procedure. This is true, actually, for any device and software running in the IT realm. Alas, a lot of the time this is one of the first neglected areas.
Due to the importance of these elements I will go into hardening and patching later, but I would like to stress that they are mandatory requirements for a minimum security level.
There are many different OS platforms that can be used as servers; classical OSes like Windows, Unix, or Linux all require careful management.
Since most OSes are multipurpose (they offer more than one service, e.g. file, application, and web server on the same infrastructure), it is essential to extend security considerations to all the resources in place: protocols, access management, users, and so on.
Of course, since a datacenter lives on its cross-relationships, we should carefully understand what is related to what, in order to avoid, as an example, changing some parameters and affecting something else.
Alas, all the necessary controls and configuration can require deep knowledge of the platform and beyond; as an example, implementing anti-malware and antivirus, or encryption, could require third-party software.
This is also true for virtual hosting environments such as VMware, VirtualBox, Hyper-V, and so on.
The good news is that the market offers many solutions to help us put correct security plans in place, but a correct vision and a sound process implementation are unavoidable.

 

Application Security


If "OS" servers have their requirements, it is obvious that all other servers have the same needs. Whether we talk about a database or a web server, it needs security. Since those application servers live on top of the OS, the security they require is "on top" of the OS's needs; securing just one of the two (the application server or the OS) will leave the system unsecured.
A classical example is the implementation of AV/AM (antivirus/antimalware) solutions. These should be applied not only to the guest OS but also, when available, to the running application server, since the security issues they have to cover can be dramatically different.

Network Security

Among all the security considerations in the datacenter, the component dedicated to network security cannot be avoided.
Network security is key in the datacenter, since all communications ultimately flow inside it, and all the components need to communicate with one another.
Even in a virtual environment, the hypervisor has to manage network communication between the various internal virtual environments, and their communication with the external ones. Speed is one of the key elements inside a datacenter, due to the great amount of data flowing, but depending on the nature of the services provided by the datacenter itself, other considerations may be required. Since the datacenter, as an example, is a collection of servers running services, an IPS/IDS can make absolute sense, as can implementing correct monitoring (perhaps using SIEM and big-data analytical tools).

 

Access Management and IAM


Along with network security, another neglected area is user management. Identity, user access, and rights management should be big concerns for a datacenter manager, because poor management of those can affect the entire datacenter.
Too often little importance is given, as an example, to the use of administrative rights on contiguous platforms, or to "frozen" users (from cancelled accounts to never-used ones, not to mention service accounts), all with sets of rights that are usually not correctly monitored.
This is true not only for datacenters: even end-user devices, such as laptops or company mobile phones, usually suffer from such lapses of memory.
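A minimal sketch of such a "frozen user" audit, assuming account records with hypothetical `name` and `last_login` fields and an arbitrary 90-day idle cutoff:

```python
# Sketch of a stale-account audit: flag accounts whose last login is older
# than a cutoff, or that were never used. Data and field names are invented.
from datetime import date, timedelta

def stale_accounts(accounts, today, max_idle_days=90):
    """Return names of accounts idle for more than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["last_login"] is None or a["last_login"] < cutoff]

accounts = [
    {"name": "alice",   "last_login": date(2016, 1, 10)},
    {"name": "bob",     "last_login": date(2015, 6, 1)},   # idle for months
    {"name": "svc_bak", "last_login": None},               # never logged in
]
print(stale_accounts(accounts, today=date(2016, 1, 20)))
```

Running a check like this periodically, and reviewing the rights each flagged account still holds, addresses exactly the "frozen user" problem described above.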

 

Protocol Management


And if we have concerns about users, we should have concerns about protocols too. At every level they are the media through which our systems communicate, so their management should be in our best interest. Alas, protocol management is usually left to network devices (e.g. implementing firewall rules), neglecting one of the basic rules of security: what is unmanaged is prone to risk.
There are tools to manage and secure DNS (besides DNSSEC, which is a security extension of DNS services), and DHCP should be managed as well. Why should we lease an internal address (at least while we are in the IPv4 realm, though this is even more important with IPv6) without any check or control on who made the request, leaving all the controls to the network devices afterwards? This is a bad security approach. DDI solutions on the market try to address these problems.

 

Patching and Hardening

 
No matter whether we are dealing with a logical server, a physical server, software, a device, or an appliance of any kind, everything should be under patching and hardening management.

Patching


I will first look at that recurring nightmare, patching. Years ago the idea was "if the system runs, do not touch it". It is still a mantra for many datacenter and IT managers; alas, this approach gets more dangerous every day and should really be a concern.
Patching is a requirement nowadays, mainly for security reasons. We have left the naïve era when software was considered almost perfect; now we are in the more realistic era of flaw-ridden software.
Every day new vulnerabilities are disclosed, affecting everything, even our cars, so patching is no longer an option we can avoid.
What does patching mean?
Patching means making configuration or code changes designed to secure the software against existing and potential zero-day vulnerabilities.
Patches are all around us, but what is a patch, actually? A patch is usually software used to amend a problem in code or configuration. Patches are provided, usually, by vendors in the form of recurring or emergency updates.
Of course, updates contain not only security patches; they may also address compatibility problems and bugs, or add new features.
Those updates, at least the security ones, are usually provided within a service agreement, according to the license duration of the software/device.
From time to time vendors close support for older releases, for several reasons: not only commercial ones, but also because the level of patching would become unsustainable. It is what happened to Windows XP and Windows 2003/2000, and, to stay in the Windows realm, to the older versions of Internet Explorer and the embedded browser distributed with Android before Chrome.

Note:
Microsoft is finally moving on from its aging Web browsers as Internet Explorer 8, 9, and 10 will receive their last security updates and enter end-of-life on January 12. Users will then see a tab with a download link to the most current Internet Explorer available for the operating system.
End-of-life doesn’t mean older versions of Internet Explorer suddenly stop working, and there are ways to turn off Microsoft’s nagging reminder to update. But not switching to a supported browser is a colossal security mistake considering that attackers frequently target unpatched vulnerabilities in Internet Explorer. A regularly updated browser is still a critical line of defense against Web-based attacks.
Things in the datacenter follow the same rules, so we have to keep an eye on them.

What should I patch?

Since patching means solving problems related to code or configuration, everything is subject to patching.
– All that is subject to configuration and is software/firmware based:
• Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
• Appliance OSes and networking device firmware/OS
• Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
• Virtualization platforms (VMware, Hyper-V, VirtualBox…)
• Drivers, middleware, and managed components (SCADA, ICS…)
• …
The reason for all this patching is always the same: to address

• Critical and non-critical vulnerabilities
• SW/HW compatibility
• New services
• Bug corrections
• …

The Patching Cycle

Patching a system is not a one-time activity; it requires a cyclical approach.

The real need for patching is driven by the discovery of new bugs, problems, and vulnerabilities, so it is mandatory to check whether the vendor has a serious patching system, and a serious vulnerability-disclosure system, in place. Vendors that never provide patches are vendors that:
• created a perfect piece of software
• dismissed the product/service
• are not trustworthy
Since the first point is unrealistic even for Linux and Apple lovers, the rest is clear: every vendor has to release patches from time to time, generally in the form of updates.
So the first question should be: do I need to patch the system?
This can be answered just by checking the vendor's security bulletins or the vendor's patch/update release system. Most of those systems allow automatic updates, which is good for almost all situations.
Of course, if a patch is available, I should consider whether or not I can apply it. This is a tough question if my system contains legacy components that could be affected by the patch itself.
A good approach is to have a test environment to check that everything is right, and only after those tests apply the patch to production, keeping in mind that test and production environments are seldom 100% identical, so something could be missed.
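The first step of the cycle ("do I need to patch?") can be sketched as a version check against an advisory list; the advisory data, component names, and version tuples below are invented for illustration, not a real vendor feed.

```python
# Sketch of the "do I need to patch?" check: compare installed component
# versions against the lowest fixed version from (hypothetical) advisories.

advisories = {
    # component: lowest version that contains the fix (illustrative data)
    "openssl":   (1, 0, 2),
    "webserver": (2, 4, 18),
}

installed = {"openssl": (1, 0, 1), "webserver": (2, 4, 18)}

def needs_patch(installed, advisories):
    """Return components whose installed version predates the fixed one.

    Tuples compare element by element, so (1, 0, 1) < (1, 0, 2)."""
    return [c for c, fixed in advisories.items()
            if c in installed and installed[c] < fixed]

print(needs_patch(installed, advisories))
```

In a real environment the advisory side would be fed by the vendor's security bulletins; the decision logic, however, stays this simple: patch what is behind the fix, test first, then promote to production.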

To patch or not to patch

Due to the intrinsic risks related to patching, therefore, it is better to apply a patch when there is a real need.
Clearly, in the absence of reported issues, there is no need to patch.
But if a serious vulnerability or compatibility issue is reported, patching is in our best interest. Now we can face two situations:

  • the presence of a patch from the vendor

and

  • the absence of the patch.

In the first case we should start the patching cycle; in the second we have a big problem: we are facing a security issue that is not addressed by the vendor.
In this case, luckily, we can opt for third-party solutions that apply "virtual" patching to the system.
Virtual patching cannot address compatibility issues, but it can save us from immediate security risks.
In any situation it is advisable to test, and to save your work before applying any patch. How many system engineers have I seen crying because they had not taken a snapshot…

Note: Test
• It is always a good precaution to test patches before putting them into production, to avoid unpleasant issues related to unforeseen and unexpected incompatibilities.
• In virtual environments, take snapshots of the machines to be updated before and after applying the patch:
  • snapshot
  • update
  • fingers crossed
  • if all goes well, take a new snapshot; otherwise restore the latest saved copy
• In physical environments do the same thing, better with software for bare-metal backup or similar.
• For appliances or networking equipment it is safer to isolate the device, to avoid repercussions on the entire network in the event of problems.


Note: Virtual Patching

Virtual patching refers to the introduction of third-party software able to "close" a flaw in the absence of an update/patch.

  • Virtual patching can be applied to both physical and virtual environments, and can be either agent-based or agentless
  • Virtual patching is not an alternative to patching, but it can be a viable solution in the event of end-of-life of vendor support (e.g. Windows XP and Windows 2000/2003)
  • Virtual patching covers security needs, not compatibility ones
  • Virtual patching can be used as a support technology to cover complex vulnerabilities that depend on OS and SW component dependencies

 

Hardening, my dear

 

A less frequent but even more important activity is hardening.
Hardening, again, tries to address the possible vulnerabilities of a system. The basic idea is that what you do not have cannot harm you, so hardening essentially means turning off, deleting, erasing, uninstalling, blocking, or wiping out everything that is not essential to the service provided by a specific server or device.


What Hardening means

– Hardening means operating on the configuration parameters of a system to "close" all services that are non-essential to the assigned task, decreasing the attack surface.

In other words, it means checking every service and activity of the system and closing the ones that are not useful for the intended purpose.

What should I work on?

Hardening is something we should do on everything. If a service, a right, or a tool is not essential, it could harm the system and therefore should be removed.

This is true for everything, from router protocols to OS services.

Basically, everything that is subject to patching is subject to hardening.

All that is subject to configuration and is software/firmware based:

  • Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
  • Appliance OSes and networking device firmware/OS
  • Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
  • Virtualization platforms (VMware, Hyper-V, VirtualBox…)
  • Drivers, middleware, and managed components (SCADA, ICS…)
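The "close what is not essential" idea can be sketched as a comparison between the services found running on a host and a per-role allowlist; the roles, service names, and allowlist below are hypothetical examples, not a recommended baseline.

```python
# Sketch of a hardening pass: compare the services running on a host
# against an allowlist of services essential to its role. The role map
# and service names are invented for illustration.

ESSENTIAL = {
    "web server": {"httpd", "sshd", "ntpd"},
    "database":   {"postgres", "sshd", "ntpd"},
}

def to_disable(role, running):
    """Return the running services that the host's role does not justify."""
    return sorted(set(running) - ESSENTIAL[role])

running = ["httpd", "sshd", "ntpd", "smtpd", "cupsd"]  # found on the host
print(to_disable("web server", running))
```

The hard part, as the next sections explain, is building a correct `ESSENTIAL` map: deciding what each role truly needs is exactly the methodical analysis hardening requires.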

Why Hardening

The reasons behind hardening relate to two aspects: security and performance. For once, they go together: getting rid of useless services gives the system a wider set of resources for its intended purpose.
So hardening is good for security and also good for performance.

Hardening is essentially a configuration activity that must be performed to allow the optimal functioning of a platform. Hardening has repercussions in terms of performance and safety.
• What does hardening mean:

  • Allowing operating systems, software, and services to carry out only the intended operations/services
  • Closing all non-essential services and operations
  • Ensuring that correct access procedures are followed by users, applications, and services on critical resources
  • Restricting or monitoring non-essential activities of applications and services

Hardening Methodology

Hardening, then, is a great idea, but it comes with a big problem: it is really hard to do.
Hardening a system requires a methodical and thorough analysis, taking every aspect of the operating system/application in question into account and determining:

  • which services are essential and which are not
  • which characteristics of the essential services are allowed and which should be blocked
  • which configuration parameters must be modified
  • what the chain of objects/services/applications called by the service/application is

Determining which services are essential and which are not is not an easy task since, sometimes, a service that seems useless is used for hidden background tasks. Sometimes the mere presence of a service is checked even if it is not used, and sometimes software engineers and developers simply got it wrong, making useless calls that, when stopped, make the system unpredictable.

Chains of services are even more complicated to determine, since in the absence of documentation a service may be called only on specific occasions, without a clear, direct cause/effect link.

Moreover, in the case of an OS, all applications (standard, legacy, running, installed but not running) that belong to the operating system in question must be considered:

  • For each application it is necessary to define whether or not it is allowed to run
  • For each application that is allowed to run, it is necessary to define the list of allowed and disallowed characteristics, as well as the scope and execution environment

It is mandatory to change the configuration parameters of every application to get the required results.

Although it may seem odd, even non-running applications can be subject to risks. Dormant services can be woken up by malicious software after an infection; typical examples are the SMTP and web services that can be partially or totally activated by botnet malware. The same stands for the encryption services used by some ransomware, and so on.

Alas, all this requires a great deal of knowledge.

• Hardening is difficult to do because:

– It requires detailed knowledge of the application/software environment
– It requires a thorough knowledge of all the connections between the various components
– It requires extremely precise control of all the installed software, access rights, and utilities
– Not all configurations can be reached through a GUI or via CLI commands
– Sometimes writing code is required
– …
But there are solutions on the market that help you get the result with an easier approach. What is required, as usual, is to know what you want and why; that is something no third-party software can provide.


Security and Datacenters was originally published on The Puchi Herald Magazine

A lesson from VW: Vendors, reputation is everything


I just jumped on the news: between some soccer player affair and the wonderful Rugby World Cup, I put my eyes on the VW scandal. OMG, they lied to customers and government agencies… Why am I not at all surprised?

Let's be clear: I have nothing against VW; it is a great brand with great products. But it is a company driven by profit, and so profit is its biggest interest, above ethics and other considerations. This is why governments and consumers need to be vigilant and force companies to act fairly. It is surprising that something like this comes from a German vendor, well known everywhere for the quality of their products, but it simply shows how difficult it is to be sure about quality everywhere.

Whether it is a hackable entertainment system in your car that allows an attacker to take control of your brakes, or a hacked pollution result from your diesel engine, both show that quality and control are mandatory requirements for every vendor of any kind.

There are some interesting outcomes in this story:

We should be skeptical about everything; the moment we lower our attention, the problems arrive. So, in the end, the more a vendor is under scrutiny, the better it is for the customers. Even a major brand can make mistakes, willingly or not; the vendor has to take full responsibility and make every possible effort to avoid similar accidents.

From a vendor's perspective, investing in quality is mandatory if they want to present themselves as a value-added player and not the cheap option. But quality is a complex issue that requires careful management of product, branding, and communication.

"It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you'll do things differently." (Warren Buffett)

And the basic point is that once the damage is done, the recovery will be painful and hard, and it could burn all the profit made thanks to the cheat.

The same thing happens with security, and information security (which is my field) is no exception.

Security, from a customer's point of view, should be a basic requirement, not just an add-on. Likewise, for vendors, security should be one of the core pillars, because it is strictly related to the quality of what a vendor does.

So let us make some considerations:

Was the VW affair something done without the knowledge of senior management?

If so (though at the moment I doubt it), this means that senior management did not put in place the correct set of quality controls. Quality should be a serious internal affair, and it means that you should know, check and control the output of your systems.

But to be able to check quality you should know exactly how to grade it, and what could come out of a non-compliance. So if your process needs to check the emission level of your engines, you should be sure this is checked, tested and somehow cross-referenced by external entities before the government agency checks.

If you do so, you can be fairly sure your results are consistent with your design, and can assume that a non-compliance is really related to unpredictable events.

If you do not put something like that in place (and it is important, since it is a mandatory requirement of a specific market, well, of a lot of markets actually), you are guilty and you did not do your job correctly.

You made mistakes because you did not correctly check the risks and the consequences. You made mistakes because you did not put in place the correct chain of controls. You made mistakes because you basically did not do your job. There is no excuse for bad management: managers are paid to take risks and make decisions, so they are fully responsible. The fact that they do their job badly cannot be a reason to absolve them.

It is a pity there will be casualties for these mistakes that will hurt working people, so do not think for a moment this is something that can be taken lightly. Every worker who loses his job because of this should be accounted on those managers' shoulders.

Was the VW affair something done with the knowledge of senior management?

Well, this is a completely different thing. Or is it? Is being unable to do your job worse than willingly trying to scam customers and governments? Because this is what we are talking about.

If higher management knew this, it means they were willingly trying to scam their customers to raise their sales while lowering costs. There is nothing bad in wanting to raise sales and lower costs, as long as you do it in a fair, ethical and legal way; I am not sure it can be justified if done against the law (the comment is sarcastic, for the ones who didn't get it).

So basically this means that management did this math (I know I am oversimplifying it):

cost without compliance = "X"

cost with compliance = "X + Y"

if we sell our product at "Z", our income will be "Z - X" if we are not compliant and "Z - (X + Y)" if we are compliant.

so we earn more by not complying.

Now, I hope they at least tried to estimate the cost of being discovered and the probability of being discovered; those two factors should be the basis for analyzing whether the scam is worth trying or not.

So basically they should have corrected the math to at least:

cost without compliance = "X + (cost of being discovered × probability of being discovered)"

Now the cost, apparently, will be as big as this year's revenue for the company (maybe more); this means that the cost of being discovered is almost Z, which rules out any chance of considering the scam worthwhile unless the probability of being discovered is really small, and by really small I mean several orders of magnitude below one.

But this probability couldn't be so small, since there were external controls, and chemistry and physics, working against them.

This means that they were not able to correctly evaluate the cost of non-compliance, and so jeopardized their stream of revenue for nothing.
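The expected-value math above can be sketched in a few lines of code. All the numbers below (X, Y, Z and the detection probabilities) are purely illustrative assumptions, not VW's actual figures:

```python
# Hypothetical expected-value check of the "is the scam worth it?" math.
# X, Y, Z and the probabilities are illustrative numbers, not real figures.

def expected_cost_without_compliance(x, penalty, p_detect):
    """Base cost X plus (cost of being discovered * probability of discovery)."""
    return x + penalty * p_detect

X = 100.0   # cost without compliance
Y = 5.0     # extra cost of being compliant
Z = 120.0   # revenue; the article argues the penalty is roughly as big as Z

compliant_profit = Z - (X + Y)  # 15.0 in this toy example

for p in (0.001, 0.05, 0.5):
    cheat_profit = Z - expected_cost_without_compliance(X, Z, p)
    worth_it = cheat_profit > compliant_profit
    print(f"p={p}: cheating profit {cheat_profit:.2f}, worth it: {worth_it}")
```

With a penalty on the order of Z, even a 5% chance of detection already makes cheating less profitable than compliance in this toy model, which is exactly the article's point: the scam only pays off for vanishingly small detection probabilities.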

This basically means that:

  1. they were willing to scam
  2. they were fairly incompetent

So again, this rules out any chance of mercy; even more, actually, for not being able to do their job.

Is this an isolated case?

Although I would like to say yes, I think this is a common practice in industry, in any country and in any sector. Sometimes the cost of compliance is simply too high; sometimes management takes the risk of non-compliance knowing the eventual costs; most of the time they simply do not care, because it is not in their targets (and we know that sales targets are quarter-based even when we claim a long-term vision, lol).

I am not talking here about honest mistakes; I am talking about willingly not being compliant, or not making every possible effort to run a serious, credible and reliable quality system.

Quality requirements may be mandatory (because of some law), best practice, or simply marketing claims, but respecting the quality baseline is always a serious matter that should be better evaluated.

The VW scandal teaches us that it is a priority for management to act correctly, because the cost of non-compliance can be devastating. And in the connected world we live in, the repercussions are global. Let me also mention social responsibility: a scandal like this can affect the perception of an entire country.

On the bright side, it happened in Germany, so when I talk with my German friends I will be able to say: come on, stop making fun of the FCA car hacking problem, you hacked the EPA…

One last comment: when will we start to admit that truly "clean" cars with combustion engines are still a long way off?

 

trust no one


A lesson from VW: Vendors, reputation is everything was originally published on The Puchi Herald Magazine

Time for enterprises to think about security, seriously


[Image: A map of Europe divided into countries, with EU member states further divided by NUTS level 3 areas, shaded green according to their GDP per capita in 2007 at current market prices in euros; darker green denotes higher GDP per capita. (Photo credit: Wikipedia)]

The EU directive on attacks against information systems gives us no more excuses not to deal with this seriously.

Under the new rules, illegal access, system interference or interception constitute criminal offences across the EU. But while the legislator is working to create tools to address cybercrime as a whole-system problem affecting the EU economy, what are enterprises doing on their side?

The problem is that if enterprises do not align their cyber security defences with the correct approach, any legislation will be useless, because the target will always be too easy.

It makes absolutely no sense to stand up a security system while internally you use Internet Explorer 8 and Windows 7 as the default OS. It makes absolutely no sense to rely on firewalls and IPS/IDS without implementing a correct SIEM infrastructure.

It makes absolutely no sense to try to protect intellectual property without adding a correct DLP system, which also means having categorization and processes in place.

It makes absolutely no sense to beg for security if our Windows environment is poorly designed.

It is time to change our approach to security from an annoying task to a foundation of our systems. We do not question the need for a CFO and for finance-related risk analysis; why is it so hard to do the same for information and cyber security (let me also add privacy)?

The CSO role, and the DPO one, should be at the heart of every board, like the CFO, HR and the other company roles.

But CSOs and DPOs need a high level of independence, since their very roles need to be a source of control and guidance for the entire company (no more, no less than a CFO). And neither role is "IT geek stuff", since both require specific knowledge that goes beyond IT implementation.

And if architectural roles are still a minority in the IT world, we can imagine how hard it can be to find these other figures, who require the ability to see security inside the business and to deal with a wide range of interfaces, not necessarily technical ones.

This is a wide problem that covers all sectors of industry. There is no longer any area safe from IT implications. The Jeep car hack is just another example of how serious the question is.

A correct cyber and information security approach should take into account:

  1. how we protect ourselves from external threats
  2. how we internally implement a security-aware process to deal with the valuable information we handle
  3. how we implement a security-aware production process
  4. how we contribute to the progress of cyber and information safety in our environment and ecosystem.

It does not matter who we are or what we do: those four points can't be avoided anymore.

And they can't be managed as a geek itch to be scratched.

  1. how we protect ourselves from external threats

Point one is historically the first to be implemented, but it is also one of the worst nightmares.

Security is usually seen as a series of patches to be put on systems after the design. And usually this is done by inserting a "firewall", a "next generation firewall" or some other marketing-driven technology, not considering that any insertion is useless if it is not part of a serious context and design.

And the design starts with the simplest questions:

  • what do I want to do with my IT?
  • what is the value of IT for my business?
  • what are the implications of IT in our business processes?

Budget and design should follow accordingly.

But design can't avoid simple facts such as:

Things need to be patched and upgraded to maintain a minimum baseline of efficiency and security

Processes should be designed according to the technology, the people and the business.

If you don't do this, you keep having people surprised by the end of support of old Windows versions, and using Internet Explorer 8 just for "compatibility issues".

If you were trying to prove you do not understand anything about IT, you did a good job; otherwise, well, we have a problem.

2. how we internally implement a security-aware process to deal with the valuable information we handle

We can implement whatever we want, but if we do not have a clear picture of what we are going to protect and why, all the design is useless.

I wrote in the past about how hard it is to understand what and where the value in our data is. Still, many people do not consider that most of our company's intellectual property sits in our email servers or PST files, or that names, addresses and emails have a value for the criminal cyberworld even if we do not value them…

Internal processes are usually badly designed because they do not take into account what needs to be protected and what is needed to protect it:

  • resources
  • people
  • training
  • controls
  • metrics

And of course the most important requirement of all: a KISS implementation (Keep It Simple, Stupid).

Having more than 1,000 processes in place is not a good thing; it is a nightmare.

3. how we implement a security-aware production process

No matter whether we write code, build hardware or do paperwork: how secure is our work? How can we be sure the components we use do what we want and have not been tampered with? If we write code, how can we be sure we write good, secure code? If we make cars, how can we be sure that our entertainment system does not allow someone to take control of the car's brakes?

It is all the same: we need to implement security in our production process. This means being able to set up controls and metrics (again) that span the whole production line, and that also involve those who provide us with services or parts.

Is our financial broker a secure interface? Can we trust those derivatives? Can I trust this code? … It is all about security.

If we deliver anything to anyone, hardware, software or a service of any kind, we have a production system that needs to be secured. Sometimes the law helps us by setting references; sometimes it is our job to create those references.

But if we can't provide a trustworthy production system, why should the customer trust us?

It is not only IT; it is security. IT is just a part of the equation.

4. how we contribute to the progress of cyber and information safety in our environment and ecosystem

And we can't be secure in an insecure world; we are all players in an interconnected world. We can't think of security in the financial system without the collaboration of all players (banks, governments, regulatory bodies), and the same should hold for IT. But we are years behind, so it is time we took our part of the responsibility and started collaborating to make the environment safer.

Kicking out the bad things is a long, never-ending process that requires a lot of effort from everyone; all players should carry a part of the responsibility. If we are not secure, we lower the overall security: if a car can be hacked, it is a danger for all the other cars on the street; in the same way, enterprises that do not take this seriously are a danger for everyone else.

Collaborating, exchanging ideas, listening and learning: there are a lot of different ways to do so.

Activities like the ENISA European Cyber Security Month, held in October, are a great moment to think about security and related issues.

Just look at the weekly themes:

  • Week 1: Cyber Security Training for Employees
  • Week 2: Creating a Culture of Cyber Security at Work
  • Week 3: Code Week for All
  • Week 4: Understanding Cloud Solutions for All
  • Week 5: Digital Single Market for All

This is what I am talking about. I strongly suggest that you all participate, as citizens, companies and public entities. There is much to learn and much to do; it's time.

cheers



Time for enterprises to think about security, seriously was originally published on The Puchi Herald Magazine