Historical memory, what is this about?

I wrote about memories yesterday.

Personal memories and historical memories are the building blocks of our life. We live by our memories since, in the end, it is memories that shape our thinking, our background, our experience, our knowledge.

Personal memories are easy to understand: they are what we lived through direct experience. But those memories are just a portion of the memories we have and have to deal with.

Another great portion of our memories is built by the society we live in, shaped through communication (media, arts, word of mouth, storytelling), school and other tools.

Some of those memories are related to our cultural heritage, some are related to the moment we are living in, and some are simply lies.

Historical memory should be the memory of things that happened before we were born. Since we cannot have direct experience of what happened before we existed, we need something or someone to tell us. I am not talking about past lives or regression to previous ages; I am just talking about history.

It is interesting to notice how historical memories tend to blur the closer they are to us: we have a less clear vision of what happened 30 years ago than of what happened 100 years ago.

The main reason is that recent history is burdened by its political influence on current life, and so it is managed and transformed to serve one need or another. Ancient history is less easily related to our current experience, so it is easier to find a contextual, evidence-based analysis of it.

But going back in time is still not easy: the further we go back, the less we can know, because history needs traces in order to be reconstructed by historians. This is a problem, because we tend to read those traces according to our experience, driven by our need to make them fit our current status and set of beliefs as closely as possible.

This is common in the history of science and in history at large. We tend to use the past to justify our current actions rather than learn its lessons, so we, ridiculously, tend to pass moral judgement on past events but not on current ones.

Historical memory is neither static nor absolute. It is the reinterpretation of the past that we make according to our experience, our culture, our teachings, and our religious, social and political beliefs.

You question this? Although it may sound crazy, there are still people who believe in creationism; they probably consider paleontologists a sort of evil scientist, and I cannot imagine what they think of those who study the first moments of our universe, long before Earth was created.

Historical memory is something that could help us avoid the errors of the past, but it is usually shaped to let us make those mistakes again and again. This is why at school we never study the times when we were the bad guys, only our wonderful and heroic deeds.

Putting our experience into historical perspective is not politically (and socially) convenient: can you imagine what would happen if we really tracked all politicians' promises and checked them against reality?

Luckily, to avoid this reality check, we constantly refuse to listen to the other side; when it is not convenient, the other is just a bad storyteller. It is like hearing comments such as: he works at a university, he is an intellectual, he knows nothing about real life… It may seem that, for some people, being knowledgeable is a bad thing, and in a sense it is, because it can put our belief system at stake.

The problem with historical memory is that part of it is formed when we do not yet have the critical tools to analyze it (let us say until we are teenagers), and we then shape it to fit our constructed set of beliefs. So our shaped historical memories drive us to shape our current memories, in an endless cycle.

I wrote about this in the past; I called it rational acts of faith.

Basically, we choose the sources we want to believe and assume they are the truth. Since that is the truth, everything else must accordingly be a lie.

It can be a religious text (Bible, Quran, Shruti, …) or some political, social or economic literature (Das Kapital, The Wealth of Nations, Mein Kampf, …), but we accept it as a truthful source and discard the rest.

Of course we could easily say that there is not only one side, but hey: either you are with me or you are against me, no other options.


This is common everywhere: in Italy we say that Columbus was Italian and that the telephone was invented by Meucci, not Bell. In Spain they claim Columbus was Spanish, while in the USA it is commonly accepted that Bell invented the telephone, historical facts notwithstanding.

If we cannot find common agreement on such silly questions, imagine how we read recent and past history.

Moreover, to shape our memories we tend to take excerpts out of context: neocons usually refer to the "invisible hand" that should shape the market while forgetting the cultural context in which that assumption was made; at the same time, we fail to understand the vision of the world, and the consequences of the first steps of industrialization and urbanization, that surrounded Karl Marx when he wrote "Das Kapital".

Out of context, anything can be used for whatever purpose we want or need. And out of context it is easy to forget the downside of every story: the epic conquest of the Americas does not mention that the local populations suffered genocide in both North and Latin America. And of course there is no mention in EU schoolbooks of what Europeans did in the colonies.

I wonder how many UK citizens know the role of the UK in the Opium Wars in China.

How many realize that during the Second World War there was a civil war in Italy against the Fascists.

Or what Italians did to the local people in their colonies.

Or how many Japanese know what happened in Manchukuo.

How many Chinese know about the dark years and the millions of deaths during the first decades of the Cultural Revolution (the price of forced industrialization).

Shaping a society's memory to make us look like the good guys has always been a need of every society: in ancient times it was done with epic literature (and some good tricks with historical texts, actually); now we use TV and movies, but nothing really changes. Censorship is also always present, in some cases explicit, in others more subtle, and no country is exempt: not Italy, not the USA, not China. OK, in China it is almost plainly evident.

So we delete, or try to delete, a great part of the historical memories we do not like; this is why, in the end, we are doomed to make the same errors again and again.

And it is interesting to notice that even though we have access to much more information nowadays, we are more closed to critical analysis. Or maybe it is just that the ease of communication gives voice to the worst elements.



Historical memory, what is this about? was originally published on The Puchi Herald Magazine

Security and Datacenters

A datacenter is a collection of several different elements, all working together to offer a platform for our digital needs.
It is a mix of different elements, some logical and some physical: not a mere collection of parts but a complex system with many interactions.
Inside the datacenter we can easily see cables, racks, servers, network equipment, storage units and so on, but all of them are there (or should be there) for a purpose, and they are interconnected.
A big part of a datacenter is not even visible: the software and data running in it, flowing in and out through its connections, disks, memory and CPUs.
So alongside the physical infrastructure we also have services: processing power, storage, connectivity. All of this makes the datacenter what it is.

Security in the datacenter


If we think our datacenter has any value, we should think about how to protect it.

Protecting a datacenter means taking all of its components into account, physical and virtual.

In modern datacenters the two dimensions are interconnected; one does not exist without the other.

I usually place in the physical domain all the issues that come from planning a correct disaster recovery and backup solution, since, generally speaking, they require a hardware approach. Virtualization has moved these needs to the software level, so this is, I know, quite an arbitrary assumption. But, as an example of my point of view: is there any real disaster recovery in place if the DR datacenters are in the same building? Likewise, is any backup policy sound if the backup units are kept in a physically unsecured environment (without the same level of precaution and redundancy that DR deserves)?

It is clear that to secure a datacenter it is mandatory to provide a safe physical environment, so power lines, cooling and physical access control are all terms of the equation.

If the floor cannot bear the weight of the racks, that is clearly a security issue, as it is if the datacenter overheats or if the power lines cannot provide the needed energy or enough flexibility (demand varies with usage). The same goes for the UPS units, which are critical to maintaining correct power. Of course the implications of many of these aspects go beyond the strictly physical environment, since almost all of them require control software to be monitored.

What is usually underestimated is that the entire physical environment nowadays has sensors that talk to the logical one; that data can be really useful for assessing the general health of the system, even from a cybersecurity perspective. A surge in power demand or a spike in heat or CPU/disk load can be the harbinger of a failure as well as the symptom of an ongoing attack.
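As a minimal sketch of this idea (the readings and thresholds below are invented for illustration), the snippet flags sensor samples that jump well above their recent baseline:

```python
from statistics import mean, stdev

def spike_alerts(samples, window=5, sigma=3.0):
    """Flag readings that jump well above their recent baseline.

    samples: numeric sensor readings (e.g. rack power draw in watts),
    window:  how many previous readings form the baseline,
    sigma:   how many standard deviations above the mean counts as a spike.
    """
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        # Guard against a perfectly flat baseline (stdev == 0).
        if samples[i] > mu + sigma * max(sd, 1e-9):
            alerts.append(i)
    return alerts

# Invented readings: steady around 400 W, then a sudden 650 W spike.
readings = [400, 402, 398, 401, 399, 400, 403, 650, 401, 400]
print(spike_alerts(readings))  # [7]
```

In practice such a check would feed a SIEM or monitoring pipeline rather than print indices, but the principle, comparing each reading against a rolling baseline, is the same.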

Logical security

Going deeper into physical security for datacenters is out of my scope today; I would like to look at the logical side of datacenter security, since it collects almost all (if not all) of the classical cyber and IT security requirements.

Server security

So let's take a quick look at server security. The term "server" belongs to the logical area: when we talk about servers we are not talking about a physical machine but about a logical entity providing a service.
If we look for a definition:

  1. In information technology, a server is a computer program that provides services to other computer programs (and their users) in the same or other computers.
  2. The computer that a server program runs in is also frequently referred to as a server (though it may be used for other purposes as well).
  3. In the client/server programming model, a server is a program that awaits and fulfills requests from client programs in the same or other computers. A given application in a computer may function as a client with requests for services from other programs and also as a server of requests from other programs.

So a server is definitively a logical entity.
To secure a server it is usually necessary, first of all, to perform a hardening analysis and to set up a correct patch-management procedure. This is true, actually, for any device and software in the IT realm. Alas, much of the time this is one of the first neglected areas.
Given the importance of these elements I will go into hardening and patching later, but I want to stress that they are mandatory requirements for a minimum security level.
There are many different OS platforms that can be used as servers, classical OSes like Windows, UNIX or Linux, and all of them require careful management.
Since most OSes are multipurpose (they offer more than one service: file, application, web server) within the same infrastructure, it is essential to extend security considerations to all the resources in place: protocols, access management, users and so on.
Of course, since a datacenter lives on its cross-relationships, we should carefully understand what is related to what, in order to avoid, for example, changing some parameters and affecting something else.
Alas, all the needed controls and configuration can require deep knowledge of the platform and beyond; for example, implementing anti-malware, antivirus or encryption may require third-party software.
This is also true for virtualization environments such as VMware, VirtualBox, Hyper-V and so on.
The good news is that the market offers many solutions that help us put correct security plans in place, but a correct vision and a sound process implementation remain unavoidable.


Application Security


If "OS" servers have their requirements, obviously all other servers have the same needs. Whether we are talking about a database or a web server, it needs security. Since these application servers live on top of the OS, the security they require is "on top" of the OS's needs: securing just one of the two (the application server or the OS) leaves the system insecure.
A classic example is the implementation of AV/AM (antivirus/antimalware) solutions. These should be applied not only to the guest OS but also, when available, to the running application server, since the security issues they have to cover can be dramatically different.

Network Security

Among all the security considerations for the datacenter, the component dedicated to network security cannot be skipped.
Network security is key in the datacenter, since in the end all communication flows inside it and all the components need to communicate with one another.
Even in a virtual environment, the hypervisor has to manage network communication between the various internal virtual environments, and between them and the external world. Speed is one of the key elements inside a datacenter, due to the great amount of data flowing through it, but depending on the nature of the services the datacenter provides, other considerations may apply. Since the datacenter is, for example, a collection of servers running services, an IPS/IDS can make absolute sense, as can implementing correct monitoring (perhaps using SIEM and big-data analytical tools).


Access Management and IAM


Along with network security, another neglected area is user management. Identity, user access and rights management should be big concerns for a datacenter manager, because poor management of these can affect the entire datacenter.
Too often little attention is paid, for example, to administrative rights on contiguous platforms, or to "frozen" users (from cancelled accounts, to never-used ones, to service accounts), all with sets of rights that are usually not correctly monitored.
This is true not only for datacenters: even end-user devices, such as laptops or company mobile phones, usually suffer from the same lapses of memory.
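A first pass at catching those "frozen" accounts can be as simple as comparing last-login dates against a cutoff. The sketch below is a toy illustration; the account names and the 90-day threshold are invented:

```python
from datetime import date, timedelta

def stale_accounts(accounts, today, max_idle_days=90):
    """Return accounts that never logged in, or whose last login is older
    than max_idle_days: candidates for review or disabling.

    accounts: dict mapping account name -> last_login date, or None if the
    account was never used interactively.
    """
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(name for name, last_login in accounts.items()
                  if last_login is None or last_login < cutoff)

# Invented directory extract.
directory = {
    "alice":      date(2016, 1, 10),   # active user
    "bob":        date(2015, 6, 1),    # left the company months ago
    "svc_backup": None,                # service account, never reviewed
}
print(stale_accounts(directory, today=date(2016, 2, 1)))  # ['bob', 'svc_backup']
```

A real IAM review would pull this data from the directory service and would also look at the rights each flagged account holds, not just its login history.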


Protocol Management


And if we have concerns about users, we should also have concerns about protocols. At every level, they are the media through which our systems communicate, so their management should be in our best interest. Alas, protocol management is usually left to network devices (e.g., implementing firewall rules), neglecting one of the basic rules of security: what is unmanaged is prone to risk.
There are tools to manage and secure DNS (besides DNSSEC, which is a security extension of the DNS service), and DHCP should be managed as well. Why should we lease an internal address (at least while we are in the IPv4 realm, though this is even more important with IPv6) without any check or control on who made the request, leaving all the controls to network devices afterwards? That is a bad security approach. DDI (DNS, DHCP and IP address management) solutions on the market try to address these problems.
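The kind of check a DDI solution automates can be illustrated with a toy cross-check of DHCP leases against an asset inventory (the MAC addresses and IPs below are made up):

```python
def unauthorized_leases(leases, registered_macs):
    """Cross-check DHCP leases against an asset inventory.

    leases:          list of (mac, ip) pairs handed out by the DHCP server,
    registered_macs: set of lowercase MAC addresses known to the inventory.
    Returns the leases granted to devices nobody registered.
    """
    return [(mac, ip) for mac, ip in leases
            if mac.lower() not in registered_macs]

# Invented inventory and lease table.
inventory = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}
leases = [
    ("AA:BB:CC:00:00:01", "10.0.0.11"),
    ("DE:AD:BE:EF:00:99", "10.0.0.42"),  # unknown device got an address
]
print(unauthorized_leases(leases, inventory))
```

MAC addresses can of course be spoofed, so this is a hygiene check rather than a strong control; the point is simply that address assignment deserves the same scrutiny we give firewall rules.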


Patching and Hardening

Whether we are dealing with a logical server, a physical server, software, a device or an appliance of any kind, everything should be under patching and hardening management.



I will first look at the recurring nightmare that is patching. Years ago the idea was "if the system runs, do not touch it". It is still a mantra for many datacenter and IT managers; alas, this approach is more dangerous every day and should be a real concern.
Patching is a requirement nowadays, mainly for security reasons. We have left the naïve era when software was assumed to be almost perfect; we are now in the more realistic era of flaw-ridden software.
Every day new vulnerabilities are disclosed, affecting everything, even our cars, so patching is no longer an option we can skip.
What does patching mean?
Patching means making configuration or code changes designed to secure the software against existing and potential zero-day vulnerabilities.
Patches are all around us, but what is a patch, actually? A patch is usually a piece of software used to fix a problem in code or in a configuration. Patches are usually provided by vendors in the form of recurring or emergency updates.
Of course updates contain not only security patches: they can also address compatibility problems, fix bugs, or add new features.
These updates, at least the security ones, are usually provided within a service agreement, according to the license duration of the software/device.
From time to time vendors stop supporting older releases, for several reasons: not only commercial ones, but also because the patching workload would become unsustainable. That is what happened to Windows XP and Windows 2003/2000. To stay in the Windows realm, it is also what happened to older versions of Internet Explorer, and to the embedded browser distributed with Android before Chrome.

Microsoft is finally moving on from its aging Web browsers as Internet Explorer 8, 9, and 10 will receive their last security updates and enter end-of-life on January 12. Users will then see a tab with a download link to the most current Internet Explorer available for the operating system.
End-of-life doesn’t mean older versions of Internet Explorer suddenly stop working, and there are ways to turn off Microsoft’s nagging reminder to update. But not switching to a supported browser is a colossal security mistake considering that attackers frequently target unpatched vulnerabilities in Internet Explorer. A regularly updated browser is still a critical line of defense against Web-based attacks.
Things in the datacenter follow the same rules, so we have to keep an eye on them.

What should I patch?

Since patching means solving problems related to code or configuration, everything is subject to patching; that is, all that is configurable and software/firmware based:
• Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
• Appliance OSes and networking device firmware/OS
• Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
• Virtualization platforms (VMware, Hyper-V, VirtualBox…)
• Drivers, middleware and managed components (SCADA, ICS…)
• …
The reason for all this patching is always the same: to address

• Critical and non-critical vulnerabilities
• SW/HW compatibility
• New services
• Bug corrections
• …

The Patching Cycle

Patching a system is not a one-time activity; it requires a cyclical approach.

The real need for patching is tied to the discovery of new bugs, problems and vulnerabilities, so it is mandatory to check whether the vendor has a serious patching system, and a serious vulnerability disclosure process, in place. Vendors that never provide patches are vendors that:
• create a perfect piece of software
• have dismissed the product/service
• are not trustworthy
Since the first point is unrealistic even for Linux and Apple lovers, the rest is clear: every vendor has to release patches from time to time, generally in the form of updates.
So the first question should be: do I need to patch the system?
This can be answered simply by checking the vendor's security bulletins or the vendor's patch/update release system. Most of these systems allow automatic updates, which is good in ALMOST all situations.
Of course, if a patch is available I should consider whether or not I can apply it. This is a tough question if my system contains legacy components that could be affected by the patch itself.
A good approach is to have a test environment, check that everything is right, and only after those tests put the patch on the production environment, keeping in mind that test and production environments are seldom 100% identical, so something could be missed.

To patch or not to patch

Given the intrinsic risks of patching, it is better to apply a patch when there is a real need.
Clearly, if no issues are reported, there is no need to patch.
But if a serious vulnerability or compatibility issue is reported, patching should be in our best interest. At that point we can face two situations:

  • the vendor provides a patch;

  • no patch is available.

In the first case we should start the patching cycle; in the second we have a big problem: we face a security issue that is not addressed by the vendor.
In this case, luckily, we can opt for third-party solutions that apply "virtual" patching to the system.
Virtual patching cannot address compatibility issues, but it can save us from immediate security risks.
In any case it is advisable to run tests, and to save the system state before applying any patch. How many system engineers have I seen crying because they had not taken a snapshot…
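The decision flow above can be sketched in a few lines (a deliberately simplified model; real patch management adds scheduling, approvals and rollback planning):

```python
def patch_decision(vuln_reported, vendor_patch_available, tested_ok):
    """Toy model of the patch / no-patch decision flow described above."""
    if not vuln_reported:
        return "monitor"        # no reported issue: keep watching the bulletins
    if not vendor_patch_available:
        return "virtual-patch"  # mitigate with third-party virtual patching
    if not tested_ok:
        return "test-first"     # stage the patch in a test environment first
    return "deploy"             # roll out to production (after a snapshot)

print(patch_decision(True, False, False))  # virtual-patch
```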

Note: Test
• It is always a good precaution to test patches before putting them into production, to avoid unpleasant issues related to unforeseen and unexpected incompatibilities.
• In virtual environments, take a snapshot of the machines to be updated before applying the patch:
  – snapshot
  – update
  – fingers crossed
  – if all goes well, take a new snapshot; otherwise, restore the latest saved copy
• In physical environments do the same thing, ideally with bare-metal backup software or similar.
• For appliances or networking equipment it is safer to isolate the device, to avoid repercussions on the entire network in the event of problems.
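That snapshot / update / rollback routine can be sketched as a generic helper. The hooks `snapshot`, `patch` and `restore` are hypothetical, supplied by whatever platform you actually use; here the "system" is just a dictionary:

```python
def apply_patch_with_rollback(system, patch, snapshot, restore):
    """Sketch of the snapshot -> update -> verify -> rollback routine.

    snapshot(system) returns a saved state, patch(system) raises on failure,
    restore(system, state) pulls up the latest saved copy.
    """
    saved = snapshot(system)
    try:
        patch(system)
    except Exception:
        restore(system, saved)   # update failed: roll back to the saved state
        return "rolled-back"
    return "patched"

# Toy demonstration: the "system" is a dict of package versions.
def good_patch(sys_state):
    sys_state["openssl"] = "1.0.2f"

def bad_patch(sys_state):
    raise RuntimeError("incompatible legacy component")

def snap(sys_state):
    return dict(sys_state)

def restore_copy(sys_state, saved):
    sys_state.clear()
    sys_state.update(saved)

box = {"openssl": "1.0.1e"}
print(apply_patch_with_rollback(box, good_patch, snap, restore_copy))  # patched
print(apply_patch_with_rollback(box, bad_patch, snap, restore_copy))   # rolled-back
```

The design point is simply that the snapshot happens unconditionally before the update, so a failed patch always has a state to return to.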


Note: Virtual Patching

Virtual patching refers to the introduction of third-party software able to "close" a flaw in the absence of an update/patch.

  • Virtual patching allows you to apply virtual patches in both physical and virtual environments, and can be either agent-based or agentless.
  • Virtual patching is not an alternative to patching, but it can be a viable solution when vendor support ends (e.g., Windows XP and Windows 2000/2003).
  • Virtual patching covers security needs, not compatibility ones.
  • Virtual patching can be used as a supporting technology to cover complex vulnerabilities that depend on OS/SW component dependencies.


Hardening my dear


A less frequent but even more important activity is hardening.
Hardening, again, tries to address the possible vulnerabilities of a system. The basic idea is that what you do not have cannot harm you, so hardening essentially means turning off, deleting, erasing, uninstalling, blocking or wiping out everything that is not essential to the service provided by a specific server or device.


What Hardening means

– Hardening means operating on the configuration parameters of a system to "close" all services that are not essential to its assigned task, in order to decrease the attack surface.

In other words, it means checking every service and activity of the system and closing the ones that are not useful for the intended purpose.
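As a toy illustration of that principle, the snippet below diffs the services actually running against a documented allowlist for the server's role (the service names and the web-server role are invented):

```python
def hardening_candidates(running_services, essential):
    """Return the services to review and disable: everything running that is
    not on the documented allowlist for this server's role."""
    return sorted(set(running_services) - set(essential))

# Hypothetical web-server role: only these services are expected.
essential = {"sshd", "nginx", "ntpd", "rsyslog"}
running = ["sshd", "nginx", "ntpd", "rsyslog", "smtpd", "cupsd", "telnetd"]
print(hardening_candidates(running, essential))  # ['cupsd', 'smtpd', 'telnetd']
```

The hard part, as discussed below, is not the diff itself but building a trustworthy allowlist in the first place.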

What should I work on?

Hardening is something we should do on everything. If a service, a right or a tool is not essential, it could harm the system and should therefore be removed.

This is true for everything, from router protocols to OS services.

Basically, everything that is subject to patching is subject to hardening.

All that is configurable and software/firmware based:

  • Server/client operating systems (Linux, Microsoft Windows, Unix, Android, iOS, MacOS…)
  • Appliance OSes and networking device firmware/OS
  • Applications (SQL databases, web servers, CRM, mail, videoconferencing…)
  • Virtualization platforms (VMware, Hyper-V, VirtualBox…)
  • Drivers, middleware and managed components (SCADA, ICS…)

Why Hardening

The reasons behind hardening relate to two aspects: security and performance. For once, they go together: getting rid of useless services frees a wider set of resources for the intended purpose.
So hardening is good for security and also good for performance.

Hardening is essentially a configuration activity that must be carried out to allow the optimal functioning of a platform, and it has repercussions in terms of both performance and safety.
What does hardening mean in practice:

  • Allowing operating systems, software and services to carry out only the wanted operations/services
  • Closing all non-essential services and operations
  • Ensuring that the correct access procedures are followed by users, applications and services on critical resources
  • Restricting or monitoring non-essential activities of applications and services

Hardening Methodology

Hardening, then, is a great idea, but it comes with a big problem: it is really hard to do.
Hardening a system requires a methodical and thorough analysis. Every aspect of the operating system/application in question should be taken into account by determining:

  • which services are essential and which are not
  • which characteristics of the essential services are allowed and which should be blocked
  • which configuration parameters must be modified
  • the chain of objects/services/applications called by the service/application in question

Determining which services are essential and which are not is no easy task, since a service that seems useless may be used for hidden background tasks. Sometimes the mere presence of a service is checked even when it is not used, and sometimes software engineers and developers simply got it wrong, making useless calls that, when stopped, make the system unpredictable.

Chains of services are even harder to determine, since in the absence of documentation a service may be called only on specific occasions, without a clear, direct cause/effect link.
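One way to reason about those chains, assuming you can enumerate which services call which (the dependency map below is hypothetical), is a simple transitive closure over the dependency graph:

```python
def required_closure(essential, depends_on):
    """Walk the call chain: a service invoked by an essential service is
    itself required, even if it looks unused at first glance.

    depends_on maps a service name to the services it calls.
    """
    needed, stack = set(), list(essential)
    while stack:
        svc = stack.pop()
        if svc not in needed:
            needed.add(svc)
            stack.extend(depends_on.get(svc, ()))
    return needed

# Hypothetical map: the web server quietly relies on a cache daemon.
deps = {"nginx": ["php-fpm"], "php-fpm": ["memcached"]}
print(sorted(required_closure({"nginx", "sshd"}, deps)))
# ['memcached', 'nginx', 'php-fpm', 'sshd']
```

Anything outside this closure is a candidate for shutdown; the catch, as the text notes, is that the dependency map itself is rarely documented.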

Moreover, in the case of an OS, we must consider all applications (standard, legacy, running, installed but not running) that belong to the operating system in question:

  • for each application it is necessary to define whether or not it is allowed to run
  • for each application that is allowed to run it is necessary to define the list of allowed and disallowed characteristics, as well as its scope and execution environment

It is then necessary to change the configuration parameters of every application to get the required results.

Although it may seem odd, even non-running applications can be subject to risks. Dormant services can be woken up by malicious software after an infection; typical examples are SMTP and web services that can be partially or totally activated by botnet malware. The same stands for the encryption services used by some ransomware, and so on.

Alas, all this requires a great amount of knowledge.

Hardening is difficult to do because:

– it requires detailed knowledge of the application/software environment
– it requires thorough knowledge of all the connections between the various components
– it requires extremely precise control of all the installed software, accesses and utilities
– not all configurations can be reached through a GUI or CLI commands
– sometimes writing code is required
– …
There are solutions on the market that help you get there with an easier approach, but what is required, as usual, is to know what you want and why; that is something no third-party software can provide.


Security and Datacenters was originally published on The Puchi Herald Magazine