The Puchi Herald Reblog

A reblogging blog

NFV network function virtualization security considerations

I have been asked to write down a few things related to NFV and security. NFV is a relatively new thing in the IT world: it first made the news in 2012 and since then it has followed the development path common to virtualization technologies.

Virtualization has made dramatic improvements in recent years. It all started with simple virtualization platforms, VMware being the first that comes to mind but not the only one. The idea was to abstract the hardware platform from the software running on it.

Developing the idea, the abstraction grew to cover multiple hardware platforms and then moved to multi-site, WAN and geographically distributed deployments. Nowadays we call this sort of implementation cloud, but the whole cloud story started from the old virtualization idea.

While this platform change was taking place, the world of services was experimenting with different client-server options (web services and so on).

With the new platforms taking hold, it was clear that the network would follow the same trend, moving toward software and virtual shores.

From the network point of view, the first step was SDN (Software Defined Networking).

Software defined networks (SDN) allow dynamic changes of network configuration that can alter network function characteristics and behaviors. For example, SDN can render real-time topological changes of a network path. An SDN-enabled network provides a platform on which to implement a dynamic chain of virtualized network services that make up an end-to-end network service.

SDN basically allows network services to be centrally administered, managed and configured, creating policies that can address different needs and adapt to a changing environment.
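
To make this concrete, here is a minimal, hypothetical sketch of what that central administration looks like from a script: a policy (a flow rule in this case) is pushed once to the controller's northbound REST interface instead of being configured box by box. The controller address, endpoint path, payload fields and credentials are illustrative assumptions, not any specific vendor's API.

    # Hypothetical sketch: push a flow rule to an SDN controller's northbound API.
    # Endpoint, payload schema and credentials are made up for illustration only.
    import requests

    CONTROLLER = "https://sdn-controller.example.local:8443"  # assumed address

    flow_rule = {
        "switch": "of:0000000000000001",   # datapath the rule applies to
        "priority": 40000,
        "match": {"eth_type": "0x0800", "ipv4_dst": "10.0.20.0/24"},
        "actions": [{"type": "OUTPUT", "port": 3}],
    }

    # One API call changes the behaviour of the network; no device-by-device work.
    resp = requests.post(f"{CONTROLLER}/flows", json=flow_rule,
                         auth=("admin", "admin"), timeout=5)
    resp.raise_for_status()
    print("rule installed:", resp.status_code)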

But this level of abstraction was not enough to cover the flexibility required by modern datacenter, cloud and virtualized environments.

In an SDN environment the network gear remains mostly real, solid boxes sitting in an environment that is otherwise far more virtualized.

The first attempt to hybridize the physical network with the virtual one was the introduction of the first virtual network elements, such as switches and firewalls. Those components were sometimes part of the hypervisor of the virtualization platform, and sometimes virtual appliances able to run inside a virtual environment.

Those solutions were (and are, since they still exist) good at targeting specific needs, but they did not provide the flexibility, resilience and scalability required by modern virtualization systems. Products like VMware’s vShield, Cisco’s ASA 1000v and F5 Networks‘ vCMP brought improvements in management and licensing more suited to service provider needs. Each used a different architecture to accomplish those goals, making a blending of approaches difficult, and the lack of a comprehensive approach made it hard to expand those services extensively.

The natural next step in the virtualization process was to define something that would address, in a more comprehensive way, the need to move part of the network functions inside the virtual environment.

Communications service providers and network operators came together through ETSI to try to address the management issues around virtual appliances that handle network functions.

NFV represents a decoupling of the software implementation of network functions from the underlying hardware by leveraging virtualization techniques. NFV offers a variety of network functions and elements, including routing, content delivery networks, network address translation, virtual private networks (VPNs), load balancing, intrusion detection and prevention systems (IDPS), and firewalls. Multiple network functions can be consolidated into the same hardware or server. NFV allows network operators and users to provision and execute on-demand network functions on commodity hardware or CSP platforms.

NFV does not depend on SDN (and vice-versa) and can be implemented without it. However, SDN can improve performance and enable a rich feature set known as Dynamic Virtual Network Function Service Chaining (or VNF Service Chaining). This capability simplifies and accelerates deployment of NFV-based network functions.
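
As a toy illustration of the service chaining idea (not an ETSI-defined interface; it simply assumes each VNF can be modelled as a function acting on a packet), an end-to-end service is just an ordered list of virtualized functions that traffic is steered through, and the chain can be rebuilt on the fly:

    # Toy model of VNF service chaining: each VNF is a function on a packet,
    # and the service is the ordered chain the packet traverses.
    def firewall(packet):
        if packet.get("dst_port") in {23, 445}:        # drop traffic to known-bad ports
            return None
        return packet

    def nat(packet):
        packet["src_ip"] = "203.0.113.10"              # rewrite to a public address
        return packet

    def load_balancer(packet):
        packet["dst_ip"] = ["10.0.0.11", "10.0.0.12"][hash(packet["flow"]) % 2]
        return packet

    service_chain = [firewall, nat, load_balancer]     # the chain can be reordered dynamically

    def process(packet, chain):
        for vnf in chain:
            packet = vnf(packet)
            if packet is None:                         # a VNF dropped the packet
                return None
        return packet

    print(process({"flow": "a", "dst_port": 443,
                   "src_ip": "192.168.1.5", "dst_ip": "198.51.100.1"}, service_chain))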

Based on the framework introduced by the European Telecommunications Standards Institute (ETSI), NFV is built on three main domains:

  • virtualized network functions (VNFs),
  • NFV infrastructure, and
  • NFV management and orchestration (MANO).

A VNF can be considered a container of network services provisioned in software, very similar to a VM in its operational model. The infrastructure part of NFV includes all the physical resources (e.g., CPU, memory and I/O) required for storage, computing and networking to support the execution of VNFs. The management of all virtualization-specific tasks in the NFV framework is performed by the NFV management and orchestration domain. For instance, this domain orchestrates and manages the lifecycle of resources and VNFs, and also controls the automatic remote installation of VNFs.
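
To give a feel for what the MANO domain works with, here is a loose, hypothetical sketch of a VNF descriptor and of the instantiation step in caricature; the field names and hook paths are illustrative assumptions and do not follow the exact ETSI descriptor schema.

    # Hypothetical, simplified VNF descriptor: the record MANO uses to deploy a VNF.
    vnf_descriptor = {
        "name": "virtual-firewall",
        "image": "vfw-1.2.qcow2",                        # software implementation of the function
        "resources": {"vcpu": 4, "ram_mb": 8192, "disk_gb": 40},
        "connection_points": ["mgmt", "inside", "outside"],
        "lifecycle": {                                   # hooks the orchestrator runs
            "instantiate": "scripts/boot.sh",
            "scale_out": "scripts/add_worker.sh",
            "terminate": "scripts/cleanup.sh",
        },
    }

    def instantiate(vnfd):
        """Caricature of the MANO lifecycle step that brings a VNF up on the NFV infrastructure."""
        print(f"reserving {vnfd['resources']} on the NFV infrastructure")
        print(f"booting image {vnfd['image']}")
        print(f"attaching connection points: {', '.join(vnfd['connection_points'])}")
        print(f"running lifecycle hook: {vnfd['lifecycle']['instantiate']}")

    instantiate(vnf_descriptor)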

The resulting environment is now a little more complicated than it was a few years ago.

Where in the past we used to have

  • physical servers running operating systems such as Linux, Unix or Windows, bound to a specific hardware platform, with almost monolithic services running on them,
  • physical storage units based on different technologies and networks (Ethernet, iSCSI, fibre optic and so on),
  • networks connected through physical devices, with some specific units providing external access (VPN servers),
  • all protected by some sort of security unit providing some sort of control (firewall, IPS/IDS, 802.1x, AAA and so on),
  • and all managed quite independently through different interfaces or programs,

now we have moved to a world where we have

a virtualized environment where services (think, as an example, of Docker implementations) or entire operating systems run on virtual machines (VMs) that manage the abstraction from the hardware

and are able to allocate resources dynamically in terms of performance and even geographic location,

a network environment whose services are partially virtualized (as in a VNF implementation) and partially physical, and which interacts with the virtual environment dynamically,

a network configured dynamically through control software (SDN), which can easily modify the network topology itself in order to respond to changing requests coming from the environment (users, services, processes).

Nowadays, the impressive effects of network functions virtualization (NFV) are evident in a wide range of applications, from IP node implementations (e.g., future Internet architectures) to mobile core networks. NFV allows network functions (e.g., packet forwarding and dropping) to be performed in virtual machines (VMs) in a cloud infrastructure rather than in dedicated devices. NFV as an agile and automated network is desirable for network operators due to the ability to easily develop new services and the capabilities of self-management and network programmability via software-defined networking (SDN). Furthermore, co-existence with current networks and services improves customer experience and reduces complexity, capital expenditure (CAPEX) and operational expenditure (OPEX).

In theory, virtualization broadly describes the separation of resources or requests for a service from the underlying physical delivery of that service. In this view, NFV involves the implementation of network functions in software that can run on a range of hardware, which can be moved without the need for installation of new equipment. Therefore, all low-level physical network details are hidden and the users are provided with the dynamic configuration of network tasks.

Everything seems better and easier, but all these transformations do not come without a price in terms of security.

Every step into virtualization brings security concerns related to the control plane (think of hypervisor and orchestrator security), the communication plane, the virtual environment itself (which often inherits the same problems as the physical platform), and the transition interface between the physical and the virtual world.

Despite its many advantages, NFV therefore introduces new security challenges. Since all software-based virtual functions in NFV can be configured or controlled by an external entity (e.g., a third-party provider or user), the whole network could potentially be compromised or destroyed. For example, in order to reduce hosts’ heavy workloads, a hypervisor in NFV can dynamically try to balance the load assigned to multiple VMs through a flexible and programmable networking layer known as a virtual switch; however, if the hypervisor is compromised, all network functions can be disabled completely (a good old DDoS) or priority can be given to some services instead of others.

Also, NFV’s attack surface is considerably increased compared with traditional network systems. Besides the network resources (e.g., routers, switches, etc.) found in traditional networks, virtualization environments, live migration and multi-tenant common infrastructure can also be attacked in NFV. For example, an attacker can snare a dedicated virtualized network function (VNF) and then spread its bots across a victim’s whole network using the migration and multicast abilities of NFV. To make matters worse, access to a common infrastructure for a multi-tenant network based on NFV inherently allows for other security risks due to the resources shared between VMs. For example, in a data center network (DCN), side-channel attacks (e.g., cache-based side channels) and/or operational interference could be introduced unless the resources shared between VMs are securely controlled with proper security policies. In practice, it is not easy to provide complete isolation of VNFs in DCNs.

The challenges related to securing a VNF are complex because they involve all the elements that compose the environment: physical, virtual and control.

According to the CSA, securing this environment is challenging for at least the following reasons:

  1. Hypervisor dependencies: Today, only a few hypervisor vendors dominate the marketplace, with many vendors hoping to become market players. Like their operating system vendor counterparts, these vendors must address security vulnerabilities in their code. Diligent patching is critical. These vendors must also understand the underlying architecture, e.g., how packets flow within the network fabric, various types of encryption and so forth.
  2. Elastic network boundaries: In NFV, the network fabric accommodates multiple functions. Placement of physical controls is limited by location and cable length. These boundaries are blurred or non-existent in NFV architecture, which complicates security matters due to the unclear boundaries. VLANs are not traditionally considered secure, so physical segregation may still be required for some purposes.
  3. Dynamic workloads: NFV’s appeal is in its agility and dynamic capabilities. Traditional security models are static and unable to evolve as network topology changes in response to demand. Inserting security services into NFV often involves relying on an overlay model that does not easily coexist across vendor boundaries.
  4. Service insertion: NFV promises elastic, transparent networks since the fabric intelligently routes packets that meet configurable criteria. Traditional security controls are deployed logically and physically inline. With NFV, there is often no simple insertion point for security services that are not already layered into the hypervisor.
  5. Stateful versus stateless inspection: Today’s networks require redundancy at a system level and along a network path. This path redundancy causes asymmetric flows that pose challenges for stateful devices that need to see every packet in order to provide access controls. Security operations during the last decade have been based on the premise that stateful inspection is more advanced and superior to stateless access controls. NFV may add complexity where security controls cannot deal with the asymmetries created by multiple, redundant network paths and devices.
  6. Scalability of available resources: As earlier noted, NFV’s appeal lies in its ability to do more with less data center rack space, power, and cooling.

Dedicating cores to workloads and network resources enables resource consolidation. Deeper inspection technologies—next-generation firewalls and Transport Layer Security (TLS) decryption, for example—are resource intensive and do not always scale without offload capability. Security controls must be pervasive to be effective, and they often require significant compute resources.

Together, SDN and NFV create additional complexity and challenges for security controls. It is not uncommon to couple an SDN model with some method of centralized control to deploy network services in the virtual layer. This approach leverages both SDN and NFV as part of the current trend toward data center consolidation.

The NFV Security Framework tries to address those problems.

If we want to dig a little deeper into the security part, we can distinguish between

  • Network function-specific security issues

and

  • Generic virtualization-related security issues

Network function-specific threats refer to attacks on network functions and/or resources (e.g., spoofing, sniffing and denial of service).

The foundation of NFV is set on network virtualization. In this NFV environment, a single physical infrastructure is logically shared by multiple VNFs. For these VNFs, providing a shared, hosted network infrastructure introduces new security vulnerabilities. The general platform of network virtualization consists of three entities: the providers of the network infrastructure, the VNF providers, and the users. Since the system involves different operators, their cooperation cannot be assumed to be perfect, and each entity may behave in a non-cooperative or greedy way to gain benefits.

The virtualization threats of NFV can originate from any of these entities and may target the whole system or part of it.

In this view, we need to consider threats such as side-channel or flooding attacks as common attacks, and hypervisor, malware injection or VM migration related attacks as virtualization- and cloud-specific attacks.

Basically, VNF adds a new layer of security concerns to virtualized/cloud platforms for at least three reasons:

  • It inherits all the classic network security issues and expands them to the cloud level

This means that once a VNF is compromised there is a good chance it can spread the attack or problem to the whole environment, affecting not only the resources directly assigned to it but anything connected to the virtual environment. Think, as an example, of the level of damage that could be done by a DDoS that rapidly depletes all the cloud network resources by modifying the QoS parameters, rather than using the traditional flooding techniques (which remain available anyway).

  • It depends on several layers of abstraction and control

Orchestrator and hypervisor are, as a matter of fact, great attack points, since whoever controls them controls the whole virtualized environment (as noted above, a compromised hypervisor can disable or re-prioritize every network function it hosts).

  • It requires a better-planned implementation than the classic physical one,

with tighter control over who manages the management interfaces, since, in common with SDN, VNF is more exposed to unauthorized access and configuration-related issues.

VNF still requires study and analysis from a security perspective; the good part is that this is a new technology under development, so there is plenty of room for improvement.


NFV network function virtualization security considerations was originally published on The Puchi Herald Magazine

Happy new insecure 2017: my resolutions and wishlist for new year

Here we are: a new year comes and we, as cyber security experts, will keep warning the world about the deeply insecure world we are living in.

And we will announce new technologies and new devastating scenarios related to those new technologies. IoT and cloud will raise their evil faces while bad people lurk in the dark, waiting to attack the innocent lamb crossing the road.

But, in all of this, most of the damage will still be done by badly designed systems, by managers who do not understand what it means to live in a digital world, by politicians who understand cyber security issues only when they have something to gain, and by entrepreneurs who will still treat investment in security as a disturbing side effect.

If I can make a wish for the new year, it is to finally see a different approach to information security, an approach that takes into account that:

1) to be secure you need well designed systems first, and only then can you cover them with some security geek technologies. If the design is crap, all your security is crap, no matter what you put on top

2) there is no security if your devices are not designed with security in mind; good code and a good code lifecycle are the best insurance, so if you buy the cheapest then do not cry… it is your job to look for what you need, and so yes, it is your fault if something goes wrong

3) companies, managers and entrepreneurs need to finally understand that security lives within processes, and is not just a bunch of technologies put on top of something you do not have the slightest idea about: you can’t protect what you don’t understand

4) if people do not understand, they will not follow even the most basic rules, so training is not optional but the very baseline. And to be sure, the first who have to learn are the “CxOs”, who should get off the throne and start learning about the world they crafted

5) if we keep thinking that IoT is wonderful, but do not understand what IoT will bring in terms of cultural and technical problems, we will never understand what it means to put security on top of it

6) if you hire an expert and then you don’t listen to him or her, you are wasting their time and yours. And then do not blame the messenger

7) if you think this complex field we call security can be covered by a junior who knows it all, you are probably wrong, unless the junior is a genius

8) if you, security expert, think your counterpart has the slightest idea what you are talking about, you are probably wrong, because you did not realize they do not understand what they do not know

9) all of this is part of the business, and therefore the business should treat it as one of its elements, not just a nasty, annoying add-on

10) the next time someone talks about APTs, they should tell you the truth: the only way to stop an APT is to stop the attacker, otherwise… it would not be an APT

I know, I know, I am a bit naive and still believe in fairy tales…

 

Happy, safe and secure 2017 to you all.



Happy new insecure 2017: my resolutions and wishlist for new year was originally published on The Puchi Herald Magazine

Industry 4.0: a cultural revolution before a technological one

By now we are used to dealing with expressions made up of a name and two dotted numbers, the second of which is a zero: 2.0, 3.0, 4.0 and so on. Put in ascending order, the figures are supposed to suggest an evolution, a passage toward a more advanced (or updated) version of a given situation or a certain object.
Among the first to establish themselves, and the best known even beyond the circle of insiders, is certainly “web 2.0”. It is a fascinating phenomenon from the point of view of ideas, one that shaped culture and started many discussions about the future of our societies, but which from a technological point of view is essentially empty, devoid of content. What web 2.0 brought as an extraordinary novelty was the change of approach to the use of the network, with the passage from a system in which only a limited number of content providers produced and supplied content, to another model which instead foresaw and favored the birth of an ever wider community of users, each of them able not only to produce but also to share (or put on the network) this content.
In a certain sense, Industry 4.0 is no different from the web 2.0 mentioned above: rather than a technological revolution (digital is certainly not a novelty of the last few years), we should speak of a new attitude, a renewed approach to the way of doing industry, of producing. An attitude with strong ties to questions of role and procedure, one that involves technical staff much less and, much more, key figures in the company such as the chief financial officer or the chief executive officer. People who, in the company ecosystem, outline the strategies and make the decisions, choosing one direction rather than another.

Working in a company that provides the backbone of Industry 4.0, that is, the IT and the tools needed to connect, I am firmly convinced of how important it is for a company to have a project. Any software implementation without a serious, structured idea behind it is absolutely useless, if not harmful.
That is why Industry 4.0 is first of all the need, or the capability, to define within the company, whatever it is and whatever its economic weight, a path toward a new management of resources. And here we mean the management and integration of all resources, from energy to production to IT and so on.
Industry 4.0 is a beautiful idea thanks to which all the objects and all the subjects that are part of an enterprise stop being isolated and become interconnected. And not merely as a physical or communication link, but as a genuine matter of process. In this sense, interconnection means that all the objects, “united” with one another, must be able to work together to deliver a result.
Obviously, in order to operate jointly and to guarantee a result, devices and tools (hardware and software) that work well are needed: from connectors for the links, to sensors for data monitoring, to big data analysis and data quality systems, up to cyber security systems. These elements, although important, are not decisive in reaching a full result. What comes before the good functioning of the tools is the ability to integrate the technology into the processes and these, in turn, into the company culture. In other words, it means that the enterprise is prepared to make the best use (that is, a use that is functional and strategic to the activity of the enterprise itself) of what the new technologies will generate.
One example above all: the mass of data that interconnected objects produce remains unused or under-used because of poor analysis capabilities.
Industry 4.0 is revolutionary in being an element of rupture with respect to the consolidated industrial model. And this applies both to large groups, where every intervention has greater repercussions (just think of energy efficiency measures), and to SMEs.
In Italy, in particular, it is important that small and medium enterprises equip themselves with the cultural tools to understand where to intervene in order to become or remain competitive in a world scene undergoing strong change. This means knowing how to choose both the solution best suited to one’s needs and the system that best fits one’s strategic growth plans. And there is no shortage of offers: proprietary platforms, cloud services, consultant support, outsourcing. Every choice has advantages and disadvantages: the important thing is that even in a small business there is someone with a broader, medium-to-long-term vision.
So what will this transition to Industry 4.0 be like? Probably slow, in small steps, both for the cultural reasons mentioned above and for more strictly economic reasons, considering the not insignificant costs of adapting production to the new standards.
Without doubt it will be inevitable, and the sooner we start thinking in a new way, the sooner we will recover competitiveness as a country-system at the global level.


Industry 4.0: a cultural revolution before a technological one was originally published on The Puchi Herald Magazine

weak manager style

In a previous post (http://www.thepuchiherald.com/2016/03/04/management-style-common-error-to-avoid/) I tried to put some rationale behind my thoughts about management, outlining some of the characteristics a manager usually has (bad ones, of course).

One of the biggest “aha!” moments new and experienced managers (and the people who work for them) have had is the realization that being a strong manager doesn’t mean being forceful or domineering.

It’s just the opposite — strong managers are strong enough to lead through trust, whereas weak managers have to use the force of their job titles to make people listen to them.

Most of the management styles depicted there (not all) were styles that need to lead through fear, since they do not use, require or allow the use of trust as a management tool.

When we talk about fear-based management, it’s the weak managers we are referring to! You can spot a weak manager at a hundred paces or more, because weak managers are the ones who raise their voices, make threats and generally keep their teammates off-balance and worried about pleasing the manager when our customers need them to be happily focused on their work.

Strong managers lead through trust. They trust their teammates and their employees trust them. They don’t have to be right. They don’t care whether they are right or not, as long as the right answer emerges from the conversation. They don’t have to be bossy. They trust their employees to know what to do and to ask for help if they need it. But we know trust is a bi-directional thing.

Weak managers don’t trust themselves enough to lead that way! Moreover, they do not trust others, because they project their own mindset onto other people’s behaviour.

Here are five sure signs that your manager is a weak manager pretending to be strong.

We can feel sorry for him (really?!?) or her but you don’t have time to waste in a workplace that dims your flame. If your manager is not a mentor and an advocate for you, you deserve to work for someone who is!

Can’t Ask for Help

When a weak manager isn’t sure what to do next, he or she won’t ask the team for help. Instead, the weak manager will make up a solution on the spot and say “Just do it — I’m the manager, and I told you what I want!” A weak manager cannot ask for input from people s/he supervises. If you try to reason with your weak manager, s/he’ll get angry.

Needs a Handy Scapegoat

When a weak manager notices that something has gone wrong, he or she has one goal in mind: to find somebody to blame! A strong manager will take responsibility for anything that doesn’t work out as planned, and say “Well, what can we learn from this?” A weak manager can’t take on that responsibility. He or she must pin the blame on somebody else — maybe you!

Can’t Say “I Don’t Know”

A strong manager can say “I don’t know what the answer is” many times a day if necessary, but a weak manager is afraid to say “I don’t know.” He or she will lie or start throwing figurative spaghetti at the wall to see what sticks.

Strong managers learn fast because they learn from successes and misfires, both. Weak managers are not as open to that kind of learning, because so much of their mental and emotional energy goes to deflecting blame when something goes awry.

Measures Everything

Strong managers focus on big goals. They follow the adage “The main thing is to keep the main thing, the main thing.” Weak managers get sidetracked with small, insignificant things. That’s why a weak manager will know that you worked until nine p.m. last night averting disaster, but still call you out for walking into work five minutes late the next morning.

Weak managers rely on measurement instead of judgment when they manage people. They have a yardstick for everything. They will say “I manage by the numbers” when in fact, they aren’t managing at all.

Can’t Say “I’m Sorry”

The last sign of a weak manager is that this kind of manager cannot bring him- or herself to say “I’m sorry” when a stronger leader would. They can’t be criticized and they can’t accept feedback, however compassionate. They can’t take it in, because their ego is too fragile to acknowledge any room for growth.

Life is long, but it’s still too short to waste time working for someone who can’t be human and down-to-earth at work. Work can be a fun and creative place, or a sweat shop where you count the minutes until quitting time.

One of the biggest determining factors in your satisfaction at work is the personality of the manager you work for. Don’t you deserve to be led by a person with the courage to lead with a human voice?

People say many things about management, but one thing they seldom say is that the job is easy. If it were, we wouldn’t have chronically dismal employee engagement rates hovering nationally around the 30 percent mark. Accordingly, here are five basic skills to focus on – attributes, actually – five areas where it’s easy to stumble, but where improvements can make the difference between failure and success, and which together paint a portrait of a strong manager.

Patience

Who doesn’t need more patience in a managerial role? I know I did. There are about 600,000 things – from your own boss, to deadlines, to the grinding pressure “to do more with less,” to those nettlesome customers and employees! – that can stress you out. Besides, patience has a long tail. Employees appreciate being treated with patience when things go a little off track. They’ll often remember it and reward you with better effort.

Patience means you think about and evaluate things, weigh them, and make your decision based on solid facts and not in the heat of the moment.

Courage

Have the fortitude to hold your people accountable for the big stuff they need to get right. It’s easy to default to pesky micromanagement on trivial details, but what most matters as a manager is keeping the important work on track: the complex projects, the big-ticket budget items, the key strategic initiatives.

Numerous studies show managers have chronic problems with accountability. So focus your energy in the areas where it’s most needed – with the courage to hold people responsible for the results your organization requires.

There is another side to accountability: courage also means protecting your people when they need it. We know the corporate environment is anything but fair, so a manager must have the courage to raise a shield when his or her people are under attack.

Thoughtfulness

Have the thoughtfulness to take the modest amount of time required to praise your people when it’s deserved. Avoid the all-too-common trap of being parsimonious with praise. To what end? Well-placed praise is one of the simplest and best management investments you can make. It costs nothing and motivates effectively. Why don’t managers use it more? I never fully understood the reticence.

Praising people can range from a “good job” at the coffee machine to a fair setting of goals and evaluations. Not recognizing effort will simply make your people stop trying.

Fairness

Avoid the natural tendency to play favorites. Indeed, this is a perfectly natural human tendency. Some employees are just more likable, others more difficult. Good managers keep their personal emotions in check. Resist the understandable tendency toward favoritism. Fight it. Subdue it. Defeat it. You’ll be respected for it.

And try to push the same attitude into your group: if such problems arise, it is better to deal with them or, sooner or later, they will strike back harder.

Execution

Simply put, execution is everything. Business is no academic realm of abstract ideas. To the contrary. An excellent idea counts for nothing if not properly executed. As Ross Perot used to say, “The devil’s in the details.” Operations matter. Trains have to run on time. As a manager, you’ll be judged on execution. On results (hopefully). How effectively does your team get done what they need to? Were desired targets reached? Keep your eye always on the executional ball – it can make the difference between managerial success and failure.

Do not micromanage, but be ready to move away obstacles that can prevent your group from reaching their (and your) goals. Work with your group to solve issues; do not be part of the problem.

One thing I always liked about management was that it was a fundamentally practical exercise. Tangible and results-oriented. It’s by no means a simple job, but small improvements can yield big results.


weak manager style was originally published on The Puchi Herald Magazine

ransomware again, really?



Some days ago a friend of mine reported to me that his company had been hit by a CryptoLocker-style ransomware. I keep hearing of people affected by this kind of infection, and I am starting to wonder whether people have really understood what a cryptomalware is and how it works.

 

here from Wikipedia:

Ransomware is a type of malware that restricts access to a computer system that it infects in some way, and demands that the user pay a ransom to the operators of the malware to remove the restriction.

Some forms of ransomware systematically encrypt files on the system’s hard drive (cryptoviral extortion, a threat originally envisioned by Adam Young and Moti Yung) using a large key that may be technologically infeasible to breach without paying the ransom, while some may simply lock the system and display messages intended to coax the user into paying. Ransomware typically propagates as a trojan, whose payload is disguised as a seemingly legitimate file.”

 

Now let us first try to understand what this means in practical terms:

“Ransomware is a type of malware”: this should make it clear that this is something bad.

that restricts access to a computer system” , this clearly means that the aim of this kind of malware is to make you hard to log in to your computer andor data.

These days the most common form of this malware type is cryptomalware, a malware that specifically deals with your data by encrypting it. This basically means that your data is not deleted or moved; the malware simply makes it unreadable. If you want to get access to your data again, a ransom has to be paid, if you are lucky.

Now let us try to understand why this kind of malware is so popular; the reasons are basically two:

  1. it is easy to get infected
  2. it allows quick access to money

Let us try to understand why it is easy to get infected by a cryptomalware.

To Crypt or not to Crypt.

Contrary to what we commonly think, encrypting a file is really easy and needs really low permissions: you just need the right to edit the file.

You don’t really need to create a special algorithm: everything you need is thoroughly documented in the literature; besides, crypto APIs are present everywhere and it is an easy job to reach the needed libraries.

So the encryption technique may still be hard for IT managers to understand, but not for bad people.

If encryption is easy, it is likewise easy to have enough rights to encrypt a file: you just need your ordinary rights on the file. You do not need administrator rights, privilege escalation or esoteric techniques; your right to edit (write) is enough.

Just remember:

If you can save it, then you can change it

Now, this kind of right is common to any user in any OS. Even in the most security-savvy organization, if you can’t open or edit a file you can’t work on it.
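
A minimal sketch makes the point, assuming Python and the third-party cryptography library are at hand: a few well-documented library calls, plus nothing more than the ordinary right to read and rewrite a file, are enough to replace its content with ciphertext. No administrator rights, no privilege escalation; whatever protects the file from editing is the only thing that protects it from encryption.

    # Minimal illustration: encrypting a file in place needs only ordinary edit rights.
    # Uses the third-party "cryptography" package (Fernet symmetric encryption).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # whoever holds this key can decrypt the file
    cipher = Fernet(key)

    path = "report.docx"                 # any file the current user can already edit

    with open(path, "rb") as f:          # ordinary read access
        plaintext = f.read()

    with open(path, "wb") as f:          # the same ordinary write access used every day
        f.write(cipher.encrypt(plaintext))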

On the other hand, the applications, programs, apps or whatever that are able to read and write with your same rights are almost all of the ones present on your system.

This means that a ransomware has:

  • consolidated technology to rely on

  • greatest attack surface (basically any app, browser)

  • low rights needed

a heaven.

Another interesting aspect of ransomware is that the activities it performs are almost standard inside the OS: it does not open weird ports, does not change configuration settings, does not create users… it just writes… as an ordinary user or app.

This makes identification quite difficult for any antimalware system, since the operation is a normal one and there are thousands of write operations on files at any moment.

A good cryptomalware, moreover, does not need to target sensitive system files, which may require specific access permissions. Due to its aim (allowing the attacker to make money) it just needs to target normal documents: .PDF, .DOC, .XLS, .PST…

And those are the documents you commonly use, edit and save.

I want you to understand a critical point:

If your antivirus/antimalware did not detect the ransomware on the infected machine, there is no way another AV/AM can distinguish its operations from normal read/write operations on files, since a good ransomware just accesses what the user can access and does what the user usually does.

So what do you need to get infected? All you need is your browser, or access to an infected application, and you have an open window onto the world of encryption.

But I have antivirus on servers…..

Good for you: it is good security practice to avoid infections spreading across your networks, but it is almost useless against cryptomalware activity coming from an already infected machine.

Got infected, and now?

It is easy to get infected; it is a different story to get rid of it.

Basically, to decrypt a file you need the key and the algorithm that were used to encrypt it. These can usually be obtained in two ways, but neither of the two gives guarantees:

  1. you pay the ransom
  2. you ask an antivirus company for support

Let us try to understand option 1.

There is no guarantee that once the ransom has been paid you will get your key. The reasons can vary, and are not necessarily related to the “ethics” of your attacker (please feel some irony in the previous statement).

There is a lot of old ransomware in the wild, coming from old attack campaigns that are no longer monitored, and maybe there is no one ready to accept your payment in bitcoin or any other virtual currency.

This is a more common issue than you may think: a ransomware attack is not meant to last forever, but the infected sources can remain infected for a long time even after the attack.

The attacker may already have been arrested, or may simply consider it too risky to accept the payment.

And I haven’t mentioned other unlucky conditions, like being collateral damage in a targeted attack on someone else, or being just so unlucky as to run into test code written to prepare an attack…

So paying is an option, but without guarantees…

Let us consider option 2.

If nobody gives you the key, you can try to analyze the encrypted files to find out whether there are “fingerprints” resembling some known attack; in this case you can try to guess the encryption key somehow, once you understand which cryptomalware did the damage. Luckily, to avoid too much resource consumption, the keys and algorithms used are usually not the most resource-intensive, so some reverse engineering is still possible.

Antivirus companies have samples and technology to try to save your data… try being the key word.

There are no guarantees.

The problem is how much time you need to free your data from this unwanted encryption. It is a matter of time or, if you prefer, processor power. Even if well equipped, antimalware companies have limitations in terms of resources, so it is not always possible to decrypt your data.

I am sorry, but this is the sad truth: in a world with unlimited resources we would not be affected, but we do not live in that kind of world.

What should we do?

I wrote about this in the past (the same subject, actually). The very first steps should be:

  1. isolate the infected machine
  2. report the incident to the local authorities
  3. report the incident to your antivirus software company
  4. start a recovery and mitigation activity.

1. isolate the infected machine

A ransomware can encrypt easily, so it can spread easily: shared folders on servers are an easy target. Before you realize it, your user may have created a lot more damage. And if your antivirus didn’t catch it, and you use the same antivirus on the servers, there is no reason to expect different behavior on your file servers.

2. report the incident to the local authorities

Believe it or not, law enforcement units can be of great support: you may be the victim of an ongoing ransomware campaign they are already monitoring, or they may simply be able to track down the attacker and get the key. Keep in mind that a ransom, unless it is organized by a government in the form of taxes, is never legal.

 3. report the incident to your antivirus software company

As with the previous point, you may be lucky enough that they already have a solution; as I wrote before, it is not certain, but it is a possibility. Besides, reporting an attack that has not been detected makes it possible to write protection signatures. Don’t think for a moment that since you got hit once you are safe for the rest of your life: this is not like chickenpox, you can’t be immunized.

4. start a recovery and mitigation activity.

This is the harsh point, right?

What do recovery and mitigation mean?

Well, let’s be clear: until you have forensic proof of how the infection struck you, you can’t say you are safe. The malware that hit you once can still be there, lurking in the dark inside your network.

You should take all the needed precautions, raising the level of monitoring, checking for unusual write activity and alerting your users about the steps to follow.

The target is to limit the damage the ransomware can do again until you are sure you are clean and the incident is solved.

About recovery: it is clear that here the king of the lab is a good backup policy. This means having a system that allows you to recover your data to a previous state, when the data was not affected. This will lower the amount of damage you are going to face.

There are thousands of articles on how to manage backups correctly, so I will not spend time on that here. Just remember: if you think backup is obsolete, you probably haven’t understood what backup means (and what technologies are currently available).

I just want to mention a couple of things:

Disaster recovery and backup are two different things, so do not think you can use one instead of the other.

Vaulting systems, versioning, journaling and other technologies can be useful to mitigate and recover from this kind of incident.

Sometimes it would be enough to configure correctly what you already have in your OS to survive this kind of problem: versioning and journaling of files are technologies present in Windows and Linux, you just have to use them knowing what you are doing (possibly).
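
As a minimal sketch of the versioning idea (assuming only the Python standard library and a local snapshot folder, which in real life should sit on separate and ideally offline storage), keeping a timestamped copy of a file before each change means a clean pre-encryption version survives even if the working copy gets scrambled:

    # Minimal versioning sketch: snapshot a file before it is modified.
    import shutil
    import time
    from pathlib import Path

    SNAPSHOT_DIR = Path("snapshots")          # in real life: separate, ideally offline storage
    SNAPSHOT_DIR.mkdir(exist_ok=True)

    def snapshot(path):
        """Copy the file to a timestamped version that later changes cannot touch."""
        src = Path(path)
        dest = SNAPSHOT_DIR / f"{src.name}.{time.strftime('%Y%m%d-%H%M%S')}"
        shutil.copy2(src, dest)               # preserves content and metadata
        return dest

    # e.g. call snapshot("contracts.xlsx") before every save or on a schedule;
    # recovery is just copying the latest clean version back.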

 

to the next, cheers.



ransomware again, really? was originally published on The Puchi Herald Magazine
