NFV (Network Function Virtualization): security considerations

I have been asked to write down a few thoughts on NFV and security. NFV is a relatively new thing in the IT world: it first made the news in 2012, and since then it has followed a development path common to virtualization technologies.

Virtualization has improved dramatically in recent years. It all started with simple virtualization platforms, VMware being the first that comes to mind but not the only one. The idea was to abstract the hardware platform from the software running on it.

As the idea developed, the abstraction grew to cover multiple hardware platforms and to span multiple sites across WANs and geographies. Nowadays we call this sort of implementation cloud, but the whole cloud story started from the old virtualization idea.

While this platform change was taking place, the world of services was experimenting with different client-server options (web services and so on).

With the new platforms in place, it was clear that the network would follow the same trend, moving toward software and virtual shores.

From the network point of view, the first step was SDN (Software Defined Networking).

Software-defined networks (SDN) allow dynamic changes of network configuration that can alter network function characteristics and behaviors. For example, SDN can apply real-time topological changes to a network path. An SDN-enabled network provides a platform on which to implement a dynamic chain of virtualized network services that make up an end-to-end network service.

SDN basically allows administrators to centrally manage and configure network services, creating policies that can address different needs and adapt to a changing environment.
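
As a concrete (and heavily simplified) illustration of this central, policy-driven administration, the sketch below pushes a flow rule to a hypothetical SDN controller through a northbound REST API. The endpoint, credentials and rule schema are invented for illustration and do not belong to any specific controller.

    # Minimal sketch: pushing a flow rule to a hypothetical SDN controller.
    # The URL, credentials and JSON schema are illustrative only.
    import requests

    CONTROLLER = "https://sdn-controller.example.net:8443"

    flow_rule = {
        "switch": "openflow:1",       # target switch (illustrative id)
        "priority": 100,
        "match": {"ip_dst": "10.0.0.10/32", "tcp_dst": 443},
        "action": "forward:port-2",   # steer matching HTTPS traffic to port 2
    }

    resp = requests.post(
        f"{CONTROLLER}/api/flows",    # hypothetical northbound endpoint
        json=flow_rule,
        auth=("admin", "admin"),
        timeout=5,
    )
    resp.raise_for_status()
    print("flow rule installed:", resp.json())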

But this level of abstraction was not enough to provide the flexibility required by modern datacenter, cloud and virtualized environments.

In an SDN environment the network gear itself remains mostly real, solid boxes sitting in an environment that is far more virtualized.

The first attempt to hybridize the physical network with the virtual one was the introduction of virtual network elements such as switches and firewalls. Those components were sometimes part of the hypervisor of the virtualization platform, sometimes virtual appliances able to run inside a virtual environment.

Those solutions were (and are, since they still exist) good at targeting specific needs, but they did not offer the flexibility, resilience and scalability required by modern virtualization systems. Products like VMware’s vShield, Cisco’s ASA 1000v and F5 Networks’ vCMP brought improvements in management and licensing more suited to service provider needs. Each used a different architecture to accomplish those goals, making a blending of approaches difficult, and the lack of a comprehensive approach made it hard to expand those services extensively.

The natural next step in the virtualization process was to define something that addressed, in a more comprehensive way, the need to move part of the network functions inside the virtual environment.

Communications service providers and network operators came together through ETSI to try to address the management issues around virtual appliances that handle network functions.

NFV represents a decoupling of the software implementation of network functions from the underlying hardware by leveraging virtualization techniques. NFV offers a variety of network functions and elements, including routing, content delivery networks, network address translation, virtual private networks (VPNs), load balancing, intrusion detection and prevention systems (IDPS), and firewalls. Multiple network functions can be consolidated into the same hardware or server. NFV allows network operators and users to provision and execute on-demand network functions on commodity hardware or CSP platforms.

NFV does not depend on SDN (and vice-versa) and can be implemented without it. However, SDN can improve performance and enable a rich feature set known as Dynamic Virtual Network Function Service Chaining (or VNF Service Chaining). This capability simplifies and accelerates deployment of NFV-based network functions.

Based on the framework introduced by the European Telecommunications Standards Institute (ETSI), NFV is built on three main domains:

  • VNF,
  • NFV infrastructure, and
  • NFV management and orchestration (MANO).

A VNF can be considered a container of network services provisioned in software, with an operational model very similar to that of a VM. The infrastructure part of NFV includes all the physical resources (e.g., CPU, memory and I/O) required for storage, computing and networking to support the execution of VNFs. The management of all virtualization-specific tasks in the NFV framework is performed by the NFV management and orchestration (MANO) domain: for instance, this domain orchestrates and manages the lifecycle of resources and VNFs, and also controls the automatic remote installation of VNFs.
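
To make the division of labor among the three domains more tangible, here is a minimal sketch of how an orchestrator might model a VNF and its lifecycle. The descriptor fields and class names are invented for illustration and are far simpler than a real MANO stack.

    # Toy model of the ETSI NFV split: a VNF descriptor plus a MANO-like
    # orchestrator owning the VNF lifecycle. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class VNFDescriptor:
        name: str
        image: str          # VM/container image providing the network function
        vcpus: int
        memory_mb: int

    @dataclass
    class Orchestrator:
        """Stand-in for the MANO domain: instantiates and scales VNFs."""
        inventory: list = field(default_factory=list)

        def instantiate(self, vnfd: VNFDescriptor) -> None:
            # A real MANO would request CPU/memory/network resources from
            # the infrastructure domain (the VIM) here.
            self.inventory.append(vnfd)
            print(f"instantiated {vnfd.name} ({vnfd.vcpus} vCPU, {vnfd.memory_mb} MB)")

        def scale(self, name: str, vcpus: int) -> None:
            for vnf in self.inventory:
                if vnf.name == name:
                    vnf.vcpus = vcpus   # lifecycle change driven by MANO
                    print(f"scaled {name} to {vcpus} vCPU")

    mano = Orchestrator()
    mano.instantiate(VNFDescriptor("virtual-firewall", "fw-image:1.0", 2, 4096))
    mano.scale("virtual-firewall", 4)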

The resulting environment is now a little more complicated than it was a few years ago.

In the past we used to have:

  • physical servers running operating systems such as Linux, Unix or Windows, bound to a specific hardware platform, with almost monolithic services running on top,
  • physical storage units based on different technologies and networks (Ethernet, iSCSI, fiber optic and so on),
  • networks built from physical devices, with specific units providing external access (VPN servers),
  • all protected by security devices providing various controls (firewalls, IPS/IDS, 802.1x, AAA and so on),
  • and managed fairly independently through different interfaces or programs;

now we have moved to a world where we have:

  • a virtualized environment where services (think, for example, of Docker implementations) or entire operating systems run on virtual machines (VMs) that manage the abstraction from the hardware and can allocate resources dynamically, in terms of performance and even geographic location;
  • a network environment whose services are partly virtualized (as in VNF implementations) and partly physical, and which interacts with the virtual environment dynamically;
  • a network configured dynamically through control software (SDN), which can easily modify the network topology itself in order to respond to changing requests coming from the environment (users, services, processes).

Nowadays the impressive effects of network functions virtualization (NFV) are evident in a wide range of applications, from IP node implementations (e.g., future Internet architectures) to mobile core networks. NFV allows network functions (e.g., packet forwarding and dropping) to be performed in virtual machines (VMs) in a cloud infrastructure rather than in dedicated devices. As an agile and automated approach to networking, NFV is desirable for network operators because it makes developing new services easy and provides self-management and network programmability via software-defined networking (SDN). Furthermore, co-existence with current networks and services improves customer experience and reduces complexity, capital expenditure (CAPEX) and operational expenditure (OPEX).

In theory, virtualization broadly describes the separation of resources or requests for a service from the underlying physical delivery of that service. In this view, NFV involves the implementation of network functions in software that can run on a range of hardware, which can be moved without the need for installation of new equipment. Therefore, all low-level physical network details are hidden and the users are provided with the dynamic configuration of network tasks.

Everything seems better and easier, but all those transformations do not come without a price in terms of security.

Every step into virtualization brings security concerns related to the control plane (think of hypervisor and orchestrator security), the communication plane, the virtual environment itself (which often inherits the same problems as the physical platform), and the transition interface between the physical and virtual worlds.

Despite its many advantages, therefore, NFV introduces new security challenges. Since all software-based virtual functions in NFV can be configured or controlled by an external entity (e.g., a third-party provider or user), the whole network could potentially be compromised or destroyed. For example, in order to reduce hosts’ heavy workloads, a hypervisor in NFV can dynamically balance the loads assigned to multiple VMs through a flexible and programmable networking layer known as a virtual switch; however, if the hypervisor is compromised, all network functions can be disabled completely (a good old DoS) or priority can be given to some services over others.

Also, NFV’s attack surface is considerably larger than that of traditional network systems. Besides the network resources (e.g., routers, switches) of traditional networks, virtualization environments, live migration, and multi-tenant common infrastructure can also be attacked in NFV. For example, an attacker can snare a dedicated virtualized network function (VNF) and then spread its bots across a victim’s whole network using the migration and multicast abilities of NFV. To make matters worse, access to a common infrastructure in a multi-tenant NFV network inherently allows other security risks due to the resources shared between VMs. For example, in a data center network (DCN), side-channel attacks (e.g., cache-based side channels) and/or operational interference can be introduced unless the shared resources between VMs are securely controlled with proper security policies. In practice, it is not easy to provide complete isolation of VNFs in DCNs.

The challenges related to securing a VNF are complex because they involve all the elements that compose the environment: physical, virtual and control.

According to the CSA (Cloud Security Alliance), securing this environment is challenging for at least the following reasons:

  1. Hypervisor dependencies: Today, only a few hypervisor vendors dominate the marketplace, with many vendors hoping to become market players. Like their operating system vendor counterparts, these vendors must address security vulnerabilities in their code. Diligent patching is critical. These vendors must also understand the underlying architecture, e.g., how packets flow within the network fabric, various types of encryption and so forth.
  2. Elastic network boundaries: In NFV, the network fabric accommodates multiple functions. Placement of physical controls is limited by location and cable length; in an NFV architecture these boundaries are blurred or non-existent, which complicates security matters. VLANs are not traditionally considered secure, so physical segregation may still be required for some purposes.
  3. Dynamic workloads: NFV’s appeal is in its agility and dynamic capabilities. Traditional security models are static and unable to evolve as network topology changes in response to demand. Inserting security services into NFV often involves relying on an overlay model that does not easily coexist across vendor boundaries.
  4. Service insertion: NFV promises elastic, transparent networks since the fabric intelligently routes packets that meet configurable criteria. Traditional security controls are deployed logically and physically inline. With NFV, there is often no simple insertion point for security services that are not already layered into the hypervisor.
  5. Stateful versus stateless inspection: Today’s networks require redundancy at a system level and along a network path. This path redundancy causes asymmetric flows that pose challenges for stateful devices, which need to see every packet in order to provide access controls. Security operations during the last decade have been based on the premise that stateful inspection is more advanced and superior to stateless access controls. NFV may add complexity where security controls cannot deal with the asymmetries created by multiple, redundant network paths and devices (see the sketch below).
  6. Scalability of available resources: As earlier noted, NFV’s appeal lies in its ability to do more with less data center rack space, power, and cooling.

Dedicating cores to workloads and network resources enables resource consolidation. Deeper inspection technologies—next-generation firewalls and Transport Layer Security (TLS) decryption, for example—are resource intensive and do not always scale without offload capability. Security controls must be pervasive to be effective, and they often require significant compute resources.
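
To see concretely why asymmetric paths hurt stateful inspection (point 5 above), consider this toy connection tracker: a device only accepts packets belonging to connections whose SYN it has seen, so a reply routed back through a different device is dropped. The class and its packet model are invented for illustration.

    # Toy stateful tracker: traffic is accepted only if it belongs to a
    # connection whose initial SYN this device observed. With asymmetric
    # routing, a second device on the return path never saw the SYN.
    class StatefulTracker:
        def __init__(self):
            self.connections = set()

        def inspect(self, src: str, dst: str, flags: set) -> str:
            key = frozenset((src, dst))
            if "SYN" in flags:
                self.connections.add(key)   # record the new connection
                return "accept"
            # non-SYN packets must match a tracked connection
            return "accept" if key in self.connections else "drop"

    fw_out = StatefulTracker()    # sits on the outbound path
    fw_back = StatefulTracker()   # sits on the (asymmetric) return path

    print(fw_out.inspect("10.0.0.1", "198.51.100.7", {"SYN"}))    # accept
    # The reply returns through fw_back, which never saw the SYN:
    print(fw_back.inspect("198.51.100.7", "10.0.0.1", {"ACK"}))   # drop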

Together, SDN and NFV create additional complexity and challenges for security controls. It is not uncommon to couple an SDN model with some method of centralized control to deploy network services in the virtual layer. This approach leverages both SDN and NFV as part of the current trend toward data center consolidation.

The NFV Security Framework tries to address those problems.

If we want to dig a little deeper into the security part, we can distinguish between:

  • network function-specific security issues, and
  • generic virtualization-related security issues.

Network function-specific threats refer to attacks on network functions and/or resources (e.g., spoofing, sniffing and denial of service).

The foundation of NFV is network virtualization. In an NFV environment, a single physical infrastructure is logically shared by multiple VNFs, and providing a shared, hosted network infrastructure for these VNFs introduces new security vulnerabilities. The general platform of network virtualization consists of three entities: the providers of the network infrastructure, the VNF providers, and the users. Since the system involves different operators, their cooperation cannot realistically be perfect, and each entity may behave in a non-cooperative or greedy way to gain benefits.

The virtualization threats of NFV can originate from any of these entities and may target the whole system or part of it.

In this view, we need to consider threats such as side-channel or flooding attacks among the common attacks, and hypervisor, malware-injection or VM-migration related attacks among the virtualization- and cloud-specific attacks.

Basically, VNF adds a new layer of security concerns to virtualized/cloud platforms, for at least three reasons:

  • It inherits all the classic network security issues and expands them to cloud level.

This means that once a VNF is compromised there is a good chance it can spread the attack or the problem to the whole environment, affecting not only the resources directly assigned to it but anything connected to the virtual environment. Think, for example, of the damage a DoS attack could do by rapidly depleting all the cloud network resources simply by modifying QoS parameters, without resorting to the traditional flooding techniques (which remain available anyway).

  • It depends on several layers of abstraction and control.

The orchestrator and the hypervisor are, as a matter of fact, prime attack points, since whoever controls them controls every network function running on top of them.

  • It requires a more carefully planned implementation than the classic physical one,

with tighter control over who manages the management interfaces, since, in common with SDN, VNF is more exposed to unauthorized access and configuration-related issues.

VNF still requires study and analysis from a security perspective; the good news is that this is a new technology under development, so there is plenty of room for improvement.



Firewall: Traditional, UTM and NGFW. Understanding the difference

One of the problems nowadays when we talk about firewalls is understanding what a firewall actually is and what the acronyms used to define the different types of firewall mean.
The common definition today recognizes three main types of firewall:

• Firewalls
• UTM
• NGFW

But what are the differences (if any) between those things?
Let’s start with the very basics: what a firewall is.

[Figure: diagram of a firewall between a LAN and a WAN (photo credit: Wikipedia)]

Firewall:

A firewall is software used to maintain the security of a private network. Firewalls block unauthorized access to or from private networks and are often employed to prevent unauthorized Web users or illicit software from gaining access to private networks connected to the Internet. A firewall may be implemented using hardware, software, or a combination of both.
A firewall is recognized as the first line of defense in securing sensitive information. For better safety, the data can be encrypted.
Firewalls generally use two or more of the following methods:

• Packet filtering: firewalls filter packets that attempt to enter or leave a network and either accept or reject them depending on a predefined set of filter rules (see the sketch just after this list).

• Application gateway: the application gateway technique applies security methods to specific applications, such as Telnet and File Transfer Protocol servers.

• Circuit-level gateway: a circuit-level gateway applies these methods once a connection such as a Transmission Control Protocol session is established and packets start to move.

• Proxy servers: proxy servers can mask real network addresses and intercept every message that enters or leaves a network.

• Stateful inspection (dynamic packet filtering): this method compares not just the header information but also a packet’s most important inbound and outbound data parts, matching them against a trusted information database to determine whether the information is authorized to cross the firewall into the network.
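
As a toy illustration of the first method above, stateless packet filtering, the sketch below evaluates packets against an ordered rule list with a default-deny fallback. The rule format is invented for illustration and is far simpler than any real firewall’s syntax.

    # Toy stateless packet filter: first matching rule wins, default deny.
    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network
    from typing import Optional

    @dataclass
    class Rule:
        action: str                       # "accept" or "reject"
        src: str                          # source network, e.g. "10.0.0.0/8"
        dst_port: Optional[int] = None    # None matches any destination port

    RULES = [
        Rule("accept", "10.0.0.0/8", 443),  # internal clients may reach HTTPS
        Rule("reject", "0.0.0.0/0", 23),    # block Telnet from anywhere
        Rule("accept", "192.168.1.0/24"),   # trusted LAN, any port
    ]

    def filter_packet(src_ip: str, dst_port: int) -> str:
        for rule in RULES:
            if ip_address(src_ip) in ip_network(rule.src) and \
               rule.dst_port in (None, dst_port):
                return rule.action
        return "reject"                   # default deny: unmatched traffic dropped

    print(filter_packet("10.1.2.3", 443))    # accept
    print(filter_packet("203.0.113.9", 23))  # reject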

The limit of the firewall itself is that it works only at the protocol level (IP/TCP/UDP), with no knowledge of the higher-level risks that can cross the network.

From viruses to content filtering, there are hundreds of different technologies that can complement the firewall’s work in order to protect our resources.

To address this more complex security environment, the firewall evolved into something new that covers different aspects beyond simple protocol inspection. These devices use different technologies to address different aspects of security in one single box: the so-called UTM (Unified Threat Management).

Unified Threat Management (UTM)

Unified threat management (UTM) refers to a specific kind of IT product that combines several key elements of network security to offer a comprehensive security package to buyers.

A unified threat management solution involves combining the utility of a firewall with other guards against unauthorized network traffic along with various filters and network maintenance tools, such as anti-virus programs.

The emergence of unified threat management is a relatively new phenomenon, because the various aspects that make up these products used to be sold separately. However, by selecting a UTM solution, businesses and organizations can deal with just one vendor, which may be more efficient. Unified threat management solutions may also promote easier installation and updates for security systems, although others contend that a single point of access and security can be a liability in some cases.

UTMs are gaining momentum but still lack an understanding of context and users, and are therefore not the best fit for the new environments. To close that gap, security research moved up the stack, from protocols to applications, where user behavior and context are key.

This led from the UTM to the so-called Next Generation Firewall, or NGFW.

Next-Generation Firewall (NGFW)

A next-generation firewall (NGFW) is a hardware- or software-based network security system that is able to detect and block sophisticated attacks by enforcing security policies at the application level, as well as at the port and protocol level.
Next-generation firewalls integrate three key assets: enterprise firewall capabilities, an intrusion prevention system (IPS) and application control. Like the introduction of stateful inspection in first-generation firewalls, NGFWs bring additional context to the firewall’s decision-making process by giving it the ability to understand the details of the Web application traffic passing through it and to take action to block traffic that might exploit vulnerabilities.

Next-generation firewalls combine the capabilities of traditional firewalls — including packet filtering, network address translation (NAT), URL blocking and virtual private networks (VPNs) — with Quality of Service (QoS) functionality and features not traditionally found in firewall products.

These include intrusion prevention, SSL and SSH inspection, deep-packet inspection and reputation-based malware detection as well as application awareness. The application-specific capabilities are meant to thwart the growing number of application attacks taking place on layers 4-7 of the OSI network stack.

The simple definition of application control is the ability to detect an application based on the application’s content, rather than on the traditional layer 4 protocol. Since many application providers are moving to a Web-based delivery model, the ability to detect an application based on its content is important, while working only at the protocol level is almost worthless.
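
A heavily simplified sketch of the idea: instead of trusting the destination port, inspect the payload for application-level markers. The signatures below are invented for illustration; real NGFW engines rely on large signature sets, behavioral analysis and TLS inspection.

    # Toy application detection: classify traffic by payload content, not port.
    import re

    APP_SIGNATURES = {
        "http": re.compile(rb"^(GET|POST|PUT|HEAD) \S+ HTTP/1\.[01]"),
        "tls": re.compile(rb"^\x16\x03[\x00-\x04]"),  # TLS handshake record
        "ssh": re.compile(rb"^SSH-2\.0-"),
        "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    }

    def identify_app(payload: bytes) -> str:
        for app, signature in APP_SIGNATURES.items():
            if signature.match(payload):
                return app
        return "unknown"

    # A BitTorrent handshake is spotted even if it runs over port 80:
    print(identify_app(b"\x13BitTorrent protocol" + b"\x00" * 8))  # bittorrent
    print(identify_app(b"GET /index.html HTTP/1.1\r\n"))           # http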

Yet in the market it is still not easy to understand what a UTM is and what an NGFW is.

UTM vs NGFW

Next-Generation Firewalls were defined by Gartner as firewalls with application control, user awareness and intrusion prevention. So basically an NGFW is a firewall that moves from creating rules based on IP/port to creating rules based on user, application and other parameters.
The difference is, basically, the shift from the old TCP/IP protocol model to a new user/application/context one.
On the other hand, UTMs are a mix of technologies that address different security aspects, from antivirus to content filtering, from web security to email security, all on top of a firewall. Some of those technologies can be configured to recognize users, but they seldom deal with applications.
The problem in the market is that nowadays the traditional firewall no longer exists, even in the personal/home/SOHO segment: most products are UTM-based.

NGUTM

Most firewall vendors have moved from old firewalls to either a UTM or an NGFW offering. In most cases NGFWs also offer UTM functions, while most UTMs have added NGFW application control functions, creating de facto a new generation of products and changing the landscape with the introduction of the Next Generation UTM.

UTM vendors and NGFW vendors keep fighting over which is the best solution for modern environments, but this is a marketing fight more than a technically sound discussion.

The real thing is that UTM and NGFW are becoming more and more the same thing.

NOTE: it’s all about rules.

Why have security devices become so comprehensive, trying to unify so many services? Management is the last piece of the puzzle. In two separate studies, one by Gartner and one by Verizon Data’s Risk Analysis team, it was shown that an overwhelmingly large percentage of security breaches were caused by simple configuration errors. Gartner says “More than 95% of firewall breaches are caused by firewall misconfigurations, not firewall flaws.” Verizon’s estimate is even higher, at 96%. Both agree that the vast majority of customers’ security problems are caused by implementing security products that are too difficult to use. The answer? Put it all in one place and make it easy to manage. The best security in the world is useless unless you can manage it effectively.



Dear HR Manager: the story of a CV and its privacy (GDPR? Tell it to your sister)

I am a little worried, because my impression is that in Italy, despite one of the strictest privacy laws in Europe and the new constraints introduced (or about to be introduced) by the GDPR, the concept of privacy is heavily underestimated.

The problem, of course, lies in the historical Italian underestimation of the impact of IT infrastructure on production, decision-making and management processes.

In short: no interest, no understanding, no evaluation. As a consequence, wrong behaviors are not corrected and, at the same time, new possibilities are not exploited, leaving us stuck at the starting gate of new technologies, with all due respect to those (from Olivetti to Faggin, but we could also mention Marconi and Meucci) who had made Italy the platform of the new.

Oh well.

Polemics aside, let’s try to understand, with an example so simple that even an HR office could follow it, what managing privacy and data protection means.

A new CV arrives: what have you understood about privacy and the GDPR?

Imagine your HR office receives the CV of a potential candidate. Not such a strange thing in times when looking for a job is essential (I myself have sent out hundreds recently).

Imagine also that the CV arrives by email (fairly common) and that, precisely because there are open positions, it gets circulated among potentially interested people. For example, the hiring manager.

At this point I am not interested in how the story of the human being behind that piece of paper will end, with their needs, aspirations and potential. I am interested precisely in the piece of paper, the virtual one.

As you can imagine, that piece of paper contains personal data, since the data refer to a natural person.

OOPS: does that mean I have to process it in a way consistent with the law? Does that mean the GDPR (whatever that is) gets involved?

I’m afraid it does.

A CV, a CV, what to do with it?

So, in theory, assuming (without conceding) that you are in some way interested in being aligned with the dictates of the law, you should process this data accordingly.

I do not want to write a detailed dissertation on the GDPR here; I will limit myself to a few trivial considerations, just to help you avoid a hefty fine.

The CV in question will probably end up:

  • in several mailboxes
  • as a file in some personal and/or shared folder
  • maybe in a database, if you are big enough to store candidates’ CVs
  • printed on some desk
  • ….

Now, since that piece of paper (virtual or not) contains personal data, and maybe sensitive data (say, your last salary, your IBAN, your lover’s address…), you who receive it should have a management process in place which takes into account that this data must:

  • be stored securely;
  • be modifiable on request by the data owner (who is not you: it’s the person who wrote the CV);
  • be erasable on request by the data owner (still not you);
  • and you should also be able to determine the lifetime of this data inside your systems and the use you make of it.

Believe it or not, this requires defined processes covering the “life” of that thing which, I know, you are now starting to hate.

In short, you should know things like (all closely related):

How long do I keep this thing in my systems?

How do I save this data?

How do I delete it?

It sounds easy, but do you really know what happens to the CVs you receive?

Have you defined a “retention policy” for this data?

Let me translate: do you have a standard rule defining how long you may keep this data? Months? Years? Decades? Forever? Why the @#?§ should I care?

OK, the last one is your current policy, I know, but I’m afraid it is not the answer that best fits our legislation.

How long I keep that object and its data in house matters because:

  • as long as I keep the data, it must be managed, stored and protected according to the law;
  • I am legally responsible for it;
  • when I delete it, I must really delete it.
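
Just to show how small the first step can be, here is a minimal sketch of a retention check over stored CV files. The twelve-month retention period and the folder layout are invented for illustration; they are not legal advice.

    # Minimal retention-policy sketch: flag CV files older than the retention
    # period so they can be reviewed and securely deleted everywhere.
    import time
    from pathlib import Path

    RETENTION_DAYS = 365              # illustrative period, not legal advice
    CV_FOLDER = Path("hr/cv_inbox")   # hypothetical shared folder

    def expired_cvs(folder: Path, days: int = RETENTION_DAYS) -> list:
        cutoff = time.time() - days * 86400
        return [f for f in folder.glob("*.pdf") if f.stat().st_mtime < cutoff]

    for cv in expired_cvs(CV_FOLDER):
        # A real process would also purge mailboxes, databases and backups.
        print(f"retention expired, review and delete: {cv}")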

Now, point one is already a sore point. It means you should know where this stuff lives in your systems. And it doesn’t matter whether it is on paper or electronic….

The point is sore also because it forces you to use consistent techniques for protecting, backing up, recovering and accessing the data.

Let’s see if I can explain it: if you save it in a database or put it in a folder, you must somehow guarantee that access is not granted to just anyone, even inside the company.

And once it has started circulating by email, I know it can be hard to keep it from going everywhere; so maybe everyone in the company should know these things, not just the poor soul on duty who has to shoulder this privacy nuisance.

In short, tell whoever received it that it must be handled appropriately, perhaps deleting it when it is no longer needed; otherwise, how do you guarantee adequate protection and lifecycle management?

Then, of course, IT should also guarantee protection against external intrusions:

some call it cyber security,

others call it information security,

you call it “that technical stuff I don’t understand at all, though I do have an antivirus that I haven’t updated in six months because it slows down my computer”.

In theory you should also have adequate save-and-restore systems. Something called backup and recovery; maybe you heard about it the last time you lost all your data…

All this because if you don’t, and you are unlucky enough to catch a ransomware, or someone breaks into your systems and you end up in the papers because they published the photo of the lover that your candidate had put on the CV, whoever sent you that CV might start asking questions and hold you accountable for your actions. And you know the ugly part? According to the law, it is not all IT’s fault (though IT is notoriously the source of all evil)…

You hate this CV more and more, don’t you?

And the thing is even more complicated, because yes, someone has to take care of the backups and test the restores from time to time.

Something the good Monguzzi never tires of reminding us, and that we regularly ignore. :)

But let me add one more little piece. If you delete the data, it must really be deleted. That means deleting every copy present in the company:

  • emails
  • disks
  • databases
  • backups

Did you know that? No?

If you didn’t know, now you know!

 

Privacy and data management

I know I am going against the grain by writing these things, and that you have far more important things to think about. But if I wanted to, I could go on and talk to your marketing, your sales office, your purchasing office, whoever manages your website, and probably also to your IT manager who, if you say “GDPR” to him, replies “tell it to your sister!”

 


The point is that these things look complicated, but they really are not. It would be enough to understand what integrating data into business processes means, and to design those processes taking into account legal requirements, business needs and current technology.

Of course, it also means you cannot treat privacy as a nuisance that does not concern you, exactly as you should not do with IT.

Think about it, and if you avoid a fine maybe you will even thank me, once you have finished the stream of insults I deserve for telling you these things. Ciao!




Pretty Good Privacy (PGP)

Pretty Good Privacy, or PGP, is a popular program used to encrypt and decrypt email over the Internet, as well as to authenticate messages with digital signatures and to encrypt stored files.
Previously available as freeware and now only available as a low-cost commercial version, PGP was once the most widely used privacy-ensuring program by individuals and is also used by many corporations. It was developed by Philip R. Zimmermann in 1991 and has become a de facto standard for email security.

How PGP works

Pretty Good Privacy uses a variation of the public key system. In this system, each user has an encryption key that is publicly known and a private key that is known only to that user. You encrypt a message you send to someone else using their public key. When they receive it, they decrypt it using their private key. Since encrypting an entire message can be time-consuming, PGP uses a faster (symmetric) encryption algorithm to encrypt the message and then uses the public key to encrypt the shorter key that was used to encrypt the entire message. Both the encrypted message and the encrypted short key are sent to the receiver, who first uses their private key to decrypt the short key and then uses that key to decrypt the message.
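
A minimal sketch of the same hybrid pattern using the Python cryptography package. This is not PGP itself and produces no OpenPGP packets: a random symmetric key encrypts the message, and the recipient’s RSA public key wraps that symmetric key.

    # Hybrid encryption sketch: Fernet (symmetric) encrypts the message,
    # RSA-OAEP wraps the symmetric key. Mirrors the PGP idea, not its format.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Recipient's key pair (in real life only the public key is shared).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the message with a fresh symmetric session key,
    # then wrap the session key with the recipient's public key.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"meet me at noon")
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver: unwrap the session key with the private key, then decrypt.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    print(Fernet(recovered_key).decrypt(ciphertext))  # b'meet me at noon'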

PGP comes in two public key versions — Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version, for which PGP must pay a license fee to RSA, uses the IDEA algorithm to generate a short key for the entire message and RSA to encrypt the short key. The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message and the Diffie-Hellman algorithm to encrypt the short key.
When sending digital signatures, PGP uses an efficient algorithm that generates a hash (a mathematical summary) of the message and other signature information. This hash code is then encrypted with the sender’s private key. The receiver uses the sender’s public key to decrypt the hash code. If it matches a hash freshly computed over the received message, the receiver can be sure that the message arrived intact from the stated sender. PGP’s RSA version uses the MD5 algorithm to generate the hash code. PGP’s Diffie-Hellman version uses the SHA-1 algorithm to generate the hash code.
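
The sign/verify flow can be sketched the same way, again with the Python cryptography package rather than PGP itself; RSA-PSS with SHA-256 stands in here for the hash-and-encrypt scheme described above.

    # Digital signature sketch: hash the message and sign it with the private
    # key; anyone holding the public key can verify. Not actual PGP.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"this really came from me"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    try:
        private_key.public_key().verify(signature, message, pss, hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("signature INVALID")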

Getting PGP

To use Pretty Good Privacy, download or purchase it and install it on your computer system. It typically contains a user interface that works with your customary email program. You may also need to register the public key that your PGP program gives you with a PGP public-key server so that people you exchange messages with will be able to find your public key.

PGP freeware is available for older versions of Windows, Mac, DOS, Unix and other operating systems. In 2010, Symantec Corp. acquired PGP Corp., which held the rights to the PGP code, and soon stopped offering a freeware version of the technology. The vendor currently offers PGP technology in a variety of its encryption products, such as Symantec Encryption Desktop, Symantec Desktop Email Encryption and Symantec Encryption Desktop Storage. Symantec also makes the Symantec Encryption Desktop source code available for peer review.
Though Symantec ended PGP freeware, there are other non-proprietary versions of the technology available. OpenPGP is an open standard version of PGP supported by the Internet Engineering Task Force (IETF). OpenPGP is used by several software vendors, including Coviant Software, which offers a free tool for OpenPGP encryption, and HushMail, which offers a Web-based encrypted email service powered by OpenPGP. In addition, the Free Software Foundation developed GNU Privacy Guard (GPG), an OpenPGP-compliant encryption software.

Where can you use PGP?

Pretty Good Privacy can be used to authenticate digital certificates and encrypt/decrypt texts, emails, files, directories and whole disk partitions. Symantec, for example, offers PGP-based products such as Symantec File Share Encryption for encrypting files shared across a network and Symantec Endpoint Encryption for full disk encryption on desktops, mobile devices and removable storage. When PGP technology is used for files and drives instead of messages, the Symantec products allow users to decrypt and re-encrypt data via a single sign-on.
Originally, the U.S. government restricted the exportation of PGP technology and even launched a criminal investigation against Zimmermann for putting the technology in the public domain (the investigation was later dropped). Network Associates Inc. (NAI) acquired Zimmermann’s company, PGP Inc., in 1997 and was able to legally publish the source code (NAI later sold the PGP assets and IP to ex-PGP developers that joined together to form PGP Corp. in 2002, which was acquired by Symantec in 2010).
Today, PGP encrypted email can be exchanged with users outside the U.S. if you have the correct versions of PGP at both ends.
There are several versions of PGP in use. Add-ons can be purchased that allow backwards compatibility for newer RSA versions with older versions. However, the Diffie-Hellman and RSA versions of PGP do not work with each other since they use different algorithms. There are also a number of technology companies that have released tools or services supporting PGP. Google this year introduced an OpenPGP email encryption plug-in for Chrome, while Yahoo also began offering PGP encryption for its email service.

What is an asymmetric algorithm?

Asymmetric algorithms (public key algorithms) use different keys for encryption and decryption, and the decryption key cannot (practically) be derived from the encryption key. Asymmetric algorithms are important because they can be used for transmitting encryption keys or other data securely even when the parties have no opportunity to agree on a secret key in private.
Types of asymmetric algorithms (public key algorithms):
• RSA
• Diffie-Hellman
• Digital Signature Algorithm (DSA)
• ElGamal
• ECDSA
• XTR

Examples of asymmetric algorithms:

RSA Asymmetric algorithm
Rivest-Shamir-Adleman is the most commonly used asymmetric algorithm (public key algorithm). It can be used both for encryption and for digital signatures. The security of RSA is generally considered equivalent to factoring, although this has not been proved.
RSA computation takes place with integers modulo n = p * q, for two large secret primes p and q. To encrypt a message m, it is exponentiated with a small public exponent e, giving the ciphertext c = m^e (mod n). For decryption, the recipient of the ciphertext computes the multiplicative inverse d = e^-1 (mod (p-1)*(q-1)) (e must be selected suitably for this inverse to exist) and obtains c^d = m^(e*d) = m (mod n). The private key consists of n, p, q, e and d (where p and q can be omitted); the public key contains only n and e. The problem for the attacker is that computing d from e is assumed to be no easier than factoring n.
The key size should be greater than 1024 bits for a reasonable level of security; keys of, say, 2048 bits should allow security for decades.
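
A toy walk-through with deliberately tiny primes, utterly insecure but enough to make the arithmetic above concrete:

    # Toy RSA with tiny primes, only to illustrate the math above.
    # Real keys use primes hundreds of digits long.
    p, q = 61, 53
    n = p * q                  # 3233, the public modulus
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, coprime with phi
    d = pow(e, -1, phi)        # 2753, private exponent: d = e^-1 mod phi

    m = 65                     # the "message", an integer smaller than n
    c = pow(m, e, n)           # encrypt: c = m^e mod n  -> 2790
    assert pow(c, d, n) == m   # decrypt: c^d mod n recovers m
    print(n, e, d, c)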

Diffie-Hellman
Diffie-Hellman, invented in 1976, was the first asymmetric encryption algorithm; it uses discrete logarithms in a finite field and allows two users to exchange a secret key over an insecure medium without any prior shared secrets.

Diffie-Hellman (DH) is a widely used key exchange algorithm. In many cryptographic protocols, two parties wish to begin communicating but do not initially possess any common secret and thus cannot use secret-key cryptosystems. The Diffie-Hellman key exchange protocol remedies this situation by allowing the construction of a common secret key over an insecure communication channel. It is based on a problem related to discrete logarithms, namely the Diffie-Hellman problem. This problem is considered hard, and in some instances it is as hard as the discrete logarithm problem.
The Diffie-Hellman protocol is generally considered to be secure when an appropriate mathematical group is used. In particular, the generator element used in the exponentiations should have a large period (i.e. order). Usually, Diffie-Hellman is not implemented on hardware.
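
Again a toy walk-through with a tiny prime, only to make the exchange concrete; real deployments use standardized groups of 2048 bits or more.

    # Toy Diffie-Hellman exchange; the small prime is illustrative only.
    import secrets

    p = 23   # public prime modulus (toy-sized)
    g = 5    # public generator

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

    A = pow(g, a, p)   # Alice sends A over the insecure channel
    B = pow(g, b, p)   # Bob sends B

    # Each side raises the other's public value to its own secret:
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob   # both now hold g^(a*b) mod p
    print("shared secret:", shared_alice)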

Digital Signature Algorithm
The Digital Signature Algorithm (DSA) is a United States Federal Government standard (FIPS) for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in the Digital Signature Standard (DSS), specified in FIPS 186 [1] and adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3]. DSA is similar to the ElGamal signature algorithm. It is fairly efficient, though not as efficient as RSA for signature verification. The standard defines DSS to use the SHA-1 hash function exclusively to compute message digests.
The main problem with DSA is the fixed subgroup size (the order of the generator element), which limits the security to around only 80 bits. Hardware attacks can threaten some implementations of DSS. However, it is widely used and accepted as a good algorithm.

ElGamal
ElGamal is a public-key cipher: an asymmetric-key encryption algorithm based on the Diffie-Hellman key agreement. ElGamal is the predecessor of DSA.

ECDSA
Elliptic Curve DSA (ECDSA) is a variant of the Digital Signature Algorithm (DSA) which operates on elliptic curve groups. As with Elliptic Curve Cryptography in general, the bit size of the public key believed to be needed for ECDSA is about twice the size of the security level, in bits.

XTR
XTR is an algorithm for asymmetric encryption (public-key encryption). XTR is a novel method that makes use of traces to represent and calculate powers of elements of a subgroup of a finite field. It is based on the primitive underlying the very first public key cryptosystem, the Diffie-Hellman key agreement protocol.
From a security point of view, XTR security relies on the difficulty of solving discrete logarithm related problems in the multiplicative group of a finite field. Some advantages of XTR are its fast key generation (much faster than RSA), small key sizes (much smaller than RSA, comparable with ECC for current security settings), and speed (overall comparable with ECC for current security settings).
Symmetric and asymmetric algorithms
Symmetric algorithms encrypt and decrypt with the same key. The main advantages of symmetric algorithms are their security and high speed. Asymmetric algorithms encrypt and decrypt with different keys: data is encrypted with a public key and decrypted with a private key. Asymmetric algorithms (also known as public-key algorithms) need at least a 3,000-bit key to achieve the same level of security as a 128-bit symmetric algorithm. Asymmetric algorithms are painfully slow, and it is impractical to use them to encrypt large amounts of data; in general, symmetric algorithms execute much faster on a computer than asymmetric ones. In practice they are often used together: a public-key algorithm encrypts a randomly generated encryption key, and that random key encrypts the actual message using a symmetric algorithm. This is sometimes called hybrid encryption.



Dataprivacyasia: Antonio Ieranò at Asia’s premier data protection, privacy and cybersecurity conference. Watch videos

Missed @AntonioIerano at Asia‘s premier #dataprotection, #privacy and #cybersecurity conference? Watch videos

— Data Privacy Asia (@dataprivacyasia) December 10, 2016
from http://twitter.com/dataprivacyasia




Unhappy employees are a cyber security concern


Have you ever considered that being the “best place to work” is something a security chap should take into serious consideration?

A lot of people keep thinking that security is all about this or that technology; most of those experts master one specific technology perfectly and think they hold the holy grail of security.

Since I am not such a big tech expert, I am allowed to think that security isn’t in any specific technology, but in a systemic approach where technology covers just one part and is just one piece of a whole process.

One aspect so often forgotten when we talk about security is that most incidents in the security realm come from mistakes, honest mistakes.

A mistake can be due to several reasons:

  • an unclear set of instructions (alas, we are still far from the KISS – Keep It Simple, Stupid – principle, aren’t we?)
  • an unclear process (I have to do what?)
  • lack of knowledge
  • lack of attention (I have too much to do…)
  • lack of commitment (why should I care?)

Mostly a combination of all those points.

Uselessly complex processes, esoteric instructions and language for “believers only” are just a normal part of security implementations.

Another big part is played by lack of understanding: knowledge is not just about the internal processes in place, but should extend to the basic security elements that too many people in the corporate environment (even at the highest levels) simply do not understand.

Concepts like social engineering, vulnerability and privilege escalation are just tapestry in the CEO’s office, not really understood.

Given this underestimation of the basics of security, it is no surprise how little attention is paid to the difference between a satisfied employee and a pissed-off one.

Why an unhappy employee is a cyber security risk is strictly related to the higher levels of attention and commitment that a company’s cyber security requires. If you are unhappy you will be less prone to listen and understand, and if you add to that attitude the ridiculously complicated rules companies sometimes put in place, the result is devastating.

I am not talking about the unhappy employee who willingly wants to damage the company, but about all those who do not care enough to take a proactive approach to security.

Security is, at its very base, all about attitude and behaviour. We can cover and patch elements through technology and processes, but the user will remain the key point of any security implementation.

It is no accident that social engineering, phishing and other techniques target users to breach a company.

Lack of knowledge (and therefore lack of training) and unhappiness are the perfect mix to lower an employee’s attention level and hand the keys to an attacker, even if that is not the employee’s intention or will.

Let us be clear: no security technology at the moment can guarantee 100% security, and no process can guarantee it either. We are still in the Neanderthal phase of cyber security, but it is time to realize that without a holistic approach that takes all the components into account, people among them, we will lose the battle.

So CSOs, CISOs and all the security-concerned folks should become advocates of employee happiness and employee knowledge, for their own good.

 

