A GDPR guide for those who'd rather not know: who did you give your data to ("the data's gone")?


If you remember, in my previous post (let's pretend somebody actually reads them) I wrote about what you should do to start tackling this GDPR headache. The first thing was appointing a DPO; the second was about data…

But what is this "data", anyway?

I mean, everybody talks about data, but what does that mean? Where is it? Who is it? What does it do?

Since I'm writing in Italian, I'll assume that you, the reader, are Italian and probably interested in the Italian version of the story.

So let's see what the Garante (the Italian Data Protection Authority) says on the subject:

 


http://www.garanteprivacy.it/web/guest/home/diritti/cosa-intendiamo-per-dati-personali

Personal data is any information that identifies, or makes identifiable, a natural person and that can provide details about their characteristics, habits, lifestyle, personal relationships, state of health, economic situation, and so on.

Particularly important are:

  • identification data: data allowing direct identification, such as personal details (for example: first and last name), images, etc.;
  • sensitive data: data that can reveal racial or ethnic origin; religious, philosophical or other beliefs; political opinions; membership of parties, trade unions, associations or organizations of a religious, philosophical, political or trade-union character; state of health; and sex life;
  • judicial data: data that can reveal the existence of certain judicial measures subject to entry in the criminal record (for example, final criminal convictions, conditional release, residence bans or obligations, alternatives to detention) or a person's status as defendant or suspect.

With the evolution of new technologies, other personal data have taken on a significant role, such as data relating to electronic communications (via the Internet or telephone) and data enabling geolocation, which provide information on the places a person frequents and on their movements.

THE PARTIES INVOLVED

The data subject ("interessato") is the natural person to whom the personal data refer. So if a processing operation concerns, for example, the address or tax code of Mario Rossi, that person is the "data subject" (Article 4(1)(i) of the Code);

The data controller ("titolare") is the natural person, company, public or private body, association, etc. that decides the purposes and methods of the processing, as well as the tools used (Article 4(1)(f) of the Code);

The data processor ("responsabile") is the natural person, company, public or private body, association or organization to which the controller entrusts, possibly outside its own organizational structure, specific and defined tasks of managing and controlling the processing of data (Article 4(1)(g) of the Code). Designating a processor is optional (Article 29 of the Code);

The person in charge of processing ("incaricato") is the natural person who, on behalf of the controller, materially processes or uses the personal data, following the instructions received from the controller and/or the processor (Article 4(1)(h) of the Code).

____________________________________________________________________________________________________

Let's start from the definition:

"Data that identify or make identifiable a person" means any information that lets us trace back to a natural person, extending to what they do, what they like, and so on…

The amount of data falling into this category is extremely broad; the Garante has ruled on the matter several times, even placing IP addresses in this category. What does that mean?

Every day we handle an enormous amount of data: we scatter it around, both our own and other people's, often without even realizing it. If you use a phone, e-mail, chat or social media, then you probably know what I'm talking about, even without being fully aware of it. If you want to know where your friend, colleague or customer is, you can check one of the geolocation features offered by various apps, or share coordinates directly, or…

Oops, sorry, I'm digressing.

The point is that you may not even realize any of this is happening.

So what is this mysterious data?

Let's move the question into a business context and see if we can widen your horizons a bit.

Much of what a company does nowadays is inextricably tied to digitalization.

Think about it:

  • You communicate mostly by e-mail: offers, contracts, proposals, CVs for hiring, internal communications, chit-chat… a lot of stuff goes through it
  • You use the web both to look up information (you know, that Google page you keep querying) and to communicate externally (maybe you sell online, maybe you have a website, maybe you do online marketing…)
  • Maybe you even have a social presence (translation: things like Facebook or LinkedIn)
  • You probably use a computerized accounting system
  • Maybe a CRM
  • You have a list of customers somewhere, with their e-mail addresses, phone numbers, maybe even their Skype and social handles and other information… if you're really on the ball you may even have their birth dates (you know, for the birthday wishes) and other details.
  • You have a list of employees with data such as bank account, family contact, salary…
  • You transfer this data, store it, process it, and sometimes maybe even sell it (there are companies that do this for a living)

And the interesting thing is that you may not realize that, in that exchange of information, you have mixed business and personal elements.

Now here comes the problem of Mr. GDPR and his wicked assistant the DPO, who demand to know where this data is so they can make you manage and protect it.

Where is it?

I wrote a few days ago that the first two things you should do are to start mapping your data, to understand where it is and whether it matters.

The simplest way to do this is to go through, bit by bit, the various functions, operations and tools you use, and map the related data in terms of:

  1. what the data is
  2. how I collect it
  3. where it is
  4. how I manage it
  5. whether it matters for the GDPR
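To make the mapping concrete, here is a minimal sketch (in Python, purely as an illustration; the field names and sample entries are my own, not a GDPR schema) of how a single inventory record could capture those five questions:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    what: str            # 1. what the data is
    collected_via: str   # 2. how it is collected
    location: str        # 3. where it is stored
    handling: str        # 4. how it is managed
    gdpr_relevant: bool  # 5. does it matter for the GDPR?

# a tiny example inventory (entries are invented)
inventory = [
    DataAsset("customer contact list", "sales e-mails", "CRM",
              "shared folder, no access control", True),
    DataAsset("product photos", "internal shoots", "web server",
              "public on the website", False),
]

# the subset that needs GDPR attention
to_review = [a.what for a in inventory if a.gdpr_relevant]
```

Even a spreadsheet with these five columns would do; the point is that every tool and function you go through should leave one row behind.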

I suggest a twofold approach: one by business function and one by technology, which you then cross-reference.

For example, the functional one could be:

  1. sales: how I manage a sale, how the offer is made, how it is communicated, what customer data I keep, whether I offer after-sales services…
  2. marketing: which channels I use to communicate, whether I run events, whether I use databases…
  3. HR: how I manage employee data, where I put the CVs, whether I make personal inquiries
  4. production: …

The technological one, instead, could be:

  1. what I communicate via e-mail
  2. whether I manage employees' web access, proxies
  3. whether I use apps, chat, video conferencing
  4. whether I use cloud services
  5. whether I use databases
  6. whether I use paper archives (yes, those count too)

The advice, obviously, is then to cross-reference the two, to help you understand:

  1. which data you actually use
  2. what you need it for
  3. how you manage it
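The cross-referencing step can be sketched as a simple matrix; everything in this snippet (the functions, the channels, the sample mapping) is an illustrative example, not a prescribed checklist:

```python
# business functions (rows) and technology channels (columns)
functions = ["sales", "marketing", "HR"]
channels = ["e-mail", "cloud services", "databases", "paper archives"]

# where each function's data actually flows (a made-up sample survey)
usage = {
    ("sales", "e-mail"): True,
    ("sales", "databases"): True,
    ("marketing", "cloud services"): True,
    ("HR", "databases"): True,
    ("HR", "paper archives"): True,
}

# cross the two views: for each function, which channels carry its data
matrix = {f: [c for c in channels if usage.get((f, c))] for f in functions}

for func, chans in matrix.items():
    print(f"{func}: {', '.join(chans) or 'nothing mapped yet'}")
```

An empty row or column in the matrix is exactly the "forgotten piece" the two-front approach is meant to catch.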

Since you've never thought about this before, working on both fronts helps you avoid forgetting pieces, and it will come in very handy later when you have to do a PIA… (the Privacy Impact Assessment, that is, not a particularly devout lady…)

The idea is to help you understand your data as a whole.

Remember my post about handling CVs? Well, that's one example.

But let me give you a few more: if you let your users browse the Internet, you are recording, even without knowing it, a lot of data that you really ought to manage properly:

The logs of your firewalls or proxies contain at-risk data, such as the IP address, the user and the site visited.
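As a hedged illustration of one possible mitigation, here is a sketch that pseudonymizes the identifying fields of such a log line with a keyed hash; the log format and the secret are made up for the example, and a real deployment would also need key management and a retention policy:

```python
import hashlib

# secret key mixed into the hash so the values can't be looked up
# from a rainbow table (invented value, rotate it in practice)
SECRET = b"rotate-me-regularly"

def pseudonymize(value: str) -> str:
    """Replace a value with a short keyed digest."""
    return hashlib.sha256(SECRET + value.encode()).hexdigest()[:12]

def scrub_log_line(line: str) -> str:
    # assumed format: "<client-ip> <user> <visited-site>"
    ip, user, site = line.split()
    return f"{pseudonymize(ip)} {pseudonymize(user)} {site}"

scrubbed = scrub_log_line("192.0.2.10 mrossi example.com")
# the visited site is kept; IP and user are no longer directly identifying
```

The same record can still be correlated across log lines (the digest is deterministic), which is usually what you need for security analysis without keeping names and addresses in the clear.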

If you have a website with public e-mail addresses, various services, a blog or a newsletter, you may receive communications that must be handled appropriately, or subscriptions that must be managed.

But even your plain employee database, if it were attacked and the data made public, would expose you to the risk of a resounding (and well-deserved) fine, if you haven't put the minimum protection measures in place.

 

What next?

OK, once you've drawn your data map, you'll have in your hands a new tool that tells you:

  • what data you have
  • what you need it for
  • how you use it

At this point you can start to figure out what you should do to be compliant with the GDPR.

The first step is understanding what you are risking; this is where the PIA comes in.

The second step is defining the value of this data.

The third step is starting to define the corrective actions needed to manage the data.

The corrective actions cover:

  1. defining the operating procedures for collecting and managing data
    1. measurement and control metrics for auditing
    2. definition of operational responsibilities
  2. introducing the right technologies
    1. assessment of the current technologies
    2. definition of any security or data-management technologies to be introduced
  3. defining the monitoring and auditing procedures
  4. defining the procedures for handling exceptions and incidents

Now, I can't and won't go into further detail, if only because:

  1. there is no one-size-fits-all solution
  2. even if there were, if I do the work here for free I don't earn anything
  3. I'm not writing a book here, just a series of friendly tips.

Come on, smile: at least I've given you a few warnings, and you know what they say: forewarned is forearmed…

 


Guida al GDPR per chi non ne vuole sapere: a chi hai dato i dati (“so spariti i dati”)? was originally published on The Puchi Herald Magazine


NFV network function virtualization security considerations


I have been asked to write down a few things related to NFV and security. NFV is a relatively new thing in the IT world. It hit the news in 2012 and since then has followed the development path common to virtualization technologies.

Virtualization has made dramatic improvements in recent years. It all started with simple virtualization platforms (VMware top of mind, of course, but not only), where the idea was to abstract the software platform from the underlying hardware.

As the idea developed, the abstraction grew to cover multiple hardware platforms, and then extended to multi-site, WAN and geographically distributed deployments. Nowadays we call this sort of implementation "cloud", but the whole cloud story started from the old virtualization idea.

While this platform change was taking place, the world of services was experimenting with different client-server options (web services and so on).

With the new platforms in place, it was clear that the network would follow the same trend, moving to software and virtual shores.

From the network point of view, the first step was SDN (Software Defined Networking).

Software-defined networking (SDN) allows dynamic changes of network configuration that can alter network function characteristics and behaviors. For example, SDN can render real-time topological changes of a network path. An SDN-enabled network provides a platform on which to implement a dynamic chain of virtualized network services that make up an end-to-end network service.

SDN basically allows you to centrally administer, manage and configure network services, creating policies that can address different needs and adapt to a changing environment.

But this level of abstraction was not enough to provide the flexibility needed by modern datacenter, cloud and virtualized environments.

In an SDN environment, the network gear remains mostly real, solid boxes in an environment that is far more virtualized.

The first attempt to hybridize the physical network with the virtual one was the introduction of the first virtual network elements, such as switches and firewalls. These components were sometimes part of the hypervisor of the virtualization platform, sometimes virtual appliances able to run inside a virtual environment.

Those solutions were (and are, since they still exist) good at targeting specific needs, but they did not provide the flexibility, resilience and scalability required by modern virtualization systems. Products like VMware's vShield, Cisco's ASA 1000v and F5 Networks' vCMP brought improvements in management and licensing better suited to service-provider needs. Each used a different architecture to accomplish those goals, making a blend of approaches difficult, and the lack of a comprehensive approach made it hard to expand those services extensively.

The natural next step in the virtualization process was to define something that addressed, in a more comprehensive way, the need to move part of the network functions inside the virtual environment.

Communications service providers and network operators came together through ETSI to try to address the management issues around virtual appliances that handle network functions.

NFV represents a decoupling of the software implementation of network functions from the underlying hardware by leveraging virtualization techniques. NFV offers a variety of network functions and elements, including routing, content delivery networks, network address translation, virtual private networks (VPNs), load balancing, intrusion detection and prevention systems (IDPS), and firewalls. Multiple network functions can be consolidated into the same hardware or server. NFV allows network operators and users to provision and execute on-demand network functions on commodity hardware or CSP platforms.

NFV does not depend on SDN (and vice-versa) and can be implemented without it. However, SDN can improve performance and enable a rich feature set known as Dynamic Virtual Network Function Service Chaining (or VNF Service Chaining). This capability simplifies and accelerates deployment of NFV-based network functions.
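As a toy illustration of the service-chaining idea (real chaining happens at the network layer; the function names and rules here are invented for the example), network functions can be pictured as composable packet-processing stages:

```python
# Each "VNF" is a function that takes a packet (a dict) and returns it,
# possibly modified, or None to drop it.

def firewall(packet):
    # drop traffic to a blocked port (example rule: telnet, port 23)
    return None if packet["dst_port"] == 23 else packet

def nat(packet):
    # rewrite the source address to a public one (example address)
    return dict(packet, src_ip="203.0.113.1")

def chain(packet, functions):
    """Run a packet through an ordered chain of network functions."""
    for fn in functions:
        if packet is None:  # an earlier function dropped it
            break
        packet = fn(packet)
    return packet

service_chain = [firewall, nat]
out = chain({"src_ip": "10.0.0.5", "dst_port": 80}, service_chain)
dropped = chain({"src_ip": "10.0.0.5", "dst_port": 23}, service_chain)
```

The point of dynamic VNF service chaining is that this ordered list can be recomposed on demand (add an IDS, remove the NAT) without touching any physical wiring.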

Based on the framework introduced by the European Telecommunications Standards Institute (ETSI), NFV is built on three main domains:

  • VNF,
  • NFV infrastructure, and
  • NFV management and orchestration (MANO).

A VNF can be considered a container of network services provisioned in software, very similar to the VM operational model. The infrastructure part of NFV includes all the physical resources (e.g., CPU, memory and I/O) required for storage, computing and networking to support the execution of VNFs. The management of all virtualization-specific tasks in the NFV framework is performed by the NFV management and orchestration domain. For instance, this domain orchestrates and manages the lifecycle of resources and VNFs, and also controls the automatic remote installation of VNFs.
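A highly simplified sketch of how the three domains relate (the class and method names are my own illustration, not the actual ETSI interfaces):

```python
class NFVInfrastructure:
    """NFVI: the physical/virtual resources (compute, storage, network)."""
    def __init__(self, cpus):
        self.free_cpus = cpus

    def allocate(self, cpus):
        if cpus > self.free_cpus:
            raise RuntimeError("not enough resources")
        self.free_cpus -= cpus

class VNF:
    """A software network function (router, firewall, load balancer...)."""
    def __init__(self, name, cpus_needed):
        self.name, self.cpus_needed = name, cpus_needed
        self.running = False

class MANO:
    """Management and orchestration: VNF lifecycle on top of the NFVI."""
    def __init__(self, nfvi):
        self.nfvi, self.vnfs = nfvi, []

    def instantiate(self, vnf):
        self.nfvi.allocate(vnf.cpus_needed)  # lifecycle step: reserve resources
        vnf.running = True                   # lifecycle step: start the function
        self.vnfs.append(vnf)

nfvi = NFVInfrastructure(cpus=8)
mano = MANO(nfvi)
mano.instantiate(VNF("virtual-firewall", cpus_needed=2))
```

The separation also shows where the trust boundaries sit: whoever controls the MANO layer controls the lifecycle of every VNF, which is exactly why it is such a sensitive attack point later in this post.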

The resulting environment is now a little more complicated than it was a few years ago.

Where in the past we used to have:

  • physical servers running operating systems such as Linux, Unix or Windows, bound to a specific hardware platform, with almost monolithic services running on top,
  • physical storage units running on different technologies and networks (Ethernet, iSCSI, fiber optic and so on),
  • networks connected through physical devices, with some specific units providing external access (VPN servers),
  • protected by some sort of security unit providing some sort of control (firewall, IPS/IDS, 802.1x, AAA and so on),
  • all managed fairly independently through different interfaces or programs,

now we have moved to a world where we have:

  • a virtualized environment where services (think, for example, of Docker implementations) or entire operating systems run in virtual machines (VMs) that handle the abstraction from the hardware and can allocate resources dynamically, in terms of performance and even geographic location,
  • a network environment whose services are partly virtualized (as in VNF implementations) and partly physical, interacting dynamically with the virtual environment,
  • a network configured dynamically through control software (SDN), which can easily modify the network topology itself in response to changing requests from the environment (users, services, processes).

Nowadays, the impressive effects of network functions virtualization (NFV) are evident in a wide range of applications, from IP node implementations (e.g., future Internet architectures) to mobile core networks. NFV allows network functions (e.g., packet forwarding and dropping) to be performed in virtual machines (VMs) on a cloud infrastructure rather than in dedicated devices. An agile, automated NFV network is desirable for network operators because it makes it easy to develop new services and offers self-management and network programmability via software-defined networking (SDN). Furthermore, co-existence with current networks and services improves the customer experience and reduces complexity, capital expenditure (CAPEX) and operational expenditure (OPEX).

In theory, virtualization broadly describes the separation of resources, or of requests for a service, from the underlying physical delivery of that service. In this view, NFV is the implementation of network functions in software that can run on a range of hardware and can be moved without installing new equipment. All low-level physical network details are therefore hidden, and users are provided with a dynamic configuration of network tasks.

Everything seems better and easier, but all these transformations do not come without a price in terms of security.

Every step into virtualization brings security concerns related to the control plane (think of hypervisor and orchestrator security), the communication plane, the virtual environment itself (which often inherits the same problems as the physical platform), and the transition interface between the physical and virtual worlds.

Despite its many advantages, therefore, NFV introduces new security challenges. Since all software-based virtual functions in NFV can be configured or controlled by an external entity (e.g., a third-party provider or user), the whole network could potentially be compromised or destroyed. For example, in order to reduce hosts' heavy workloads, a hypervisor in NFV can dynamically balance the load assigned to multiple VMs through a flexible and programmable networking layer known as a virtual switch; however, if the hypervisor is compromised, all network functions can be disabled completely (a good old DoS) or priority can be given to some services over others.

Also, NFV's attack surface is considerably larger than that of traditional network systems. Besides the network resources (e.g., routers, switches, etc.) present in traditional networks, the virtualization environment, live migration, and the multi-tenant common infrastructure can also be attacked in NFV. For example, an attacker can snare a dedicated virtualized network function (VNF) and then spread its bots across a victim's whole network using the migration and multicast abilities of NFV. To make matters worse, access to a common infrastructure for a multi-tenant network based on NFV inherently allows for other security risks due to the resources shared between VMs. For example, in a data center network (DCN), side-channel attacks (e.g., cache-based side channels) and/or operational interference could be introduced unless the resources shared between VMs are securely controlled with proper security policies. In practice, it is not easy to provide complete isolation of VNFs in DCNs.

The challenges related to securing a VNF are complex because they involve all the elements that make up the environment: physical, virtual and control.

According to the CSA, securing this environment is challenging for at least the following reasons:

  1. Hypervisor dependencies: Today, only a few hypervisor vendors dominate the marketplace, with many vendors hoping to become market players. Like their operating system vendor counterparts, these vendors must address security vulnerabilities in their code. Diligent patching is critical. These vendors must also understand the underlying architecture, e.g., how packets flow within the network fabric, various types of encryption and so forth.
  2. Elastic network boundaries: In NFV, the network fabric accommodates multiple functions. Placement of physical controls is limited by location and cable length. These boundaries are blurred or non-existent in the NFV architecture, which complicates security matters. VLANs are not traditionally considered secure, so physical segregation may still be required for some purposes.
  3. Dynamic workloads: NFV’s appeal is in its agility and dynamic capabilities. Traditional security models are static and unable to evolve as network topology changes in response to demand. Inserting security services into NFV often involves relying on an overlay model that does not easily coexist across vendor boundaries.
  4. Service insertion: NFV promises elastic, transparent networks since the fabric intelligently routes packets that meet configurable criteria. Traditional security controls are deployed logically and physically inline. With NFV, there is often no simple insertion point for security services that are not already layered into the hypervisor.
  5. Stateful versus stateless inspection: Today's networks require redundancy at the system level and along a network path. This path redundancy causes asymmetric flows that pose challenges for stateful devices, which need to see every packet in order to provide access controls. Security operations during the last decade have been based on the premise that stateful inspection is more advanced than, and superior to, stateless access controls. NFV may add complexity where security controls cannot deal with the asymmetries created by multiple, redundant network paths and devices.
  6. Scalability of available resources: As earlier noted, NFV’s appeal lies in its ability to do more with less data center rack space, power, and cooling.

Dedicating cores to workloads and network resources enables resource consolidation. Deeper inspection technologies (next-generation firewalls and Transport Layer Security (TLS) decryption, for example) are resource-intensive and do not always scale without offload capability. Security controls must be pervasive to be effective, and they often require significant compute resources.

Together, SDN and NFV create additional complexity and challenges for security controls. It is not uncommon to couple an SDN model with some method of centralized control to deploy network services in the virtual layer. This approach leverages both SDN and NFV as part of the current trend toward data center consolidation.

The NFV Security Framework tries to address these problems.

If we want to dig a little deeper into the security side, we can analyze:

  • Network function-specific security issues

and

  • Generic virtualization-related security issues

Network function-specific threats refer to attacks on network functions and/or resources (e.g., spoofing, sniffing and denial of service).

The foundation of NFV rests on network virtualization. In an NFV environment, a single physical infrastructure is logically shared by multiple VNFs, and providing a shared, hosted network infrastructure for these VNFs introduces new security vulnerabilities. The general platform of network virtualization consists of three entities: the providers of the network infrastructure, the VNF providers, and the users. Since the system involves different operators, their cooperation will inevitably be imperfect, and each entity may behave in a non-cooperative or greedy way to gain benefits.

The virtualization threats in NFV can originate from any of these entities and may target the whole system or part of it.

In this view, we need to consider threats such as side-channel or flooding attacks as common attacks, and hypervisor, malware-injection or VM-migration attacks as virtualization- and cloud-specific attacks.

Basically, VNF adds a new layer of security concerns to virtualized/cloud platforms, for at least three reasons:

  • It inherits all the classic network security issues and expands them to cloud level

This means that once a VNF is compromised, there is a good chance it can spread the attack or problem to the whole environment, affecting not only the resources directly assigned to it but anything connected to the virtual environment. Think, for example, of the damage that could be done by a DoS that rapidly depletes all the cloud network resources by modifying, say, the QoS parameters, rather than by using traditional flooding techniques (which remain available anyway).

  • It depends on several layers of abstraction and control

Orchestrator and hypervisor are, as a matter of fact, great attack points, since compromising them can affect every function they control.

  • It requires a more carefully planned implementation than the classic physical one,

with tighter control over who manages the management interfaces, since, like SDN, VNF is more exposed to unauthorized access and configuration-related issues.

VNF still requires study and analysis from a security perspective; the good news is that this is a new technology under development, so there is plenty of room for improvement.


NFV network function virtualization security considerations was originally published on The Puchi Herald Magazine